Schema (field: type):

text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 distinct value)
explanation: null
id: string (length 1 to 5)
metadata: null
status: string (2 distinct values)
event_timestamp: null
metrics: null
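The rows below follow this schema: each record carries a paper's abstract and title under "inputs", an optional list of topic labels under "annotation", a string "id", and a "status" of either Default or Validated. The sketch below shows one way such a dump could be consumed. It assumes the records are stored as JSON Lines with exactly these field names; the file name and helper functions are hypothetical and not part of the dataset.

```python
import json

def load_records(path):
    # One JSON object per line, each with the fields listed in the schema
    # above (text, inputs, prediction, ..., status, metrics).
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]

def validated_titles(records):
    # Keep only the human-validated rows and return (title, labels) pairs.
    return [
        (r["inputs"]["title"], r["annotation"])
        for r in records
        if r.get("status") == "Validated"
    ]

if __name__ == "__main__":
    rows = load_records("abstracts.jsonl")  # hypothetical file name
    for title, labels in validated_titles(rows):
        print(labels, "-", title)
```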
text: null
{ "abstract": " We present a new map of interstellar reddening, covering the 39\\% of the sky\nwith low {\\rm HI} column densities ($N_{\\rm HI} < 4\\times10^{20}\\,\\rm cm^{-2}$\nor $E(B-V)\\approx 45\\rm\\, mmag$) at $16\\overset{'}{.}1$ resolution, based on\nall-sky observations of Galactic HI emission by the HI4PI Survey. In this low\ncolumn density regime, we derive a characteristic value of $N_{\\rm HI}/E(B-V) =\n8.8\\times10^{21}\\, \\rm\\, cm^{2}\\, mag^{-1}$ for gas with $|v_{\\rm LSR}| <\n90\\,\\rm km\\, s^{-1}$ and find no significant reddening associated with gas at\nhigher velocities. We compare our HI-based reddening map with the Schlegel,\nFinkbeiner, and Davis (1998, SFD) reddening map and find them consistent to\nwithin a scatter of $\\simeq 5\\,\\rm mmag$. Further, the differences between our\nmap and the SFD map are in excellent agreement with the low resolution\n($4\\overset{\\circ}{.}5$) corrections to the SFD map derived by Peek and Graves\n(2010) based on observed reddening toward passive galaxies. We therefore argue\nthat our HI-based map provides the most accurate interstellar reddening\nestimates in the low column density regime to date. Our reddening map is made\npublicly available (this http URL).\n", "title": "A new, large-scale map of interstellar reddening derived from HI emission" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18101, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Seasonal patterns associated with stress modulation, as evidenced by\nearthquake occurrence, have been detected in regions characterized by present\nday mountain building and glacial retreat in the Northern Hemisphere. In the\nHimalaya and the Alps, seismicity is peaking in spring and summer; opposite\nbehaviour is observed in the Apennines. This diametrical behaviour, confirmed\nby recent strong earthquakes, well correlates with the dominant tectonic\nregime: peak in spring and summer in shortening areas, peak in fall and winter\nin extensional areas. The analysis of the seasonal effect is extended to\nseveral shortening (e.g. Zagros and Caucasus) and extensional regions, and\ncounter-examples from regions where no seasonal modulation is expected (e.g.\nTropical Atlantic Ridge). This study generalizes to different seismotectonic\nsettings the early observations made about short-term (seasonal) and long-term\n(secular) modulation of seismicity and confirms, with some statistical\nsignificance, that snow and ice thaw may cause crustal deformations that\nmodulate the occurrence of major earthquakes.\n", "title": "Seasonal modulation of seismicity: the competing/collaborative effect of the snow and ice load on the lithosphere" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18102, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " In this paper, we consider the existence (and nonexistence) of solutions to\n\\[\n-\\mathcal{M}_{\\lambda,\\Lambda}^\\pm (u'') + V(x) u = f(u) \\quad {\\rm in} \\\n\\mathbf{R}\n\\] where $\\mathcal{M}_{\\lambda,\\Lambda}^+$ and\n$\\mathcal{M}_{\\lambda,\\Lambda}^-$ denote the Pucci operators with $0< \\lambda\n\\leq \\Lambda < \\infty$, $V(x)$ is a bounded function, $f(s)$ is a continuous\nfunction and its typical example is a power-type nonlinearity $f(s)\n=|s|^{p-1}s$ $(p>1)$. In particular, we are interested in positive solutions\nwhich decay at infinity, and the existence (and nonexistence) of such solutions\nis proved.\n", "title": "Existence and nonexistence of positive solutions to some fully nonlinear equation in one dimension" }
prediction: null, prediction_agent: null, annotation: [ "Mathematics" ], annotation_agent: null, multi_label: true, explanation: null, id: 18103, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " We present a study of Andreev Quantum Dots (QDots) fabricated with\nsmall-diameter (30 nm) Si-doped InAs nanowires where the Fermi level can be\ntuned across a mobility edge separating localized states from delocalized\nstates. The transition to the insulating phase is identified by a drop in the\namplitude and width of the excited levels and is found to have remarkable\nconsequences on the spectrum of superconducting SubGap Resonances (SGRs). While\nat deeply localized levels, only quasiparticles co-tunneling is observed, for\nslightly delocalized levels, Shiba bound states form and a parity changing\nquantum phase transition is identified by a crossing of the bound states at\nzero energy. Finally, in the metallic regime, single Andreev resonances are\nobserved.\n", "title": "Shiba Bound States across the mobility edge in doped InAs nanowires" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18104, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We introduce the Helsinki Neural Machine Translation system (HNMT) and how it\nis applied in the news translation task at WMT 2017, where it ranked first in\nboth the human and automatic evaluations for English--Finnish. We discuss the\nsuccess of English--Finnish translations and the overall advantage of NMT over\na strong SMT baseline. We also discuss our submissions for English--Latvian,\nEnglish--Chinese and Chinese--English.\n", "title": "The Helsinki Neural Machine Translation System" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18105, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Phase transformations ruled by non-simultaneous nucleation and growth do not\nlead to random distribution of nuclei. Since nucleation is only allowed in the\nuntransformed portion of space, positions of nuclei are correlated. In this\narticle an analytical approach is presented for computing pair-correlation\nfunction of nuclei in progressive nucleation. This quantity is further employed\nfor characterizing the spatial distribution of nuclei through the nearest\nneighbor distribution function. The modeling is developed for nucleation in 2D\nspace with power growth law and it is applied to describe electrochemical\nnucleation where correlation effects are significant. Comparison with both\ncomputer simulations and experimental data lends support to the model which\ngives insights into the transition from Poissonian to correlated nearest\nneighbor probability density.\n", "title": "Spatial distribution of nuclei in progressive nucleation: modeling and application" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18106, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " This work considers resilient, cooperative state estimation in unreliable\nmulti-agent networks. A network of agents aims to collaboratively estimate the\nvalue of an unknown vector parameter, while an {\\em unknown} subset of agents\nsuffer Byzantine faults. Faulty agents malfunction arbitrarily and may send out\n{\\em highly unstructured} messages to other agents in the network. As opposed\nto fault-free networks, reaching agreement in the presence of Byzantine faults\nis far from trivial. In this paper, we propose a computationally-efficient\nalgorithm that is provably robust to Byzantine faults. At each iteration of the\nalgorithm, a good agent (1) performs a gradient descent update based on noisy\nlocal measurements, (2) exchanges its update with other agents in its\nneighborhood, and (3) robustly aggregates the received messages using\ncoordinate-wise trimmed means. Under mild technical assumptions, we establish\nthat good agents learn the true parameter asymptotically in almost sure sense.\nWe further complement our analysis by proving (high probability) {\\em\nfinite-time} convergence rate, encapsulating network characteristics.\n", "title": "Finite-time Guarantees for Byzantine-Resilient Distributed State Estimation with Noisy Measurements" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18107, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " A critical challenge in the observation of the redshifted 21-cm line is its\nseparation from bright Galactic and extragalactic foregrounds. In particular,\nthe instrumental leakage of polarized foregrounds, which undergo significant\nFaraday rotation as they propagate through the interstellar medium, may\nharmfully contaminate the 21-cm power spectrum. We develop a formalism to\ndescribe the leakage due to instrumental widefield effects in visibility-based\npower spectra measured with redundant arrays, extending the delay-spectrum\napproach presented in Parsons et al. (2012). We construct polarized sky models\nand propagate them through the instrument model to simulate realistic full-sky\nobservations with the Precision Array to Probe the Epoch of Reionization. We\nfind that the leakage due to a population of polarized point sources is\nexpected to be higher than diffuse Galactic polarization at any $k$ mode for a\n30~m reference baseline. For the same reference baseline, a foreground-free\nwindow at $k > 0.3 \\, h$~Mpc$^{-1}$ can be defined in terms of leakage from\ndiffuse Galactic polarization even under the most pessimistic assumptions. If\nmeasurements of polarized foreground power spectra or a model of polarized\nforegrounds are given, our method is able to predict the polarization leakage\nin actual 21-cm observations, potentially enabling its statistical subtraction\nfrom the measured 21-cm power spectrum.\n", "title": "Constraining Polarized Foregrounds for EOR Experiments II: Polarization Leakage Simulations in the Avoidance Scheme" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18108, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " For autonomous robots in dynamic environments mixed with human, it is vital\nto detect impending collision quickly and robustly. The biological visual\nsystems evolved over millions of years may provide us efficient solutions for\ncollision detection in complex environments. In the cockpit of locusts, two\nLobula Giant Movement Detectors, i.e. LGMD1 and LGMD2, have been identified\nwhich respond to looming objects rigorously with high firing rates. Compared to\nLGMD1, LGMD2 matures early in the juvenile locusts with specific selectivity to\ndark moving objects against bright background in depth while not responding to\nlight objects embedded in dark background - a similar situation which ground\nvehicles and robots are facing with. However, little work has been done on\nmodeling LGMD2, let alone its potential in robotics and other vision-based\napplications. In this article, we propose a novel way of modeling LGMD2 neuron,\nwith biased ON and OFF pathways splitting visual streams into parallel channels\nencoding brightness increments and decrements separately to fulfill its\nselectivity. Moreover, we apply a biophysical mechanism of spike frequency\nadaptation to shape the looming selectivity in such a collision-detecting\nneuron model. The proposed visual neural network has been tested with\nsystematic experiments, challenged against synthetic and real physical stimuli,\nas well as image streams from the sensor of a miniature robot. The results\ndemonstrated this framework is able to detect looming dark objects embedded in\nbright backgrounds selectively, which make it ideal for ground mobile\nplatforms. The robotic experiments also showed its robustness in collision\ndetection - it performed well for near range navigation in an arena with many\nobstacles. Its enhanced collision selectivity to dark approaching objects\nversus receding and translating ones has also been verified via systematic\nexperiments.\n", "title": "Collision Selective Visual Neural Network Inspired by LGMD2 Neurons in Juvenile Locusts" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18109, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We analyze invariant measures of two coupled piecewise linear and everywhere\nexpanding maps on the synchronization manifold. We observe that though the\nindividual maps have simple and smooth functions as their stationary densities,\nthey become multifractal as soon as two of them are coupled nonlinearly even\nwith a small coupling. For some maps, the multifractal spectrum seems to be\nrobust with the coupling or map parameters and for some other maps, there is a\nsubstantial variation. The origin of the multifractal spectrum here is\nintriguing as it does not seem to conform to the existing theory of\nmultifractal functions.\n", "title": "Multifractal invariant measures in expanding piecewise linear coupled maps" }
prediction: null, prediction_agent: null, annotation: [ "Physics" ], annotation_agent: null, multi_label: true, explanation: null, id: 18110, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " Any virtually free group $H$ containing no non-trivial finite normal subgroup\n(e.g., the infinite dihedral group) is a retract of any finitely generated\ngroup containing $H$ as a verbally closed subgroup.\n", "title": "Virtually free finite-normal-subgroup-free groups are strongly verbally closed" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18111, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " This letter presents a performance comparison of two popular secrecy\nenhancement techniques in wireless networks: (i) creating guard zones by\nrestricting transmissions of legitimate transmitters whenever any eavesdropper\nis detected in their vicinity, and (ii) adding artificial noise to the\nconfidential messages to make it difficult for the eavesdroppers to decode\nthem. Focusing on a noise-limited regime, we use tools from stochastic geometry\nto derive the secrecy outage probability at the eavesdroppers as well as the\ncoverage probability at the legitimate users for both these techniques. Using\nthese results, we derive a threshold on the density of the eavesdroppers below\nwhich no secrecy enhancing technique is required to ensure a target secrecy\noutage probability. For eavesdropper densities above this threshold, we\nconcretely characterize the regimes in which each technique outperforms the\nother. Our results demonstrate that guard zone technique is better when the\ndistances between the transmitters and their legitimate receivers are higher\nthan a certain threshold.\n", "title": "Stochastic Geometry-based Comparison of Secrecy Enhancement Techniques in D2D Networks" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18112, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " In this paper, we consider stochastic dual coordinate (SDCA) {\\em without}\nstrongly convex assumption or convex assumption. We show that SDCA converges\nlinearly under mild conditions termed restricted strong convexity. This covers\na wide array of popular statistical models including Lasso, group Lasso, and\nlogistic regression with $\\ell_1$ regularization, corrected Lasso and linear\nregression with SCAD regularizer. This significantly improves previous\nconvergence results on SDCA for problems that are not strongly convex. As a by\nproduct, we derive a dual free form of SDCA that can handle general\nregularization term, which is of interest by itself.\n", "title": "Linear convergence of SDCA in statistical estimation" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18113, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Regularized inversion methods for image reconstruction are used widely due to\ntheir tractability and ability to combine complex physical sensor models with\nuseful regularity criteria. Such methods motivated the recently developed\nPlug-and-Play prior method, which provides a framework to use advanced\ndenoising algorithms as regularizers in inversion. However, the need to\nformulate regularized inversion as the solution to an optimization problem\nlimits the possible regularity conditions and physical sensor models.\nIn this paper, we introduce Consensus Equilibrium (CE), which generalizes\nregularized inversion to include a much wider variety of both forward\ncomponents and prior components without the need for either to be expressed\nwith a cost function. CE is based on the solution of a set of equilibrium\nequations that balance data fit and regularity. In this framework, the problem\nof MAP estimation in regularized inversion is replaced by the problem of\nsolving these equilibrium equations, which can be approached in multiple ways.\nThe key contribution of CE is to provide a novel framework for fusing\nmultiple heterogeneous models of physical sensors or models learned from data.\nWe describe the derivation of the CE equations and prove that the solution of\nthe CE equations generalizes the standard MAP estimate under appropriate\ncircumstances.\nWe also discuss algorithms for solving the CE equations, including ADMM with\na novel form of preconditioning and Newton's method. We give examples to\nillustrate consensus equilibrium and the convergence properties of these\nalgorithms and demonstrate this method on some toy problems and on a denoising\nexample in which we use an array of convolutional neural network denoisers,\nnone of which is tuned to match the noise level in a noisy image but which in\nconsensus can achieve a better result than any of them individually.\n", "title": "Plug-and-Play Unplugged: Optimization Free Reconstruction using Consensus Equilibrium" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18114, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Three dimensional galaxy clustering measurements provide a wealth of\ncosmological information. However, obtaining spectra of galaxies is expensive,\nand surveys often only measure redshifts for a subsample of a target galaxy\npopulation. Provided that the spectroscopic data is representative, we argue\nthat angular pair upweighting should be used in these situations to improve the\n3D clustering measurements. We present a toy model showing mathematically how\nsuch a weighting can improve measurements, and provide a practical example of\nits application using mocks created for the Baryon Oscillation Spectroscopic\nSurvey (BOSS). Our analysis of mocks suggests that, if an angular clustering\nmeasurement is available over twice the area covered spectroscopically,\nweighting gives a $\\sim$10-20% reduction of the variance of the monopole\ncorrelation function on the BAO scale.\n", "title": "Using angular pair upweighting to improve 3D clustering measurements" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18115, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Let $L_u=\\begin{bmatrix}1 & 0\\\\u & 1\\end{bmatrix}$ and $R_v=\\begin{bmatrix}1\n& v\\\\0 & 1\\end{bmatrix}$ be matrices in $SL_2(\\mathbb Z)$ with $u, v\\geq 1$.\nSince the monoid generated by $L_u$ and $R_v$ is free, we can associate a depth\nto each element based on its product representation. In the cases where $u=v=2$\nand $u=v=3$, Bromberg, Shpilrain, and Vdovina determined the depth $n$ matrices\ncontaining the maximal entry for each $n\\geq 1$. By using ideas from our\nprevious work on $(u,v)$-Calkin-Wilf trees, we extend their results for any $u,\nv\\geq 1$ and in the process we recover the Fibonacci and some Lucas sequences.\nAs a consequence we obtain bounds which guarantee collision resistance on a\nfamily of hashing functions based on $L_u$ and $R_v$.\n", "title": "Maximal entries of elements in certain matrix monoids" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18116, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " This work presents a methodology for modeling and predicting human behavior\nin settings with N humans interacting in highly multimodal scenarios (i.e.\nwhere there are many possible highly-distinct futures). A motivating example\nincludes robots interacting with humans in crowded environments, such as\nself-driving cars operating alongside human-driven vehicles or human-robot\ncollaborative bin packing in a warehouse. Our approach to model human behavior\nin such uncertain environments is to model humans in the scene as nodes in a\ngraphical model, with edges encoding relationships between them. For each\nhuman, we learn a multimodal probability distribution over future actions from\na dataset of multi-human interactions. Learning such distributions is made\npossible by recent advances in the theory of conditional variational\nautoencoders and deep learning approximations of probabilistic graphical\nmodels. Specifically, we learn action distributions conditioned on interaction\nhistory, neighboring human behavior, and candidate future agent behavior in\norder to take into account response dynamics. We demonstrate the performance of\nsuch a modeling approach in modeling basketball player trajectories, a highly\nmultimodal, multi-human scenario which serves as a proxy for many robotic\napplications.\n", "title": "Generative Modeling of Multimodal Multi-Human Behavior" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18117, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We study the problem of designing models for machine learning tasks defined\non \\emph{sets}. In contrast to traditional approach of operating on fixed\ndimensional vectors, we consider objective functions defined on sets that are\ninvariant to permutations. Such problems are widespread, ranging from\nestimation of population statistics \\cite{poczos13aistats}, to anomaly\ndetection in piezometer data of embankment dams \\cite{Jung15Exploration}, to\ncosmology \\cite{Ntampaka16Dynamical,Ravanbakhsh16ICML1}. Our main theorem\ncharacterizes the permutation invariant functions and provides a family of\nfunctions to which any permutation invariant objective function must belong.\nThis family of functions has a special structure which enables us to design a\ndeep network architecture that can operate on sets and which can be deployed on\na variety of scenarios including both unsupervised and supervised learning\ntasks. We also derive the necessary and sufficient conditions for permutation\nequivariance in deep models. We demonstrate the applicability of our method on\npopulation statistic estimation, point cloud classification, set expansion, and\noutlier detection.\n", "title": "Deep Sets" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science", "Statistics" ], annotation_agent: null, multi_label: true, explanation: null, id: 18118, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " In this paper, we consider a Markov chain choice model with single\ntransition. In this model, customers arrive at each product with a certain\nprobability. If the arrived product is unavailable, then the seller can\nrecommend a subset of available products to the customer and the customer will\npurchase one of the recommended products or choose not to purchase with certain\ntransition probabilities. The distinguishing features of the model are that the\nseller can control which products to recommend depending on the arrived product\nand that each customer either purchases a product or leaves the market after\none transition.\nWe study the assortment optimization problem under this model. Particularly,\nwe show that this problem is generally NP-Hard even if each product could only\ntransit to at most two products. Despite the complexity of the problem, we\nprovide polynomial time algorithms for several special cases, such as when the\ntransition probabilities are homogeneous with respect to the starting point, or\nwhen each product can only transit to one other product. We also provide a\ntight performance bound for revenue-ordered assortments. In addition, we\npropose a compact mixed integer program formulation that can solve this problem\nof large size. Through extensive numerical experiments, we show that the\nproposed algorithms can solve the problem efficiently and the obtained\nassortments could significantly improve the revenue of the seller than under\nthe Markov chain choice model.\n", "title": "Assortment Optimization under a Single Transition Model" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18119, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " SAS introduced Type III methods to address difficulties in dummy-variable\nmodels for effects of multiple factors and covariates. Type III methods are\nwidely used in practice; they are the default method in many statistical\ncomputing packages. Type III sums of squares (SSs) are defined by an algorithm,\nand an explicit mathematical formulation does not seem to exist. For that\nreason, their properties have not been rigorously proven. Some that are widely\nbelieved to be true are not always true. An explicit formulation is derived in\nthis paper. It is used as a basis to prove fundamental properties of Type III\nestimable functions and SSs. It is shown that, in any given setting, Type III\neffects include all estimable ANOVA effects, and that if all of an ANOVA effect\nis estimable then the Type III SS tests it exactly. The setting for these\nresults is general, comprising linear models for the mean vector of a response\nthat include arbitrary sets of effects of factors and covariates.\n", "title": "Deconstructing Type III" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18120, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Parents and teachers often express concern about the extensive use of social\nmedia by youngsters. Some of them see emoticons, undecipherable initialisms and\nloose grammar typical for social media as evidence of language degradation. In\nthis paper, we use a simple measure of text complexity to investigate how the\ncomplexity of public posts on a popular social networking site changes over\ntime. We analyze a unique dataset that contains texts posted by 942, 336 users\nfrom a large European city across nine years. We show that the chosen\ncomplexity measure is correlated with the academic performance of users: users\nfrom high-performing schools produce more complex texts than users from\nlow-performing schools. We also find that complexity of posts increases with\nage. Finally, we demonstrate that overall language complexity of posts on the\nsocial networking site is constantly increasing. We call this phenomenon the\ndigital Flynn effect. Our results may suggest that the worries about language\ndegradation are not warranted.\n", "title": "The Digital Flynn Effect: Complexity of Posts on Social Media Increases over Time" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: 18121, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " Existing logical models do not fairly represent epistemic situations with\nfallible justifications, e.g., Russell's Prime Minister example, though such\nscenarios have long been at the center of epistemic studies. We introduce\njustification epistemic models, JEM, which can handle such scenarios. JEM makes\njustifications prime objects and draws a distinction between accepted and\nknowledge-producing justifications; belief and knowledge become derived\nnotions. Furthermore, Kripke models can be viewed as special cases of JEMs with\nadditional assumptions of evidence insensitivity and common knowledge of the\nmodel. We argue that JEM can be applied to a range of epistemic scenarios in\nCS, AI, Game Theory, etc.\n", "title": "Epistemic Modeling with Justifications" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18122, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We study the critical behavior of the 2D $N$-color Ashkin-Teller model in the\npresence of random bond disorder whose correlations decays with the distance\n$r$ as a power-law $r^{-a}$. We consider the case when the spins of different\ncolors sitting at the same site are coupled by the same bond and map this\nproblem onto the 2D system of $N/2$ flavors of interacting Dirac fermions in\nthe presence of correlated disorder. Using renormalization group we show that\nfor $N=2$, a \"weakly universal\" scaling behavior at the continuous transition\nbecomes universal with new critical exponents. For $N>2$, the first-order phase\ntransition is rounded by the correlated disorder and turns into a continuous\none.\n", "title": "Emergent universal critical behavior of the 2D $N$-color Ashkin-Teller model in the presence of correlated disorder" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18123, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We study the conductance of a junction between the normal and superconducting\nsegments of a nanowire, both of which are subjected to spin-orbit coupling and\nan external magnetic field. We directly compare the transport properties of the\nnanowire assuming two different models for the superconducting segment: one\nwhere we put superconductivity by hand into the wire, and one where\nsuperconductivity is induced through a tunneling junction with a bulk s-wave\nsuperconductor. While these two models are equivalent at low energies and at\nweak coupling between the nanowire and the superconductor, we show that there\nare several interesting qualitative differences away from these two limits. In\nparticular, the tunneling model introduces an additional conductance peak at\nthe energy corresponding to the bulk gap of the parent superconductor. By\nemploying a combination of analytical methods at zero temperature and numerical\nmethods at finite temperature, we show that the tunneling model of the\nproximity effect reproduces many more of the qualitative features that are seen\nexperimentally in such a nanowire system.\n", "title": "Transport signatures of topological superconductivity in a proximity-coupled nanowire" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18124, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Training of neural machine translation (NMT) models usually uses mini-batches\nfor efficiency purposes. During the mini-batched training process, it is\nnecessary to pad shorter sentences in a mini-batch to be equal in length to the\nlongest sentence therein for efficient computation. Previous work has noted\nthat sorting the corpus based on the sentence length before making mini-batches\nreduces the amount of padding and increases the processing speed. However,\ndespite the fact that mini-batch creation is an essential step in NMT training,\nwidely used NMT toolkits implement disparate strategies for doing so, which\nhave not been empirically validated or compared. This work investigates\nmini-batch creation strategies with experiments over two different datasets.\nOur results suggest that the choice of a mini-batch creation strategy has a\nlarge effect on NMT training and some length-based sorting strategies do not\nalways work well compared with simple shuffling.\n", "title": "An Empirical Study of Mini-Batch Creation Strategies for Neural Machine Translation" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18125, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Chromosome conformation capture and Hi-C technologies provide gene-gene\nproximity datasets of stationary cells, revealing chromosome territories,\ntopologically associating domains, and chromosome topology. Imaging of tagged\nDNA sequences in live cells through the lac operator reporter system provides\ndynamic datasets of chromosomal loci. Chromosome modeling explores the\nmechanisms underlying 3D genome structure and dynamics. Here, we automate 4D\ngenome dataset analysis with network-based tools as an alternative to gene-gene\nproximity statistics and visual structure determination. Temporal network\nmodels and community detection algorithms are applied to 4D modeling of G1 in\nbudding yeast with transient crosslinking of $5 kb$ domains in the nucleolus,\nanalyzing datasets from four decades of transient binding timescales. Network\ntools detect and track transient gene communities (clusters) within the\nnucleolus, their size, number, persistence time, and frequency of gene\nexchanges. An optimal, weak binding affinity is revealed that maximizes\ncommunity-scale plasticity whereby large communities persist, frequently\nexchanging genes.\n", "title": "Network analyses of 4D genome datasets automate detection of community-scale gene structure and plasticity" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18126, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " It is well-known that the verification of partial correctness properties of\nimperative programs can be reduced to the satisfiability problem for\nconstrained Horn clauses (CHCs). However, state-of-the-art solvers for CHCs\n(CHC solvers) based on predicate abstraction are sometimes unable to verify\nsatisfiability because they look for models that are definable in a given class\nA of constraints, called A-definable models. We introduce a transformation\ntechnique, called Predicate Pairing (PP), which is able, in many interesting\ncases, to transform a set of clauses into an equisatisfiable set whose\nsatisfiability can be proved by finding an A-definable model, and hence can be\neffectively verified by CHC solvers. We prove that, under very general\nconditions on A, the unfold/fold transformation rules preserve the existence of\nan A-definable model, i.e., if the original clauses have an A-definable model,\nthen the transformed clauses have an A-definable model. The converse does not\nhold in general, and we provide suitable conditions under which the transformed\nclauses have an A-definable model iff the original ones have an A-definable\nmodel. Then, we present the PP strategy which guides the application of the\ntransformation rules with the objective of deriving a set of clauses whose\nsatisfiability can be proved by looking for A-definable models. PP introduces a\nnew predicate defined by the conjunction of two predicates together with some\nconstraints. We show through some examples that an A-definable model may exist\nfor the new predicate even if it does not exist for its defining atomic\nconjuncts. We also present some case studies showing that PP plays a crucial\nrole in the verification of relational properties of programs (e.g., program\nequivalence and non-interference). Finally, we perform an experimental\nevaluation to assess the effectiveness of PP in increasing the power of CHC\nsolving.\n", "title": "Predicate Pairing for Program Verification" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: 18127, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " We study the growth of entanglement entropy in density matrix renormalization\ngroup calculations of the real-time quench dynamics of the Anderson impurity\nmodel. We find that with appropriate choice of basis, the entropy growth is\nlogarithmic in both the interacting and noninteracting single-impurity models.\nThe logarithmic entropy growth is understood from a noninteracting chain model\nas a critical behavior separating regimes of linear growth and saturation of\nentropy, corresponding respectively to an overlapping and gapped energy spectra\nof the set of bath states. We find that with an appropriate choices of basis\n(energy-ordered bath orbitals), logarithmic entropy growth is the generic\nbehavior of quenched impurity models. A noninteracting calculation of a\ndouble-impurity Anderson model supports the conclusion in the multi-impurity\ncase. The logarithmic growth of entanglement entropy enables studies of quench\ndynamics to very long times.\n", "title": "Entanglement entropy and computational complexity of the Anderson impurity model out of equilibrium I: quench dynamics" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18128, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " In this work, we show that saturating output activation functions, such as\nthe softmax, impede learning on a number of standard classification tasks.\nMoreover, we present results showing that the utility of softmax does not stem\nfrom the normalization, as some have speculated. In fact, the normalization\nmakes things worse. Rather, the advantage is in the exponentiation of error\ngradients. This exponential gradient boosting is shown to speed up convergence\nand improve generalization. To this end, we demonstrate faster convergence and\nbetter performance on diverse classification tasks: image classification using\nCIFAR-10 and ImageNet, and semantic segmentation using PASCAL VOC 2012. In the\nlatter case, using the state-of-the-art neural network architecture, the model\nconverged 33% faster with our method (roughly two days of training less) than\nwith the standard softmax activation, and with a slightly better performance to\nboot.\n", "title": "Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18129, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We defined a notion of quantum 2-torus $T_\\theta$ in \"Masanori Itai and Boris\nZilber, Notes on a model theory of quantum 2-torus $T_q^2$ for generic $q$,\narXiv:1503.06045v1 [mathLO]\" and studied its model theoretic property. In this\nnote we associate quantum 2-tori $T_\\theta$ with the structure over ${\\mathbb\nC}_\\theta = ({\\mathbb C}, +, \\cdot, y = x^\\theta),$ where $\\theta \\in {\\mathbb\nR} \\setminus {\\mathbb Q}$, and introduce the notion of geometric isomorphisms\nbetween such quantum 2-tori.\nWe show that this notion is closely connected with the fundamental notion of\nMorita equivalence of non-commutative geometry. Namely, we prove that the\nquantum 2-tori $T_{\\theta_1}$ and $T_{\\theta_2}$ are Morita equivalent if and\nonly if $\\theta_2 = {\\displaystyle \\frac{a \\theta_1 + b}{c \\theta_1 + d}}$ for\nsome $ \\left( \\begin{array}{cc} a & b \\\\ c & d \\end{array} \\right)\n\\in {\\rm GL}_2({\\mathbb Z})$ with $|ad - bc| = 1$. This is our version of\nRieffel's Theorem in \"M. A. Rieffel and A. Schwarz, Morita equivalence of\nmultidimensional noncummutative tori, Internat. J. Math. 10, 2 (1999) 289-299\"\nwhich characterises Morita equivalence of quantum tori in the same terms.\nThe result in essence confirms that the representation $T_\\theta$ in terms of\nmodel-theoretic geometry \\cite{IZ} is adequate to its original definition in\nterms of non-commutative geometry.\n", "title": "A model theoretic Rieffel's theorem of quantum 2-torus" }
prediction: null, prediction_agent: null, annotation: [ "Mathematics" ], annotation_agent: null, multi_label: true, explanation: null, id: 18130, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " We release two artificial datasets, Simulated Flying Shapes and Simulated\nPlanar Manipulator that allow to test the learning ability of video processing\nsystems. In particular, the dataset is meant as a tool which allows to easily\nassess the sanity of deep neural network models that aim to encode, reconstruct\nor predict video frame sequences. The datasets each consist of 90000 videos.\nThe Simulated Flying Shapes dataset comprises scenes showing two objects of\nequal shape (rectangle, triangle and circle) and size in which one object\napproaches its counterpart. The Simulated Planar Manipulator shows a 3-DOF\nplanar manipulator that executes a pick-and-place task in which it has to place\na size-varying circle on a squared platform. Different from other widely used\ndatasets such as moving MNIST [1], [2], the two presented datasets involve\ngoal-oriented tasks (e.g. the manipulator grasping an object and placing it on\na platform), rather than showing random movements. This makes our datasets more\nsuitable for testing prediction capabilities and the learning of sophisticated\nmotions by a machine learning model. This technical document aims at providing\nan introduction into the usage of both datasets.\n", "title": "Introducing the Simulated Flying Shapes and Simulated Planar Manipulator Datasets" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18131, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We introduce a new probabilistic approach to quantify convergence to\nequilibrium for (kinetic) Langevin processes. In contrast to previous analytic\napproaches that focus on the associated kinetic Fokker-Planck equation, our\napproach is based on a specific combination of reflection and synchronous\ncoupling of two solutions of the Langevin equation. It yields contractions in a\nparticular Wasserstein distance, and it provides rather precise bounds for\nconvergence to equilibrium at the borderline between the overdamped and the\nunderdamped regime. In particular, we are able to recover kinetic behavior in\nterms of explicit lower bounds for the contraction rate. For example, for a\nrescaled double-well potential with local minima at distance $a$, we obtain a\nlower bound for the contraction rate of order $\\Omega (a^{-1})$ provided the\nfriction coefficient is of order $\\Theta (a^{-1})$.\n", "title": "Couplings and quantitative contraction rates for Langevin dynamics" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18132, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Tropical cyclone wind-intensity prediction is a challenging task considering\ndrastic changes climate patterns over the last few decades. In order to develop\nrobust prediction models, one needs to consider different characteristics of\ncyclones in terms of spatial and temporal characteristics. Transfer learning\nincorporates knowledge from a related source dataset to compliment a target\ndatasets especially in cases where there is lack or data. Stacking is a form of\nensemble learning focused for improving generalization that has been recently\nused for transfer learning problems which is referred to as transfer stacking.\nIn this paper, we employ transfer stacking as a means of studying the effects\nof cyclones whereby we evaluate if cyclones in different geographic locations\ncan be helpful for improving generalization performs. Moreover, we use\nconventional neural networks for evaluating the effects of duration on cyclones\nin prediction performance. Therefore, we develop an effective strategy that\nevaluates the relationships between different types of cyclones through\ntransfer learning and conventional learning methods via neural networks.\n", "title": "Stacked transfer learning for tropical cyclone intensity prediction" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18133, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Given pairwise distinct vertices $\\{\\alpha_i , \\beta_i\\}^k_{i=1}$ of the\n$n$-dimensional hypercube $Q_n$ such that the distance of $\\alpha_i$ and\n$\\beta_i$ is odd, are there paths $P_i$ between $\\alpha_i$ and $\\beta_i$ such\nthat $\\{V (P_i)\\}^k_{i=1}$ partitions $V(Q_n)$? A positive solution for every\n$n\\ge1$ and $k=1$ is known as a Gray code of dimension $n$. In this paper we\nsettle this problem for small values of $n$.\n", "title": "Generalized Gray codes with prescribed ends of small dimensions" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: 18134, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " Cirquent calculus is a proof system manipulating circuit-style constructs\nrather than formulas. Using it, this article constructs a sound and complete\naxiomatization CL16 of the propositional fragment of computability logic (the\ngame-semantically conceived logic of computational problems - see\nthis http URL ) whose logical vocabulary consists\nof negation and parallel and choice connectives, and whose atoms represent\nelementary, i.e. moveless, games.\n", "title": "Elementary-base cirquent calculus I: Parallel and choice connectives" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18135, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Dual-functional nanoparticles, with the property of aggregation-induced\nemission and the capability of reactive oxygen species, were used to achieve\npassive/active targeting of tumor. Good contrast in in vivo imaging and obvious\ntherapeutic efficiency were realized with a low dose of AIE nanoparticles as\nwell as a low power density of light, resulting in negligible side effects.\n", "title": "Targeted and Imaging-guided In Vivo Photodynamic Therapy of Tumors Using Dual-functional, Aggregation-induced Emission Nanoparticles" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18136, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Photometric observations of planetary transits may show localized bumps,\ncalled transit anomalies, due to the possible crossing of photospheric\nstarspots. The aim of this work is to analyze the transit anomalies and derive\nthe temperature profile inside the transit belt along the transit direction. We\ndevelop the algorithm TOSC, a tomographic inverse-approach tool which, by means\nof simple algebra, reconstructs the flux distribution along the transit belt.\nWe test TOSC against some simulated scenarios. We find that TOSC provides\nrobust results for light curves with photometric accuracies better than 1~mmag,\nreturning the spot-photosphere temperature contrast with an accuracy better\nthan 100~K. TOSC is also robust against the presence of unocculted spots,\nprovided that the apparent planetary radius given by the fit of the transit\nlight curve is used in place of the true radius. The analysis of real data with\nTOSC returns results consistent with previous studies.\n", "title": "TOSC: an algorithm for the tomography of spotted transit chords" }
prediction: null, prediction_agent: null, annotation: [ "Physics" ], annotation_agent: null, multi_label: true, explanation: null, id: 18137, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " While there exist a number of mathematical approaches to modeling the spread\nof disease on a network, analyzing such systems in the presence of uncertainty\nintroduces significant complexity. In scenarios where system parameters must be\ninferred from limited observations, general approaches to uncertainty\nquantification can generate approximate distributions of the unknown\nparameters, but these methods often become computationally expensive if the\nunderlying disease model is complex. In this paper, we apply the recent\nmassively parallelizable Bayesian uncertainty quantification framework $\\Pi4U$\nto a model of a disease spreading on a network of communities, showing that the\nmethod can accurately and tractably recover system parameters and select\noptimal models in this setting.\n", "title": "Bayesian uncertainty quantification for epidemic spread on networks" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18138, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " When recorded in an enclosed room, a sound signal will most certainly get\naffected by reverberation. This not only undermines audio quality, but also\nposes a problem for many human-machine interaction technologies that use speech\nas their input. In this work, a new blind, two-stage dereverberation approach\nbased in a generalized \\beta-divergence as a fidelity term over a non-negative\nrepresentation is proposed. The first stage consists of learning the spectral\nstructure of the signal solely from the observed spectrogram, while the second\nstage is devoted to model reverberation. Both steps are taken by minimizing a\ncost function in which the aim is put either in constructing a dictionary or a\ngood representation by changing the divergence involved. In addition, an\napproach for finding an optimal fidelity parameter for dictionary learning is\nproposed. An algorithm for implementing the proposed method is described and\ntested against state-of-the-art methods. Results show improvements for both\nartificial reverberation and real recordings.\n", "title": "Switching divergences for spectral learning in blind speech dereverberation" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18139, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " The sizes of entire systems of globular clusters (GCs) depend on the\nformation and destruction histories of the GCs themselves, but also on the\nassembly, merger and accretion history of the dark matter (DM) haloes that they\ninhabit. Recent work has shown a linear relation between total mass of globular\nclusters in the globular cluster system and the mass of its host dark matter\nhalo, calibrated from weak lensing. Here we extend this to GC system sizes, by\nstudying the radial density profiles of GCs around galaxies in nearby galaxy\ngroups. We find that radial density profiles of the GC systems are well fit\nwith a de Vaucouleurs profile. Combining our results with those from the\nliterature, we find tight relationship ($\\sim 0.2$ dex scatter) between the\neffective radius of the GC system and the virial radius (or mass) of its host\nDM halo. The steep non-linear dependence of this relationship ($R_{e, GCS}\n\\propto R_{200}^{2.5 - 3}$) is currently not well understood, but is an\nimportant clue regarding the assembly history of DM haloes and of the GC\nsystems that they host.\n", "title": "The correlation between the sizes of globular cluster systems and their host dark matter haloes" }
prediction: null, prediction_agent: null, annotation: [ "Physics" ], annotation_agent: null, multi_label: true, explanation: null, id: 18140, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " The field enhancement factor at the emitter tip and its variation in a close\nneighbourhood determines the emitter current in a Fowler-Nordheim like\nformulation. For an axially symmetric emitter with a smooth tip, it is shown\nthat the variation can be accounted by a $\\cos{\\tilde{\\theta}}$ factor in\nappropriately defined normalized co-ordinates. This is shown analytically for a\nhemi-ellipsoidal emitter and confirmed numerically for other emitter shapes\nwith locally quadratic tips.\n", "title": "Variation of field enhancement factor near the emitter tip" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18141, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " As demand drives systems to generalize to various domains and problems, the\nstudy of multitask, transfer and lifelong learning has become an increasingly\nimportant pursuit. In discrete domains, performance on the Atari game suite has\nemerged as the de facto benchmark for assessing multitask learning. However, in\ncontinuous domains there is a lack of agreement on standard multitask\nevaluation environments which makes it difficult to compare different\napproaches fairly. In this work, we describe a benchmark set of tasks that we\nhave developed in an extendable framework based on OpenAI Gym. We run a simple\nbaseline using Trust Region Policy Optimization and release the framework\npublicly to be expanded and used for the systematic comparison of multitask,\ntransfer, and lifelong learning in continuous domains.\n", "title": "Benchmark Environments for Multitask Learning in Continuous Domains" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18142, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " In this paper, market values of the football players in the forward positions\nare estimated using multiple linear regression by including the physical and\nperformance factors in 2017-2018 season. Players from 4 major leagues of Europe\nare examined, and by applying the test for homoscedasticity, a reasonable\nregression model within 0.10 significance level is built, and the most and the\nleast affecting factors are explained in detail.\n", "title": "A Multiple Linear Regression Approach For Estimating the Market Value of Football Players in Forward Position" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18143, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " The Huang-Hilbert transform is applied to Seismic Electric Signal (SES)\nactivities in order to decompose them into a number of Intrinsic Mode Functions\n(IMFs) and study which of these functions better represent the SES. The results\nare compared to those obtained from the analysis in a new time domain termed\nnatural time after having subtracted the magnetotelluric background from the\noriginal signal. It is shown that the instantaneous amplitudes of the IMFs can\nbe used for the distinction of SES from artificial noises when combined with\nthe natural time analysis.\n", "title": "Application of the Huang-Hilbert transform and natural time to the analysis of Seismic Electric Signal activities" }
prediction: null, prediction_agent: null, annotation: [ "Physics" ], annotation_agent: null, multi_label: true, explanation: null, id: 18144, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " We prove that the exceptional group $E_6(2)$ is not a Hurwitz group. In the\ncourse of proving this, we complete the classification up to conjugacy of all\nHurwitz subgroups of $E_6(2)$, in particular, those isomorphic to $L_2(8)$ and\n$L_3(2)$.\n", "title": "The Hurwitz Subgroups of $E_6(2)$" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18145, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " The rapid development of artificial intelligence has brought the artificial\nintelligence threat theory as well as the problem about how to evaluate the\nintelligence level of intelligent products. Both need to find a quantitative\nmethod to evaluate the intelligence level of intelligence systems, including\nhuman intelligence. Based on the standard intelligence system and the extended\nVon Neumann architecture, this paper proposes General IQ, Service IQ and Value\nIQ evaluation methods for intelligence systems, depending on different\nevaluation purposes. Among them, the General IQ of intelligence systems is to\nanswer the question of whether the artificial intelligence can surpass the\nhuman intelligence, which is reflected in putting the intelligence systems on\nan equal status and conducting the unified evaluation. The Service IQ and Value\nIQ of intelligence systems are used to answer the question of how the\nintelligent products can better serve the human, reflecting the intelligence\nand required cost of each intelligence system as a product in the process of\nserving human.\n", "title": "Three IQs of AI Systems and their Testing Methods" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18146, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Let M be a smooth manifold, and let O(M) be the poset of open subsets of M.\nManifold calculus, due to Goodwillie and Weiss, is a calculus of functors\nsuitable for studying contravariant functors (cofunctors) F: O(M)--> Top from\nO(M) to the category of spaces. Weiss showed that polynomial cofunctors of\ndegree <= k are determined by their values on O_k(M), where O_k(M) is the full\nsubposet of O(M) whose objects are open subsets diffeomorphic to the disjoint\nunion of at most k balls. Afterwards Pryor showed that one can replace O_k(M)\nby more general subposets and still recover the same notion of polynomial\ncofunctor. In this paper, we generalize these results to cofunctors from O(M)\nto any simplicial model category C. If conf(k, M) stands for the unordered\nconfiguration space of k points in M, we also show that the category of\nhomogeneous cofunctors O(M) --> C of degree k is weakly equivalent to the\ncategory of linear cofunctors O(conf(k, M)) --> C provided that C has a zero\nobject. Using a completely different approach, we also show that if C is a\ngeneral model category and F: O_k(M) --> C is an isotopy cofunctor, then the\nhomotopy right Kan extension of F along the inclusion O_k(M) --> O(M) is also\nan isotopy cofunctor.\n", "title": "Polynomial functors in manifold calculus" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18147, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " It is a well-known fact that adding noise to the input data often improves\nnetwork performance. While the dropout technique may be a cause of memory loss,\nwhen it is applied to recurrent connections, Tikhonov regularization, which can\nbe regarded as the training with additive noise, avoids this issue naturally,\nthough it implies regularizer derivation for different architectures. In case\nof feedforward neural networks this is straightforward, while for networks with\nrecurrent connections and complicated layers it leads to some difficulties. In\nthis paper, a Tikhonov regularizer is derived for Long-Short Term Memory (LSTM)\nnetworks. Although it is independent of time for simplicity, it considers\ninteraction between weights of the LSTM unit, which in theory makes it possible\nto regularize the unit with complicated dependences by using only one parameter\nthat measures the input data perturbation. The regularizer that is proposed in\nthis paper has three parameters: one to control the regularization process, and\nother two to maintain computation stability while the network is being trained.\nThe theory developed in this paper can be applied to get such regularizers for\ndifferent recurrent neural networks with Hadamard products and Lipschitz\ncontinuous functions.\n", "title": "Tikhonov Regularization for Long Short-Term Memory Networks" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science", "Statistics" ], annotation_agent: null, multi_label: true, explanation: null, id: 18148, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " We deal with hypersurfaces in the framework of the $n$-dimensional relative\ndifferential geometry. We consider a hypersurface $\\varPhi$ of\n$\\mathbb{R}^{n+1}$ with position vector field $\\mathbf{x}$, which is relatively\nnormalized by a relative normalization $\\mathbf{y}$. Then $\\mathbf{y}$ is also\na relative normalization of every member of the one-parameter family\n$\\mathcal{F}$ of hypersurfaces $\\varPhi_\\mu$ with position vector field\n$$\\mathbf{x}_\\mu = \\mathbf{x} + \\mu \\, \\mathbf{y},$$ where $\\mu$ is a real\nconstant. We call every hypersurface $\\varPhi_\\mu \\in \\mathcal{F}$ relatively\nparallel to $\\varPhi$ at the \"relative distance\" $\\mu$. In this paper we study\n(a) the shape (or Weingarten) operator,\n(b) the relative principal curvatures,\n(c) the relative mean curvature functions and\n(d) the affine normalization\nof a relatively parallel hypersurface $\\left( \\varPhi_\\mu,\\mathbf{y}\\right)$\nto $\\left(\\varPhi,\\mathbf{y}\\right)$.\n", "title": "On the shape operator of relatively parallel hypersurfaces in the $n$-dimensional relative differential geometry" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 18149, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We present a fully edible pneumatic actuator based on gelatin-glycerol\ncomposite. The actuator is monolithic, fabricated via a molding process, and\nmeasures 90 mm in length, 20 mm in width, and 17 mm in thickness. Thanks to the\ncomposite mechanical characteristics similar to those of silicone elastomers,\nthe actuator exhibits a bending angle of 170.3 ° and a blocked force of\n0.34 N at the applied pressure of 25 kPa. These values are comparable to\nelastomer based pneumatic actuators. As a validation example, two actuators are\nintegrated to form a gripper capable of handling various objects, highlighting\nthe high performance and applicability of the edible actuator. These edible\nactuators, combined with other recent edible materials and electronics, could\nlay the foundation for a new type of edible robots.\n", "title": "Soft Pneumatic Gelatin Actuator for Edible Robotics" }
null
null
null
null
true
null
18150
null
Default
null
null
null
{ "abstract": " State-of-the-art methods for protein-protein interaction (PPI) extraction are\nprimarily feature-based or kernel-based by leveraging lexical and syntactic\ninformation. But how to incorporate such knowledge in the recent deep learning\nmethods remains an open question. In this paper, we propose a multichannel\ndependency-based convolutional neural network model (McDepCNN). It applies one\nchannel to the embedding vector of each word in the sentence, and another\nchannel to the embedding vector of the head of the corresponding word.\nTherefore, the model can use richer information obtained from different\nchannels. Experiments on two public benchmarking datasets, AIMed and BioInfer,\ndemonstrate that McDepCNN compares favorably to the state-of-the-art\nrich-feature and single-kernel based methods. In addition, McDepCNN achieves\n24.4% relative improvement in F1-score over the state-of-the-art methods on\ncross-corpus evaluation and 12% improvement in F1-score over kernel-based\nmethods on \"difficult\" instances. These results suggest that McDepCNN\ngeneralizes more easily over different corpora, and is capable of capturing\nlong distance features in the sentences.\n", "title": "Deep learning for extracting protein-protein interactions from biomedical literature" }
null
null
[ "Computer Science" ]
null
true
null
18151
null
Validated
null
null
null
{ "abstract": " Decision making is a process that is extremely prone to different biases. In\nthis paper we consider learning fair representations that aim at removing\nnuisance (sensitive) information from the decision process. For this purpose,\nwe propose to use deep generative modeling and adapt a hierarchical Variational\nAuto-Encoder to learn these fair representations. Moreover, we utilize the\nmutual information as a useful regularizer for enforcing fairness of a\nrepresentation. In experiments on two benchmark datasets and two scenarios\nwhere the sensitive variables are fully and partially observable, we show that\nthe proposed approach either outperforms or performs on par with the current\nbest model.\n", "title": "Hierarchical VampPrior Variational Fair Auto-Encoder" }
null
null
[ "Statistics" ]
null
true
null
18152
null
Validated
null
null
null
{ "abstract": " Recent reports claiming tentative association of the massive star binary\nsystem gamma^2 Velorum (WR 11) with a high-energy gamma-ray source observed by\nFermi-LAT contrast the so-far exclusive role of Eta Carinae as the hitherto\nonly detected gamma-ray emitter in the source class of particle-accelerating\ncolliding-wind binary systems. We aim to shed light on this claim of\nassociation by providing dedicated model predictions for the nonthermal photon\nemission spectrum of WR 11. We use three-dimensional magneto-hydrodynamic\nmodeling to trace the structure and conditions of the wind-collision region of\nWR 11 throughout its 78.5 day orbit, including the important effect of\nradiative braking in the stellar winds. A transport equation is then solved in\nthe wind-collision region to determine the population of relativistic electrons\nand protons which are subsequently used to compute nonthermal photon emission\ncomponents. We find that - if WR 11 be indeed confirmed as the responsible\nobject for the observed gamma-ray emission - its radiation will unavoidably be\nof hadronic origin owing to the strong radiation fields in the binary system\nwhich inhibit the acceleration of electrons to energies suffciently high for\nobservable inverse Compton radiation. Different conditions in wind-collision\nregion near the apastron and periastron configuration lead to significant\nvariability on orbital time scales. The bulk of the hadronic gamma-ray emission\noriginates at a 400 solar radii wide region at the apex.\n", "title": "MHD Models of Gamma-ray Emission in WR 11" }
null
null
[ "Physics" ]
null
true
null
18153
null
Validated
null
null
null
{ "abstract": " Simplistic estimation of neural connectivity in MEEG sensor space is\nimpossible due to volume conduction. The only viable alternative is to carry\nout connectivity estimation in source space. Among the neuroscience community\nthis is claimed to be impossible or misleading due to Leakage: linear mixing of\nthe reconstructed sources. To address this problematic we propose a novel\nsolution method that caulks the Leakage in MEEG source activity and\nconnectivity estimates: BC-VARETA. It is based on a joint estimation of source\nactivity and connectivity in the frequency domain representation of MEEG time\nseries. To achieve this, we go beyond current methods that assume a fixed\ngaussian graphical model for source connectivity. In contrast we estimate this\ngraphical model in a Bayesian framework by placing priors on it, which allows\nfor highly optimized computations of the connectivity, via a new procedure\nbased on the local quadratic approximation under quite general prior models. A\nfurther contribution of this paper is the rigorous definition of leakage via\nthe Spatial Dispersion Measure and Earth Movers Distance based on the geodesic\ndistances over the cortical manifold. Both measures are extended for the first\ntime to quantify Connectivity Leakage by defining them on the cartesian product\nof cortical manifolds. Using these measures, we show that BC-VARETA outperforms\nmost state of the art inverse solvers by several orders of magnitude.\n", "title": "Caulking the Leakage Effect in MEEG Source Connectivity Analysis" }
null
null
[ "Quantitative Biology" ]
null
true
null
18154
null
Validated
null
null
null
{ "abstract": " In this paper we provide an update concerning the operations of the NASA\nAstrophysics Data System (ADS), its services and user interface, and the\ncontent currently indexed in its database. As the primary information system\nused by researchers in Astronomy, the ADS aims to provide a comprehensive index\nof all scholarly resources appearing in the literature. With the current effort\nin our community to support data and software citations, we discuss what steps\nthe ADS is taking to provide the needed infrastructure in collaboration with\npublishers and data providers. A new API provides access to the ADS search\ninterface, metrics, and libraries allowing users to programmatically automate\ndiscovery and curation tasks. The new ADS interface supports a greater\nintegration of content and services with a variety of partners, including ORCID\nclaiming, indexing of SIMBAD objects, and article graphics from a variety of\npublishers. Finally, we highlight how librarians can facilitate the ingest of\ngray literature that they curate into our system.\n", "title": "New ADS Functionality for the Curator" }
null
null
null
null
true
null
18155
null
Default
null
null
null
{ "abstract": " Reason and inference require process as well as memory skills by humans.\nNeural networks are able to process tasks like image recognition (better than\nhumans) but in memory aspects are still limited (by attention mechanism, size).\nRecurrent Neural Network (RNN) and it's modified version LSTM are able to solve\nsmall memory contexts, but as context becomes larger than a threshold, it is\ndifficult to use them. The Solution is to use large external memory. Still, it\nposes many challenges like, how to train neural networks for discrete memory\nrepresentation, how to describe long term dependencies in sequential data etc.\nMost prominent neural architectures for such tasks are Memory networks:\ninference components combined with long term memory and Neural Turing Machines:\nneural networks using external memory resources. Also, additional techniques\nlike attention mechanism, end to end gradient descent on discrete memory\nrepresentation are needed to support these solutions. Preliminary results of\nabove neural architectures on simple algorithms (sorting, copying) and Question\nAnswering (based on story, dialogs) application are comparable with the state\nof the art. In this paper, I explain these architectures (in general), the\nadditional techniques used and the results of their application.\n", "title": "Survey of reasoning using Neural networks" }
null
null
null
null
true
null
18156
null
Default
null
null
null
{ "abstract": " The pharmaceutical industry has witnessed exponential growth in transforming\noperations towards continuous manufacturing to effectively achieve increased\nprofitability, reduced waste, and extended product range. Model Predictive\nControl (MPC) can be applied for enabling this vision, in providing superior\nregulation of critical quality attributes. For MPC, obtaining a workable model\nis of fundamental importance, especially in the presence of complex reaction\nkinetics and process dynamics. Whilst physics-based models are desirable, it is\nnot always practical to obtain one effective and fit-for-purpose model.\nInstead, within industry, data-driven system-identification approaches have\nbeen found to be useful and widely deployed in MPC solutions. In this work, we\ndemonstrated the applicability of Recurrent Neural Networks (RNNs) for MPC\napplications in continuous pharmaceutical manufacturing. We have shown that\nRNNs are especially well-suited for modeling dynamical systems due to their\nmathematical structure and satisfactory closed-loop control performance can be\nyielded for MPC in continuous pharmaceutical manufacturing.\n", "title": "Recurrent Neural Network-based Model Predictive Control for Continuous Pharmaceutical Manufacturing" }
null
null
null
null
true
null
18157
null
Default
null
null
null
{ "abstract": " Let $K$ be a simply connected compact Lie group and $T^{\\ast}(K)$ its\ncotangent bundle. We consider the problem of \"quantization commutes with\nreduction\" for the adjoint action of $K$ on $T^{\\ast}(K).$ We quantize both\n$T^{\\ast}(K)$ and the reduced phase space using geometric quantization with\nhalf-forms. We then construct a geometrically natural map from the space of\ninvariant elements in the quantization of $T^{\\ast}(K)$ to the quantization of\nthe reduced phase space. We show that this map is a constant multiple of a\nunitary map.\n", "title": "A unitary \"quantization commutes with reduction\" map for the adjoint action of a compact Lie group" }
null
null
null
null
true
null
18158
null
Default
null
null
null
{ "abstract": " We performed magnetic field and frequency tunable electron paramagnetic\nresonance spectroscopy of an Er$^{3+}$ doped Y$_2$SiO$_5$ crystal by observing\nthe change in flux induced on a direct current-superconducting quantum\ninterference device (dc-SQUID) loop of a tunable Josephson bifurcation\namplifer. The observed spectra show multiple transitions which agree well with\nthe simulated energy levels, taking into account the hyperfine and quadrupole\ninteractions of $^{167}$Er. The sensing volume is about 0.15 pl, and our\ninferred measurement sensitivity (limited by external flux noise) is\napproximately $1.5\\times10^4$ electron spins for a 1 s measurement. The\nsensitivity value is two orders of magnitude better than similar schemes using\ndc-SQUID switching readout.\n", "title": "Electron Paramagnetic Resonance Spectroscopy of Er$^{3+}$:Y$_2$SiO$_5$ Using Josephson Bifurcation Amplifier: Observation of Hyperfine and Quadrupole Structures" }
null
null
null
null
true
null
18159
null
Default
null
null
null
{ "abstract": " In this paper, a new Smartphone sensor based algorithm is proposed to detect\naccurate distance estimation. The algorithm consists of two phases, the first\nphase is for detecting the peaks from the Smartphone accelerometer sensor. The\nother one is for detecting the step length which varies from step to step. The\nproposed algorithm is tested and implemented in real environment and it showed\npromising results. Unlike the conventional approaches, the error of the\nproposed algorithm is fixed and is not affected by the long distance.\nKeywords distance estimation, peaks, step length, accelerometer.\n", "title": "Step Detection Algorithm For Accurate Distance Estimation Using Dynamic Step Length" }
null
null
null
null
true
null
18160
null
Default
null
null
null
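The record above describes a two-phase pipeline: peak detection on the accelerometer signal followed by a per-step length estimate. Below is a minimal sketch of that pipeline on a synthetic signal; the thresholds, the sampling rate, the synthetic waveform, and the Weinberg-style step-length heuristic are all assumptions of this sketch, not the paper's actual parameters or formula.

```python
import numpy as np

def detect_peaks(acc_mag, min_height, min_gap):
    """Indices of local maxima above min_height that are at least min_gap samples apart."""
    peaks = []
    for i in range(1, len(acc_mag) - 1):
        if acc_mag[i] > min_height and acc_mag[i] >= acc_mag[i - 1] and acc_mag[i] > acc_mag[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return np.array(peaks)

def step_length(acc_window, k=0.5):
    """Assumed Weinberg-style heuristic: length ~ k * (a_max - a_min)^(1/4), per step."""
    return k * (acc_window.max() - acc_window.min()) ** 0.25

# Synthetic 'walking' accelerometer magnitude: roughly 2 steps per second, sampled at 50 Hz.
fs, duration = 50, 10
t = np.arange(0, duration, 1.0 / fs)
acc = 9.81 + 2.5 * np.maximum(0.0, np.sin(2 * np.pi * 2.0 * t)) ** 3
acc += 0.1 * np.random.default_rng(1).normal(size=t.size)

peaks = detect_peaks(acc, min_height=10.5, min_gap=int(0.3 * fs))
dist = sum(step_length(acc[max(0, p - fs // 4): p + fs // 4]) for p in peaks)
print(f"steps detected: {len(peaks)}, estimated distance: {dist:.2f} m")
```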
{ "abstract": " Depth-sensing is important for both navigation and scene understanding.\nHowever, procuring RGB images with corresponding depth data for training deep\nmodels is challenging; large-scale, varied, datasets with ground truth training\ndata are scarce. Consequently, several recent methods have proposed treating\nthe training of monocular color-to-depth estimation networks as an image\nreconstruction problem, thus forgoing the need for ground truth depth.\nThere are multiple concepts and design decisions for these networks that seem\nsensible, but give mixed or surprising results when tested. For example,\nbinocular stereo as the source of self-supervision seems cumbersome and hard to\nscale, yet results are less blurry compared to training with monocular videos.\nSuch decisions also interplay with questions about architectures, loss\nfunctions, image scales, and motion handling. In this paper, we propose a\nsimple yet effective model, with several general architectural and loss\ninnovations, that surpasses all other self-supervised depth estimation\napproaches on KITTI.\n", "title": "Digging Into Self-Supervised Monocular Depth Estimation" }
null
null
[ "Statistics" ]
null
true
null
18161
null
Validated
null
null
null
{ "abstract": " In this work, answer-set programs that specify repairs of databases are used\nas a basis for solving computational and reasoning problems about causes for\nquery answers from databases.\n", "title": "The Causality/Repair Connection in Databases: Causality-Programs" }
null
null
null
null
true
null
18162
null
Default
null
null
null
{ "abstract": " We develop a variant of multiclass logistic regression that achieves three\nproperties: i) We minimize a non-convex surrogate loss which makes the method\nrobust to outliers, ii) our method allows transitioning between non-convex and\nconvex losses by the choice of the parameters, iii) the surrogate loss is Bayes\nconsistent, even in the non-convex case. The algorithm has one weight vector\nper class and the surrogate loss is a function of the linear activations (one\nper class). The surrogate loss of an example with linear activation vector\n$\\mathbf{a}$ and class $c$ has the form $-\\log_{t_1} \\exp_{t_2} (a_c -\nG_{t_2}(\\mathbf{a}))$ where the two temperatures $t_1$ and $t_2$ \"temper\" the\n$\\log$ and $\\exp$, respectively, and $G_{t_2}$ is a generalization of the\nlog-partition function. We motivate this loss using the Tsallis divergence. As\nthe temperature of the logarithm becomes smaller than the temperature of the\nexponential, the surrogate loss becomes \"more quasi-convex\". Various tunings of\nthe temperatures recover previous methods and tuning the degree of\nnon-convexity is crucial in the experiments. The choice $t_1<1$ and $t_2>1$\nperforms best experimentally. We explain this by showing that $t_1 < 1$ caps\nthe surrogate loss and $t_2 >1$ makes the predictive distribution have a heavy\ntail.\n", "title": "Two-temperature logistic regression based on the Tsallis divergence" }
null
null
null
null
true
null
18163
null
Default
null
null
null
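The record above gives the surrogate loss in closed form, $-\log_{t_1} \exp_{t_2}(a_c - G_{t_2}(\mathbf{a}))$, built from tempered logarithm and exponential functions. The sketch below evaluates that loss using the standard Tsallis-tempered definitions $\log_t(x) = (x^{1-t}-1)/(1-t)$ and $\exp_t(x) = [1+(1-t)x]_+^{1/(1-t)}$; the normalization $G_{t_2}$ is found here by plain bisection, which is a choice of this sketch rather than the authors' own procedure, and the example activations are made up.

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm; reduces to log as t -> 1."""
    return np.log(x) if t == 1.0 else (x**(1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential; reduces to exp as t -> 1."""
    return np.exp(x) if t == 1.0 else np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def normalization(a, t, iters=60):
    """Find G such that sum_c exp_t(a_c - G) = 1, by bisection (a choice of this sketch)."""
    lo = a.max()                                 # at G = max(a) the sum is >= exp_t(0) = 1
    hi = a.max() + 1.0
    while exp_t(a - hi, t).sum() > 1.0:          # grow the upper bracket until the sum drops below 1
        hi = a.max() + 2.0 * (hi - a.max())
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if exp_t(a - mid, t).sum() > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def two_temperature_loss(a, c, t1, t2):
    """Surrogate loss -log_{t1} exp_{t2}(a_c - G_{t2}(a)) for activations a and true class c."""
    g = normalization(a, t2)
    return -log_t(exp_t(a[c] - g, t2), t1)

a = np.array([2.0, 0.5, -1.0])                          # example linear activations, one per class
print(two_temperature_loss(a, c=0, t1=0.8, t2=1.2))     # capped, heavy-tailed variant
print(two_temperature_loss(a, c=0, t1=1.0, t2=1.0))     # recovers standard softmax cross-entropy
```

Setting $t_1 = t_2 = 1$ recovers the ordinary softmax cross-entropy, while $t_1 < 1 < t_2$ gives the capped, heavy-tailed variant the abstract reports as performing best.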
{ "abstract": " Approximate Bayesian computation (ABC) and synthetic likelihood (SL)\ntechniques have enabled the use of Bayesian inference for models that may be\nsimulated, but for which the likelihood cannot be evaluated pointwise at values\nof an unknown parameter $\\theta$. The main idea in ABC and SL is to, for\ndifferent values of $\\theta$ (usually chosen using a Monte Carlo algorithm),\nbuild estimates of the likelihood based on simulations from the model\nconditional on $\\theta$. The quality of these estimates determines the\nefficiency of an ABC/SL algorithm. In standard ABC/SL, the only means to\nimprove an estimated likelihood at $\\theta$ is to simulate more times from the\nmodel conditional on $\\theta$, which is infeasible in cases where the simulator\nis computationally expensive. In this paper we describe how to use\nbootstrapping as a means for improving SL estimates whilst using fewer\nsimulations from the model, and also investigate its use in ABC. Further, we\ninvestigate the use of the bag of little bootstraps as a means for applying\nthis approach to large datasets, yielding Monte Carlo algorithms that\naccurately approximate posterior distributions whilst only simulating\nsubsamples of the full data. Examples of the approach applied to i.i.d.,\ntemporal and spatial data are given.\n", "title": "Bootstrapped synthetic likelihood" }
null
null
null
null
true
null
18164
null
Default
null
null
null
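The record above builds on the synthetic likelihood idea: for a candidate parameter, simulate the model several times, fit a Gaussian to the simulated summary statistics, and score the observed summaries under that Gaussian. The sketch below implements this basic estimator on a toy Gaussian simulator; the simulator, the summaries, and the naive bootstrap augmentation of simulated summaries are assumptions for illustration and do not reproduce the authors' bootstrapped or bag-of-little-bootstraps algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, n=200):
    """Toy simulator (an assumption of this sketch): data ~ N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def summaries(x):
    """Summary statistics: sample mean and log-variance."""
    return np.array([x.mean(), np.log(x.var())])

def gaussian_loglik(s_obs, S):
    """Log-density of the observed summary under a Gaussian fitted to simulated summaries."""
    mu = S.mean(axis=0)
    cov = np.cov(S, rowvar=False) + 1e-9 * np.eye(S.shape[1])
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet + len(s_obs) * np.log(2 * np.pi))

def synthetic_loglik(theta, s_obs, m=20, n_boot=0):
    """Standard SL with m simulations; n_boot > 0 adds bootstrap-resampled summaries
    from each simulated dataset (a naive illustration, not the authors' exact scheme)."""
    sims = [simulate(theta) for _ in range(m)]
    S = [summaries(x) for x in sims]
    for x in sims:
        for _ in range(n_boot):
            S.append(summaries(rng.choice(x, size=x.size, replace=True)))
    return gaussian_loglik(s_obs, np.asarray(S))

x_obs = rng.normal(1.5, 1.0, size=200)
s_obs = summaries(x_obs)
for theta in (0.5, 1.5, 2.5):
    print(theta, synthetic_loglik(theta, s_obs, m=20, n_boot=5))
```

Inside a Monte Carlo scheme over theta, an estimate of this kind stands in for the intractable likelihood.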
{ "abstract": " In this paper, we introduce the notions of an iterated planar Lefschetz\nfibration and an iterated planar open book decomposition and prove the\nWeinstein conjecture for contact manifolds supporting an open book that has\niterated planar pages. For $n\\geq 1$, we show that a $(2n+1)$-dimensional\ncontact manifold $M$ supporting an iterated planar open book decomposition\nsatisfies the Weinstein conjecture.\n", "title": "The Weinstein conjecture for iterated planar contact structures" }
null
null
null
null
true
null
18165
null
Default
null
null
null
{ "abstract": " This paper considers the scenario that multiple data owners wish to apply a\nmachine learning method over the combined dataset of all owners to obtain the\nbest possible learning output but do not want to share the local datasets owing\nto privacy concerns. We design systems for the scenario that the stochastic\ngradient descent (SGD) algorithm is used as the machine learning method because\nSGD (or its variants) is at the heart of recent deep learning techniques over\nneural networks. Our systems differ from existing systems in the following\nfeatures: {\\bf (1)} any activation function can be used, meaning that no\nprivacy-preserving-friendly approximation is required; {\\bf (2)} gradients\ncomputed by SGD are not shared but the weight parameters are shared instead;\nand {\\bf (3)} robustness against colluding parties even in the extreme case\nthat only one honest party exists. We prove that our systems, while\nprivacy-preserving, achieve the same learning accuracy as SGD and hence retain\nthe merit of deep learning with respect to accuracy. Finally, we conduct\nseveral experiments using benchmark datasets, and show that our systems\noutperform previous system in terms of learning accuracies.\n", "title": "Privacy-Preserving Deep Learning via Weight Transmission" }
null
null
null
null
true
null
18166
null
Default
null
null
null
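The record above shares weight parameters rather than gradients between data owners. A minimal round-robin sketch of that idea is given below for logistic regression: the current weights are handed from owner to owner, each owner runs local SGD on its private shard, and raw data or gradients never leave an owner. The shard construction, learning rate, and schedule are assumptions; the paper's actual protocol additionally addresses collusion robustness, which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Plain logistic-regression SGD on one owner's private data; only w leaves the owner."""
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w = w - lr * (p - y[i]) * X[i]
    return w

# Three data owners holding private shards of a synthetic classification problem (assumed setup).
d, n_per_owner = 5, 200
w_true = rng.normal(size=d)
owners = []
for _ in range(3):
    X = rng.normal(size=(n_per_owner, d))
    y = (X @ w_true + 0.3 * rng.normal(size=n_per_owner) > 0).astype(float)
    owners.append((X, y))

# Weight transmission: the model is passed owner-to-owner; data and gradients stay local.
w = np.zeros(d)
for _round in range(10):
    for X, y in owners:
        w = local_sgd(w, X, y)

X_all = np.vstack([X for X, _ in owners])
y_all = np.concatenate([y for _, y in owners])
acc = np.mean(((X_all @ w) > 0).astype(float) == y_all)
print(f"accuracy after weight-transmission training: {acc:.3f}")
```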
{ "abstract": " Peer code review locates common coding rule violations and simple logical\nerrors in the early phases of software development, and thus reduces overall\ncost. However, in GitHub, identifying an appropriate code reviewer for a pull\nrequest is a non-trivial task given that reliable information for reviewer\nidentification is often not readily available. In this paper, we propose a code\nreviewer recommendation technique that considers not only the relevant\ncross-project work history (e.g., external library experience) but also the\nexperience of a developer in certain specialized technologies associated with a\npull request for determining her expertise as a potential code reviewer. We\nfirst motivate our technique using an exploratory study with 10 commercial\nprojects and 10 associated libraries external to those projects. Experiments\nusing 17,115 pull requests from 10 commercial projects and six open source\nprojects show that our technique provides 85%--92% recommendation accuracy,\nabout 86% precision and 79%--81% recall in code reviewer recommendation, which\nare highly promising. Comparison with the state-of-the-art technique also\nvalidates the empirical findings and the superiority of our recommendation\ntechnique.\n", "title": "CORRECT: Code Reviewer Recommendation in GitHub Based on Cross-Project and Technology Experience" }
null
null
null
null
true
null
18167
null
Default
null
null
null
{ "abstract": " Considering a spherically-symmetric non-static cosmological flat model of\nRobertson-Walker universe we have investigated the problem of perfect fluid\ndistribution interacting with the gravitational field in presence of massive\nscalar field and electromagnetic field in B-D theory. Exact solutions have been\nobtained by using a general approach of solving the partial differential\nequations and it has been observed that the electromagnetic field cannot\nsurvive for the cosmological flat model due to the influence caused by the\npresence of massive scalar field.\n", "title": "Charged Perfect Fluid Distribution for Cosmological Universe Interacting With Massive Scalar Field in Brans-Dicke Theory" }
null
null
null
null
true
null
18168
null
Default
null
null
null
{ "abstract": " This paper reproduces the text of a part of the Author's DPhil thesis. It\ngives a proof of the classification of non-trivial, finite homogeneous\ngeometries of sufficiently high dimension which does not depend on the\nclassification of the finite simple groups.\n", "title": "Finite homogeneous geometries" }
null
null
[ "Mathematics" ]
null
true
null
18169
null
Validated
null
null
null
{ "abstract": " Private information retrieval (PIR) protocols make it possible to retrieve a\nfile from a database without disclosing any information about the identity of\nthe file being retrieved. These protocols have been rigorously explored from an\ninformation-theoretic perspective in recent years. While existing protocols\nstrictly impose that no information is leaked on the file's identity, this work\ninitiates the study of the tradeoffs that can be achieved by relaxing the\nrequirement of perfect privacy. In case the user is willing to leak some\ninformation on the identity of the retrieved file, we study how the PIR rate,\nas well as the upload cost and access complexity, can be improved. For the\nparticular case of replicated servers, we propose two weakly-private\ninformation retrieval schemes based on two recent PIR protocols and a family of\nschemes based on partitioning. Lastly, we compare the performance of the\nproposed schemes.\n", "title": "Weakly-Private Information Retrieval" }
null
null
null
null
true
null
18170
null
Default
null
null
null
{ "abstract": " Iterative algorithms, like gradient descent, are common tools for solving a\nvariety of problems, such as model fitting. For this reason, there is interest\nin creating differentially private versions of them. However, their conversion\nto differentially private algorithms is often naive. For instance, a fixed\nnumber of iterations are chosen, the privacy budget is split evenly among them,\nand at each iteration, parameters are updated with a noisy gradient. In this\npaper, we show that gradient-based algorithms can be improved by a more careful\nallocation of privacy budget per iteration. Intuitively, at the beginning of\nthe optimization, gradients are expected to be large, so that they do not need\nto be measured as accurately. However, as the parameters approach their optimal\nvalues, the gradients decrease and hence need to be measured more accurately.\nWe add a basic line-search capability that helps the algorithm decide when more\naccurate gradient measurements are necessary. Our gradient descent algorithm\nworks with the recently introduced zCDP version of differential privacy. It\noutperforms prior algorithms for model fitting and is competitive with the\nstate-of-the-art for $(\\epsilon,\\delta)$-differential privacy, a strictly\nweaker definition than zCDP.\n", "title": "Concentrated Differentially Private Gradient Descent with Adaptive per-Iteration Privacy Budget" }
null
null
null
null
true
null
18171
null
Default
null
null
null
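The record above allocates a total privacy budget unevenly across iterations of noisy gradient descent. The sketch below runs clipped-gradient descent under a total zCDP budget, using the Gaussian mechanism (for L2 sensitivity $\Delta$ and noise scale $\sigma$, the release satisfies $\rho = \Delta^2/(2\sigma^2)$ zCDP) and a simple increasing per-iteration schedule; that fixed schedule and the clipping norm are assumptions standing in for the paper's adaptive, line-search-driven allocation.

```python
import numpy as np

rng = np.random.default_rng(4)

def clipped_grad_sum(w, X, y, clip=1.0):
    """Per-example logistic gradients clipped to L2 norm <= clip, then summed.
    The summed gradient has L2 sensitivity `clip` (one record changes at most one term)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g = (p - y)[:, None] * X
    norms = np.maximum(1.0, np.linalg.norm(g, axis=1) / clip)
    return (g / norms[:, None]).sum(axis=0), clip

def noisy_gd(X, y, rho_total=1.0, iters=50, lr=0.5, clip=1.0):
    """Gradient descent under a total zCDP budget rho_total. The per-iteration allocation
    below simply grows linearly toward the end of training (an assumed schedule)."""
    n, d = X.shape
    weights = np.arange(1, iters + 1, dtype=float)
    rho_per_iter = rho_total * weights / weights.sum()   # later iterations get more budget
    w = np.zeros(d)
    for t in range(iters):
        g_sum, sens = clipped_grad_sum(w, X, y, clip)
        sigma = sens / np.sqrt(2.0 * rho_per_iter[t])    # Gaussian mechanism: rho = sens^2 / (2 sigma^2)
        g_noisy = (g_sum + rng.normal(scale=sigma, size=d)) / n
        w = w - lr * g_noisy
    return w

d, n = 5, 2000
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
w_hat = noisy_gd(X, y, rho_total=1.0)
print("training accuracy:", np.mean(((X @ w_hat) > 0) == y))
```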
{ "abstract": " Commented translation of the paper \"Universelle Bedeutung des\nWirkungsquantums\", published by Jun Ishiwara in German in the Proceedings of\nTokyo Mathematico-Physical Society 8 106-116 (1915). In his work, Ishiwara,\ntenured at Sendai University, Japan, proposed - simultaneously with Arnold\nSommerfeld, William Wilson and Niels Bohr in Europe - the pase-space-integral\nquantization, a rule that would be incorporated into the old-quantum-mechanics\nformalism.\n", "title": "\"The universal meaning of the quantum of action\", by Jun Ishiwara" }
null
null
null
null
true
null
18172
null
Default
null
null
null
{ "abstract": " Electronic charge carriers in ionic materials can self-trap to form large\npolarons. Interference between the ionic displacements associated with\noppositely charged large polarons increases as they approach one another.\nInitially this interference produces an attractive potential that fosters their\nmerger. However, for small enough separations this interference generates a\nrepulsive interaction between oppositely charged large polarons. In suitable\ncircumstances this repulsion can overwhelm their direct Coulomb attraction.\nThen the resulting net repulsion between oppositely charged large polarons\nconstitutes a potential barrier which impedes their recombination.\n", "title": "Barrier to recombination of oppositely charged large polarons" }
null
null
null
null
true
null
18173
null
Default
null
null
null
{ "abstract": " In this paper, metric reduction in generalized geometry is investigated. We\nshow how the Bismut connections on the quotient manifold are obtained from\nthose on the original manifold. The result facilitates the analysis of\ngeneralized K$\\ddot{a}$hler reduction, which motivates the concept of metric\ngeneralized principal bundles and our approach to construct a family of\ngeneralized holomorphic line bundles over $\\mathbb{C}P^2$ equipped with some\nnon-trivial generalized K$\\ddot{a}$hler structures.\n", "title": "Metric Reduction and Generalized Holomorphic Structures" }
null
null
[ "Mathematics" ]
null
true
null
18174
null
Validated
null
null
null
{ "abstract": " We consider the forecast aggregation problem in repeated settings, where the\nforecasts are done on a binary event. At each period multiple experts provide\nforecasts about an event. The goal of the aggregator is to aggregate those\nforecasts into a subjective accurate forecast. We assume that experts are\nBayesian; namely they share a common prior, each expert is exposed to some\nevidence, and each expert applies Bayes rule to deduce his forecast. The\naggregator is ignorant with respect to the information structure (i.e.,\ndistribution over evidence) according to which experts make their prediction.\nThe aggregator observes the experts' forecasts only. At the end of each period\nthe actual state is realized. We focus on the question whether the aggregator\ncan learn to aggregate optimally the forecasts of the experts, where the\noptimal aggregation is the Bayesian aggregation that takes into account all the\ninformation (evidence) in the system.\nWe consider the class of partial evidence information structures, where each\nexpert is exposed to a different subset of conditionally independent signals.\nOur main results are positive; We show that optimal aggregation can be learned\nin polynomial time in a quite wide range of instances of the partial evidence\nenvironments. We provide a tight characterization of the instances where\nlearning is possible and impossible.\n", "title": "Learning of Optimal Forecast Aggregation in Partial Evidence Environments" }
null
null
[ "Statistics" ]
null
true
null
18175
null
Validated
null
null
null
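For the setting the record above focuses on, a binary event and conditionally independent evidence under a known common prior, the Bayesian aggregate has a classical closed form: the posterior odds equal the prior odds times the product of each expert's likelihood ratio, which can be recovered from that expert's own posterior. The sketch below implements that textbook identity; the paper's contribution, learning the aggregation when the information structure is unknown, is not attempted here, and the toy signal model is an assumption.

```python
import numpy as np

def aggregate(prior, expert_posteriors):
    """Bayesian pooling of posteriors about a binary event when the experts' signals are
    conditionally independent given the event and the common prior is known:
    posterior odds = prior odds * prod_i (expert_i odds / prior odds)."""
    prior_odds = prior / (1.0 - prior)
    odds = prior_odds
    for q in expert_posteriors:
        odds *= (q / (1.0 - q)) / prior_odds
    return odds / (1.0 + odds)

# Toy check (assumed setup): prior 0.5, each expert sees an independent signal
# that matches the true state with probability 0.7.
prior, acc = 0.5, 0.7
rng = np.random.default_rng(5)
state = 1
signals = rng.random(3) < (acc if state == 1 else 1 - acc)
posteriors = [acc if s else 1 - acc for s in signals]   # each expert's own Bayes posterior
print("expert posteriors :", posteriors)
print("aggregate forecast:", aggregate(prior, posteriors))
```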
{ "abstract": " In this paper, cyber attack detection and isolation is studied on a network\nof UAVs in a formation flying setup. As the UAVs communicate to reach consensus\non their states while making the formation, the communication network among the\nUAVs makes them vulnerable to a potential attack from malicious adversaries.\nTwo types of attacks pertinent to a network of UAVs have been considered: a\nnode attack on the UAVs and a deception attack on the communication between the\nUAVs. UAVs formation control presented using a consensus algorithm to reach a\npre-specified formation. A node and a communication path deception cyber\nattacks on the UAV's network are considered with their respective models in the\nformation setup. For these cyber attacks detection, a bank of Unknown Input\nObserver (UIO) based distributed fault detection scheme proposed to detect and\nidentify the compromised UAV in the formation. A rule based on the residuals\ngenerated using the bank of UIOs are used to detect attacks and identify the\ncompromised UAV in the formation. Further, an algorithm developed to remove the\nfaulty UAV from the network once an attack detected and the compromised UAV\nisolated while maintaining the formation flight with a missing UAV node.\n", "title": "Distributed Unknown-Input-Observers for Cyber Attack Detection and Isolation in Formation Flying UAVs" }
null
null
[ "Computer Science" ]
null
true
null
18176
null
Validated
null
null
null
{ "abstract": " In this paper, we study the non-self dual extended Harper's model with\nLiouvillean frequency. By establishing quantitative reducibility results\ntogether with the averaging method, we prove that the lengths of spectral gaps\ndecay exponentially.\n", "title": "Exponential Decay of the lengths of Spectral Gaps for Extended Harper's Model with Liouvillean Frequency" }
null
null
null
null
true
null
18177
null
Default
null
null
null
{ "abstract": " The Lieb Lattice exhibits intriguing properties that are of general interest\nin both the fundamental physics and practical applications. Here, we\ninvestigate the topological Landau-Zener Bloch oscillation in a photonic\nFloquet Lieb lattice, where the dimerized helical waveguides is constructed to\nrealize the synthetic spin-orbital interaction through the Floquet mechanism,\nrendering us to study the impacts of topological transition from trivial gaps\nto non-trivial ones. The compact localized states of flat bands supported by\nthe local symmetry of Lieb lattice will be associated with other bands by\ntopological invariants, Chern number, and involved into Landau-Zener transition\nduring Bloch oscillation. Importantly, the non-trivial geometrical phases after\ntopological transitions will be taken into account for constructive and\ndestructive interferences of wave functions. The numerical calculations of\ncontinuum photonic medium demonstrate reasonable agreements with theoretical\ntight-binding model. Our results provide an ongoing effort to realize designed\nquantum materials with tailored properties.\n", "title": "Topological Landau-Zener Bloch Oscillations in Photonic Floquet Lieb Lattices" }
null
null
null
null
true
null
18178
null
Default
null
null
null
{ "abstract": " It is analyzed the effects of both bulk and shear viscosities on the\nperturbations, relevant for structure formation in late time cosmology. It is\nshown that shear viscosity can be as effective as the bulk viscosity on\nsuppressing the growth of perturbations and delaying the nonlinear regime. A\nstatistical analysis of the shear and bulk viscous effects is performed and\nsome constraints on these viscous effects are given.\n", "title": "Assessing the impact of bulk and shear viscosities on large scale structure formation" }
null
null
[ "Physics" ]
null
true
null
18179
null
Validated
null
null
null
{ "abstract": " In most realistic models for quantum chaotic systems, the Hamiltonian\nmatrices in unperturbed bases have a sparse structure. We study correlations in\neigenfunctions of such systems and derive explicit expressions for some of the\ncorrelation functions with respect to energy. The analytical results are tested\nin several models by numerical simulations. An application is given for a\nrelation between transition probabilities.\n", "title": "Correlations in eigenfunctions of quantum chaotic systems with sparse Hamiltonian matrices" }
null
null
null
null
true
null
18180
null
Default
null
null
null
{ "abstract": " This paper combines the fast Zero-Moment-Point (ZMP) approaches that work\nwell in practice with the broader range of capabilities of a Trajectory\nOptimization formulation, by optimizing over body motion, footholds and Center\nof Pressure simultaneously. We introduce a vertex-based representation of the\nsupport-area constraint, which can treat arbitrarily oriented point-, line-,\nand area-contacts uniformly. This generalization allows us to create motions\nsuch quadrupedal walking, trotting, bounding, pacing, combinations and\ntransitions between these, limping, bipedal walking and push-recovery all with\nthe same approach. This formulation constitutes a minimal representation of the\nphysical laws (unilateral contact forces) and kinematic restrictions (range of\nmotion) in legged locomotion, which allows us to generate various motion in\nless than a second. We demonstrate the feasibility of the generated motions on\na real quadruped robot.\n", "title": "Fast Trajectory Optimization for Legged Robots using Vertex-based ZMP Constraints" }
null
null
null
null
true
null
18181
null
Default
null
null
null
{ "abstract": " This work investigates the influence of geometric variations in dipole\nmicro/nano antennas, regarding their implications on the characteristics of the\nelectric field inside the gap space of antenna monopoles. The gap is the\ninterface for a metal-Insulator-Metal (MIM) rectifier diode and it needs to be\ncarefully optimized, in order to allow better electric current generation by\ntunneling current mechanisms. The arrangement (antenna + diode or rectenna) was\ndesigned to operate around 30 Terahertz (THz).\n", "title": "Electric Field Properties inside Central Gap of Dipole Micro/Nano Antennas Operating at 30 THz" }
null
null
null
null
true
null
18182
null
Default
null
null
null
{ "abstract": " Several recent works have empirically observed that Convolutional Neural Nets\n(CNNs) are (approximately) invertible. To understand this approximate\ninvertibility phenomenon and how to leverage it more effectively, we focus on a\ntheoretical explanation and develop a mathematical model of sparse signal\nrecovery that is consistent with CNNs with random weights. We give an exact\nconnection to a particular model of model-based compressive sensing (and its\nrecovery algorithms) and random-weight CNNs. We show empirically that several\nlearned networks are consistent with our mathematical analysis and then\ndemonstrate that with such a simple theoretical framework, we can obtain\nreasonable re- construction results on real images. We also discuss gaps\nbetween our model assumptions and the CNN trained for classification in\npractical scenarios.\n", "title": "Towards Understanding the Invertibility of Convolutional Neural Networks" }
null
null
null
null
true
null
18183
null
Default
null
null
null
{ "abstract": " A major challenge in designing neural network (NN) systems is to determine\nthe best structure and parameters for the network given the data for the\nmachine learning problem at hand. Examples of parameters are the number of\nlayers and nodes, the learning rates, and the dropout rates. Typically, these\nparameters are chosen based on heuristic rules and manually fine-tuned, which\nmay be very time-consuming, because evaluating the performance of a single\nparametrization of the NN may require several hours. This paper addresses the\nproblem of choosing appropriate parameters for the NN by formulating it as a\nbox-constrained mathematical optimization problem, and applying a\nderivative-free optimization tool that automatically and effectively searches\nthe parameter space. The optimization tool employs a radial basis function\nmodel of the objective function (the prediction accuracy of the NN) to\naccelerate the discovery of configurations yielding high accuracy. Candidate\nconfigurations explored by the algorithm are trained to a small number of\nepochs, and only the most promising candidates receive full training. The\nperformance of the proposed methodology is assessed on benchmark sets and in\nthe context of predicting drug-drug interactions, showing promising results.\nThe optimization tool used in this paper is open-source.\n", "title": "An effective algorithm for hyperparameter optimization of neural networks" }
null
null
null
null
true
null
18184
null
Default
null
null
null
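The record above describes a derivative-free search that fits a radial basis function surrogate to the observed accuracies and uses it to pick the next configuration to evaluate. The sketch below is a bare-bones loop of that kind, with a Gaussian RBF interpolant and a cheap synthetic objective standing in for "train the network and report accuracy"; the kernel, candidate sampling, and objective are assumptions of this sketch, not the open-source tool's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)

def objective(x):
    """Stand-in for 'train the network with hyperparameters x and return accuracy'
    (a cheap synthetic function used only so the sketch runs quickly)."""
    return float(np.exp(-np.sum((x - np.array([0.3, -0.2]))**2)))

def rbf_fit(X, y, eps=1.0):
    """Gaussian-kernel RBF interpolant through the evaluated points."""
    K = np.exp(-eps * np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1))
    return np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)

def rbf_predict(X, coef, Xq, eps=1.0):
    K = np.exp(-eps * np.sum((Xq[:, None, :] - X[None, :, :])**2, axis=-1))
    return K @ coef

# Derivative-free loop: fit the surrogate, propose the candidate it rates best among
# random samples, evaluate that candidate for real, and repeat.
bounds = np.array([[-1.0, 1.0], [-1.0, 1.0]])
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))      # initial design
y = np.array([objective(x) for x in X])
for _ in range(20):
    coef = rbf_fit(X, y)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(200, 2))
    best = cand[np.argmax(rbf_predict(X, coef, cand))]
    X = np.vstack([X, best])
    y = np.append(y, objective(best))
print("best hyperparameters found:", X[np.argmax(y)], "score:", y.max())
```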
{ "abstract": " In this paper, we study a smoothness regularization method for a varying\ncoefficient model based on sparse and irregularly sampled functional data which\nis contaminated with some measurement errors. We estimate the one-dimensional\ncovariance and cross-covariance functions of the underlying stochastic\nprocesses based on a reproducing kernel Hilbert space approach. We then obtain\nleast squares estimates of the coefficient functions. Simulation studies\ndemonstrate that the proposed method has good performance. We illustrate our\nmethod by an analysis of longitudinal primary biliary liver cirrhosis data.\n", "title": "On estimation in varying coefficient models for sparse and irregularly sampled functional data" }
null
null
null
null
true
null
18185
null
Default
null
null
null
{ "abstract": " The Painleve-IV equation has three families of rational solutions generated\nby the generalized Hermite polynomials. Each family is indexed by two positive\nintegers m and n. These functions have applications to nonlinear wave\nequations, random matrices, fluid dynamics, and quantum mechanics. Numerical\nstudies suggest the zeros and poles form a deformed n by m rectangular grid.\nProperly scaled, the zeros and poles appear to densely fill certain curvilinear\nrectangles as m and n tend to infinity with r=m/n fixed. Generalizing a method\nof Bertola and Bothner used to study rational Painleve-II functions, we express\nthe generalized Hermite rational Painleve-IV functions in terms of certain\northogonal polynomials on the unit circle. Using the Deift-Zhou nonlinear\nsteepest-descent method, we asymptotically analyze the associated\nRiemann-Hilbert problem in the limit as n tends to infinity with m=r*n for r\nfixed. We obtain an explicit characterization of the boundary curve and\ndetermine the leading-order asymptotic expansion of the functions in the\npole-free region.\n", "title": "Large-degree asymptotics of rational Painleve-IV functions associated to generalized Hermite polynomials" }
null
null
null
null
true
null
18186
null
Default
null
null
null
{ "abstract": " We numerically study the behavior of self-propelled liquid droplets whose\nmotion is triggered by a Marangoni-like flow. This latter is generated by\nvariations of surfactant concentration which affect the droplet surface tension\npromoting its motion. In the present paper a model for droplets with a third\namphiphilic component is adopted. The dynamics is described by Navier-Stokes\nand convection-diffusion equations, solved by lattice Boltzmann method coupled\nwith finite-difference schemes. We focus on two cases. First the study of\nself-propulsion of an isolated droplet is carried on and, then, the interaction\nof two self-propelled droplets is investigated. In both cases, when the\nsurfactant migrates towards the interface, a quadrupolar vortex of the velocity\nfield forms inside the droplet and causes the motion. A weaker dipolar field\nemerges instead when the surfactant is mainly diluted in the bulk. The dynamics\nof two interacting droplets is more complex and strongly depends on their\nreciprocal distance. If, in a head-on collision, droplets are close enough, the\nvelocity field initially attracts them until a motionless steady state is\nachieved. If the droplets are vertically shifted, the hydrodynamic field leads\nto an initial reciprocal attraction followed by a scattering along opposite\ndirections. This hydrodynamic interaction acts on a separation of some droplet\nradii otherwise it becomes negligible and droplets motion is only driven by\nMarangoni effect. Finally, if one of the droplets is passive, this latter is\ngenerally advected by the fluid flow generated by the active one.\n", "title": "Lattice Boltzmann study of chemically-driven self-propelled droplets" }
null
null
null
null
true
null
18187
null
Default
null
null
null
{ "abstract": " RDMA is increasingly adopted by cloud computing platforms to provide low CPU\noverhead, low latency, high throughput network services. On the other hand,\nhowever, it is still challenging for developers to realize fast deployment of\nRDMA-aware applications in the datacenter, since the performance is highly\nrelated to many lowlevel details of RDMA operations. To address this problem,\nwe present a simple and scalable RDMA as Service (RaaS) to mitigate the impact\nof RDMA operational details. RaaS provides careful message buffer management to\nimprove CPU/memory utilization and improve the scalability of RDMA operations.\nThese optimized designs lead to simple and flexible programming model for\ncommon and knowledgeable users. We have implemented a prototype of RaaS, named\nRDMAvisor, and evaluated its performance on a cluster with a large number of\nconnections. Our experiment results demonstrate that RDMAvisor achieves high\nthroughput for thousand of connections and maintains low CPU and memory\noverhead through adaptive RDMA transport selection.\n", "title": "RDMAvisor: Toward Deploying Scalable and Simple RDMA as a Service in Datacenters" }
null
null
null
null
true
null
18188
null
Default
null
null
null
{ "abstract": " We recently introduced a method to approximate functions of Hermitian Matrix\nProduct Operators or Tensor Trains that are of the form $\\mathsf{Tr} f(A)$.\nFunctions of this type occur in several applications, most notably in quantum\nphysics. In this work we aim at extending the theoretical understanding of our\nmethod by showing several properties of our algorithm that can be used to\ndetect and correct errors in its results. Most importantly, we show that there\nexists a more computationally efficient version of our algorithm for certain\ninputs. To illustrate the usefulness of our finding, we prove that several\nclasses of spin Hamiltonians in quantum physics fall into this input category.\nWe finally support our findings with numerical results obtained for an example\nfrom quantum physics.\n", "title": "Towards a better understanding of the matrix product function approximation algorithm in application to quantum physics" }
null
null
[ "Computer Science", "Physics" ]
null
true
null
18189
null
Validated
null
null
null
{ "abstract": " We answer a question of K. Mulmuley: In [Efremenko-Landsberg-Schenck-Weyman]\nit was shown that the method of shifted partial derivatives cannot be used to\nseparate the padded permanent from the determinant. Mulmuley asked if this\n\"no-go\" result could be extended to a model without padding. We prove this is\nindeed the case using the iterated matrix multiplication polynomial. We also\nprovide several examples of polynomials with maximal space of partial\nderivatives, including the complete symmetric polynomials. We apply Koszul\nflattenings to these polynomials to have the first explicit sequence of\npolynomials with symmetric border rank lower bounds higher than the bounds\nattainable via partial derivatives.\n", "title": "Explicit polynomial sequences with maximal spaces of partial derivatives and a question of K. Mulmuley" }
null
null
null
null
true
null
18190
null
Default
null
null
null
{ "abstract": " We develop a one-dimensional notion of affine processes under parameter\nuncertainty, which we call non-linear affine processes. This is done as\nfollows: given a set of parameters for the process, we construct a\ncorresponding non-linear expectation on the path space of continuous processes.\nBy a general dynamic programming principle we link this non-linear expectation\nto a variational form of the Kolmogorov equation, where the generator of a\nsingle affine process is replaced by the supremum over all corresponding\ngenerators of affine processes with parameters in the parameter set. This\nnon-linear affine process yields a tractable model for Knightian uncertainty,\nespecially for modelling interest rates under ambiguity.\nWe then develop an appropriate Ito-formula, the respective term-structure\nequations and study the non-linear versions of the Vasicek and the\nCox-Ingersoll-Ross (CIR) model. Thereafter we introduce the non-linear\nVasicek-CIR model. This model is particularly suitable for modelling interest\nrates when one does not want to restrict the state space a priori and hence the\napproach solves this modelling issue arising with negative interest rates.\n", "title": "Affine processes under parameter uncertainty" }
null
null
null
null
true
null
18191
null
Default
null
null
null
{ "abstract": " Listed as No. 53 among the one hundred famous unsolved problems in [J. A.\nBondy, U. S. R. Murty, Graph Theory, Springer, Berlin, 2008] is Steinberg's\nconjecture, which states that every planar graph without 4- and 5-cycles is\n3-colorable. In this paper, we show that plane graphs without 4- and 5-cycles\nare 3-colorable if they have no ext-triangular 7-cycles. This implies that (1)\nplanar graphs without 4-, 5-, 7-cycles are 3-colorable, and (2) planar graphs\nwithout 4-, 5-, 8-cycles are 3-colorable, which cover a number of known results\nin the literature motivated by Steinberg's conjecture.\n", "title": "Plane graphs without 4- and 5-cycles and without ext-triangular 7-cycles are 3-colorable" }
null
null
[ "Mathematics" ]
null
true
null
18192
null
Validated
null
null
null
{ "abstract": " In this paper, we develop a distributed intermittent communication and task\nplanning framework for mobile robot teams. The goal of the robots is to\naccomplish complex tasks, captured by local Linear Temporal Logic formulas, and\nshare the collected information with all other robots and possibly also with a\nuser. Specifically, we consider situations where the robot communication\ncapabilities are not sufficient to form reliable and connected networks while\nthe robots move to accomplish their tasks. In this case, intermittent\ncommunication protocols are necessary that allow the robots to temporarily\ndisconnect from the network in order to accomplish their tasks free of\ncommunication constraints. We assume that the robots can only communicate with\neach other when they meet at common locations in space. Our distributed control\nframework jointly determines local plans that allow all robots fulfill their\nassigned temporal tasks, sequences of communication events that guarantee\ninformation exchange infinitely often, and optimal communication locations that\nminimize a desired distance metric. Simulation results verify the efficacy of\nthe proposed controllers.\n", "title": "Temporal Logic Task Planning and Intermittent Connectivity Control of Mobile Robot Networks" }
null
null
null
null
true
null
18193
null
Default
null
null
null
{ "abstract": " The paper conducts a second-order variational analysis for an important class\nof nonpolyhedral conic programs generated by the so-called\nsecond-order/Lorentz/ice-cream cone $Q$. From one hand, we prove that the\nindicator function of $Q$ is always twice epi-differentiable and apply this\nresult to characterizing the uniqueness of Lagrange multipliers at stationary\npoints together with an error bound estimate in the general second-order cone\nsetting involving ${\\cal C}^2$-smooth data. On the other hand, we precisely\ncalculate the graphical derivative of the normal cone mapping to $Q$ under the\nweakest metric subregularity constraint qualification and then give an\napplication of the latter result to a complete characterization of isolated\ncalmness for perturbed variational systems associated with second-order cone\nprograms. The obtained results seem to be the first in the literature in these\ndirections for nonpolyhedral problems without imposing any nondegeneracy\nassumptions.\n", "title": "Second-oder analysis in second-oder cone programming" }
null
null
null
null
true
null
18194
null
Default
null
null
null
{ "abstract": " Interpreting the performance of deep learning models beyond test set accuracy\nis challenging. Characteristics of individual data points are often not\nconsidered during evaluation, and each data point is treated equally. We\nexamine the impact of a test set question's difficulty to determine if there is\na relationship between difficulty and performance. We model difficulty using\nwell-studied psychometric methods on human response patterns. Experiments on\nNatural Language Inference (NLI) and Sentiment Analysis (SA) show that the\nlikelihood of answering a question correctly is impacted by the question's\ndifficulty. As DNNs are trained with more data, easy examples are learned more\nquickly than hard examples.\n", "title": "Understanding Deep Learning Performance through an Examination of Test Set Difficulty: A Psychometric Case Study" }
null
null
null
null
true
null
18195
null
Default
null
null
null
{ "abstract": " We apply the Acyclicity Theorem of Hess, Kerdziorek, Riehl, and Shipley\n(recently corrected by Garner, Kedziorek, and Riehl) to establishing the\nexistence of model category structure on categories of coalgebras over comonads\narising from simplicial adjunctions, under mild conditions on the adjunction\nand the associated comonad. We study three concrete examples of such\nadjunctions where the left adjoint is comonadic and show that in each case the\ncomponent of the derived counit of the comparison adjunction at any fibrant\nobject is an isomorphism, while the component of the derived unit at any\n1-connected object is a weak equivalence. To prove this last result, we explain\nhow to construct explicit fibrant replacements for 1-connected coalgebras in\nthe image of the canonical comparison functor from the Postnikov decompositions\nof their underlying simplicial sets. We also show in one case that the derived\nunit is precisely the Bousfield-Kan completion map.\n", "title": "The homotopy theory of coalgebras over simplicial comonads" }
null
null
null
null
true
null
18196
null
Default
null
null
null
{ "abstract": " This study aims to analyze the methodologies that can be used to estimate the\ntotal number of unemployed, as well as the unemployment rates for 28 regions of\nPortugal, designated as NUTS III regions, using model based approaches as\ncompared to the direct estimation methods currently employed by INE (National\nStatistical Institute of Portugal). Model based methods, often known as small\narea estimation methods (Rao, 2003), \"borrow strength\" from neighbouring\nregions and in doing so, aim to compensate for the small sample sizes often\nobserved in these areas. Consequently, it is generally accepted that model\nbased methods tend to produce estimates which have lesser variation. Other\nbenefit in employing model based methods is the possibility of including\nauxiliary information in the form of variables of interest and latent random\nstructures. This study focuses on the application of Bayesian hierarchical\nmodels to the Portuguese Labor Force Survey data from the 1st quarter of 2011\nto the 4th quarter of 2013. Three different data modeling strategies are\nconsidered and compared: Modeling of the total unemployed through Poisson,\nBinomial and Negative Binomial models; modeling of rates using a Beta model;\nand modeling of the three states of the labor market (employed, unemployed and\ninactive) by a Multinomial model. The implementation of these models is based\non the \\textit{Integrated Nested Laplace Approximation} (INLA) approach, except\nfor the Multinomial model which is implemented based on the method of Monte\nCarlo Markov Chain (MCMC). Finally, a comparison of the performance of these\nmodels, as well as the comparison of the results with those obtained by direct\nestimation methods at NUTS III level are given.\n", "title": "Spatio-temporal analysis of regional unemployment rates: A comparison of model based approaches" }
null
null
null
null
true
null
18197
null
Default
null
null
null
{ "abstract": " The expedient design of precision components in aerospace and other high-tech\nindustries requires simulations of physical phenomena often described by\npartial differential equations (PDEs) without exact solutions. Modern design\nproblems require simulations with a level of resolution difficult to achieve in\nreasonable amounts of time---even in effectively parallelized solvers. Though\nthe scale of the problem relative to available computing power is the greatest\nimpediment to accelerating these applications, significant performance gains\ncan be achieved through careful attention to the details of memory\ncommunication and access. The swept time-space decomposition rule reduces\ncommunication between sub-domains by exhausting the domain of influence before\ncommunicating boundary values. Here we present a GPU implementation of the\nswept rule, which modifies the algorithm for improved performance on this\nprocessing architecture by prioritizing use of private (shared) memory,\navoiding interblock communication, and overwriting unnecessary values. It shows\nsignificant improvement in the execution time of finite-difference solvers for\none-dimensional unsteady PDEs, producing speedups of 2--9$\\times$ for a range\nof problem sizes, respectively, compared with simple GPU versions and\n7--300$\\times$ compared with parallel CPU versions. However, for a more\nsophisticated one-dimensional system of equations discretized with a\nsecond-order finite-volume scheme, the swept rule performs 1.2--1.9$\\times$\nworse than a standard implementation for all problem sizes.\n", "title": "Accelerating solutions of one-dimensional unsteady PDEs with GPU-based swept time-space decomposition" }
null
null
null
null
true
null
18198
null
Default
null
null
null
{ "abstract": " This article presents the use of Answer Set Programming (ASP) to mine\nsequential patterns. ASP is a high-level declarative logic programming paradigm\nfor high level encoding combinatorial and optimization problem solving as well\nas knowledge representation and reasoning. Thus, ASP is a good candidate for\nimplementing pattern mining with background knowledge, which has been a data\nmining issue for a long time. We propose encodings of the classical sequential\npattern mining tasks within two representations of embeddings (fill-gaps vs\nskip-gaps) and for various kinds of patterns: frequent, constrained and\ncondensed. We compare the computational performance of these encodings with\neach other to get a good insight into the efficiency of ASP encodings. The\nresults show that the fill-gaps strategy is better on real problems due to\nlower memory consumption. Finally, compared to a constraint programming\napproach (CPSM), another declarative programming paradigm, our proposal showed\ncomparable performance.\n", "title": "Efficiency Analysis of ASP Encodings for Sequential Pattern Mining Tasks" }
null
null
null
null
true
null
18199
null
Default
null
null
null
{ "abstract": " In this paper, we define $A_{\\infty}$-Koszul duals for directed\n$A_{\\infty}$-categories in terms of twists in their $A_{\\infty}$-derived\ncategories. Then, we compute a concrete formula of $A_{\\infty}$-Koszul duals\nfor path algebras with directed $A_n$-type Gabriel quivers. To compute an\n$A_\\infty$-Koszul dual of such an algebra $A$, we construct a directed\nsubcategory of a Fukaya category which are $A_\\infty$-derived equivalent to the\ncategory of $A$-modules and compute Dehn twists as twists. The formula unveils\nall the ext groups of simple modules of the parh algebras and their higher\ncomposition structures.\n", "title": "Fukaya categories in Koszul duality theory" }
null
null
[ "Mathematics" ]
null
true
null
18200
null
Validated
null
null