| text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, length 1–5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " We construct examples of cohomogeneity one special Lagrangian submanifolds in\nthe cotangent bundle over the complex projective space, whose Calabi-Yau\nstructure was given by Stenzel. For each example, we describe the condition of\nspecial Lagrangian as an ordinary differential equation. Our method is based on\na moment map technique and the classification of cohomogeneity one actions on\nthe complex projective space classified by Takagi.\n",
"title": "Special Lagrangian submanifolds and cohomogeneity one actions on the complex projective space"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16501
| null |
Validated
| null | null |
null |
{
"abstract": " In regression analysis of multivariate data, it is tacitly assumed that\nresponse and predictor variables in each observed response-predictor pair\ncorrespond to the same entity or unit. In this paper, we consider the situation\nof \"permuted data\" in which this basic correspondence has been lost. Several\nrecent papers have considered this situation without further assumptions on the\nunderlying permutation. In applications, the latter is often to known to have\nadditional structure that can be leveraged. Specifically, we herein consider\nthe common scenario of \"sparsely permuted data\" in which only a small fraction\nof the data is affected by a mismatch between response and predictors. However,\nan adverse effect already observed for sparsely permuted data is that the least\nsquares estimator as well as other estimators not accounting for such partial\nmismatch are inconsistent. One approach studied in detail herein is to treat\npermuted data as outliers which motivates the use of robust regression\nformulations to estimate the regression parameter. The resulting estimate can\nsubsequently be used to recover the permutation. A notable benefit of the\nproposed approach is its computational simplicity given the general lack of\nprocedures for the above problem that are both statistically sound and\ncomputationally appealing.\n",
"title": "Linear Regression with Sparsely Permuted Data"
}
| null | null | null | null | true | null |
16502
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study matrix scaling and balancing, which are fundamental\nproblems in scientific computing, with a long line of work on them that dates\nback to the 1960s. We provide algorithms for both these problems that, ignoring\nlogarithmic factors involving the dimension of the input matrix and the size of\nits entries, both run in time $\\widetilde{O}\\left(m\\log \\kappa \\log^2\n(1/\\epsilon)\\right)$ where $\\epsilon$ is the amount of error we are willing to\ntolerate. Here, $\\kappa$ represents the ratio between the largest and the\nsmallest entries of the optimal scalings. This implies that our algorithms run\nin nearly-linear time whenever $\\kappa$ is quasi-polynomial, which includes, in\nparticular, the case of strictly positive matrices. We complement our results\nby providing a separate algorithm that uses an interior-point method and runs\nin time $\\widetilde{O}(m^{3/2} \\log (1/\\epsilon))$.\nIn order to establish these results, we develop a new second-order\noptimization framework that enables us to treat both problems in a unified and\nprincipled manner. This framework identifies a certain generalization of linear\nsystem solving that we can use to efficiently minimize a broad class of\nfunctions, which we call second-order robust. We then show that in the context\nof the specific functions capturing matrix scaling and balancing, we can\nleverage and generalize the work on Laplacian system solving to make the\nalgorithms obtained via this framework very efficient.\n",
"title": "Matrix Scaling and Balancing via Box Constrained Newton's Method and Interior Point Methods"
}
| null | null | null | null | true | null |
16503
| null |
Default
| null | null |
null |
{
"abstract": " We present the data profile and the evaluation plan of the second oriental\nlanguage recognition (OLR) challenge AP17-OLR. Compared to the event last year\n(AP16-OLR), the new challenge involves more languages and focuses more on short\nutterances. The data is offered by SpeechOcean and the NSFC M2ASR project. Two\ntypes of baselines are constructed to assist the participants, one is based on\nthe i-vector model and the other is based on various neural networks. We report\nthe baseline results evaluated with various metrics defined by the AP17-OLR\nevaluation plan and demonstrate that the combined database is a reasonable data\nresource for multilingual research. All the data is free for participants, and\nthe Kaldi recipes for the baselines have been published online.\n",
"title": "AP17-OLR Challenge: Data, Plan, and Baseline"
}
| null | null | null | null | true | null |
16504
| null |
Default
| null | null |
null |
{
"abstract": " The IllustrisTNG project is a new suite of cosmological\nmagneto-hydrodynamical simulations of galaxy formation performed with the Arepo\ncode and updated models for feedback physics. Here we introduce the first two\nsimulations of the series, TNG100 and TNG300, and quantify the stellar mass\ncontent of about 4000 massive galaxy groups and clusters ($10^{13} \\leq M_{\\rm\n200c}/M_{\\rm sun} \\leq 10^{15}$) at recent times ($z \\leq 1$). The richest\nclusters have half of their total stellar mass bound to satellite galaxies,\nwith the other half being associated with the central galaxy and the diffuse\nintra-cluster light. The exact ICL fraction depends sensitively on the\ndefinition of a central galaxy's mass and varies in our most massive clusters\nbetween 20 to 40% of the total stellar mass. Haloes of $5\\times 10^{14}M_{\\rm\nsun}$ and above have more diffuse stellar mass outside 100 kpc than within 100\nkpc, with power-law slopes of the radial mass density distribution as shallow\nas the dark matter's ( $-3.5 < \\alpha_{\\rm 3D} < -3$). Total halo mass is a\nvery good predictor of stellar mass, and vice versa: at $z=0$, the 3D stellar\nmass measured within 30 kpc scales as $\\propto (M_{\\rm 500c})^{0.49}$ with a\n$\\sim 0.12$ dex scatter. This is possibly too steep in comparison to the\navailable observational constraints, even though the abundance of TNG less\nmassive galaxies ($< 10^{11}M_{\\rm sun}$ in stars) is in good agreement with\nthe measured galaxy stellar mass functions at recent epochs. The 3D sizes of\nmassive galaxies fall too on a tight ($\\sim$0.16 dex scatter) power-law\nrelation with halo mass, with $r^{\\rm stars}_{\\rm 0.5} \\propto (M_{\\rm\n500c})^{0.53}$. Even more fundamentally, halo mass alone is a good predictor\nfor the whole stellar mass profiles beyond the inner few kpc, and we show how\non average these can be precisely recovered given a single mass measurement of\nthe galaxy or its halo.\n",
"title": "First results from the IllustrisTNG simulations: the stellar mass content of groups and clusters of galaxies"
}
| null | null | null | null | true | null |
16505
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we present a fast implementation of the Singular Value\nThresholding (SVT) algorithm for matrix completion. A rank-revealing randomized\nsingular value decomposition (R3SVD) algorithm is used to adaptively carry out\npartial singular value decomposition (SVD) to fast approximate the SVT operator\ngiven a desired, fixed precision. We extend the R3SVD algorithm to a recycling\nrank revealing randomized singular value decomposition (R4SVD) algorithm by\nreusing the left singular vectors obtained from the previous iteration as the\napproximate basis in the current iteration, where the computational cost for\npartial SVD at each SVT iteration is significantly reduced. A simulated\nannealing style cooling mechanism is employed to adaptively adjust the low-rank\napproximation precision threshold as SVT progresses. Our fast SVT\nimplementation is effective in both large and small matrices, which is\ndemonstrated in matrix completion applications including image recovery and\nmovie recommendation system.\n",
"title": "A Fast Implementation of Singular Value Thresholding Algorithm using Recycling Rank Revealing Randomized Singular Value Decomposition"
}
| null | null | null | null | true | null |
16506
| null |
Default
| null | null |
null |
{
"abstract": " Gallium arsenide (GaAs) is the widest used second generation semiconductor\nwith a direct band gap and increasingly used as nanofilms. However, the\nmagnetic properties of GaAs nanofilms have never been studied. Here we find by\ncomprehensive density functional theory calculations that GaAs nanofilms\ncleaved along the <111> and <100> directions become intrinsically metallic\nfilms with strong surface magnetism and magnetoelectric (ME) effect. The\nsurface magnetism and electrical conductivity are realized via a combined\neffect of transferring charge induced by spontaneous electric-polarization\nthrough the film thickness and spin-polarized surface states. The surface\nmagnetism of <111> nanofilms can be significantly and linearly tuned by\nvertically applied electric field, endowing the nanofilms unexpectedly high ME\ncoefficients, which are tens of times higher than those of ferromagnetic metals\nand transition metal oxides.\n",
"title": "Surface magnetism of gallium arsenide nanofilms"
}
| null | null | null | null | true | null |
16507
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we use detailed Monte Carlo simulations to demonstrate that\nliquid xenon (LXe) can be used to build a Cherenkov-based TOF-PET, with an\nintrinsic coincidence resolving time (CRT) in the vicinity of 10 ps. This\nextraordinary performance is due to three facts: a) the abundant emission of\nCherenkov photons by liquid xenon; b) the fact that LXe is transparent to\nCherenkov light; and c) the fact that the fastest photons in LXe have\nwavelengths higher than 300 nm, therefore making it possible to separate the\ndetection of scintillation and Cherenkov light. The CRT in a Cherenkov LXe\nTOF-PET detector is, therefore, dominated by the resolution (time jitter)\nintroduced by the photosensors and the electronics. However, we show that for\nsufficiently fast photosensors (e.g, an overall 40 ps jitter, which can be\nachieved by current micro-channel plate photomultipliers) the overall CRT\nvaries between 30 and 55 ps, depending of the detection efficiency. This is\nstill one order of magnitude better than commercial CRT devices and improves by\na factor 3 the best CRT obtained with small laboratory prototypes.\n",
"title": "Monte Carlo study of the Coincidence Resolving Time of a liquid xenon PET scanner, using Cherenkov radiation"
}
| null | null | null | null | true | null |
16508
| null |
Default
| null | null |
null |
{
"abstract": " Social networks contain implicit knowledge that can be used to infer\nhierarchical relations that are not explicitly present in the available data.\nInteraction patterns are typically affected by users' social relations. We\npresent an approach to inferring such information that applies a link-analysis\nranking algorithm at different levels of time granularity. In addition, a\nvoting scheme is employed for obtaining the hierarchical relations. The\napproach is evaluated on two datasets: the Enron email data set, where the goal\nis to infer manager-subordinate relationships, and the Co-author data set,\nwhere the goal is to infer PhD advisor-advisee relations. The experimental\nresults indicate that the proposed approach outperforms more traditional\napproaches to inferring hierarchical relations from social networks.\n",
"title": "Detecting Hierarchical Ties Using Link-Analysis Ranking at Different Levels of Time Granularity"
}
| null | null | null | null | true | null |
16509
| null |
Default
| null | null |
null |
{
"abstract": " Consolidation of synaptic changes in response to neural activity is thought\nto be fundamental for memory maintenance over a timescale of hours. In\nexperiments, synaptic consolidation can be induced by repeatedly stimulating\npresynaptic neurons. However, the effectiveness of such protocols depends\ncrucially on the repetition frequency of the stimulations and the mechanisms\nthat cause this complex dependence are unknown. Here we propose a simple\nmathematical model that allows us to systematically study the interaction\nbetween the stimulation protocol and synaptic consolidation. We show the\nexistence of optimal stimulation protocols for our model and, similarly to LTP\nexperiments, the repetition frequency of the stimulation plays a crucial role\nin achieving consolidation. Our results show that the complex dependence of LTP\non the stimulation frequency emerges naturally from a model which satisfies\nonly minimal bistability requirements.\n",
"title": "Optimal stimulation protocol in a bistable synaptic consolidation model"
}
| null | null | null | null | true | null |
16510
| null |
Default
| null | null |
null |
{
"abstract": " This paper investigates gradient recovery schemes for data defined on\ndiscretized manifolds. The proposed method, parametric polynomial preserving\nrecovery (PPPR), does not require the tangent spaces of the exact manifolds\nwhich have been assumed for some significant gradient recovery methods in the\nliterature. Another advantage is that superconvergence is guaranteed for PPPR\nwithout the symmetric condition which has been asked in the existing\ntechniques. There is also numerical evidence that the superconvergence by PPPR\nis high curvature stable, which distinguishes itself from the other methods. As\nan application, we show that its capability of constructing an asymptotically\nexact \\textit{a posteriori} error estimator. Several numerical examples on\ntwo-dimensional surfaces are presented to support the theoretical results and\nmake comparisons with state of the art methods.\n",
"title": "Parametric Polynomial Preserving Recovery on Manifolds"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16511
| null |
Validated
| null | null |
null |
{
"abstract": " Let $Y$ be the complement of a plane quartic curve $D$ defined over a number\nfield. Our main theorem confirms the Lang-Vojta conjecture for $Y$ when $D$ is\na generic smooth quartic curve, by showing that its integral points are\nconfined in a curve except for a finite number of exceptions. The required\nfiniteness will be obtained by reducing it to the Shafarevich conjecture for K3\nsurfaces. Some variants of our method confirm the same conjecture when $D$ is a\nreducible generic quartic curve which consists of four lines, two lines and a\nconic, or two conics.\n",
"title": "Integral points on the complement of plane quartics"
}
| null | null | null | null | true | null |
16512
| null |
Default
| null | null |
null |
{
"abstract": " Collective motion is an intriguing phenomenon, especially considering that it\narises from a set of simple rules governing local interactions between\nindividuals. In theoretical models, these rules are normally \\emph{assumed} to\ntake a particular form, possibly constrained by heuristic arguments. We propose\na new class of models, which describe the individuals as \\emph{agents}, capable\nof deciding for themselves how to act and learning from their experiences. The\nlocal interaction rules do not need to be postulated in this model, since they\n\\emph{emerge} from the learning process. We apply this ansatz to a concrete\nscenario involving marching locusts, in order to model the phenomenon of\ndensity-dependent alignment. We show that our learning agent-based model can\naccount for a Fokker-Planck equation that describes the collective motion and,\nmost notably, that the agents can learn the appropriate local interactions,\nrequiring no strong previous assumptions on their form. These results suggest\nthat learning agent-based models are a powerful tool for studying a broader\nclass of problems involving collective motion and animal agency in general.\n",
"title": "Modelling collective motion based on the principle of agency"
}
| null | null | null | null | true | null |
16513
| null |
Default
| null | null |
null |
{
"abstract": " We present the first CMB power spectra from numerical simulations of the\nglobal O(N) linear $\\sigma$-model with N = 2,3, which have global strings and\nmonopoles as topological defects. In order to compute the CMB power spectra we\ncompute the unequal time correlators (UETCs) of the energy-momentum tensor,\nshowing that they fall off at high wave number faster than naive estimates\nbased on the geometry of the defects, indicating non-trivial\n(anti-)correlations between the defects and the surrounding Goldstone boson\nfield. We obtain source functions for Einstein-Boltzmann solvers from the\nUETCs, using a recent method that improves the modelling at the radiation-\nmatter transition. We show that the interpolation function that mimics the\ntransition is similar to other defect models, but not identical, confirming the\nnon-universality of the interpolation function. The CMB power spectra for\nglobal strings and monopoles have the same overall shape as those obtained\nusing the non-linear $\\sigma$-model approximation, which is well captured by a\nlarge-N calculation. However, the amplitudes are larger than the large-N\ncalculation predict, and in the case of global strings much larger: a factor of\n20 at the peak. Finally we compare the CMB power spectra with the latest CMB\ndata to put limits on the allowed contribution to the temperature power\nspectrum at multipole $\\ell$ = 10 of 1.7% for global strings and 2.4% for\nglobal monopoles. These limits correspond to symmetry-breaking scales of\n2.9x1015 GeV (6.3x1014 GeV with the expected logarithmic scaling of the\neffective string tension between the simulation time and decoupling) and\n6.4x1015 GeV respectively. The bound on global strings is a significant one for\nthe ultra-light axion scenario with axion masses ma 10-28 eV. These upper\nlimits indicate that gravitational wave from global topological defects will\nnot be observable at the GW observatory LISA.\n",
"title": "Cosmic Microwave Background constraints for global strings and global monopoles"
}
| null | null | null | null | true | null |
16514
| null |
Default
| null | null |
null |
{
"abstract": " Community detection in networks is a very actual and important field of\nresearch with applications in many areas. But, given that the amount of\nprocessed data increases more and more, existing algorithms need to be adapted\nfor very large graphs. The objective of this project was to parallelise the\nSynchronised Louvain Method, a community detection algorithm developed by\nArnaud Browet, in order to improve its performances in terms of computation\ntime and thus be able to faster detect communities in very large graphs. To\nreach this goal, we used the API OpenMP to parallelise the algorithm and then\ncarried out performance tests. We studied the computation time and speedup of\nthe parallelised algorithm and were able to bring out some qualitative trends.\nWe obtained a great speedup, compared with the theoretical prediction of Amdahl\nlaw. To conclude, using the parallel implementation of the algorithm of Browet\non large graphs seems to give good results, both in terms of computation time\nand speedup. Further tests should be carried out in order to obtain more\nquantitative results.\n",
"title": "A parallel implementation of the Synchronised Louvain method"
}
| null | null | null | null | true | null |
16515
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, new index coding problems are studied, where each receiver has\nerroneous side information. Although side information is a crucial part of\nindex coding, the existence of erroneous side information has not yet been\nconsidered. We study an index code with receivers that have erroneous side\ninformation symbols in the error-free broadcast channel, which is called an\nindex code with side information errors (ICSIE). The encoding and decoding\nprocedures of the ICSIE are proposed, based on the syndrome decoding. Then, we\nderive the bounds on the optimal codelength of the proposed index code with\nerroneous side information. Furthermore, we introduce a special graph for the\nproposed index coding problem, called a $\\delta_s$-cycle whose properties are\nsimilar to those of the cycle in the conventional index coding problem.\nProperties of the ICSIE are also discussed in the $\\delta_s$-cycle and clique.\nFinally, the proposed ICSIE is generalized to an index code for the scenario\nhaving both additive channel errors and side information errors, called a\ngeneralized error correcting index code (GECIC).\n",
"title": "Index coding with erroneous side information"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16516
| null |
Validated
| null | null |
null |
{
"abstract": " We report on the result of a campaign to monitor 25 HATSouth candidates using\nthe K2 space telescope during Campaign 7 of the K2 mission. We discover\nHATS-36b (EPIC 215969174b), a hot Jupiter with a mass of 2.79$\\pm$0.40 M$_J$\nand a radius of 1.263$\\pm$0.045 R$_J$ which transits a solar-type G0V star\n(V=14.386) in a 4.1752d period. We also refine the properties of three\npreviously discovered HATSouth transiting planets (HATS-9b, HATS-11b, and\nHATS-12b) and search the K2 data for TTVs and additional transiting planets in\nthese systems. In addition we also report on a further three systems that\nremain as Jupiter-radius transiting exoplanet candidates. These candidates do\nnot have determined masses, however pass all of our other vetting observations.\nFinally we report on the 18 candidates which we are now able to classify as\neclipsing binary or blended eclipsing binary systems based on a combination of\nthe HATSouth data, the K2 data, and follow-up ground-based photometry and\nspectroscopy. These range in periods from 0.7 days to 16.7 days, and down to\n1.5 mmag in eclipse depths. Our results show the power of combining\nground-based imaging and spectroscopy with higher precision space-based\nphotometry, and serve as an illustration as to what will be possible when\ncombining ground-based observations with TESS data.\n",
"title": "HATS-36b and 24 other transiting/eclipsing systems from the HATSouth - K2 Campaign 7 program"
}
| null | null | null | null | true | null |
16517
| null |
Default
| null | null |
null |
{
"abstract": " Ultra-faint dwarf galaxies (UFDs) are the faintest known galaxies and due to\ntheir incredibly low surface brightness, it is difficult to find them beyond\nthe Local Group. We report a serendipitous discovery of an UFD, Fornax UFD1, in\nthe outskirts of NGC 1316, a giant galaxy in the Fornax cluster. The new galaxy\nis located at a projected radius of 55 kpc in the south-east of NGC 1316. This\nUFD is found as a small group of resolved stars in the Hubble Space Telescope\nimages of a halo field of NGC 1316, obtained as part of the Carnegie-Chicago\nHubble Program. Resolved stars in this galaxy are consistent with being mostly\nmetal-poor red giant branch (RGB) stars. Applying the tip of the RGB method to\nthe mean magnitude of the two brightest RGB stars, we estimate the distance to\nthis galaxy, 19.0 +- 1.3 Mpc. Fornax UFD1 is probably a member of the Fornax\ncluster. The color-magnitude diagram of these stars is matched by a 12 Gyr\nisochrone with low metallicity ([Fe/H] ~ -2.4). Total magnitude and effective\nradius of Fornax UFD1 are Mv ~ -7.6 +- 0.2 mag and r_eff = 146 +- 9 pc, which\nare similar to those of Virgo UFD1 that was discovered recently in the\nintracluster field of Virgo by Jang & Lee (2014).Fornax UFD1 is the most\ndistant known UFD that is confirmed by resolved stars. This indicates that UFDs\nare ubiquitous and that more UFDs remain to be discovered in the Fornax\ncluster.\n",
"title": "The Carnegie-Chicago Hubble Program: Discovery of the Most Distant Ultra-faint Dwarf Galaxy in the Local Universe"
}
| null | null | null | null | true | null |
16518
| null |
Default
| null | null |
null |
{
"abstract": " Enterprise Resource Planning (ERP) systems have been covered in both\nmainstream Information Technology (IT) periodicals, and in academic literature,\nas a result of extensive adoption by organisations in the last two decades.\nSome of the past studies have reported operational efficiency and other gains,\nwhile other studies have pointed out the challenges. ERP systems continue to\nevolve, moving into the cloud hosted sphere, and being implemented by\nrelatively smaller and regional companies. This project has carried out an\nexploratory study into the use of ERP systems, within Hawke's Bay New Zealand.\nERP systems make up a major investment and undertaking by those companies.\nTherefore, research and lessons learned in this area are very important. In\naddition to a significant initial literature review, this project has conducted\na survey on the local users' experience with Microsoft Dynamics NAV (a popular\nERP brand). As a result, this study will contribute new and relevant\ninformation to the literature on business information systems and to ERP\nsystems, in particular.\n",
"title": "An Exploratory Study on the Implementation and Adoption of ERP Solutions for Businesses"
}
| null | null | null | null | true | null |
16519
| null |
Default
| null | null |
null |
{
"abstract": " A general approach to selective inference is considered for hypothesis\ntesting of the null hypothesis represented as an arbitrary shaped region in the\nparameter space of multivariate normal model. This approach is useful for\nhierarchical clustering where confidence levels of clusters are calculated only\nfor those appeared in the dendrogram, thus subject to heavy selection bias. Our\ncomputation is based on a raw confidence measure, called bootstrap probability,\nwhich is easily obtained by counting how many times the same cluster appears in\nbootstrap replicates of the dendrogram. We adjust the bias of the bootstrap\nprobability by utilizing the scaling-law in terms of geometric quantities of\nthe region in the abstract parameter space, namely, signed distance and mean\ncurvature. Although this idea has been used for non-selective inference of\nhierarchical clustering, its selective inference version has not been discussed\nin the literature. Our bias-corrected $p$-values are asymptotically\nsecond-order accurate in the large sample theory of smooth boundary surfaces of\nregions, and they are also justified for nonsmooth surfaces such as polyhedral\ncones. The $p$-values are asymptotically equivalent to those of the iterated\nbootstrap but with less computation.\n",
"title": "Selective inference for the problem of regions via multiscale bootstrap"
}
| null | null | null | null | true | null |
16520
| null |
Default
| null | null |
null |
{
"abstract": " For future mmWave mobile communication systems the use of analog/hybrid\nbeamforming is envisioned be a key as- pect. The synthesis of beams is a key\ntechnology of enable the best possible operation during beamsearch, data\ntransmission and MU MIMO operation. The developed method for synthesizing beams\nis based on previous work in radar technology considering only phase array\nantennas. With this technique it is possible to generate a desired beam of any\nshape with the constraints of the desired target transceiver antenna frontend.\nIt is not constraint to a certain antenna array geometry, but can handle 1d, 2d\nand even 3d antenna array geometries like cylindric arrays. The numerical\nexamples show that the method can synthesize beams by considering a user\ndefined tradeoff between gain, transition width and passband ripples.\n",
"title": "Arbitrary Beam Synthesis of Different Hybrid Beamforming Systems"
}
| null | null | null | null | true | null |
16521
| null |
Default
| null | null |
null |
{
"abstract": " In all approaches to convergence where the concept of filter is taken as\nprimary, the usual motivation is the notion of neighborhood filter in a\ntopological space. However, these approaches often lead to spaces more general\nthan topological ones, thereby calling into question the need to use filters in\nthe first place. In this note we overturn the usual view and take as primary\nthe notion of convergence in the most general context of centered spaces. In\nthis setting, the notion of filterbase emerges from the concept of germ of a\nfunction, while the concept of filter emerges from an amnestic modification of\nthe subcategory of centered spaces admitting germs at each point.\n",
"title": "The emergence of the concept of filter in topological categories"
}
| null | null | null | null | true | null |
16522
| null |
Default
| null | null |
null |
{
"abstract": " Neural machine translation (NMT) has achieved notable success in recent\ntimes, however it is also widely recognized that this approach has limitations\nwith handling infrequent words and word pairs. This paper presents a novel\nmemory-augmented NMT (M-NMT) architecture, which stores knowledge about how\nwords (usually infrequently encountered ones) should be translated in a memory\nand then utilizes them to assist the neural model. We use this memory mechanism\nto combine the knowledge learned from a conventional statistical machine\ntranslation system and the rules learned by an NMT system, and also propose a\nsolution for out-of-vocabulary (OOV) words based on this framework. Our\nexperiments on two Chinese-English translation tasks demonstrated that the\nM-NMT architecture outperformed the NMT baseline by $9.0$ and $2.7$ BLEU points\non the two tasks, respectively. Additionally, we found this architecture\nresulted in a much more effective OOV treatment compared to competitive\nmethods.\n",
"title": "Memory-augmented Neural Machine Translation"
}
| null | null | null | null | true | null |
16523
| null |
Default
| null | null |
null |
{
"abstract": " The Internet of Things (IoT) enables numerous business opportunities in\nfields as diverse as e-health, smart cities, smart homes, among many others.\nThe IoT incorporates multiple long-range, short-range, and personal area\nwireless networks and technologies into the designs of IoT applications.\nLocalisation in indoor positioning systems plays an important role in the IoT.\nLocation Based IoT applications range from tracking objects and people in\nreal-time, assets management, agriculture, assisted monitoring technologies for\nhealthcare, and smart homes, to name a few. Radio Frequency based systems for\nindoor positioning such as Radio Frequency Identification (RFID) is a key\nenabler technology for the IoT due to its costeffective, high readability\nrates, automatic identification and, importantly, its energy efficiency\ncharacteristic. This paper reviews the state-of-the-art RFID technologies in\nIoT Smart Homes applications. It presents several comparable studies of RFID\nbased projects in smart homes and discusses the applications, techniques,\nalgorithms, and challenges of adopting RFID technologies in IoT smart home\nsystems.\n",
"title": "RFID Localisation For Internet Of Things Smart Homes: A Survey"
}
| null | null | null | null | true | null |
16524
| null |
Default
| null | null |
null |
{
"abstract": " We use positive S^1-equivariant symplectic homology to define a sequence of\nsymplectic capacities c_k for star-shaped domains in R^{2n}. These capacities\nare conjecturally equal to the Ekeland-Hofer capacities, but they satisfy\naxioms which allow them to be computed in many more examples. In particular, we\ngive combinatorial formulas for the capacities c_k of any \"convex toric domain\"\nor \"concave toric domain\". As an application, we determine optimal symplectic\nembeddings of a cube into any convex or concave toric domain. We also extend\nthe capacities c_k to functions of Liouville domains which are almost but not\nquite symplectic capacities.\n",
"title": "Symplectic capacities from positive S^1-equivariant symplectic homology"
}
| null | null | null | null | true | null |
16525
| null |
Default
| null | null |
null |
{
"abstract": " In global models/priors (for example, using wavelet frames), there is a well\nknown analysis vs synthesis dichotomy in the way signal/image priors are\nformulated. In patch-based image models/priors, this dichotomy is also present\nin the choice of how each patch is modeled. This paper shows that there is\nanother analysis vs synthesis dichotomy, in terms of how the whole image is\nrelated to the patches, and that all existing patch-based formulations that\nprovide a global image prior belong to the analysis category. We then propose a\nsynthesis formulation, where the image is explicitly modeled as being\nsynthesized by additively combining a collection of independent patches. We\nformally establish that these analysis and synthesis formulations are not\nequivalent in general and that both formulations are compatible with analysis\nand synthesis formulations at the patch level. Finally, we present an instance\nof the alternating direction method of multipliers (ADMM) that can be used to\nperform image denoising under the proposed synthesis formulation, showing its\ncomputational feasibility. Rather than showing the superiority of the synthesis\nor analysis formulations, the contributions of this paper is to establish the\nexistence of both alternatives, thus closing the corresponding gap in the field\nof patch-based image processing.\n",
"title": "Synthesis versus analysis in patch-based image priors"
}
| null | null | null | null | true | null |
16526
| null |
Default
| null | null |
null |
{
"abstract": " We prove new upper and lower bounds on the VC-dimension of deep neural\nnetworks with the ReLU activation function. These bounds are tight for almost\nthe entire range of parameters. Letting $W$ be the number of weights and $L$ be\nthe number of layers, we prove that the VC-dimension is $O(W L \\log(W))$, and\nprovide examples with VC-dimension $\\Omega( W L \\log(W/L) )$. This improves\nboth the previously known upper bounds and lower bounds. In terms of the number\n$U$ of non-linear units, we prove a tight bound $\\Theta(W U)$ on the\nVC-dimension. All of these bounds generalize to arbitrary piecewise linear\nactivation functions, and also hold for the pseudodimensions of these function\nclasses.\nCombined with previous results, this gives an intriguing range of\ndependencies of the VC-dimension on depth for networks with different\nnon-linearities: there is no dependence for piecewise-constant, linear\ndependence for piecewise-linear, and no more than quadratic dependence for\ngeneral piecewise-polynomial.\n",
"title": "Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks"
}
| null | null | null | null | true | null |
16527
| null |
Default
| null | null |
null |
{
"abstract": " We compare performances of well-known numerical time-stepping methods that\nare widely used to compute solutions of the doubly-infinite\nFermi-Pasta-Ulam-Tsingou (FPUT) lattice equations. The methods are benchmarked\naccording to (1) their accuracy in capturing the soliton peaks and (2) in\ncapturing highly-oscillatory parts of the solutions of the Toda lattice\nresulting from a variety of initial data. The numerical inverse scattering\ntransform method is used to compute a reference solution with high accuracy. We\nfind that benchmarking a numerical method on pure-soliton initial data can lead\none to overestimate the accuracy of the method.\n",
"title": "Benchmarking Numerical Methods for Lattice Equations with the Toda Lattice"
}
| null | null | null | null | true | null |
16528
| null |
Default
| null | null |
null |
{
"abstract": " Due to its wide field of view, cone-beam computed tomography (CBCT) is\nplagued by large amounts of scatter, where attenuated photons hit the detector,\nand corrupt the linear models used for reconstruction. Given that one can\ngenerate a good estimate of scatter however, then image accuracy can be\nretained. In the context of adaptive radiotherapy, one usually has a\nlow-scatter planning CT image of the same patient at an earlier time.\nCorrecting for scatter in the subsequent CBCT scan can either be self\nconsistent with the new measurements or exploit the prior image, and there are\nseveral recent methods that report high accuracy with the latter. In this\nstudy, we will look at the accuracy of various scatter estimation methods, how\nthey can be effectively incorporated into a statistical reconstruction\nalgorithm, along with introducing a method for matching off-line Monte-Carlo\n(MC) prior estimates to the new measurements. Conclusions we draw from testing\non a neck cancer patient are: statistical reconstruction that incorporates the\nscatter estimate significantly outperforms analytic and iterative methods with\npre-correction; and although the most accurate scatter estimates can be made\nfrom the MC on planning image, they only offer a slight advantage over the\nmeasurement based scatter kernel superposition (SKS) in reconstruction error.\n",
"title": "Can Planning Images Reduce Scatter in Follow-Up Cone-Beam CT?"
}
| null | null | null | null | true | null |
16529
| null |
Default
| null | null |
null |
{
"abstract": " While the Internet of things (IoT) promises to improve areas such as energy\nefficiency, health care, and transportation, it is highly vulnerable to\ncyberattacks. In particular, distributed denial-of-service (DDoS) attacks\noverload the bandwidth of a server. But many IoT devices form part of\ncyber-physical systems (CPS). Therefore, they can be used to launch \"physical\"\ndenial-of-service attacks (PDoS) in which IoT devices overflow the \"physical\nbandwidth\" of a CPS. In this paper, we quantify the population-based risk to a\ngroup of IoT devices targeted by malware for a PDoS attack. In order to model\nthe recruitment of bots, we develop a \"Poisson signaling game,\" a signaling\ngame with an unknown number of receivers, which have varying abilities to\ndetect deception. Then we use a version of this game to analyze two mechanisms\n(legal and economic) to deter botnet recruitment. Equilibrium results indicate\nthat 1) defenders can bound botnet activity, and 2) legislating a minimum level\nof security has only a limited effect, while incentivizing active defense can\ndecrease botnet activity arbitrarily. This work provides a quantitative\nfoundation for proactive PDoS defense.\n",
"title": "Proactive Defense Against Physical Denial of Service Attacks using Poisson Signaling Games"
}
| null | null | null | null | true | null |
16530
| null |
Default
| null | null |
null |
{
"abstract": " The weakly compact reflection principle $\\text{Refl}_{\\text{wc}}(\\kappa)$\nstates that $\\kappa$ is a weakly compact cardinal and every weakly compact\nsubset of $\\kappa$ has a weakly compact proper initial segment. The weakly\ncompact reflection principle at $\\kappa$ implies that $\\kappa$ is an\n$\\omega$-weakly compact cardinal. In this article we show that the weakly\ncompact reflection principle does not imply that $\\kappa$ is\n$(\\omega+1)$-weakly compact. Moreover, we show that if the weakly compact\nreflection principle holds at $\\kappa$ then there is a forcing extension\npreserving this in which $\\kappa$ is the least $\\omega$-weakly compact\ncardinal. Along the way we generalize the well-known result which states that\nif $\\kappa$ is a regular cardinal then in any forcing extension by\n$\\kappa$-c.c. forcing the nonstationary ideal equals the ideal generated by the\nground model nonstationary ideal; our generalization states that if $\\kappa$ is\na weakly compact cardinal then after forcing with a `typical' Easton-support\niteration of length $\\kappa$ the weakly compact ideal equals the ideal\ngenerated by the ground model weakly compact ideal.\n",
"title": "The weakly compact reflection principle need not imply a high order of weak compactness"
}
| null | null | null | null | true | null |
16531
| null |
Default
| null | null |
null |
{
"abstract": " Techniques such as ensembling and distillation promise model quality\nimprovements when paired with almost any base model. However, due to increased\ntest-time cost (for ensembles) and increased complexity of the training\npipeline (for distillation), these techniques are challenging to use in\nindustrial settings. In this paper we explore a variant of distillation which\nis relatively straightforward to use as it does not require a complicated\nmulti-stage setup or many new hyperparameters. Our first claim is that online\ndistillation enables us to use extra parallelism to fit very large datasets\nabout twice as fast. Crucially, we can still speed up training even after we\nhave already reached the point at which additional parallelism provides no\nbenefit for synchronous or asynchronous stochastic gradient descent. Two neural\nnetworks trained on disjoint subsets of the data can share knowledge by\nencouraging each model to agree with the predictions the other model would have\nmade. These predictions can come from a stale version of the other model so\nthey can be safely computed using weights that only rarely get transmitted. Our\nsecond claim is that online distillation is a cost-effective way to make the\nexact predictions of a model dramatically more reproducible. We support our\nclaims using experiments on the Criteo Display Ad Challenge dataset, ImageNet,\nand the largest to-date dataset used for neural language modeling, containing\n$6\\times 10^{11}$ tokens and based on the Common Crawl repository of web data.\n",
"title": "Large scale distributed neural network training through online distillation"
}
| null | null | null | null | true | null |
16532
| null |
Default
| null | null |
null |
{
"abstract": " Finding an easy-to-build coils set has been a critical issue for stellarator\ndesign for decades. Conventional approaches assume a toroidal \"winding\"\nsurface. We'll investigate if the existence of winding surface unnecessarily\nconstrains the optimization, and a new method to design coils for stellarators\nis presented. Each discrete coil is represented as an arbitrary, closed,\none-dimensional curve embedded in three-dimensional space. A target function to\nbe minimized that covers both physical requirements and engineering constraints\nis constructed. The derivatives of the target function are calculated\nanalytically. A numerical code, named FOCUS, has been developed. Applications\nto a simple configuration, the W7-X, and LHD plasmas are presented.\n",
"title": "New method to design stellarator coils without the winding surface"
}
| null | null |
[
"Physics"
] | null | true | null |
16533
| null |
Validated
| null | null |
null |
{
"abstract": " We propose a new cellular network model that captures both deterministic and\nrandom aspects of base station deployments. Namely, the base station locations\nare modeled as the superposition of two independent stationary point processes:\na random shifted grid with intensity $\\lambda_g$ and a Poisson point process\n(PPP) with intensity $\\lambda_p$. Grid and PPP deployments are special cases\nwith $\\lambda_p \\to 0$ and $\\lambda_g \\to 0$, with actual deployments in\nbetween these two extremes, as we demonstrate with deployment data. Assuming\nthat each user is associated with the base station that provides the strongest\naverage received signal power, we obtain the probability that a typical user is\nassociated with either a grid or PPP base station. Assuming Rayleigh fading\nchannels, we derive the expression for the coverage probability of the typical\nuser, resulting in the following observations. First, the association and the\ncoverage probability of the typical user are fully characterized as functions\nof intensity ratio $\\rho_\\lambda = \\lambda_p/\\lambda_g$. Second, the user\nassociation is biased towards the base stations located on a grid. Finally, the\nproposed model predicts the coverage probability of the actual deployment with\ngreat accuracy.\n",
"title": "An Analytical Framework for Modeling a Spatially Repulsive Cellular Network"
}
| null | null | null | null | true | null |
16534
| null |
Default
| null | null |
null |
{
"abstract": " Aims: In this paper we focus on the occurrence of glycolaldehyde (HCOCH2OH)\nin young solar analogs by performing the first homogeneous and unbiased study\nof this molecule in the Class 0 protostars of the nearby Perseus star forming\nregion. Methods: We obtained sub-arcsec angular resolution maps at 1.3mm and\n1.4mm of glycolaldehyde emission lines using the IRAM Plateau de Bure (PdB)\ninterferometer in the framework of the CALYPSO IRAM large program. Results:\nGlycolaldehyde has been detected towards 3 Class 0 and 1 Class I protostars out\nof the 13 continuum sources targeted in Perseus: NGC1333-IRAS2A1,\nNGC1333-IRAS4A2, NGC1333-IRAS4B1, and SVS13-A. The NGC1333 star forming region\nlooks particularly glycolaldehyde rich, with a rate of occurrence up to 60%.\nThe glycolaldehyde spatial distribution overlaps with the continuum one,\ntracing the inner 100 au around the protostar. A large number of lines (up to\n18), with upper-level energies Eu from 37 K up to 375 K has been detected. We\nderived column densities > 10^15 cm^-2 and rotational temperatures Trot between\n115 K and 236 K, imaging for the first time hot-corinos around NGC1333-IRAS4B1\nand SVS13-A. Conclusions: In multiple systems glycolaldehyde emission is\ndetected only in one component. The case of the SVS13-A+B and IRAS4-A1+A2\nsystems support that the detection of glycolaldehyde (at least in the present\nPerseus sample) indicates older protostars (i.e. SVS13-A and IRAS4-A2), evolved\nenough to develop the hot-corino region (i.e. 100 K in the inner 100 au).\nHowever, only two systems do not allow us to firmly conclude whether the\nprimary factor leading to the detection of glycolaldehyde emission is the\nenvironments hosting the protostars, evolution (e.g. low value of Lsubmm/Lint),\nor accretion luminosity (high Lint).\n",
"title": "Glycolaldehyde in Perseus young solar analogs"
}
| null | null | null | null | true | null |
16535
| null |
Default
| null | null |
null |
{
"abstract": " We consider generation and comprehension of natural language referring\nexpression for objects in an image. Unlike generic \"image captioning\" which\nlacks natural standard evaluation criteria, quality of a referring expression\nmay be measured by the receiver's ability to correctly infer which object is\nbeing described. Following this intuition, we propose two approaches to utilize\nmodels trained for comprehension task to generate better expressions. First, we\nuse a comprehension module trained on human-generated expressions, as a\n\"critic\" of referring expression generator. The comprehension module serves as\na differentiable proxy of human evaluation, providing training signal to the\ngeneration module. Second, we use the comprehension module in a\ngenerate-and-rerank pipeline, which chooses from candidate expressions\ngenerated by a model according to their performance on the comprehension task.\nWe show that both approaches lead to improved referring expression generation\non multiple benchmark datasets.\n",
"title": "Comprehension-guided referring expressions"
}
| null | null | null | null | true | null |
16536
| null |
Default
| null | null |
null |
{
"abstract": " The human brain is one of the most complex living structures in the known\nUniverse. It consists of billions of neurons and synapses. Due to its intrinsic\ncomplexity, it can be a formidable task to accurately depict brain's structure\nand functionality. In the past, numerous studies have been conducted on\nmodeling brain disease, structure, and functionality. Some of these studies\nhave employed Agent-based approaches including multiagent-based simulation\nmodels as well as brain complex networks. While these models have all been\ndeveloped using agent-based computing, however, to our best knowledge, none of\nthem have employed the use of Agent-Oriented Software Engineering (AOSE)\nmethodologies in developing the brain or disease model. This is a problem\nbecause without due process, developed models can miss out on important\nrequirements. AOSE has the unique capability of merging concepts from\nmultiagent systems, agent-based modeling, artificial intelligence, besides\nconcepts from distributed systems. AOSE involves the various tested software\nengineering principles in various phases of the model development ranging from\nanalysis, design, implementation, and testing phases. In this paper, we employ\nthe use of three different AOSE methodologies for modeling the Multiple\nSclerosis brain disease namely GAIA, TROPOS, and MASE. After developing the\nmodels, we further employ the use of Exploratory Agent-based Modeling (EABM) to\ndevelop an actual model replicating previous results as a proof of concept. The\nkey objective of this study is to demonstrate and explore the viability and\neffectiveness of AOSE methodologies in the development of complex brain\nstructure and cognitive process models. Our key finding include demonstration\nthat AOSE methodologies can be considerably helpful in modeling various living\ncomplex systems, in general, and the human brain, in particular.\n",
"title": "Modeling the Multiple Sclerosis Brain Disease Using Agents: What Works and What Doesn't?"
}
| null | null | null | null | true | null |
16537
| null |
Default
| null | null |
null |
{
"abstract": " Feature aided tracking can often yield improved tracking performance over the\nstandard multiple target tracking (MTT) algorithms with only kinematic\nmeasurements. However, in many applications, the feature signal of the targets\nconsists of sparse Fourier-domain signals. It changes quickly and nonlinearly\nin the time domain, and the feature measurements are corrupted by missed\ndetections and mis-associations. These two factors make it hard to extract the\nfeature information to be used in MTT. In this paper, we develop a\nfeature-aided nearest neighbour joint probabilistic data association filter\n(NN-JPDAF) for joint MTT and feature extraction in dense target environments.\nTo estimate the rapidly varying feature signal from incomplete and corrupted\nmeasurements, we use the atomic norm constraint to formulate the sparsity of\nfeature signal and use the $\\ell_1$-norm to formulate the sparsity of the\ncorruption induced by mis-associations. Based on the sparse representation, the\nfeature signal are estimated by solving a semidefinite program (SDP) which is\nconvex. We also provide an iterative method for solving this SDP via the\nalternating direction method of multipliers (ADMM) where each iteration\ninvolves closed-form computation. With the estimated feature signal,\nre-filtering is performed to estimate the kinematic states of the targets,\nwhere the association makes use of both kinematic and feature information.\nSimulation results are presented to illustrate the performance of the proposed\nalgorithm in a radar application.\n",
"title": "Improved NN-JPDAF for Joint Multiple Target Tracking and Feature Extraction"
}
| null | null | null | null | true | null |
16538
| null |
Default
| null | null |
null |
{
"abstract": " The heavyweight stellar initial mass function (IMF) observed in the cores of\nmassive early-type galaxies (ETGs) has been linked to formation of their cores\nin an initial swiftly-quenched rapid starburst. However, the outskirts of ETGs\nare thought to be assembled via the slow accumulation of smaller systems in\nwhich the star formation is less extreme; this suggests the form of the IMF\nshould exhibit a radial trend in ETGs. Here we report radial stellar population\ngradients out to the half-light radii of a sample of eight nearby ETGs.\nSpatially resolved spectroscopy at 0.8-1.35{\\mu}m from the VLT's KMOS\ninstrument was used to measure radial trends in the strengths of a variety of\nIMF-sensitive absorption features (including some which are previously\nunexplored). We find weak or no radial variation in some of these which, given\na radial IMF trend, ought to vary measurably, e.g. for the Wing-Ford band we\nmeasure a gradient of +0.06$\\pm$0.04 per decade in radius.\nUsing stellar population models to fit stacked and individual spectra, we\ninfer that the measured radial changes in absorption feature strengths are\nprimarily accounted for by abundance gradients which are fairly consistent\nacross our sample (e.g. we derive an average [Na/H] gradient of\n-0.53$\\pm$0.07). The inferred contribution of dwarf stars to the total light\ntypically corresponds to a bottom heavy IMF, but we find no evidence for radial\nIMF variations in the majority of our sample galaxies.\n",
"title": "KINETyS: Constraining spatial variations of the stellar initial mass function in early-type galaxies"
}
| null | null |
[
"Physics"
] | null | true | null |
16539
| null |
Validated
| null | null |
null |
{
"abstract": " Deep learning has enabled traditional reinforcement learning methods to deal\nwith high-dimensional problems. However, one of the disadvantages of deep\nreinforcement learning methods is the limited exploration capacity of learning\nagents. In this paper, we introduce an approach that integrates human\nstrategies to increase the exploration capacity of multiple deep reinforcement\nlearning agents. We also report the development of our own multi-agent\nenvironment called Multiple Tank Defence to simulate the proposed approach. The\nresults show the significant performance improvement of multiple agents that\nhave learned cooperatively with human strategies. This implies that there is a\ncritical need for human intellect teamed with machines to solve complex\nproblems. In addition, the success of this simulation indicates that our\ndeveloped multi-agent environment can be used as a testbed platform to develop\nand validate other multi-agent control algorithms. Details of the environment\nimplementation can be referred to\nthis http URL\n",
"title": "Multi-Agent Deep Reinforcement Learning with Human Strategies"
}
| null | null |
[
"Statistics"
] | null | true | null |
16540
| null |
Validated
| null | null |
null |
{
"abstract": " (Abridged) The formation of large-scale (hundreds to few thousands of AU)\nbipolar structures in the circumstellar envelopes (CSEs) of post-Asymptotic\nGiant Branch (post-AGB) stars is poorly understood. The shape of these\nstructures, traced by emission from fast molecular outflows, suggests that the\ndynamics at the innermost regions of these CSEs does not depend only on the\nenergy of the radiation field of the central star. Deep into the Water\nFountains is an observational project based on the results of programs carried\nout with three telescope facilities: The Karl G. Jansky Very Large Array\n(JVLA), The Australia Telescope Compact Array (ATCA), and the Very Large\nTelescope (SINFONI-VLT). Here we report the results of the observations towards\nthe WF nebula IRAS 18043$-$2116: Detection of radio continuum emission in the\nfrequency range 1.5GHz - 8.0GHz; H$_{2}$O maser spectral features and radio\ncontinuum emission detected at 22GHz, and H$_{2}$ ro-vibrational emission lines\ndetected at the near infrared. The high-velocity H$_{2}$O maser spectral\nfeatures, and the shock-excited H$_{2}$ emission detected could be produced in\nmolecular layers which are swept up as a consequence of the propagation of a\njet-driven wind. Using the derived H$_{2}$ column density, we estimated a\nmolecular mass-loss rate of the order of $10^{-9}$M$_{\\odot}$yr$^{-1}$. On the\nother hand, if the radio continuum flux detected is generated as a consequence\nof the propagation of a thermal radio jet, the mass-loss rate associated to the\noutflowing ionized material is of the order of 10$^{-5}$M$_{\\odot}$yr$^{-1}$.\nThe presence of a rotating disk could be a plausible explanation for the\nmass-loss rates estimated.\n",
"title": "Deep into the Water Fountains: The case of IRAS 18043-2116"
}
| null | null | null | null | true | null |
16541
| null |
Default
| null | null |
null |
{
"abstract": " Computational quantum technologies are entering a new phase in which noisy\nintermediate-scale quantum computers are available, but are still too small to\nbenefit from active error correction. Even with a finite coherence budget to\ninvest in quantum information processing, noisy devices with about 50 qubits\nare expected to experimentally demonstrate quantum supremacy in the next few\nyears. Defined in terms of artificial tasks, current proposals for quantum\nsupremacy, even if successful, will not help to provide solutions to practical\nproblems. Instead, we believe that future users of quantum computers are\ninterested in actual applications and that noisy quantum devices may still\nprovide value by approximately solving hard combinatorial problems via hybrid\nclassical-quantum algorithms. To lower bound the size of quantum computers with\npractical utility, we perform realistic simulations of the Quantum Approximate\nOptimization Algorithm and conclude that quantum speedup will not be\nattainable, at least for a representative combinatorial problem, until several\nhundreds of qubits are available.\n",
"title": "QAOA for Max-Cut requires hundreds of qubits for quantum speed-up"
}
| null | null | null | null | true | null |
16542
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, a mixed-effect modeling scheme is proposed to construct a\npredictor for different features of cancer tumor. For this purpose, a set of\nfeatures is extracted from two groups of patients with the same type of cancer\nbut with two medical outcome: 1) survived and 2) passed away. The goal is to\nbuild different models for the two groups, where in each group,\npatient-specified behavior of individuals can be characterized. These models\nare then used as predictors to forecast future state of patients with a given\nhistory or initial state. To this end, a leave-on-out cross validation method\nis used to measure the prediction accuracy of each patient-specified model.\nExperiments show that compared to fixed-effect modeling (regression),\nmixed-effect modeling has a superior performance on some of the extracted\nfeatures and similar or worse performance on the others.\n",
"title": "Mixed-Effect Modeling for Longitudinal Prediction of Cancer Tumor"
}
| null | null |
[
"Statistics"
] | null | true | null |
16543
| null |
Validated
| null | null |
null |
{
"abstract": " The distributed single-source shortest paths problem is one of the most\nfundamental and central problems in the message-passing distributed computing.\nClassical Bellman-Ford algorithm solves it in $O(n)$ time, where $n$ is the\nnumber of vertices in the input graph $G$. Peleg and Rubinovich (FOCS'99)\nshowed a lower bound of $\\tilde{\\Omega}(D + \\sqrt{n})$ for this problem, where\n$D$ is the hop-diameter of $G$.\nWhether or not this problem can be solved in $o(n)$ time when $D$ is\nrelatively small is a major notorious open question. Despite intensive research\n\\cite{LP13,N14,HKN15,EN16,BKKL16} that yielded near-optimal algorithms for the\napproximate variant of this problem, no progress was reported for the original\nproblem.\nIn this paper we answer this question in the affirmative. We devise an\nalgorithm that requires $O((n \\log n)^{5/6})$ time, for $D = O(\\sqrt{n \\log\nn})$, and $O(D^{1/3} \\cdot (n \\log n)^{2/3})$ time, for larger $D$. This\nrunning time is sublinear in $n$ in almost the entire range of parameters,\nspecifically, for $D = o(n/\\log^2 n)$. For the all-pairs shortest paths\nproblem, our algorithm requires $O(n^{5/3} \\log^{2/3} n)$ time, regardless of\nthe value of $D$.\nWe also devise the first algorithm with non-trivial complexity guarantees for\ncomputing exact shortest paths in the multipass semi-streaming model of\ncomputation.\nFrom the technical viewpoint, our algorithm computes a hopset $G\"$ of a\nskeleton graph $G'$ of $G$ without first computing $G'$ itself. We then conduct\na Bellman-Ford exploration in $G' \\cup G\"$, while computing the required edges\nof $G'$ on the fly. As a result, our algorithm computes exactly those edges of\n$G'$ that it really needs, rather than computing approximately the entire $G'$.\n",
"title": "Distributed Exact Shortest Paths in Sublinear Time"
}
| null | null | null | null | true | null |
16544
| null |
Default
| null | null |
null |
{
"abstract": " The formation of vortices is usually considered to be the main mechanism of\nangular momentum disposal in superfluids. Recently, it was predicted that a\nsuperfluid can acquire angular momentum via an alternative, microscopic route\n-- namely, through interaction with rotating impurities, forming so-called\n`angulon quasiparticles' [Phys. Rev. Lett. 114, 203001 (2015)]. The angulon\ninstabilities correspond to transfer of a small number of angular momentum\nquanta from the impurity to the superfluid, as opposed to vortex instabilities,\nwhere angular momentum is quantized in units of $\\hbar$ per atom. Furthermore,\nsince conventional impurities (such as molecules) represent three-dimensional\n(3D) rotors, the angular momentum transferred is intrinsically 3D as well, as\nopposed to a merely planar rotation which is inherent to vortices. Herein we\nshow that the angulon theory can explain the anomalous broadening of the\nspectroscopic lines observed for CH$_3$ and NH$_3$ molecules in superfluid\nhelium nanodroplets, thereby providing a fingerprint of the emerging angulon\ninstabilities in experiment.\n",
"title": "Fingerprints of angulon instabilities in the spectra of matrix-isolated molecules"
}
| null | null | null | null | true | null |
16545
| null |
Default
| null | null |
null |
{
"abstract": " Security-Constrained Unit Commitment (SCUC) is one of the most significant\nproblems in secure and optimal operation of modern electricity markets. New\nsources of uncertainties such as wind speed volatility and price-sensitive\nloads impose additional challenges to this large-scale problem. This paper\nproposes a new Stochastic SCUC using point estimation method to model the power\nsystem uncertainties more efficiently. Conventional scenario-based Stochastic\nSCUC approaches consider the Mont Carlo method; which presents additional\ncomputational burdens to this large-scale problem. In this paper we use point\nestimation instead of scenario generating to detract computational burdens of\nthe problem. The proposed approach is implemented on a six-bus system and on a\nmodified IEEE 118-bus system with 94 uncertain variables. The efficacy of\nproposed algorithm is confirmed, especially in the last case with notable\nreduction in computational burden without considerable loss of precision.\n",
"title": "Considering Multiple Uncertainties in Stochastic Security-Constrained Unit Commitment Using Point Estimation Method"
}
| null | null | null | null | true | null |
16546
| null |
Default
| null | null |
null |
{
"abstract": " We construct constant mean curvature surfaces in euclidean space by gluing n\nhalf Delaunay surfaces to a non-degenerate minimal n-noid, using the DPW\nmethod.\n",
"title": "Gluing Delaunay ends to minimal n-noids using the DPW method"
}
| null | null | null | null | true | null |
16547
| null |
Default
| null | null |
null |
{
"abstract": " We consider a hyperkähler reduction and describe it via frame bundles.\nTracing the connection through the various reductions, we recover the results\nof Gocho and Nakajima. In addition, we show that the fibers of such a reduction\nare necessarily totally geodesic. As an independent result, we describe\nO'Neill's submersion tensors on principal bundles.\n",
"title": "Some Remarks on the Hyperkähler Reduction"
}
| null | null | null | null | true | null |
16548
| null |
Default
| null | null |
null |
{
"abstract": " We introduce a novel approach to Maximum A Posteriori inference based on\ndiscrete graphical models. By utilizing local Wasserstein distances for\ncoupling assignment measures across edges of the underlying graph, a given\ndiscrete objective function is smoothly approximated and restricted to the\nassignment manifold. A corresponding multiplicative update scheme combines in a\nsingle process (i) geometric integration of the resulting Riemannian gradient\nflow and (ii) rounding to integral solutions that represent valid labelings.\nThroughout this process, local marginalization constraints known from the\nestablished LP relaxation are satisfied, whereas the smooth geometric setting\nresults in rapidly converging iterations that can be carried out in parallel\nfor every edge.\n",
"title": "Image Labeling Based on Graphical Models Using Wasserstein Messages and Geometric Assignment"
}
| null | null | null | null | true | null |
16549
| null |
Default
| null | null |
null |
{
"abstract": " We prove an inverse theorem for the Gowers $U^2$-norm for maps $G\\to\\mathcal\nM$ from an countable, discrete, amenable group $G$ into a von Neumann algebra\n$\\mathcal M$ equipped with an ultraweakly lower semi-continuous, unitarily\ninvariant (semi-)norm $\\Vert\\cdot\\Vert$. We use this result to prove a\nstability result for unitary-valued $\\varepsilon$-representations $G\\to\\mathcal\nU(\\mathcal M)$ with respect to $\\Vert\\cdot \\Vert$.\n",
"title": "Operator algebraic approach to inverse and stability theorems for amenable groups"
}
| null | null | null | null | true | null |
16550
| null |
Default
| null | null |
null |
{
"abstract": " We extensively explore networks of weakly unbalanced, leaky\nintegrate-and-fire (LIF) neurons for different coupling strength, connectivity,\nand by varying the degree of refractoriness, as well as the delay in the spike\ntransmission. We find that the neural network does not only exhibit a\nmicroscopic (single-neuron) stochastic-like evolution, but also a collective\nirregular dynamics (CID). Our analysis is based on the computation of a\nsuitable order parameter, typically used to characterize synchronization\nphenomena and on a detailed scaling analysis (i.e. simulations of different\nnetwork sizes). As a result, we can conclude that CID is a true thermodynamic\nphase, intrinsically different from the standard asynchronous regime.\n",
"title": "Collective irregular dynamics in balanced networks of leaky integrate-and-fire neurons"
}
| null | null | null | null | true | null |
16551
| null |
Default
| null | null |
null |
{
"abstract": " Human movement is used as an indicator of human activity in modern society.\nThe velocity of moving humans is calculated based on position information\nobtained from mobile phones. The level of human activity, as recorded by\nvelocity, varies throughout the day. Therefore, velocity can be used to\nidentify the intervals of highest and lowest activity. More specifically, we\nobtained mobile-phone GPS data from the people around Shibuya station in Tokyo,\nwhich has the highest population density in Japan. From these data, we observe\nthat velocity tends to consistently increase with the changes in social\nactivities. For example, during the earthquake in Kumamoto Prefecture in April\n2016, the activity on that day was much lower than usual. In this research, we\nfocus on natural disasters such as earthquakes owing to their significant\neffects on human activities in developed countries like Japan. In the event of\na natural disaster in another developed country, considering the change in\nhuman behavior at the time of the disaster (e.g., the 2016 Kumamoto Great\nEarthquake) from the viewpoint of velocity allows us to improve our planning\nfor mitigation measures. Thus, we analyze the changes in human activity through\nvelocity calculations in Shibuya, Tokyo, and compare times of disasters with\nnormal times.\n",
"title": "Measurement of human activity using velocity GPS data obtained from mobile phones"
}
| null | null | null | null | true | null |
16552
| null |
Default
| null | null |
null |
{
"abstract": " Real-time instrument tracking is a crucial requirement for various\ncomputer-assisted interventions. In order to overcome problems such as specular\nreflections and motion blur, we propose a novel method that takes advantage of\nthe interdependency between localization and segmentation of the surgical tool.\nIn particular, we reformulate the 2D instrument pose estimation as heatmap\nregression and thereby enable a concurrent, robust and near real-time\nregression of both tasks via deep learning. As demonstrated by our experimental\nresults, this modeling leads to a significantly improved performance than\ndirectly regressing the tool position and allows our method to outperform the\nstate of the art on a Retinal Microsurgery benchmark and the MICCAI EndoVis\nChallenge 2015.\n",
"title": "Concurrent Segmentation and Localization for Tracking of Surgical Instruments"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16553
| null |
Validated
| null | null |
null |
{
"abstract": " Blockchain systems are designed to produce blocks at a constant average rate.\nThe most popular systems currently employ a Proof of Work (PoW) algorithm as a\nmeans of creating these blocks. Bitcoin produces, on average, one block every\n10 minutes. An unfortunate limitation of all deployed PoW blockchain systems is\nthat the time between blocks has high variance. For example, 5% of the time,\nBitcoin's inter-block time is at least 40 minutes. This variance impedes the\nconsistent flow of validated transactions through the system. We propose an\nalternative process for PoW-based block discovery that results in an\ninter-block time with significantly lower variance. Our algorithm, called\nBobtail, generalizes the current algorithm by comparing the mean of the k\nlowest order statistics to a target. We show that the variance of inter-block\ntimes decreases as k increases. If our approach were applied to Bitcoin, about\n80% of blocks would be found within 7 to 12 minutes, and nearly every block\nwould be found within 5 to 18 minutes; the average inter-block time would\nremain at 10 minutes. Further, we show that low-variance mining significantly\nthwarts doublespend and selfish mining attacks. For Bitcoin and Ethereum\ncurrently (k=1), an attacker with 40% of the mining power will succeed with 30%\nprobability when the merchant sets up an embargo of 8 blocks; however, when\nk>=20, the probability of success falls to less than 1%. Similarly, for Bitcoin\nand Ethereum currently, a selfish miner with 40% of the mining power will claim\nabout 66% of blocks; however, when k>=5, the same miner will find that selfish\nmining is less successful than honest mining. The cost of our approach is a\nlarger block header.\n",
"title": "Bobtail: A Proof-of-Work Target that Minimizes Blockchain Mining Variance (Draft)"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16554
| null |
Validated
| null | null |
null |
{
"abstract": " We theoretically study a one-dimensional (1D) mutually incommensurate\nbichromatic lattice system which has been implemented in ultracold atoms to\nstudy quantum localization. It has been universally believed that the\ntight-binding version of this bichromatic incommensurate system is represented\nby the well-known Aubry-Andre model. Here we establish that this belief is\nincorrect and that the Aubry-Andre model description, which applies only in the\nextreme tight-binding limit of very deep primary lattice potential, generically\nbreaks down near the localization transition due to the unavoidable appearance\nof single-particle mobility edges (SPME). In fact, we show that the 1D\nbichromatic incommensurate potential system manifests generic mobility edges\nwhich disappear in the tight-binding limit, leading to the well-studied\nAubry-Andre physics. We carry out an extensive study of the localization\nproperties of the 1D incommensurate optical lattice without making any\ntight-binding approximation. We find that, for the full lattice system, an\nintermediate phase between completely localized and completely delocalized\nregions appears due to the existence of the SPME, making the system\nqualitatively distinct from the Aubry-Andre prediction. Using the Wegner flow\napproach, we show that the SPME in the real lattice system can be attributed to\nsignificant corrections of higher-order harmonics in the lattice potential\nwhich are absent in the strict tight-binding limit. We calculate the dynamical\nconsequences of the intermediate phase in detail to guide future experimental\ninvestigations for the observation of 1D SPME and the associated intermediate\nphase. We consider effects of interaction numerically, and conjecture the\nstability of SPME to weak interaction effects, thus leading to the exciting\npossibility of an experimentally viable nonergodic extended phase in\ninteracting 1D optical lattices.\n",
"title": "Mobility Edges in 1D Bichromatic Incommensurate Potentials"
}
| null | null |
[
"Physics"
] | null | true | null |
16555
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we consider a privacy preserving encoding framework for\nidentification applications covering biometrics, physical object security and\nthe Internet of Things (IoT). The proposed framework is based on a sparsifying\ntransform, which consists of a trained linear map, an element-wise\nnonlinearity, and privacy amplification. The sparsifying transform and privacy\namplification are not symmetric for the data owner and data user. We\ndemonstrate that the proposed approach is closely related to sparse ternary\ncodes (STC), a recent information-theoretic concept proposed for fast\napproximate nearest neighbor (ANN) search in high dimensional feature spaces\nthat being machine learning in nature also offers significant benefits in\ncomparison to sparse approximation and binary embedding approaches. We\ndemonstrate that the privacy of the database outsourced to a server as well as\nthe privacy of the data user are preserved at a low computational cost, storage\nand communication burdens.\n",
"title": "Privacy Preserving Identification Using Sparse Approximation with Ambiguization"
}
| null | null | null | null | true | null |
16556
| null |
Default
| null | null |
null |
{
"abstract": " We consider the problem of recovering a $d-$dimensional manifold $\\mathcal{M}\n\\subset \\mathbb{R}^n$ when provided with noiseless samples from $\\mathcal{M}$.\nThere are many algorithms (e.g., Isomap) that are used in practice to fit\nmanifolds and thus reduce the dimensionality of a given data set. Ideally, the\nestimate $\\mathcal{M}_\\mathrm{put}$ of $\\mathcal{M}$ should be an actual\nmanifold of a certain smoothness; furthermore, $\\mathcal{M}_\\mathrm{put}$\nshould be arbitrarily close to $\\mathcal{M}$ in Hausdorff distance given a\nlarge enough sample. Generally speaking, existing manifold learning algorithms\ndo not meet these criteria. Fefferman, Mitter, and Narayanan (2016) have\ndeveloped an algorithm whose output is provably a manifold. The key idea is to\ndefine an approximate squared-distance function (asdf) to $\\mathcal{M}$. Then,\n$\\mathcal{M}_\\mathrm{put}$ is given by the set of points where the gradient of\nthe asdf is orthogonal to the subspace spanned by the largest $n - d$\neigenvectors of the Hessian of the asdf. As long as the asdf meets certain\nregularity conditions, $\\mathcal{M}_\\mathrm{put}$ is a manifold that is\narbitrarily close in Hausdorff distance to $\\mathcal{M}$. In this paper, we\ndefine two asdfs that can be calculated from the data and show that they meet\nthe required regularity conditions. The first asdf is based on kernel density\nestimation, and the second is based on estimation of tangent spaces using local\nprincipal components analysis.\n",
"title": "Manifold Learning Using Kernel Density Estimation and Local Principal Components Analysis"
}
| null | null |
[
"Statistics"
] | null | true | null |
16557
| null |
Validated
| null | null |
null |
{
"abstract": " The RGB-D camera maintains a limited range for working and is hard to\naccurately measure the depth information in a far distance. Besides, the RGB-D\ncamera will easily be influenced by strong lighting and other external factors,\nwhich will lead to a poor accuracy on the acquired environmental depth\ninformation. Recently, deep learning technologies have achieved great success\nin the visual SLAM area, which can directly learn high-level features from the\nvisual inputs and improve the estimation accuracy of the depth information.\nTherefore, deep learning technologies maintain the potential to extend the\nsource of the depth information and improve the performance of the SLAM system.\nHowever, the existing deep learning-based methods are mainly supervised and\nrequire a large amount of ground-truth depth data, which is hard to acquire\nbecause of the realistic constraints. In this paper, we first present an\nunsupervised learning framework, which not only uses image reconstruction for\nsupervising but also exploits the pose estimation method to enhance the\nsupervised signal and add training constraints for the task of monocular depth\nand camera motion estimation. Furthermore, we successfully exploit our\nunsupervised learning framework to assist the traditional ORB-SLAM system when\nthe initialization module of ORB-SLAM method could not match enough features.\nQualitative and quantitative experiments have shown that our unsupervised\nlearning framework performs the depth estimation task comparable to the\nsupervised methods and outperforms the previous state-of-the-art approach by\n$13.5\\%$ on KITTI dataset. Besides, our unsupervised learning framework could\nsignificantly accelerate the initialization process of ORB-SLAM system and\neffectively improve the accuracy on environmental mapping in strong lighting\nand weak texture scenes.\n",
"title": "Unsupervised Learning-based Depth Estimation aided Visual SLAM Approach"
}
| null | null | null | null | true | null |
16558
| null |
Default
| null | null |
null |
{
"abstract": " This letter provides conditions determining the rank of the nodal admittance\nmatrix, and arbitrary block partitions of it, for connected AC power networks\nwith complex admittances. Furthermore, some implications of these properties\nconcerning Kron Reduction and Hybrid Network Parameters are outlined.\n",
"title": "On the Properties of the Power Systems Nodal Admittance Matrix"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
16559
| null |
Validated
| null | null |
null |
{
"abstract": " Random geometric graphs consist of randomly distributed nodes (points), with\npairs of nodes within a given mutual distance linked. In the usual model the\ndistribution of nodes is uniform on a square, and in the limit of infinitely\nmany nodes and shrinking linking range, the number of isolated nodes is Poisson\ndistributed, and the probability of no isolated nodes is equal to the\nprobability the whole graph is connected. Here we examine these properties for\nseveral self-similar node distributions, including smooth and fractal, uniform\nand nonuniform, and finitely ramified or otherwise. We show that nonuniformity\ncan break the Poisson distribution property, but it strengthens the link\nbetween isolation and connectivity. It also stretches out the connectivity\ntransition. Finite ramification is another mechanism for lack of connectivity.\nThe same considerations apply to fractal distributions as smooth, with some\ntechnical differences in evaluation of the integrals and analytical arguments.\n",
"title": "Isolation and connectivity in random geometric graphs with self-similar intensity measures"
}
| null | null | null | null | true | null |
16560
| null |
Default
| null | null |
null |
{
"abstract": " Motivation: Word-based or `alignment-free' methods for phylogeny\nreconstruction are much faster than traditional approaches, but they are\ngenerally less accurate. Most of these methods calculate pairwise distances for\na set of input sequences, for example from word frequencies, from so-called\nspaced-word matches or from the average length of common substrings.\nResults: In this paper, we propose the first word-based approach to tree\nreconstruction that is based on multiple sequence comparison and Maximum\nLikelihood. Our algorithm first samples small, gap-free alignments involving\nfour taxa each. For each of these alignments, it then calculates a quartet tree\nand, finally, the program Quartet MaxCut is used to infer a super tree topology\nfor the full set of input taxa from the calculated quartet trees. Experimental\nresults show that trees calculated with our approach are of high quality.\nAvailability: The source code of the program is available at\nthis https URL\nContact: [email protected]\n",
"title": "Multi-SpaM: a Maximum-Likelihood approach to Phylogeny reconstruction based on Multiple Spaced-Word Matches"
}
| null | null |
[
"Quantitative Biology"
] | null | true | null |
16561
| null |
Validated
| null | null |
null |
{
"abstract": " We show how geodesics, Jacobi vector fields and flag curvature of a Finsler\nmetric behave under Zermelo deformation with respect to a Killing vector field.\nWe also show that Zermelo deformation with respect to a Killing vector field of\na locally symmetric Finsler metric is also locally symmetric.\n",
"title": "Zermelo deformation of Finsler metrics by Killing vector fields"
}
| null | null | null | null | true | null |
16562
| null |
Default
| null | null |
null |
{
"abstract": " In finite mixture models, apart from underlying mixing measure, true kernel\ndensity function of each subpopulation in the data is, in many scenarios,\nunknown. Perhaps the most popular approach is to choose some kernel functions\nthat we empirically believe our data are generated from and use these kernels\nto fit our models. Nevertheless, as long as the chosen kernel and the true\nkernel are different, statistical inference of mixing measure under this\nsetting will be highly unstable. To overcome this challenge, we propose\nflexible and efficient robust estimators of the mixing measure in these models,\nwhich are inspired by the idea of minimum Hellinger distance estimator, model\nselection criteria, and superefficiency phenomenon. We demonstrate that our\nestimators consistently recover the true number of components and achieve the\noptimal convergence rates of parameter estimation under both the well- and\nmis-specified kernel settings for any fixed bandwidth. These desirable\nasymptotic properties are illustrated via careful simulation studies with both\nsynthetic and real data.\n",
"title": "Robust estimation of mixing measures in finite mixture models"
}
| null | null | null | null | true | null |
16563
| null |
Default
| null | null |
null |
{
"abstract": " This article explains phase noise, jitter, and some slower phenomena in\ndigital integrated circuits, focusing on high-demanding, noise-critical\napplications. We introduce the concept of phase type and time type phase noise.\nThe rules for scaling the noise with frequency are chiefly determined by the\nspectral properties of these two basic types, by the aliasing phenomenon, and\nby the input and output circuits. Then, we discuss the parameter extraction\nfrom experimental data and we report on the measured phase noise in some\nselected devices of different node size and complexity. We observed flicker\nnoise between -80 and -130 dBrad^2/Hz at 1 Hz offset, and white noise down to\n-165 dBrad^2/Hz in some fortunate cases and using the appropriate tricks. It\nturns out that flicker noise is proportional to the reciprocal of the volume of\nthe transistor. This unpleasant conclusion is supported by a gedanken\nexperiment. Further experiments provide understanding on: (i) the interplay\nbetween noise sources in the internal PLL, often present in FPGAs; (ii) the\nchattering phenomenon, which consists in multiple bouncing at transitions; and\n(iii) thermal time constants, and their effect on phase wander and on the Allan\nvariance.\n",
"title": "Phase Noise and Jitter in Digital Electronics"
}
| null | null |
[
"Physics"
] | null | true | null |
16564
| null |
Validated
| null | null |
null |
{
"abstract": " In order to pursue the vision of the RoboCup Humanoid League of beating the\nsoccer world champion by 2050, new rules and competitions are added or modified\neach year fostering novel technological advances. In 2017, the number of\nplayers in the TeenSize class soccer games was increase to 3 vs. 3, which\nallowed for more team play strategies. Improvements in individual skills were\nalso demanded through a set of technical challenges. This paper presents the\nlatest individual skills and team play developments used in RoboCup 2017 that\nlead our team Nimbro winning the 2017 TeenSize soccer tournament, the technical\nchallenges, and the drop-in games.\n",
"title": "Advanced Soccer Skills and Team Play of RoboCup 2017 TeenSize Winner NimbRo"
}
| null | null | null | null | true | null |
16565
| null |
Default
| null | null |
null |
{
"abstract": " A novel low cost, near equi-atomic alloy comprising of Al, Cu, Fe and Mn is\nsynthesized using arc-melting technique. The cast alloy possesses a dendritic\nmicrostructure where the dendrites consist of disordered FCC and ordered FCC\nphases. The inter-dendritic region is comprised of ordered FCC phase and\nspinodally decomposed BCC phases. A Cu segregation is observed in the\ninter-dendritic region while dendritic region is rich in Fe. The bulk hardness\nof the alloy is ~ 380 HV, indicating significant yield strength.\n",
"title": "Phase partitioning in a novel near equi-atomic AlCuFeMn alloy"
}
| null | null | null | null | true | null |
16566
| null |
Default
| null | null |
null |
{
"abstract": " Project 8 is a tritium endpoint neutrino mass experiment utilizing a phased\nprogram to achieve sensitivity to the range of neutrino masses allowed by the\ninverted mass hierarchy. The Cyclotron Radiation Emission Spectroscopy (CRES)\ntechnique is employed to measure the differential energy spectrum of decay\nelectrons with high precision. We present an overview of the Project 8\nexperimental program, from first demonstration of the CRES technique to\nultimate sensitivity with an atomic tritium source. We highlight recent\nadvances in preparation for the first measurement of the continuous tritium\nspectrum with CRES.\n",
"title": "Overview of Project 8 and Progress Towards Tritium Operation"
}
| null | null | null | null | true | null |
16567
| null |
Default
| null | null |
null |
{
"abstract": " In this report, two general concepts for proper efficiency in vector\noptimization are studied. Properly efficient elements can be defined as\nminimizers of functionals with certain monotonicity properties or as weakly\nefficient elements with respect to sets that contain the domination set.\nInterdependencies between both concepts are proved in topological vector spaces\nby means of Gerstewitz functionals. The investigation includes proper\nefficiency notions introduced by Henig and by Nehse and Iwanow. In contrary to\nHenig's notion, proper efficiency by Nehse and Iwanow is defined as efficiency\nwith respect to certain convex sets which are not necessarily cones. For the\nfinite-dimensional case, we turn to Geoffrion's proper efficiency as a special\ncase of Henig's proper efficiency. It is characterized as efficiency with\nregard to subclasses of the set of polyhedral cones. Conditions for the\nexistence of Geoffrion's properly efficient points are proved. For closed\nfeasible point sets, Geoffrion's properly efficient point set is empty or\ncoincides with that of Nehse and Iwanow. Properly efficient elements by Nehse\nand Iwanow are the minimizers of continuous convex functionals with certain\nmonotonicity properties. Henig's proper efficiency can be described by means of\nminimizers of continuous sublinear functionals with certain monotonicity\nproperties.\n",
"title": "Proper efficiency and cone efficiency"
}
| null | null | null | null | true | null |
16568
| null |
Default
| null | null |
null |
{
"abstract": " This paper provides an outline of the algorithms submitted for the WSDM Cup\n2019 Spotify Sequential Skip Prediction Challenge (team name: mimbres). In the\nchallenge, complete information including acoustic features and user\ninteraction logs for the first half of a listening session is provided. Our\ngoal is to predict whether the individual tracks in the second half of the\nsession will be skipped or not, only given acoustic features. We proposed two\ndifferent kinds of algorithms that were based on metric learning and sequence\nlearning. The experimental results showed that the sequence learning approach\nperformed significantly better than the metric learning approach. Moreover, we\nconducted additional experiments to find that significant performance gain can\nbe achieved using complete user log information.\n",
"title": "Sequential Skip Prediction with Few-shot in Streamed Music Contents"
}
| null | null | null | null | true | null |
16569
| null |
Default
| null | null |
null |
{
"abstract": " We derive a general statistical model of interactions, starting from\nprobabilistic principles and elementary requirements. Prevailing interaction\nmodels in biomedical researches diverge both mathematically and practically. In\nparticular, genetic interaction inquiries are formulated without an obvious\nmathematical unity. Our model reveals theoretical properties unnoticed so far,\nparticularly valuable for genetic interaction mapping, where mechanistic\ndetails are mostly unknown, distribution of gene variants differ between\npopulations, and genetic susceptibilities are spuriously propagated by linkage\ndisequilibrium. When applied to data of the largest interaction mapping\nexperiment on Saccharomyces Cerevisiae to date, our results imply less aversion\nto positive interactions, detection of well-documented hubs and partial\nremapping of functional regions of the currently known genetic interaction\nlandscape. Assessment of divergent annotations across functional categories\nfurther suggests that positive interactions have a more important role on\nribosome biogenesis than previously realized. The unity of arguments elaborated\nhere enables the analysis of dissimilar interaction models and experimental\ndata with a common framework.\n",
"title": "Genetic interactions from first principles"
}
| null | null |
[
"Statistics"
] | null | true | null |
16570
| null |
Validated
| null | null |
null |
{
"abstract": " A sparse stochastic block model (SBM) with two communities is defined by the\ncommunity probability $\\pi_0,\\pi_1$, and the connection probability between\ncommunities $a,b\\in\\{0,1\\}$, namely $q_{ab} = \\frac{\\alpha_{ab}}{n}$. When\n$q_{ab}$ is constant in $a,b$, the random graph is simply the\nErdős-Rény random graph. We evaluate the log partition function of the\nIsing model on sparse SBM with two communities.\nAs an application, we give consistent parameter estimation of the sparse SBM\nwith two communities in a special case. More specifically, let $d_0,d_1$ be the\naverage degree of the two communities, i.e.,\n$d_0\\overset{def}{=}\\pi_0\\alpha_{00}+\\pi_1\\alpha_{01},d_1\\overset{def}{=}\\pi_0\\alpha_{10}+\\pi_1\\alpha_{11}$.\nWe focus on the regime $d_0=d_1$ (the regime $d_0\\ne d_1$ is trivial). In this\nregime, there exists $d,\\lambda$ and $r\\geq 0$ with $\\pi_0=\\frac{1}{1+r},\n\\pi_1=\\frac{r}{1+r}$, $\\alpha_{00}=d(1+r\\lambda), \\alpha_{01}=\\alpha_{10} =\nd(1-\\lambda), \\alpha_{11} = d(1+\\frac{\\lambda}{r})$. We give a consistent\nestimator of $r$ when $\\lambda<0$. The estimator of $\\lambda$ given by\n\\citep{mossel2015reconstruction} is valid in the general situation. We also\nprovide a random clustering algorithm which does not require knowledge of\nparameters and which is positively correlated with the true community label\nwhen $\\lambda<0$.\n",
"title": "On the Log Partition Function of Ising Model on Stochastic Block Model"
}
| null | null | null | null | true | null |
16571
| null |
Default
| null | null |
null |
{
"abstract": " Given constant data of density $\\rho_0$, velocity $-u_0{\\bf e}_r$, pressure\n$p_0$ and electric force $-E_0{\\bf e}_r$ for supersonic flow at the entrance,\nand constant pressure $p_{\\rm ex}$ for subsonic flow at the exit, we prove that\nEuler-Poisson system admits a unique transonic shock solution in a two\ndimensional convergent nozzle, provided that $u_0>0$, $E_0>0$, and that $E_0$\nis sufficiently large depending on $(\\rho_0, u_0, p_0)$ and the length of the\nnozzle.\n",
"title": "Radial transonic shock solutions of Euler-Poisson system in convergent nozzles"
}
| null | null | null | null | true | null |
16572
| null |
Default
| null | null |
null |
{
"abstract": " X-ray observations of two metal-deficient luminous compact galaxies (LCG)\n(SHOC~486 and SDSS J084220.94+115000.2) with properties similar to the\nso-called Green Pea galaxies were obtained using the {\\emph{Chandra X-ray\nObservatory}}. Green Pea galaxies are relatively small, compact (a few kpc\nacross) galaxies that get their green color from strong [OIII]$\\lambda$5007\\AA\\\nemission, an indicator of intense, recent star formation. These two galaxies\nwere predicted to have the highest observed count rates, using the X-ray\nluminosity -- star formation rate ($L_X$--SFR) relation for X-ray binaries,\nfrom a statistically complete sample drawn from optical criteria. We determine\nthe X-ray luminosity relative to star-formation rate and metallicity for these\ntwo galaxies. Neither exhibit any evidence of active galactic nuclei and we\nsuspect the X-ray emission originates from unresolved populations of high mass\nX-ray binaries. We discuss the $L_X$--SFR--metallicity plane for star-forming\ngalaxies and show that the two LCGs are consistent with the prediction of this\nrelation. This is the first detection of Green Pea analogs in X-rays.\n",
"title": "X-rays from Green Pea Analogs"
}
| null | null | null | null | true | null |
16573
| null |
Default
| null | null |
null |
{
"abstract": " CoRoT-9b is one of the rare long-period (P=95.3 days) transiting giant\nplanets with a measured mass known to date. We present a new analysis of the\nCoRoT-9 system based on five years of radial-velocity (RV) monitoring with\nHARPS and three new space-based transits observed with CoRoT and Spitzer.\nCombining our new data with already published measurements we redetermine the\nCoRoT-9 system parameters and find good agreement with the published values. We\nuncover a higher significance for the small but non-zero eccentricity of\nCoRoT-9b ($e=0.133^{+0.042}_{-0.037}$) and find no evidence for additional\nplanets in the system. We use simulations of planet-planet scattering to show\nthat the eccentricity of CoRoT-9b may have been generated by an instability in\nwhich a $\\sim 50~M_\\oplus$ planet was ejected from the system. This scattering\nwould not have produced a spin-orbit misalignment, so we predict that CoRoT-9b\norbit should lie within a few degrees of the initial plane of the\nprotoplanetary disk. As a consequence, any significant stellar obliquity would\nindicate that the disk was primordially tilted.\n",
"title": "A deeper view of the CoRoT-9 planetary system. A small non-zero eccentricity for CoRoT-9b likely generated by planet-planet scattering"
}
| null | null | null | null | true | null |
16574
| null |
Default
| null | null |
null |
{
"abstract": " The paper derives and analyses the (semi-)discrete dispersion relation of the\nParareal parallel-in-time integration method. It investigates Parareal's wave\npropagation characteristics with the aim to better understand what causes the\nwell documented stability problems for hyperbolic equations. The analysis shows\nthat the instability is caused by convergence of the amplification factor to\nthe exact value from above for medium to high wave numbers. Phase errors in the\ncoarse propagator are identified as the culprit, which suggests that\nspecifically tailored coarse level methods could provide a remedy.\n",
"title": "Wave propagation characteristics of Parareal"
}
| null | null | null | null | true | null |
16575
| null |
Default
| null | null |
null |
{
"abstract": " Deep learning models (aka Deep Neural Networks) have revolutionized many\nfields including computer vision, natural language processing, and speech\nrecognition, and are being increasingly used in clinical healthcare\napplications. However, few works exist which have benchmarked the performance\nof the deep learning models with respect to the state-of-the-art machine\nlearning models and prognostic scoring systems on publicly available healthcare\ndatasets. In this paper, we present the benchmarking results for several\nclinical prediction tasks such as mortality prediction, length of stay\nprediction, and ICD-9 code group prediction using Deep Learning models,\nensemble of machine learning models (Super Learner algorithm), SAPS II and SOFA\nscores. We used the Medical Information Mart for Intensive Care III (MIMIC-III)\n(v1.4) publicly available dataset, which includes all patients admitted to an\nICU at the Beth Israel Deaconess Medical Center from 2001 to 2012, for the\nbenchmarking tasks. Our results show that deep learning models consistently\noutperform all the other approaches especially when the `raw' clinical time\nseries data is used as input features to the models.\n",
"title": "Benchmark of Deep Learning Models on Large Healthcare MIMIC Datasets"
}
| null | null | null | null | true | null |
16576
| null |
Default
| null | null |
null |
{
"abstract": " Topological Data Analysis (TDA) is a novel statistical technique,\nparticularly powerful for the analysis of large and high dimensional data sets.\nMuch of TDA is based on the tool of persistent homology, represented visually\nvia persistence diagrams. In an earlier paper we proposed a parametric\nrepresentation for the probability distributions of persistence diagrams, and\nbased on it provided a method for their replication. Since the typical\nsituation for big data is that only one persistence diagram is available, these\nreplications allow for conventional statistical inference, which, by its very\nnature, requires some form of replication. In the current paper we continue\nthis analysis, and further develop its practical statistical methodology, by\ninvestigating a wider class of examples than treated previously.\n",
"title": "Modeling of Persistent Homology"
}
| null | null | null | null | true | null |
16577
| null |
Default
| null | null |
null |
{
"abstract": " We build on auto-encoding sequential Monte Carlo (AESMC): a method for model\nand proposal learning based on maximizing the lower bound to the log marginal\nlikelihood in a broad family of structured probabilistic models. Our approach\nrelies on the efficiency of sequential Monte Carlo (SMC) for performing\ninference in structured probabilistic models and the flexibility of deep neural\nnetworks to model complex conditional probability distributions. We develop\nadditional theoretical insights and introduce a new training procedure which\nimproves both model and proposal learning. We demonstrate that our approach\nprovides a fast, easy-to-implement and scalable means for simultaneous model\nlearning and proposal adaptation in deep generative models.\n",
"title": "Auto-Encoding Sequential Monte Carlo"
}
| null | null | null | null | true | null |
16578
| null |
Default
| null | null |
null |
{
"abstract": " We show that the uniformly accelerated reference systems proposed by Einstein\nwhen introducing acceleration in the theory of relativity are Fermi-Walker\ncoordinate systems. We then consider more general accelerated motions and, on\nthe one hand we obtain Thomas precession and, on the other, we prove that the\nonly accelerated reference systems that at any time admit an instantaneously\ncomoving inertial system belong necessarily to the Fermi-Walker class.\n",
"title": "Einstein's accelerated reference systems and Fermi-Walker coordinates"
}
| null | null | null | null | true | null |
16579
| null |
Default
| null | null |
null |
{
"abstract": " We propose a deep learning-based approach to the problem of premise\nselection: selecting mathematical statements relevant for proving a given\nconjecture. We represent a higher-order logic formula as a graph that is\ninvariant to variable renaming but still fully preserves syntactic and semantic\ninformation. We then embed the graph into a vector via a novel embedding method\nthat preserves the information of edge ordering. Our approach achieves\nstate-of-the-art results on the HolStep dataset, improving the classification\naccuracy from 83% to 90.3%.\n",
"title": "Premise Selection for Theorem Proving by Deep Graph Embedding"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16580
| null |
Validated
| null | null |
null |
{
"abstract": " Assuming a conjecture on distinct zeros of Dirichlet L-functions we get\nasymptotic results on the average number of representations of an integer as\nthe sum of two primes in arithmetic progression. On the other hand the\nexistence of good error terms gives information on the location of zeros of\nL-functions and possible Siegel zeros. Similar results are obtained for an\ninteger in a congruence class expressed as the sum of two primes.\n",
"title": "Goldbach Representations in Arithmetic Progressions and zeros of Dirichlet L-functions"
}
| null | null | null | null | true | null |
16581
| null |
Default
| null | null |
null |
{
"abstract": " It is well known that parameters for strongly correlated predictor variables\nin a linear model cannot be accurately estimated. We look for linear\ncombinations of these parameters that can be. Under a uniform model, we find\nsuch linear combinations in a neighborhood of a simple variability weighted\naverage of these parameters. Surprisingly, this variability weighted average is\nmore accurately estimated when the variables are more strongly correlated, and\nit is the only linear combination with this property. It can be easily computed\nfor strongly correlated predictor variables in all linear models and has\napplications in inference and estimation concerning parameters of such\nvariables.\n",
"title": "Estimable group effects for strongly correlated variables in linear models"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
16582
| null |
Validated
| null | null |
null |
{
"abstract": " Sum-product networks have recently emerged as an attractive representation\ndue to their dual view as a special type of deep neural network with clear\nsemantics and a special type of probabilistic graphical model for which\ninference is always tractable. Those properties follow from some conditions\n(i.e., completeness and decomposability) that must be respected by the\nstructure of the network. As a result, it is not easy to specify a valid\nsum-product network by hand and therefore structure learning techniques are\ntypically used in practice. This paper describes the first online structure\nlearning technique for continuous SPNs with Gaussian leaves. We also introduce\nan accompanying new parameter learning technique.\n",
"title": "Online Structure Learning for Sum-Product Networks with Gaussian Leaves"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
16583
| null |
Validated
| null | null |
null |
{
"abstract": " We propose a novel numerical approach for the optimal design of wide-area\nheterogeneous electromagnetic metasurfaces beyond the conventionally used\nunit-cell approximation. The proposed method exploits the combination of\nRigorous Coupled Wave Analysis (RCWA) and global optimization techniques (two\nevolutionary algorithms, namely the Genetic Algorithm (GA) and a modified form\nof the Artificial Bee Colony (ABC with memetic search phase method), are\nconsidered). As a specific example, we consider the design of beam deflectors\nusing all-dielectric nanoantennae for operation in the visible wavelength\nregion; beam deflectors can serve as building blocks for other more complicated\ndevices like metalenses. Compared to previous reports using local optimization\napproaches, our approach improves device efficiency; transmission efficiency is\nespecially improved for wide deflection angle beam deflectors. The ABC method\nwith memetic search phase is also an improvement over the more commonly used GA\nas it reaches similar efficiency levels with up to 35% reduction in computation\ntime. The method described here is of interest for the rapid design of a wide\nvariety of electromagnetic metasurfaces irrespective of their operational\nwavelength.\n",
"title": "Rapid Design of Wide-Area Heterogeneous Electromagnetic Metasurfaces beyond the Unit-Cell Approximation"
}
| null | null | null | null | true | null |
16584
| null |
Default
| null | null |
null |
{
"abstract": " Autonomous driving presents one of the largest problems that the robotics and\nartificial intelligence communities are facing at the moment, both in terms of\ndifficulty and potential societal impact. Self-driving vehicles (SDVs) are\nexpected to prevent road accidents and save millions of lives while improving\nthe livelihood and life quality of many more. However, despite large interest\nand a number of industry players working in the autonomous domain, there is\nstill more to be done in order to develop a system capable of operating at a\nlevel comparable to the best human drivers. One reason for this is the high\nuncertainty of traffic behavior and the large number of situations that an SDV\nmay encounter on the roads, making it very difficult to create a fully\ngeneralizable system. To ensure safe and efficient operations, an autonomous\nvehicle is required to account for this uncertainty and to anticipate a\nmultitude of possible behaviors of traffic actors in its surroundings. In this\nwork, we address this critical problem and present a method to predict multiple\npossible trajectories of actors while also estimating their probabilities. The\nmethod encodes each actor's surrounding context into a raster image, used as\ninput by deep convolutional networks to automatically derive relevant features\nfor the task. Following extensive offline evaluation and comparison to\nstate-of-the-art baselines, as well as closed course tests, the method was\nsuccessfully deployed to a fleet of SDVs.\n",
"title": "Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks"
}
| null | null | null | null | true | null |
16585
| null |
Default
| null | null |
null |
{
"abstract": " We present a novel time- and phase-resolved, background-free scheme to study\nthe extreme ultraviolet dipole emission of a bound electronic wavepacket,\nwithout the use of any extreme ultraviolet exciting pulse. Using multiphoton\ntransitions, we populate a superposition of quantum states which coherently\nemit extreme ultraviolet radiation through free induction decay. This emission\nis probed and controlled, both in amplitude and phase, by a time-delayed\ninfrared femtosecond pulse. We directly measure the laser-induced dephasing of\nthe emission by using a simple heterodyne detection scheme based on two-source\ninterferometry. This technique provides rich information about the interplay\nbetween the laser field and the Coulombic potential on the excited electron\ndynamics. Its background-free nature enables us to use a large range of gas\npressures and to reveal the influence of collisions in the relaxation process.\n",
"title": "Phase-Resolved Two-Dimensional Spectroscopy of Electronic Wavepackets by Laser-Induced XUV Free Induction Decay"
}
| null | null | null | null | true | null |
16586
| null |
Default
| null | null |
null |
{
"abstract": " Optimized spatial partitioning algorithms are the cornerstone of many\nsuccessful experimental designs and statistical methods. Of these algorithms,\nthe Centroidal Voronoi Tessellation (CVT) is the most widely utilized. CVT\nbased methods require global knowledge of spatial boundaries, do not readily\nallow for weighted regions, have challenging implementations, and are\ninefficiently extended to high dimensional spaces. We describe two simple\npartitioning schemes based on nearest and next nearest neighbor locations which\neasily incorporate these features at the slight expense of optimal placement.\nSeveral novel qualitative techniques which assess these partitioning schemes\nare also included. The feasibility of autonomous uninformed sensor networks\nutilizing these algorithms is considered. Some improvements in particle swarm\noptimizer results on multimodal test functions from partitioned initial\npositions in two space are also illustrated. Pseudo code for all of the novel\nalgorithms depicted herein is available in the supplementary information of\nthis manuscript.\n",
"title": "Optimized Spatial Partitioning via Minimal Swarm Intelligence"
}
| null | null | null | null | true | null |
16587
| null |
Default
| null | null |
null |
{
"abstract": " This paper addresses the problem of handling spatial misalignments due to\ncamera-view changes or human-pose variations in person re-identification. We\nfirst introduce a boosting-based approach to learn a correspondence structure\nwhich indicates the patch-wise matching probabilities between images from a\ntarget camera pair. The learned correspondence structure can not only capture\nthe spatial correspondence pattern between cameras but also handle the\nviewpoint or human-pose variation in individual images. We further introduce a\nglobal constraint-based matching process. It integrates a global matching\nconstraint over the learned correspondence structure to exclude cross-view\nmisalignments during the image patch matching process, hence achieving a more\nreliable matching score between images. Finally, we also extend our approach by\nintroducing a multi-structure scheme, which learns a set of local\ncorrespondence structures to capture the spatial correspondence sub-patterns\nbetween a camera pair, so as to handle the spatial misalignments between\nindividual images in a more precise way. Experimental results on various\ndatasets demonstrate the effectiveness of our approach.\n",
"title": "Learning Correspondence Structures for Person Re-identification"
}
| null | null | null | null | true | null |
16588
| null |
Default
| null | null |
null |
{
"abstract": " A basic goal in complexity theory is to understand the communication\ncomplexity of number-on-the-forehead problems\n$f\\colon(\\{0,1\\}^n)^{k}\\to\\{0,1\\}$ with $k\\gg\\log n$ parties. We study the\nproblems of inner product and set disjointness and determine their randomized\ncommunication complexity for every $k\\geq\\log n$, showing in both cases that\n$\\Theta(1+\\lceil\\log n\\rceil/\\log\\lceil1+k/\\log n\\rceil)$ bits are necessary\nand sufficient. In particular, these problems admit constant-cost protocols if\nand only if the number of parties is $k\\geq n^{\\epsilon}$ for some constant\n$\\epsilon>0.$\n",
"title": "Inner Product and Set Disjointness: Beyond Logarithmically Many Parties"
}
| null | null | null | null | true | null |
16589
| null |
Default
| null | null |
null |
{
"abstract": " Effective gauge fields have allowed the emulation of matter under strong\nmagnetic fields, leading to the realization of the Harper-Hofstadter and Haldane\nmodels, and have led to demonstrations of one-way waveguides and topologically\nprotected edge states. Central to these discoveries is the chirality induced by\ntime-symmetry breaking. Due to the discovery of quantum search algorithms based\non walks on graphs, recent work has discovered new implications that the effect\nof time-reversal symmetry breaking has on the transport of quantum states, and\nhas brought with it a host of new experimental implementations. We provide a full\nclassification of the unitary operators defining quantum processes which break\ntime-reversal symmetry in their induced transition properties between basis\nelements in a preferred site-basis. Our results are furthermore proven in terms\nof the geometry of the corresponding Hamiltonian support graph and hence\nprovide a topological classification. A quantum process of this type is\nnecessarily time-symmetric for any choice of time-independent Hamiltonian if\nand only if the underlying support graph is bipartite. Moreover, for\nnon-bipartite support, there exists a time-independent Hamiltonian with\nnecessarily complex edge weights that induces time-asymmetric transition\nprobabilities between edge(s). We further prove that certain bipartite graphs\ngive rise to transition probability suppression, but not broken time-reversal\nsymmetry. These results fill an important gap in understanding the role\nthis omnipresent effect has in quantum physics. Furthermore, through our\ndevelopment of a general framework, along the way to our results we completely\ncharacterize gauge potentials on combinatorial graphs.\n",
"title": "Topological classification of time-asymmetry in unitary quantum processes"
}
| null | null | null | null | true | null |
16590
| null |
Default
| null | null |
null |
{
"abstract": " We investigate a modality for controlling the behaviour\nof recursive functional programs on infinite structures which is completely\nsilent in the syntax. The latter means that programs do not contain \"marks\"\nshowing the application of the introduction and elimination rules for the\nmodality. This shifts the burden of controlling recursion from the programmer\nto the compiler. To do this, we introduce a typed lambda calculus a la Curry\nwith a silent modality and guarded recursive types. The typing discipline\nguarantees normalisation and can be transformed into an algorithm which infers\nthe type of a program.\n",
"title": "A Light Modality for Recursion"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16591
| null |
Validated
| null | null |
null |
{
"abstract": " Modularity maximization using greedy algorithms continues to be a popular\napproach toward community detection in graphs, even after various\nbetter-performing algorithms have been proposed. Apart from its clear mechanism and ease\nof implementation, this approach is persistently popular because, presumably,\nits risk of algorithmic failure is not well understood. This Rapid\nCommunication provides insight into this issue by estimating the algorithmic\nperformance limit of modularity maximization. This is achieved by counting the\nnumber of metastable states under a local update rule. Our results offer a\nquantitative insight into the level of sparsity at which a greedy algorithm\ntypically fails.\n",
"title": "Counting the number of metastable states in the modularity landscape: Algorithmic detectability limit of greedy algorithms in community detection"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16592
| null |
Validated
| null | null |
null |
{
"abstract": " We generalise surface cluster algebras to the case of infinite surfaces where\nthe surface contains finitely many accumulation points of boundary marked\npoints. To connect different triangulations of an infinite surface, we consider\ninfinite mutation sequences.\nWe show transitivity of infinite mutation sequences on triangulations of an\ninfinite surface and examine different types of mutation sequences. Moreover,\nwe use a hyperbolic structure on an infinite surface to extend the notion of\nsurface cluster algebras to infinite rank by giving cluster variables as lambda\nlengths of arcs. Furthermore, we study the structural properties of infinite\nrank surface cluster algebras in combinatorial terms, namely we extend \"snake\ngraph combinatorics\" to give an expansion formula for cluster variables. We\nalso show skein relations for infinite rank surface cluster algebras.\n",
"title": "Infinite rank surface cluster algebras"
}
| null | null | null | null | true | null |
16593
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the impact of choosing regressors and molecular\nrepresentations for the construction of fast machine learning (ML) models of\nthirteen electronic ground-state properties of organic molecules. The\nperformance of each regressor/representation/property combination is assessed\nusing learning curves which report out-of-sample errors as a function of\ntraining set size with up to $\sim$117k distinct molecules. Molecular\nstructures and properties at hybrid density functional theory (DFT) level of\ntheory used for training and testing come from the QM9 database [Ramakrishnan\net al, {\em Scientific Data} {\bf 1} 140022 (2014)] and include dipole moment,\npolarizability, HOMO/LUMO energies and gap, electronic spatial extent, zero\npoint vibrational energy, enthalpies and free energies of atomization, heat\ncapacity and the highest fundamental vibrational frequency. Various\nrepresentations from the literature have been studied (Coulomb matrix, bag of\nbonds, BAML and ECFP4, molecular graphs (MG)), as well as newly developed\ndistribution based variants including histograms of distances (HD), and angles\n(HDA/MARAD), and dihedrals (HDAD). Regressors include linear models (Bayesian\nridge regression (BR) and linear regression with elastic net regularization\n(EN)), random forest (RF), kernel ridge regression (KRR) and two types of\nneural networks, graph convolutions (GC) and gated graph networks (GG). We\npresent numerical evidence that ML model predictions deviate from DFT less than\nDFT deviates from experiment for all properties. Furthermore, our out-of-sample\nprediction errors with respect to hybrid DFT reference are on par with, or\nclose to, chemical accuracy. Our findings suggest that ML models could be more\naccurate than hybrid DFT if explicitly electron correlated quantum (or\nexperimental) data was available.\n",
"title": "Machine learning prediction errors better than DFT accuracy"
}
| null | null | null | null | true | null |
16594
| null |
Default
| null | null |
null |
{
"abstract": " Let H(q,p) = p^2/2 + V(q) be a 1-degree of freedom mechanical Hamiltonian\nwith a C^n periodic potential V where n>4. The Nosé-thermostated system\nassociated to H is shown to have invariant tori near the infinite temperature\nlimit. This is shown to be true for all thermostats similar to Nosé's. These\nresults complement the result of Legoll, Luskin and Moeckel who proved the\nexistence of such tori near the decoupling limit.\n",
"title": "Invariant tori for the Nosé Thermostat near the High-Temperature Limit"
}
| null | null | null | null | true | null |
16595
| null |
Default
| null | null |
null |
{
"abstract": " A comprehensive theoretical analysis of photo-induced forces in an\nilluminated nanojunction, formed between an atomic force microscopy tip and a\nsample, is presented. The formalism is valid within the dipolar approximation\nand includes multiple scattering effects between the tip, sample and a planar\nsubstrate through a dyadic Green's function approach. This physically intuitive\ndescription allows a detailed look at the quantitative contribution of multiple\nscattering effects to the measured photo-induced force, effects that are\ntypically unaccounted for in simpler analytical models. Our findings show that\nthe presence of the planar substrate and anisotropy of the tip have a\nsubstantial effect on the magnitude and the spectral response of the\nphoto-induced force exerted on the tip. Unlike previous models, our\ncalculations predict photo-induced forces that are within range of\nexperimentally measured values in photo-induced force microscopy (PiFM)\nexperiments.\n",
"title": "Dyadic Green's function formalism for photo-induced forces in tip-sample nanojunctions"
}
| null | null | null | null | true | null |
16596
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we propose a novel framework, called Semi-supervised Embedding\nin Attributed Networks with Outliers (SEANO), to learn a low-dimensional vector\nrepresentation that systematically captures the topological proximity,\nattribute affinity and label similarity of vertices in a partially labeled\nattributed network (PLAN). Our method is designed to work in both transductive\nand inductive settings while explicitly alleviating noise effects from\noutliers. Experimental results on various datasets drawn from the web, text and\nimage domains demonstrate the advantages of SEANO over state-of-the-art methods\nin semi-supervised classification under transductive as well as inductive\nsettings. We also show that a subset of parameters in SEANO is interpretable as\noutlier score and can significantly outperform baseline methods when applied\nfor detecting network outliers. Finally, we present the use of SEANO in a\nchallenging real-world setting -- flood mapping of satellite images and show\nthat it is able to outperform modern remote sensing algorithms for this task.\n",
"title": "Semi-supervised Embedding in Attributed Networks with Outliers"
}
| null | null | null | null | true | null |
16597
| null |
Default
| null | null |
null |
{
"abstract": " We consider the kernel partial least squares algorithm for non-parametric\nregression with stationary dependent data. Probabilistic convergence rates of\nthe kernel partial least squares estimator to the true regression function are\nestablished under a source and an effective dimensionality condition. It is\nshown both theoretically and in simulations that long range dependence results\nin slower convergence rates. A protein dynamics example shows high predictive\npower of kernel partial least squares.\n",
"title": "Kernel partial least squares for stationary data"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
16598
| null |
Validated
| null | null |
null |
{
"abstract": " We define a homology theory of virtual links built out of the direct sum of\nthe standard Khovanov complex with itself, motivating the name doubled Khovanov\nhomology. We demonstrate that it can be used to show that some virtual links\nare non-classical, and that it yields a condition on a virtual knot being the\nconnect sum of two unknots. Further, we show that doubled Khovanov homology\npossesses a perturbation analogous to that defined by Lee in the classical case\nand define a doubled Rasmussen invariant. This invariant is used to obtain\nvarious cobordism obstructions; in particular it is an obstruction to\nsliceness. Finally, we show that the doubled Rasmussen invariant contains the\nodd writhe of a virtual knot, and use this to show that knots with non-zero odd\nwrithe are not slice.\n",
"title": "Doubled Khovanov Homology"
}
| null | null | null | null | true | null |
16599
| null |
Default
| null | null |
null |
{
"abstract": " The purpose of this paper is to investigate the asymptotic behavior of the\nmulti-dimensional elephant random walk (MERW). It is a non-Markovian random\nwalk which has a complete memory of its entire history. A wide range of\nliterature is available on the one-dimensional ERW. Surprisingly, no references\nare available on the MERW. The goal of this paper is to fill the gap by\nextending the results on the one-dimensional ERW to the MERW. In the diffusive\nand critical regimes, we establish the almost sure convergence, the law of\niterated logarithm and the quadratic strong law for the MERW. The asymptotic\nnormality of the MERW, properly normalized, is also provided. In the\nsuperdiffusive regime, we prove the almost sure convergence as well as the mean\nsquare convergence of the MERW. All our analysis relies on asymptotic results\nfor multi-dimensional martingales.\n",
"title": "On the multi-dimensional elephant random walk"
}
| null | null | null | null | true | null |
16600
| null |
Default
| null | null |