Schema (field: dtype):
  text: null
  inputs: dict
  prediction: null
  prediction_agent: null
  annotation: list
  annotation_agent: null
  multi_label: bool (1 class)
  explanation: null
  id: string (lengths 1-5)
  metadata: null
  status: string (2 classes)
  event_timestamp: null
  metrics: null
text: null
{ "abstract": " We formulate a new family of high order on-surface radiation conditions to\napproximate the outgoing solution to the Helmholtz equation in exterior\ndomains. Motivated by the pseudo-differential expansion of the\nDirichlet-to-Neumann operator developed by Antoine et al. (J. Math. Anal. Appl.\n229:184-211, 1999), we design a systematic procedure to apply\npseudo-differential symbols of arbitrarily high order. Numerical results are\npresented to illustrate the performance of the proposed method for solving both\nthe Dirichlet and the Neumann boundary value problems. Possible improvements\nand extensions are also discussed.\n", "title": "High order surface radiation conditions for time-harmonic waves in exterior domains" }
prediction: null
prediction_agent: null
annotation: [ "Physics", "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: "5701"
metadata: null
status: "Validated"
event_timestamp: null
metrics: null
text: null
{ "abstract": " The PARAFAC2 is a multimodal factor analysis model suitable for analyzing\nmulti-way data when one of the modes has incomparable observation units, for\nexample because of differences in signal sampling or batch sizes. A fully\nprobabilistic treatment of the PARAFAC2 is desirable in order to improve\nrobustness to noise and provide a well founded principle for determining the\nnumber of factors, but challenging because the factor loadings are constrained\nto be orthogonal. We develop two probabilistic formulations of the PARAFAC2\nalong with variational procedures for inference: In the one approach, the mean\nvalues of the factor loadings are orthogonal leading to closed form variational\nupdates, and in the other, the factor loadings themselves are orthogonal using\na matrix Von Mises-Fisher distribution. We contrast our probabilistic\nformulation to the conventional direct fitting algorithm based on maximum\nlikelihood. On simulated data and real fluorescence spectroscopy and gas\nchromatography-mass spectrometry data, we compare our approach to the\nconventional PARAFAC2 model estimation and find that the probabilistic\nformulation is more robust to noise and model order misspecification. The\nprobabilistic PARAFAC2 thus forms a promising framework for modeling multi-way\ndata accounting for uncertainty.\n", "title": "Probabilistic PARAFAC2" }
prediction: null
prediction_agent: null
annotation: [ "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: "5702"
metadata: null
status: "Validated"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Concepts from mathematical crystallography and group theory are used here to\nquantize the group of rigid-body motions, resulting in a \"motion alphabet\" with\nwhich to express robot motion primitives. From these primitives it is possible\nto develop a dictionary of physical actions. Equipped with an alphabet of the\nsort developed here, intelligent actions of robots in the world can be\napproximated with finite sequences of characters, thereby forming the\nfoundation of a language in which to articulate robot motion. In particular, we\nuse the discrete handedness-preserving symmetries of macromolecular crystals\n(known in mathematical crystallography as Sohncke space groups) to form a\ncoarse discretization of the space $\\rm{SE}(3)$ of rigid-body motions. This\ndiscretization is made finer by subdividing using the concept of double-coset\ndecomposition. More specifically, a very efficient, equivolumetric quantization\nof spatial motion can be defined using the group-theoretic concept of a\ndouble-coset decomposition of the form $\\Gamma \\backslash \\rm{SE}(3) / \\Delta$,\nwhere $\\Gamma$ is a Sohncke space group and $\\Delta$ is a finite group of\nrotational symmetries such as those of the icosahedron. The resulting discrete\nalphabet is based on a very uniform sampling of $\\rm{SE}(3)$ and is a tool for\ndescribing the continuous trajectories of robots and humans. The general\n\"signals to symbols\" problem in artificial intelligence is cast in this\nframework for robots moving continuously in the world, and we present a\ncoarse-to-fine search scheme here to efficiently solve this decoding problem in\npractice.\n", "title": "Quantizing Euclidean motions via double-coset decomposition" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5703"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Each participant in peer-to-peer network prefers to free-ride on the\ncontribution of other participants. Reputation based resource sharing is a way\nto control the free riding. Instead of classical game theory we use\nevolutionary game theory to analyse the reputation based resource sharing in\npeer to peer system. Classical game-theoretical approach requires global\ninformation of the population. However, the evolutionary games only assumes\nlight cognitive capabilities of users, that is, each user imitates the behavior\nof other user with better payoff. We find that without any extra benefit\nreputation strategy is not stable in the system. We also find the fraction of\nusers who calculate the reputation for controlling the free riding in\nequilibrium. In this work first we made a game theoretical model for the\nreputation system and then we calculate the threshold of the fraction of users\nwith which the reputation strategy is sustainable in the system. We found that\nin simplistic conditions reputation calculation is not evolutionarily stable\nstrategy but if we impose some initial payment to all users and then distribute\nthat payment among the users who are calculating reputation then reputation is\nevolutionary stable strategy.\n", "title": "Evolutionary Stability of Reputation Management System in Peer to Peer Networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5704"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " FOSS is an acronym for Free and Open Source Software. The FOSS 2013 survey\nprimarily targets FOSS contributors and relevant anonymized dataset is publicly\navailable under CC by SA license. In this study, the dataset is analyzed from a\ncritical perspective using statistical and clustering techniques (especially\nmultiple correspondence analysis) with a strong focus on women contributors\ntowards discovering hidden trends and facts. Important inferences are drawn\nabout development practices and other facets of the free software and OSS\nworlds.\n", "title": "A Study of FOSS'2013 Survey Data Using Clustering Techniques" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5705"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " At the forefront of nanochemistry, there exists a research endeavor centered\naround intermetallic nanocrystals, which are unique in terms of long-range\natomic ordering, well-defined stoichiometry, and controlled crystal structure.\nIn contrast to alloy nanocrystals with no atomic ordering, it has been\nchallenging to synthesize intermetallic nanocrystals with a tight control over\ntheir size and shape. This review article highlights recent progress in the\nsynthesis of intermetallic nanocrystals with controllable sizes and\nwell-defined shapes. We begin with a simple analysis and some insights key to\nthe selection of experimental conditions for generating intermetallic\nnanocrystals. We then present examples to highlight the viable use of\nintermetallic nanocrystals as electrocatalysts or catalysts for various\nreactions, with a focus on the enhanced performance relative to their alloy\ncounterparts that lack atomic ordering. We conclude with perspectives on future\ndevelopments in the context of synthetic control, structure-property\nrelationship, and application.\n", "title": "Intermetallic Nanocrystals: Syntheses and Catalytic Applications" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5706"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Linear-scaling electronic structure methods based on the calculation of\nmoments of the underlying electronic Hamiltonian offer a computationally\nefficient and numerically robust scheme to drive large-scale atomistic\nsimulations, in which the quantum-mechanical nature of the electrons is\nexplicitly taken into account. We compare the kernel polynomial method to the\nFermi operator expansion method and establish a formal connection between the\ntwo approaches. We show that the convolution of the kernel polynomial method\nmay be understood as an effective electron temperature. The results of a number\nof possible kernels are formally examined, and then applied to a representative\ntight-binding model.\n", "title": "Linear-scaling electronic structure theory: Electronic temperature in the Kernel Polynomial Method" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5707"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Until recently, social media were seen to promote democratic discourse on\nsocial and political issues. However, this powerful communication ecosystem has\ncome under scrutiny for allowing hostile actors to exploit online discussions\nin an attempt to manipulate public opinion. A case in point is the ongoing U.S.\nCongress investigation of Russian interference in the 2016 U.S. election\ncampaign, with Russia accused of, among other things, using trolls (malicious\naccounts created for the purpose of manipulation) and bots (automated accounts)\nto spread propaganda and politically biased information. In this study, we\nexplore the effects of this manipulation campaign, taking a closer look at\nusers who re-shared the posts produced on Twitter by the Russian troll accounts\npublicly disclosed by U.S. Congress investigation. We collected a dataset of 13\nmillion election-related posts shared on Twitter in the year of 2016 by over a\nmillion distinct users. This dataset includes accounts associated with the\nidentified Russian trolls as well as users sharing posts in the same time\nperiod on a variety of topics around the 2016 elections. We use label\npropagation to infer the users' ideology based on the news sources they share.\nWe are able to classify a large number of users as liberal or conservative with\nprecision and recall above 84%. Conservative users who retweet Russian trolls\nproduced significantly more tweets than liberal ones, about 8 times as many in\nterms of tweets. Additionally, trolls' position in the retweet network is\nstable over time, unlike users who retweet them who form the core of the\nelection-related retweet network by the end of 2016. Using state-of-the-art bot\ndetection techniques, we estimate that about 5% and 11% of liberal and\nconservative users are bots, respectively.\n", "title": "Characterizing the 2016 Russian IRA Influence Campaign" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5708"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " This paper discusses discrete-time maps of the form $x(k + 1) = F(x(k))$,\nfocussing on equilibrium points of such maps. Under some circumstances,\nLefschetz fixed-point theory can be used to establish the existence of a single\nlocally attractive equilibrium (which is sometimes globally attractive) when a\ngeneral property of local attractivity is known for any equilibrium. Problems\nin social networks often involve such discrete-time systems, and we make an\napplication to one such problem.\n", "title": "Nonlinear Mapping Convergence and Application to Social Networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5709"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " To understand the biology of cancer, joint analysis of multiple data\nmodalities, including imaging and genomics, is crucial. The involved nature of\ngene-microenvironment interactions necessitates the use of algorithms which\ntreat both data types equally. We propose the use of canonical correlation\nanalysis (CCA) and a sparse variant as a preliminary discovery tool for\nidentifying connections across modalities, specifically between gene expression\nand features describing cell and nucleus shape, texture, and stain intensity in\nhistopathological images. Applied to 615 breast cancer samples from The Cancer\nGenome Atlas, CCA revealed significant correlation of several image features\nwith expression of PAM50 genes, known to be linked to outcome, while Sparse CCA\nrevealed associations with enrichment of pathways implicated in cancer without\nleveraging prior biological understanding. These findings affirm the utility of\nCCA for joint phenotype-genotype analysis of cancer.\n", "title": "Correlating Cellular Features with Gene Expression using CCA" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5710"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
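The record above ("Correlating Cellular Features with Gene Expression using CCA", id "5710") describes canonical correlation analysis as a cross-modal discovery tool. A minimal numpy-only sketch of classic QR-based CCA on synthetic two-view data (stand-ins for the paper's gene-expression and image features; all names here are illustrative, not from the paper):

```python
# Sketch of CCA via QR decomposition: orthonormalize each centered view,
# then take the SVD of their cross-product; the singular values are the
# canonical correlations. Synthetic data with a shared latent signal.
import numpy as np

rng = np.random.default_rng(0)
n = 300
latent = rng.normal(size=(n, 2))  # signal shared by both modalities
genes = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(n, 10))
image = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(n, 6))

Qg, _ = np.linalg.qr(genes - genes.mean(axis=0))
Qi, _ = np.linalg.qr(image - image.mean(axis=0))
U, corrs, Vt = np.linalg.svd(Qg.T @ Qi)

print(np.round(corrs[:2], 3))  # leading canonical correlations, near 1 here
```

Because the two views share a two-dimensional latent signal with small noise, the two leading canonical correlations come out close to 1, while the remaining ones reflect only noise.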
text: null
{ "abstract": " We give infinitely many $2$-component links with unknotted components which\nare topologically concordant to the Hopf link, but not smoothly concordant to\nany $2$-component link with trivial Alexander polynomial. Our examples are\npairwise non-concordant.\n", "title": "Links with nontrivial Alexander polynomial which are topologically concordant to the Hopf link" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: "5711"
metadata: null
status: "Validated"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Computational design optimization in fluid dynamics usually requires to solve\nnon-linear partial differential equations numerically. In this work, we explore\na Bayesian optimization approach to minimize an object's drag coefficient in\nlaminar flow based on predicting drag directly from the object shape. Jointly\ntraining an architecture combining a variational autoencoder mapping shapes to\nlatent representations and Gaussian process regression allows us to generate\nimproved shapes in the two dimensional case we consider.\n", "title": "Shape optimization in laminar flow with a label-guided variational autoencoder" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5712"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " This paper examines software vulnerabilities in common Python packages used\nparticularly for web development. The empirical dataset is based on the PyPI\npackage repository and the so-called Safety DB used to track vulnerabilities in\nselected packages within the repository. The methodological approach builds on\na release-based time series analysis of the conditional probabilities for the\nreleases of the packages to be vulnerable. According to the results, many of\nthe Python vulnerabilities observed seem to be only modestly severe; input\nvalidation and cross-site scripting have been the most typical vulnerabilities.\nIn terms of the time series analysis based on the release histories, only the\nrecent past is observed to be relevant for statistical predictions; the\nclassical Markov property holds.\n", "title": "An Empirical Analysis of Vulnerabilities in Python Packages for Web Applications" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5713"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " In this paper, by using Logistic, Sine and Tent systems we define a\ncombination chaotic system. Some properties of the chaotic system are studied\nby using figures and numerical results. A color image encryption algorithm is\nintroduced based on new chaotic system. Also this encryption algorithm can be\nused for gray scale or binary images.\nThe experimental results of the encryption algorithm show that the encryption\nalgorithm is secure and practical.\n", "title": "A combination chaotic system and application in color image encryption" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5714"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Dynamic economic dispatch with valve-point effect (DED-VPE) is a non-convex\nand non-differentiable optimization problem which is difficult to solve\nefficiently. In this paper, a hybrid mixed integer linear programming (MILP)\nand interior point method (IPM), denoted by MILP-IPM, is proposed to solve such\na DED-VPE problem, where the complicated transmission loss is also included.\nDue to the non-differentiable characteristic of DED-VPE, the classical\nderivative-based optimization methods can not be used any more. With the help\nof model reformulation, a differentiable non-linear programming (NLP)\nformulation which can be directly solved by IPM is derived. However, if the\nDED-VPE is solved by IPM in a single step, the optimization will easily trap in\na poor local optima due to its non-convex and multiple local minima\ncharacteristics. To exploit a better solution, an MILP method is required to\nsolve the DED-VPE without transmission loss, yielding a good initial point for\nIPM to improve the quality of the solution. Simulation results demonstrate the\nvalidity and effectiveness of the proposed MILP-IPM in solving DED-VPE.\n", "title": "A Hybrid MILP and IPM for Dynamic Economic Dispatch with Valve Point Effect" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5715"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " The HEP community is approaching an era were the excellent performances of\nthe particle accelerators in delivering collision at high rate will force the\nexperiments to record a large amount of information. The growing size of the\ndatasets could potentially become a limiting factor in the capability to\nproduce scientific results timely and efficiently. Recently, new technologies\nand new approaches have been developed in industry to answer to the necessity\nto retrieve information as quickly as possible to analyze PB and EB datasets.\nProviding the scientists with these modern computing tools will lead to\nrethinking the principles of data analysis in HEP, making the overall\nscientific process faster and smoother.\nIn this paper, we are presenting the latest developments and the most recent\nresults on the usage of Apache Spark for HEP analysis. The study aims at\nevaluating the efficiency of the application of the new tools both\nquantitatively, by measuring the performances, and qualitatively, focusing on\nthe user experience. The first goal is achieved by developing a data reduction\nfacility: working together with CERN Openlab and Intel, CMS replicates a real\nphysics search using Spark-based technologies, with the ambition of reducing 1\nPB of public data in 5 hours, collected by the CMS experiment, to 1 TB of data\nin a format suitable for physics analysis.\nThe second goal is achieved by implementing multiple physics use-cases in\nApache Spark using as input preprocessed datasets derived from official CMS\ndata and simulation. By performing different end-analyses up to the publication\nplots on different hardware, feasibility, usability and portability are\ncompared to the ones of a traditional ROOT-based workflow.\n", "title": "Using Big Data Technologies for HEP Analysis" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: "5716"
metadata: null
status: "Validated"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Brains need to predict how the body reacts to motor commands. It is an open\nquestion how networks of spiking neurons can learn to reproduce the non-linear\nbody dynamics caused by motor commands, using local, online and stable learning\nrules. Here, we present a supervised learning scheme for the feedforward and\nrecurrent connections in a network of heterogeneous spiking neurons. The error\nin the output is fed back through fixed random connections with a negative\ngain, causing the network to follow the desired dynamics, while an online and\nlocal rule changes the weights. The rule for Feedback-based Online Local\nLearning Of Weights (FOLLOW) is local in the sense that weight changes depend\non the presynaptic activity and the error signal projected onto the\npostsynaptic neuron. We provide examples of learning linear, non-linear and\nchaotic dynamics, as well as the dynamics of a two-link arm. Using the Lyapunov\nmethod, and under reasonable assumptions and approximations, we show that\nFOLLOW learning is stable uniformly, with the error going to zero\nasymptotically.\n", "title": "Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5717"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " In this paper, we propose and study opportunistic bandits - a new variant of\nbandits where the regret of pulling a suboptimal arm varies under different\nenvironmental conditions, such as network load or produce price. When the\nload/price is low, so is the cost/regret of pulling a suboptimal arm (e.g.,\ntrying a suboptimal network configuration). Therefore, intuitively, we could\nexplore more when the load/price is low and exploit more when the load/price is\nhigh. Inspired by this intuition, we propose an Adaptive Upper-Confidence-Bound\n(AdaUCB) algorithm to adaptively balance the exploration-exploitation tradeoff\nfor opportunistic bandits. We prove that AdaUCB achieves $O(\\log T)$ regret\nwith a smaller coefficient than the traditional UCB algorithm. Furthermore,\nAdaUCB achieves $O(1)$ regret with respect to $T$ if the exploration cost is\nzero when the load level is below a certain threshold. Last, based on both\nsynthetic data and real-world traces, experimental results show that AdaUCB\nsignificantly outperforms other bandit algorithms, such as UCB and TS (Thompson\nSampling), under large load/price fluctuations.\n", "title": "Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5718"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
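The record above ("Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits", id "5718") rests on the intuition that exploration should happen when the load, and hence the regret of a suboptimal pull, is low. A toy stdlib-only simulation of that idea, scaling a UCB exploration bonus by the observed load; this is an illustrative variant of the intuition, not the paper's exact AdaUCB rule:

```python
# Load-adaptive UCB sketch on a 2-armed Bernoulli bandit: the exploration
# bonus shrinks when the random load is high, so cheap rounds absorb most
# of the exploration.
import math
import random

random.seed(1)
means = [0.5, 0.7]        # true success rates; arm 1 is optimal
counts = [0, 0]
sums = [0.0, 0.0]
for arm in (0, 1):        # initialize with one pull of each arm
    counts[arm] = 1
    sums[arm] = 1.0 if random.random() < means[arm] else 0.0

T = 20000
pulls_best = 0
for t in range(2, T + 2):
    load = random.random()           # load in [0, 1]
    explore = 2.0 * (1.0 - load)     # explore more when load is low
    ucb = [sums[i] / counts[i] + explore * math.sqrt(math.log(t) / counts[i])
           for i in range(2)]
    arm = max(range(2), key=lambda i: ucb[i])
    reward = 1.0 if random.random() < means[arm] else 0.0
    counts[arm] += 1
    sums[arm] += reward
    pulls_best += arm == 1

print(pulls_best / T)  # fraction of pulls spent on the optimal arm
```

With a 0.2 gap between the arms, the suboptimal arm is pulled only O(log T) times, so the optimal-arm fraction approaches 1 as T grows.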
text: null
{ "abstract": " Quantile estimation is a problem presented in fields such as quality control,\nhydrology, and economics. There are different techniques to estimate such\nquantiles. Nevertheless, these techniques use an overall fit of the sample when\nthe quantiles of interest are usually located in the tails of the distribution.\nRegression Approach for Quantile Estimation (RAQE) is a method based on\nregression techniques and the properties of the empirical distribution to\naddress this problem. The method was first presented for the problem of\ncapability analysis. In this paper, a generalization of the method is\npresented, extended to the multiple sample scenario, and data from real\nexamples is used to illustrate the proposed approaches. In addition,\ntheoretical framework is presented to support the extension for multiple\nhomogeneous samples and the use of the uncertainty of the estimated\nprobabilities as a weighting factor in the analysis.\n", "title": "A Quantile Estimate Based on Local Curve Fitting" }
prediction: null
prediction_agent: null
annotation: [ "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: "5719"
metadata: null
status: "Validated"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Dynamical downscaling with high-resolution regional climate models may offer\nthe possibility of realistically reproducing precipitation and weather events\nin climate simulations. As resolutions fall to order kilometers, the use of\nexplicit rather than parametrized convection may offer even greater fidelity.\nHowever, these increased model resolutions both allow and require increasingly\ncomplex diagnostics for evaluating model fidelity. In this study we use a suite\nof dynamically downscaled simulations of the summertime U.S. (WRF driven by\nNCEP) with systematic variations in parameters and treatment of convection as a\ntest case for evaluation of model precipitation. In particular, we use a novel\nrainstorm identification and tracking algorithm that allocates essentially all\nrainfall to individual precipitation events (Chang et al. 2016). This approach\nallows multiple insights, including that, at least in these runs, model wet\nbias is driven by excessive areal extent of precipitating events. Biases are\ntime-dependent, producing excessive diurnal cycle amplitude. We show that this\neffect is produced not by new production of events but by excessive enlargement\nof long-lived precipitation events during daytime, and that in the domain\naverage, precipitation biases appear best represented as additive offsets. Of\nall model configurations evaluated, convection-permitting simulations most\nconsistently reduced biases in precipitation event characteristics.\n", "title": "Diagnosing added value of convection-permitting regional models using precipitation event identification and tracking" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5720"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " For a Riemannian $G$-structure, we compute the divergence of the vector field\ninduced by the intrinsic torsion. Applying the Stokes theorem, we obtain the\nintegral formula on a closed oriented Riemannian manifold, which we interpret\nin certain cases. We focus on almost hermitian and almost contact metric\nstructures.\n", "title": "An integral formula for Riemannian $G$-structures with applications to almost hermitian and almost contact structures" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5721"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " We study the anomalous prevalence of integer percentages in the last\nparliamentary (2016) and presidential (2018) Russian elections. We show how\nthis anomaly in Russian federal elections has evolved since 2000.\n", "title": "Putin's peaks: Russian election data revisited" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5722"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " The main result of this paper is the rate of convergence to Hermite-type\ndistributions in non-central limit theorems. To the best of our knowledge, this\nis the first result in the literature on rates of convergence of functionals of\nrandom fields to Hermite-type distributions with ranks greater than 2. The\nresults were obtained under rather general assumptions on the spectral\ndensities of random fields. These assumptions are even weaker than in the known\nconvergence results for the case of Rosenblatt distributions. Additionally,\nLévy concentration functions for Hermite-type distributions were\ninvestigated.\n", "title": "On rate of convergence in non-central limit theorems" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5723"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " This paper considers two different problems in trajectory tracking control\nfor linear systems. First, if the control is not unique which is most input\nenergy efficient. Second, if exact tracking is infeasible which control\nperforms most accurately. These are typical challenges for over-actuated\nsystems and for under-actuated systems, respectively. We formulate both goals\nas optimal output regulation problems. Then we contribute two new sets of\nregulator equations to output regulation theory that provide the desired\nsolutions. A thorough study indicates solvability and uniqueness under weak\nassumptions. E.g., we can always determine the solution of the classical\nregulator equations that is most input energy efficient. This is of great value\nif there are infinitely many solutions. We derive our results by a linear\nquadratic tracking approach and establish a useful link to output regulation\ntheory.\n", "title": "Optimal Output Regulation for Square, Over-Actuated and Under-Actuated Linear Systems" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5724"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " The difficulty of validating large-scale quantum devices, such as Boson\nSamplers, poses a major challenge for any research program that aims to show\nquantum advantages over classical hardware. To address this problem, we propose\na novel data-driven approach wherein models are trained to identify common\npathologies using unsupervised machine learning methods. We illustrate this\nidea by training a classifier that exploits K-means clustering to distinguish\nbetween Boson Samplers that use indistinguishable photons from those that do\nnot. We train the model on numerical simulations of small-scale Boson Samplers\nand then validate the pattern recognition technique on larger numerical\nsimulations as well as on photonic chips in both traditional Boson Sampling and\nscattershot experiments. The effectiveness of such method relies on\nparticle-type-dependent internal correlations present in the output\ndistributions. This approach performs substantially better on the test data\nthan previous methods and underscores the ability to further generalize its\noperation beyond the scope of the examples that it was trained on.\n", "title": "Pattern recognition techniques for Boson Sampling validation" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5725"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Anomalies in time-series data give essential and often actionable information\nin many applications. In this paper we consider a model-free anomaly detection\nmethod for univariate time-series which adapts to non-stationarity in the data\nstream and provides probabilistic abnormality scores based on the conformal\nprediction paradigm. Despite its simplicity the method performs on par with\ncomplex prediction-based models on the Numenta Anomaly Detection benchmark and\nthe Yahoo! S5 dataset.\n", "title": "Conformal k-NN Anomaly Detector for Univariate Data Streams" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5726"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
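The record above ("Conformal k-NN Anomaly Detector for Univariate Data Streams", id "5726") describes model-free abnormality scores built on the conformal prediction paradigm. A minimal sketch in that spirit: a point's nonconformity is its mean distance to the k nearest points in a reference window, and the conformal p-value compares that score with the window's own leave-one-out scores. Illustrative only, not the paper's exact procedure:

```python
# Conformal k-NN abnormality score for a univariate stream: high score
# (near 1) means the point is nonconforming relative to the window.
def knn_score(x, ref, k):
    dists = sorted(abs(x - r) for r in ref)
    return sum(dists[:k]) / k

def abnormality(x, window, k=3):
    s = knn_score(x, window, k)
    calib = [knn_score(window[i], window[:i] + window[i + 1:], k)
             for i in range(len(window))]
    p = sum(c >= s for c in calib) / len(calib)  # conformal p-value
    return 1.0 - p                               # high value -> anomalous

window = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.05]
print(abnormality(10.0, window))  # typical point: low abnormality
print(abnormality(25.0, window))  # far outlier: abnormality 1.0
```

Because the score is a rank within the calibration set rather than a raw distance, it adapts automatically to the scale of the current window, which is what makes the approach attractive for non-stationary streams.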
text: null
{ "abstract": " Transportation agencies have an opportunity to leverage\nincreasingly-available trajectory datasets to improve their analyses and\ndecision-making processes. However, this data is typically purchased from\nvendors, which means agencies must understand its potential benefits beforehand\nin order to properly assess its value relative to the cost of acquisition.\nWhile the literature concerned with trajectory data is rich, it is naturally\nfragmented and focused on technical contributions in niche areas, which makes\nit difficult for government agencies to assess its value across different\ntransportation domains. To overcome this issue, the current paper explores\ntrajectory data from the perspective of a road transportation agency interested\nin acquiring trajectories to enhance its analyses. The paper provides a\nliterature review illustrating applications of trajectory data in six areas of\nroad transportation systems analysis: demand estimation, modeling human\nbehavior, designing public transit, traffic performance measurement and\nprediction, environment and safety. In addition, it visually explores 20\nmillion GPS traces in Maryland, illustrating existing and suggesting new\napplications of trajectory data.\n", "title": "Applications of Trajectory Data from the Perspective of a Road Transportation Agency: Literature Review and Maryland Case Study" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: "5727"
metadata: null
status: "Validated"
event_timestamp: null
metrics: null
text: null
{ "abstract": " The dual motivic Steenrod algebra with mod $\\ell$ coefficients was computed\nby Voevodsky over a base field of characteristic zero, and by Hoyois, Kelly,\nand {\\O}stv{\\ae}r over a base field of characteristic $p \\neq \\ell$. In the\ncase $p = \\ell$, we show that the conjectured answer is a retract of the actual\nanswer. We also describe the slices of the algebraic cobordism spectrum $MGL$:\nwe show that the conjectured form of $s_n MGL$ is a retract of the actual\nanswer.\n", "title": "Towards the dual motivic Steenrod algebra in positive characteristic" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5728"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " Let $T^m_f $ be the Toeplitz quantization of a real $ C^{\\infty}$ function\ndefined on the sphere $ \\mathbb{CP}(1)$. $T^m_f $ is therefore a Hermitian\nmatrix with spectrum $\\lambda^m= (\\lambda_0^m,\\ldots,\\lambda_m^m)$. Schur's\ntheorem says that the diagonal of a Hermitian matrix $A$ that has the same\nspectrum of $ T^m_f $ lies inside a finite dimensional convex set whose extreme\npoints are $\\{( \\lambda_{\\sigma(0)}^m,\\ldots,\\lambda_{\\sigma(m)}^m)\\}$, where\n$\\sigma$ is any permutation of $(m+1)$ elements. In this paper, we prove that\nthese convex sets \"converge\" to a huge convex set in $L^2([0,1])$ whose extreme\npoints are $ f^*\\circ \\phi$, where $ f^*$ is the decreasing rearrangement of $\nf$ and $ \\phi $ ranges over the set of measure preserving transformations of\nthe unit interval $ [0,1]$.\n", "title": "Toeplitz Quantization and Convexity" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: "5729"
metadata: null
status: "Default"
event_timestamp: null
metrics: null
text: null
{ "abstract": " The Paulsen problem is a basic open problem in operator theory: Given vectors\n$u_1, \\ldots, u_n \\in \\mathbb R^d$ that are $\\epsilon$-nearly satisfying the\nParseval's condition and the equal norm condition, is it close to a set of\nvectors $v_1, \\ldots, v_n \\in \\mathbb R^d$ that exactly satisfy the Parseval's\ncondition and the equal norm condition? Given $u_1, \\ldots, u_n$, the squared\ndistance (to the set of exact solutions) is defined as $\\inf_{v} \\sum_{i=1}^n\n\\| u_i - v_i \\|_2^2$ where the infimum is over the set of exact solutions.\nPrevious results show that the squared distance of any $\\epsilon$-nearly\nsolution is at most $O({\\rm{poly}}(d,n,\\epsilon))$ and there are\n$\\epsilon$-nearly solutions with squared distance at least $\\Omega(d\\epsilon)$.\nThe fundamental open question is whether the squared distance can be\nindependent of the number of vectors $n$.\nWe answer this question affirmatively by proving that the squared distance of\nany $\\epsilon$-nearly solution is $O(d^{13/2} \\epsilon)$. Our approach is based\non a continuous version of the operator scaling algorithm and consists of two\nparts. First, we define a dynamical system based on operator scaling and use it\nto prove that the squared distance of any $\\epsilon$-nearly solution is $O(d^2\nn \\epsilon)$. Then, we show that by randomly perturbing the input vectors, the\ndynamical system will converge faster and the squared distance of an\n$\\epsilon$-nearly solution is $O(d^{5/2} \\epsilon)$ when $n$ is large enough\nand $\\epsilon$ is small enough. To analyze the convergence of the dynamical\nsystem, we develop some new techniques in lower bounding the operator capacity,\na concept introduced by Gurvits to analyze the operator scaling algorithm.\n", "title": "The Paulsen Problem, Continuous Operator Scaling, and Smoothed Analysis" }
null
null
null
null
true
null
5730
null
Default
null
null
null
{ "abstract": " We study mechanisms to characterize how the asymptotic convergence of\nbackpropagation in deep architectures, in general, is related to the network\nstructure, and how it may be influenced by other design choices including\nactivation type, denoising and dropout rate. We seek to analyze whether network\narchitecture and input data statistics may guide the choices of learning\nparameters and vice versa. Given the broad applicability of deep architectures,\nthis issue is interesting both from theoretical and a practical standpoint.\nUsing properties of general nonconvex objectives (with first-order\ninformation), we first build the association between structural, distributional\nand learnability aspects of the network vis-à-vis their interaction with\nparameter convergence rates. We identify a nice relationship between feature\ndenoising and dropout, and construct families of networks that achieve the same\nlevel of convergence. We then derive a workflow that provides systematic\nguidance regarding the choice of network sizes and learning parameters often\nmediated4 by input statistics. Our technical results are corroborated by an\nextensive set of evaluations, presented in this paper as well as independent\nempirical observations reported by other groups. We also perform experiments\nshowing the practical implications of our framework for choosing the best\nfully-connected design for a given problem.\n", "title": "On architectural choices in deep learning: From network structure to gradient convergence and parameter estimation" }
null
null
null
null
true
null
5731
null
Default
null
null
null
{ "abstract": " The decline of Mars' global magnetic field some 3.8-4.1 billion years ago is\nthought to reflect the demise of the dynamo that operated in its liquid core.\nThe dynamo was probably powered by planetary cooling and so its termination is\nintimately tied to the thermochemical evolution and present-day physical state\nof the Martian core. Bottom-up growth of a solid inner core, the\ncrystallization regime for Earth's core, has been found to produce a long-lived\ndynamo leading to the suggestion that the Martian core remains entirely liquid\nto this day. Motivated by the experimentally-determined increase in the Fe-S\nliquidus temperature with decreasing pressure at Martian core conditions, we\ninvestigate whether Mars' core could crystallize from the top down. We focus on\nthe \"iron snow\" regime, where newly-formed solid consists of pure Fe and is\ntherefore heavier than the liquid. We derive global energy and entropy\nequations that describe the long-timescale thermal and magnetic history of the\ncore from a general theory for two-phase, two-component liquid mixtures,\nassuming that the snow zone is in phase equilibrium and that all solid falls\nout of the layer and remelts at each timestep. Formation of snow zones occurs\nfor a wide range of interior and thermal properties and depends critically on\nthe initial sulfur concentration, x0. Release of gravitational energy and\nlatent heat during growth of the snow zone do not generate sufficient entropy\nto restart the dynamo unless the snow zone occupies at least 400 km of the\ncore. Snow zones can be 1.5-2 Gyrs old, though thermal stratification of the\nuppermost core, not included in our model, likely delays onset. Models that\nmatch the available magnetic and geodetic constraints have x0~10% and snow\nzones that occupy approximately the top 100 km of the present-day Martian core.\n", "title": "Iron Snow in the Martian Core?" }
null
null
[ "Physics" ]
null
true
null
5732
null
Validated
null
null
null
{ "abstract": " In this paper, we propose a new framework for segmenting feature-based moving\nobjects under affine subspace model. Since the feature trajectories in practice\nare high-dimensional and contain a lot of noise, we firstly apply the sparse\nPCA to represent the original trajectories with a low-dimensional global\nsubspace, which consists of the orthogonal sparse principal vectors.\nSubsequently, the local subspace separation will be achieved via automatically\nsearching the sparse representation of the nearest neighbors for each projected\ndata. In order to refine the local subspace estimation result and deal with the\nmissing data problem, we propose an error estimation to encourage the projected\ndata that span a same local subspace to be clustered together. In the end, the\nsegmentation of different motions is achieved through the spectral clustering\non an affinity matrix, which is constructed with both the error estimation and\nsparse neighbors optimization. We test our method extensively and compare it\nwith state-of-the-art methods on the Hopkins 155 dataset and Freiburg-Berkeley\nMotion Segmentation dataset. The results show that our method is comparable\nwith the other motion segmentation methods, and in many cases exceed them in\nterms of precision and computation time.\n", "title": "Motion Segmentation via Global and Local Sparse Subspace Optimization" }
null
null
null
null
true
null
5733
null
Default
null
null
null
{ "abstract": " We demonstrate a topological classification of vortices in three dimensional\ntime-reversal invariant topological superconductors based on superconducting\nDirac semimetals with an s-wave superconducting order parameter by means of a\npair of numbers $(N_\\Phi,N)$, accounting how many units $N_\\Phi$ of magnetic\nfluxes $hc/4e$ and how many $N$ chiral Majorana modes the vortex carries. From\nthese quantities, we introduce a topological invariant which further classifies\nthe properties of such vortices under linking processes. While such processes\nare known to be related to instanton processes in a field theoretic\ndescription, we demonstrate here that they are, in fact, also equivalent to the\nfractional Josephson effect on junctions based at the edges of quantum spin\nHall systems. This allows one to consider microscopically the effects of\ninteractions in the linking problem. We therefore demonstrate that associated\nto links between vortices, one has the exchange of quasi-particles, either\nMajorana zero-modes or $e/2$ quasi-particles, which allows for a topological\nclassification of vortices in these systems, seen to be $\\mathbb{Z}_8$\nclassified. While $N_\\Phi$ and $N$ are shown to be both even or odd in the\nweakly-interacting limit, in the strongly interacting scenario one loosens this\nconstraint. In this case, one may have further fractionalization possibilities\nfor the vortices, whose excitations are described by $SO(3)_3$-like conformal\nfield theories with quasi-particle exchanges of more exotic types.\n", "title": "Topological strings linking with quasi-particle exchange in superconducting Dirac semimetals" }
null
null
null
null
true
null
5734
null
Default
null
null
null
{ "abstract": " This paper proposes Self-Imitation Learning (SIL), a simple off-policy\nactor-critic algorithm that learns to reproduce the agent's past good\ndecisions. This algorithm is designed to verify our hypothesis that exploiting\npast good experiences can indirectly drive deep exploration. Our empirical\nresults show that SIL significantly improves advantage actor-critic (A2C) on\nseveral hard exploration Atari games and is competitive to the state-of-the-art\ncount-based exploration methods. We also show that SIL improves proximal policy\noptimization (PPO) on MuJoCo tasks.\n", "title": "Self-Imitation Learning" }
null
null
[ "Statistics" ]
null
true
null
5735
null
Validated
null
null
null
{ "abstract": " We consider the problem of controlling the spatiotemporal probability\ndistribution of a robotic swarm that evolves according to a reflected diffusion\nprocess, using the space- and time-dependent drift vector field parameter as\nthe control variable. In contrast to previous work on control of the\nFokker-Planck equation, a zero-flux boundary condition is imposed on the\npartial differential equation that governs the swarm probability distribution,\nand only bounded vector fields are considered to be admissible as control\nparameters. Under these constraints, we show that any initial probability\ndistribution can be transported to a target probability distribution under\ncertain assumptions on the regularity of the target distribution. In\nparticular, we show that if the target distribution is (essentially) bounded,\nhas bounded first-order and second-order partial derivatives, and is bounded\nfrom below by a strictly positive constant, then this distribution can be\nreached exactly using a drift vector field that is bounded in space and time.\nOur proof is constructive and based on classical linear semigroup theoretic\nconcepts.\n", "title": "Controllability to Equilibria of the 1-D Fokker-Planck Equation with Zero-Flux Boundary Condition" }
null
null
null
null
true
null
5736
null
Default
null
null
null
{ "abstract": " In this paper we discuss existing approaches for Bitcoin payments, as\nsuitable for a small business for small-value transactions. We develop an\nevaluation framework utilizing security, usability, deployability criteria,,\nexamine several existing systems, tools. Following a requirements engineering\napproach, we designed, implemented a new Point of Sale (PoS) system that\nsatisfies an optimal set of criteria within our evaluation framework. Our open\nsource system, Aunja PoS, has been deployed in a real world cafe since October\n2014.\n", "title": "Buy your coffee with bitcoin: Real-world deployment of a bitcoin point of sale terminal" }
null
null
null
null
true
null
5737
null
Default
null
null
null
{ "abstract": " We present the extension of the effective field theory framework to the\nmildly non-linear scales. The effective field theory approach has been\nsuccessfully applied to the late time cosmic acceleration phenomenon and it has\nbeen shown to be a powerful method to obtain predictions about cosmological\nobservables on linear scales. However, mildly non-linear scales need to be\nconsistently considered when testing gravity theories because a large part of\nthe data comes from those scales. Thus, non-linear corrections to predictions\non observables coming from the linear analysis can help in discriminating among\ndifferent gravity theories. We proceed firstly by identifying the necessary\noperators which need to be included in the effective field theory Lagrangian in\norder to go beyond the linear order in perturbations and then we construct the\ncorresponding non-linear action. Moreover, we present the complete recipe to\nmap any single field dark energy and modified gravity models into the\nnon-linear effective field theory framework by considering a general action in\nthe Arnowitt-Deser-Misner formalism. In order to illustrate this recipe we\nproceed to map the beyond-Horndeski theory and low-energy Horava gravity into\nthe effective field theory formalism. As a final step we derived the 4th order\naction in term of the curvature perturbation. This allowed us to identify the\nnon-linear contributions coming from the linear order perturbations which at\nthe next order act like source terms. Moreover, we confirm that the stability\nrequirements, ensuring the positivity of the kinetic term and the speed of\npropagation for scalar mode, are automatically satisfied once the viability of\nthe theory is demanded at linear level. 
The approach we present here will allow\nto construct, in a model independent way, all the relevant predictions on\nobservables at mildly non-linear scales.\n", "title": "Tackling non-linearities with the effective field theory of dark energy and modified gravity" }
null
null
null
null
true
null
5738
null
Default
null
null
null
{ "abstract": " We build a collaborative filtering recommender system to restore images with\nimpulse noise for which the noisy pixels have been previously identified. We\ndefine this recommender system in terms of a new color image representation\nusing three matrices that depend on the noise-free pixels of the image to\nrestore, and two parameters: $k$, the number of features; and $\\lambda$, the\nregularization factor. We perform experiments on a well known image database to\ntest our algorithm and we provide image quality statistics for the results\nobtained. We discuss the roles of bias and variance in the performance of our\nalgorithm as determined by the values of $k$ and $\\lambda$, and provide\nguidance on how to choose the values of these parameters. Finally, we discuss\nthe possibility of using our collaborative filtering recommender system to\nperform image inpainting and super-resolution.\n", "title": "A recommender system to restore images with impulse noise" }
null
null
null
null
true
null
5739
null
Default
null
null
null
{ "abstract": " This paper presents our work on developing parallel computational methods for\ntwo-phase flow on modern parallel computers, where techniques for linear\nsolvers and nonlinear methods are studied and the standard and inexact Newton\nmethods are investigated. A multi-stage preconditioner for two-phase flow is\napplied and advanced matrix processing strategies are studied. A local\nreordering method is developed to speed the solution of linear systems.\nNumerical experiments show that these computational methods are effective and\nscalable, and are capable of computing large-scale reservoir simulation\nproblems using thousands of CPU cores on parallel computers. The nonlinear\ntechniques, preconditioner and matrix processing strategies can also be applied\nto three-phase black oil, compositional and thermal models.\n", "title": "A Parallel Simulator for Massive Reservoir Models Utilizing Distributed-Memory Parallel Systems" }
null
null
null
null
true
null
5740
null
Default
null
null
null
{ "abstract": " We consider a wide-aperture surface-emitting laser with a saturable absorber\nsection subjected to time-delayed feedback. We adopt the mean-field approach\nassuming a single longitudinal mode operation of the solitary VCSEL. We\ninvestigate cavity soliton dynamics under the effect of time- delayed feedback\nin a self-imaging configuration where diffraction in the external cavity is\nnegligible. Using bifurcation analysis, direct numerical simulations and\nnumerical path continuation methods, we identify the possible bifurcations and\nmap them in a plane of feedback parameters. We show that for both the\nhomogeneous and localized stationary lasing solutions in one spatial dimension\nthe time-delayed feedback induces complex spatiotemporal dynamics, in\nparticular a period doubling route to chaos, quasiperiodic oscillations and\nmultistability of the stationary solutions.\n", "title": "Bifurcation structure of cavity soliton dynamics in a VCSEL with saturable absorber and time-delayed feedback" }
null
null
null
null
true
null
5741
null
Default
null
null
null
{ "abstract": " The ability to reliably predict critical transitions in dynamical systems is\na long-standing goal of diverse scientific communities. Previous work focused\non early warning signals related to local bifurcations (critical slowing down)\nand non-bifurcation type transitions. We extend this toolbox and report on a\ncharacteristic scaling behavior (critical attractor growth) which is indicative\nof an impending global bifurcation, an interior crisis in excitable systems. We\ndemonstrate our early warning signal in a conceptual climate model as well as\nin a model of coupled neurons known to exhibit extreme events. We observed\ncritical attractor growth prior to interior crises of chaotic as well as\nstrange-nonchaotic attractors. These observations promise to extend the classes\nof transitions that can be predicted via early warning signals.\n", "title": "Early warning signal for interior crises in excitable systems" }
null
null
[ "Physics" ]
null
true
null
5742
null
Validated
null
null
null
{ "abstract": " We propose a new iteratively reweighted least squares (IRLS) algorithm for\nthe recovery of a matrix $X \\in \\mathbb{C}^{d_1\\times d_2}$ of rank $r\n\\ll\\min(d_1,d_2)$ from incomplete linear observations, solving a sequence of\nlow complexity linear problems. The easily implementable algorithm, which we\ncall harmonic mean iteratively reweighted least squares (HM-IRLS), optimizes a\nnon-convex Schatten-$p$ quasi-norm penalization to promote low-rankness and\ncarries three major strengths, in particular for the matrix completion setting.\nFirst, we observe a remarkable global convergence behavior of the algorithm's\niterates to the low-rank matrix for relevant, interesting cases, for which any\nother state-of-the-art optimization approach fails the recovery. Secondly,\nHM-IRLS exhibits an empirical recovery probability close to $1$ even for a\nnumber of measurements very close to the theoretical lower bound $r (d_1 +d_2\n-r)$, i.e., already for significantly fewer linear observations than any other\ntractable approach in the literature. Thirdly, HM-IRLS exhibits a locally\nsuperlinear rate of convergence (of order $2-p$) if the linear observations\nfulfill a suitable null space property. While for the first two properties we\nhave so far only strong empirical evidence, we prove the third property as our\nmain theoretical result.\n", "title": "Harmonic Mean Iteratively Reweighted Least Squares for Low-Rank Matrix Recovery" }
null
null
null
null
true
null
5743
null
Default
null
null
null
{ "abstract": " We study the gap between the state pension provided by the Italian pension\nsystem pre-Dini reform and post-Dini reform. The goal is to fill the gap\nbetween the old and the new pension by joining a defined contribution pension\nscheme and adopting an optimal investment strategy that is target-based. We\nfind that it is possible to cover, at least partially, this gap with the\nadditional income of the pension scheme, especially in the presence of late\nretirement and in the presence of stagnant career. Workers with dynamic career\nand workers who retire early are those who are most penalised by the reform.\nResults are intuitive and in line with previous studies on the subject.\n", "title": "The Italian Pension Gap: a Stochastic Optimal Control Approach" }
null
null
null
null
true
null
5744
null
Default
null
null
null
{ "abstract": " Power-law-distributed species counts or clone counts arise in many biological\nsettings such as multispecies cell populations, population genetics, and\necology. This empirical observation that the number of species $c_{k}$\nrepresented by $k$ individuals scales as negative powers of $k$ is also\nsupported by a series of theoretical birth-death-immigration (BDI) models that\nconsistently predict many low-population species, a few intermediate-population\nspecies, and very high-population species. However, we show how a simple global\npopulation-dependent regulation in a neutral BDI model destroys the power law\ndistributions. Simulation of the regulated BDI model shows a high probability\nof observing a high-population species that dominates the total population.\nFurther analysis reveals that the origin of this breakdown is associated with\nthe failure of a mean-field approximation for the expected species abundance\ndistribution. We find an accurate estimate for the expected distribution\n$\\langle c_k \\rangle$ by mapping the problem to a lower-dimensional Moran\nprocess, allowing us to also straightforwardly calculate the covariances\n$\\langle c_k c_\\ell \\rangle$. Finally, we exploit the concepts associated with\nenergy landscapes to explain the failure of the mean-field assumption by\nidentifying a phase transition in the quasi-steady-state species counts\ntriggered by a decreasing immigration rate.\n", "title": "Immigration-induced phase transition in a regulated multispecies birth-death process" }
null
null
null
null
true
null
5745
null
Default
null
null
null
{ "abstract": " Vortices play a crucial role in determining the properties of superconductors\nas well as their applications. Therefore, characterization and manipulation of\nvortices, especially at the single vortex level, is of great importance. Among\nmany techniques to study single vortices, scanning tunneling microscopy (STM)\nstands out as a powerful tool, due to its ability to detect the local\nelectronic states and high spatial resolution. However, local control of\nsuperconductivity as well as the manipulation of individual vortices with the\nSTM tip is still lacking. Here we report a new function of the STM, namely to\ncontrol the local pinning in a superconductor through the heating effect. Such\neffect allows us to quench the superconducting state at nanoscale, and leads to\nthe growth of vortex-clusters whose size can be controlled by the bias voltage.\nWe also demonstrate the use of an STM tip to assemble single quantum vortices\ninto desired nanoscale configurations.\n", "title": "Nanoscale assembly of superconducting vortices with scanning tunnelling microscope tip" }
null
null
[ "Physics" ]
null
true
null
5746
null
Validated
null
null
null
{ "abstract": " We present a new solution to the problem of classifying Type Ia supernovae\nfrom their light curves alone given a spectroscopically confirmed but biased\ntraining set, circumventing the need to obtain an observationally expensive\nunbiased training set. We use Gaussian processes (GPs) to model the\nsupernovae's (SN) light curves, and demonstrate that the choice of covariance\nfunction has only a small influence on the GPs ability to accurately classify\nSNe. We extend and improve the approach of Richards et al (2012} -- a diffusion\nmap combined with a random forest classifier -- to deal specifically with the\ncase of biassed training sets. We propose a novel method, called STACCATO\n(SynThetically Augmented Light Curve ClassificATiOn') that synthetically\naugments a biased training set by generating additional training data from the\nfitted GPs. Key to the success of the method is the partitioning of the\nobservations into subgroups based on their propensity score of being included\nin the training set. Using simulated light curve data, we show that STACCATO\nincreases performance, as measured by the area under the Receiver Operating\nCharacteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977\nobtained using the 'gold standard' of an unbiased training set and\nsignificantly improving on the previous best result of 0.88. STACCATO also\nincreases the true positive rate for SNIa classification by up to a factor of\n50 for high-redshift/low brightness SNe.\n", "title": "STACCATO: A Novel Solution to Supernova Photometric Classification with Biased Training Sets" }
null
null
null
null
true
null
5747
null
Default
null
null
null
{ "abstract": " Simulating complex processes in fractured media requires some type of model\nreduction. Well-known approaches include multi-continuum techniques, which have\nbeen commonly used in approximating subgrid effects for flow and transport in\nfractured media. Our goal in this paper is to (1) show a relation between\nmulti-continuum approaches and Generalized Multiscale Finite Element Method\n(GMsFEM) and (2) to discuss coupling these approaches for solving problems in\ncomplex multiscale fractured media. The GMsFEM, a systematic approach,\nconstructs multiscale basis functions via local spectral decomposition in\npre-computed snapshot spaces. We show that GMsFEM can automatically identify\nseparate fracture networks via local spectral problems. We discuss the relation\nbetween these basis functions and continuums in multi-continuum methods. The\nGMsFEM can automatically detect each continuum and represent the interaction\nbetween the continuum and its surrounding (matrix). For problems with\nsimplified fracture networks, we propose a simplified basis construction with\nthe GMsFEM. This simplified approach is effective when the fracture networks\nare known and have simplified geometries. We show that this approach can\nachieve a similar result compared to the results using the GMsFEM with spectral\nbasis functions. Further, we discuss the coupling between the GMsFEM and\nmulti-continuum approaches. In this case, many fractures are resolved while for\nunresolved fractures, we use a multi-continuum approach with local\nRepresentative Volume Element (RVE) information. As a result, the method deals\nwith a system of equations on a coarse grid, where each equation represents one\nof the continua on the fine grid. We present various basis construction\nmechanisms and numerical results.\n", "title": "Coupling of multiscale and multi-continuum approaches" }
null
null
[ "Mathematics" ]
null
true
null
5748
null
Validated
null
null
null
{ "abstract": " It is well known that if $X$ is a CW-complex, then for every weak homotopy\nequivalence $f:A\\to B$, the map $f_*:[X,A]\\to [X,B]$ induced in homotopy\nclasses is a bijection. For which spaces $X$ is $f^*:[B,X]\\to [A,X]$ a\nbijection for every weak equivalence $f$? This question was considered by J.\nStrom and T. Goodwillie. In this note we prove that a non-empty space inverts\nweak equivalences if and only if it is contractible.\n", "title": "Spaces which invert weak homotopy equivalences" }
null
null
null
null
true
null
5749
null
Default
null
null
null
{ "abstract": " For each integer $k \\geq 2$, we apply gluing methods to construct sequences\nof minimal surfaces embedded in the round $3$-sphere. We produce two types of\nsequences, all desingularizing collections of intersecting Clifford tori.\nSequences of the first type converge to a collection of $k$ Clifford tori\nintersecting with maximal symmetry along these two circles. Near each of the\ncircles, after rescaling, the sequences converge smoothly on compact subsets to\na Karcher-Scherk tower of order $k$. Sequences of the second type desingularize\na collection of the same $k$ Clifford tori supplemented by an additional\nClifford torus equidistant from the original two circles of intersection, so\nthat the latter torus orthogonally intersects each of the former $k$ tori along\na pair of disjoint orthogonal circles, near which the corresponding rescaled\nsequences converge to a singly periodic Scherk surface. The simpler examples of\nthe first type resemble surfaces constructed by Choe and Soret \\cite{CS} by\ndifferent methods where the number of handles desingularizing each circle is\nthe same. There is a plethora of new examples which are more complicated and on\nwhich the number of handles for the two circles differs. Examples of the second\ntype are new as well.\n", "title": "Minimal surfaces in the 3-sphere by desingularizing intersecting Clifford tori" }
null
null
null
null
true
null
5750
null
Default
null
null
null
{ "abstract": " We report constraints on the global $21$ cm signal due to neutral hydrogen at\nredshifts $14.8 \\geq z \\geq 6.5$. We derive our constraints from low foreground\nobservations of the average sky brightness spectrum conducted with the EDGES\nHigh-Band instrument between September $7$ and October $26$, $2015$.\nObservations were calibrated by accounting for the effects of antenna beam\nchromaticity, antenna and ground losses, signal reflections, and receiver\nparameters. We evaluate the consistency between the spectrum and\nphenomenological models for the global $21$ cm signal. For tanh-based\nrepresentations of the ionization history during the epoch of reionization, we\nrule out, at $\\geq2\\sigma$ significance, models with duration of up to $\\Delta\nz = 1$ at $z\\approx8.5$ and higher than $\\Delta z = 0.4$ across most of the\nobserved redshift range under the usual assumption that the $21$ cm spin\ntemperature is much larger than the temperature of the cosmic microwave\nbackground (CMB) during reionization. We also investigate a `cold' IGM scenario\nthat assumes perfect Ly$\\alpha$ coupling of the $21$ cm spin temperature to the\ntemperature of the intergalactic medium (IGM), but that the IGM is not heated\nby early stars or stellar remants. Under this assumption, we reject tanh-based\nreionization models of duration $\\Delta z \\lesssim 2$ over most of the observed\nredshift range. Finally, we explore and reject a broad range of Gaussian models\nfor the $21$ cm absorption feature expected in the First Light era. As an\nexample, we reject $100$ mK Gaussians with duration (full width at half\nmaximum) $\\Delta z \\leq 4$ over the range $14.2\\geq z\\geq 6.5$ at $\\geq2\\sigma$\nsignificance.\n", "title": "Results from EDGES High-Band: I. Constraints on Phenomenological Models for the Global $21$ cm Signal" }
null
null
[ "Physics" ]
null
true
null
5751
null
Validated
null
null
null
{ "abstract": " The electricity distribution grid was not designed to cope with load dynamics\nimposed by high penetration of electric vehicles, neither to deal with the\nincreasing deployment of distributed Renewable Energy Sources. Distribution\nSystem Operators (DSO) will increasingly rely on flexible Distributed Energy\nResources (flexible loads, controllable generation and storage) to keep the\ngrid stable and to ensure quality of supply. In order to properly integrate\ndemand-side flexibility, DSOs need new energy management architectures, capable\nof fostering collaboration with wholesale market actors and pro-sumers. We\npropose the creation of Virtual Distribution Grids (VDG) over a common physical\ninfrastructure , to cope with heterogeneity of resources and actors, and with\nthe increasing complexity of distribution grid management and related resources\nallocation problems. Focusing on residential VDG, we propose an agent-based\nhierarchical architecture for providing Demand-Side Management services through\na market-based approach, where households transact their surplus/lack of energy\nand their flexibility with neighbours, aggregators, utilities and DSOs. For\nimplementing the overall solution, we consider fine-grained control of smart\nhomes based on Inter-net of Things technology. Homes seamlessly transact\nself-enforcing smart contracts over a blockchain-based generic platform.\nFinally, we extend the architecture to solve existing problems on smart home\ncontrol, beyond energy management.\n", "title": "Novel paradigms for advanced distribution grid energy management" }
null
null
null
null
true
null
5752
null
Default
null
null
null
{ "abstract": " This paper addresses structures of state space in quasiperiodically forced\ndynamical systems. We develop a theory of ergodic partition of state space in a\nclass of measure-preserving and dissipative flows, which is a natural extension\nof the existing theory for measure-preserving maps. The ergodic partition\nresult is based on eigenspace at eigenvalue 0 of the associated Koopman\noperator, which is realized via time-averages of observables, and provides a\nconstructive way to visualize a low-dimensional slice through a\nhigh-dimensional invariant set. We apply the result to the systems with a\nfinite number of attractors and show that the time-average of a continuous\nobservable is well-defined and reveals the invariant sets, namely, a finite\nnumber of basins of attraction. We provide a characterization of invariant sets\nin the quasiperiodically forced systems. A theorem on uniform boundedness of\nthe invariant sets is proved. The series of analytical results enables\nnumerical analysis of invariant sets in the quasiperiodically forced systems\nbased on the ergodic partition and time-averages. Using this, we analyze a\nnonlinear model of complex power grids that represents the short-term swing\ninstability, named the coherent swing instability. We show that our analytical\nresults can be used to understand stability regions in such complex systems.\n", "title": "Uniformly Bounded Sets in Quasiperiodically Forced Dynamical Systems" }
null
null
null
null
true
null
5753
null
Default
null
null
null
{ "abstract": " Vector embedding is a foundational building block of many deep learning\nmodels, especially in natural language processing. In this paper, we present a\ntheoretical framework for understanding the effect of dimensionality on vector\nembeddings. We observe that the distributional hypothesis, a governing\nprinciple of statistical semantics, requires a natural unitary-invariance for\nvector embeddings. Motivated by the unitary-invariance observation, we propose\nthe Pairwise Inner Product (PIP) loss, a unitary-invariant metric on the\nsimilarity between two embeddings. We demonstrate that the PIP loss captures\nthe difference in functionality between embeddings, and that the PIP loss is\ntightly connect with two basic properties of vector embeddings, namely\nsimilarity and compositionality. By formulating the embedding training process\nas matrix factorization with noise, we reveal a fundamental bias-variance\ntrade-off between the signal spectrum and noise power in the dimensionality\nselection process. This bias-variance trade-off sheds light on many empirical\nobservations which have not been thoroughly explained, for example the\nexistence of an optimal dimensionality. Moreover, we discover two new results\nabout vector embeddings, namely their robustness against over-parametrization\nand their forward stability. The bias-variance trade-off of the PIP loss\nexplicitly answers the fundamental open problem of dimensionality selection for\nvector embeddings.\n", "title": "Understand Functionality and Dimensionality of Vector Embeddings: the Distributional Hypothesis, the Pairwise Inner Product Loss and Its Bias-Variance Trade-off" }
null
null
null
null
true
null
5754
null
Default
null
null
null
{ "abstract": " We consider multi-component quantum mixtures (bosonic, fermionic, or mixed)\nwith strongly repulsive contact interactions in a one-dimensional harmonic\ntrap. In the limit of infinitely strong repulsion and zero temperature, using\nthe class-sum method, we study the symmetries of the spatial wave function of\nthe mixture. We find that the ground state of the system has the most symmetric\nspatial wave function allowed by the type of mixture. This provides an example\nof the generalized Lieb-Mattis theorem. Furthermore, we show that the symmetry\nproperties of the mixture are embedded in the large-momentum tails of the\nmomentum distribution, which we evaluate both at infinite repulsion by an exact\nsolution and at finite interactions using a numerical DMRG approach. This\nimplies that an experimental measurement of the Tan's contact would allow one to\nunambiguously determine the symmetry of any kind of multi-component mixture.\n", "title": "Strongly correlated one-dimensional Bose-Fermi quantum mixtures: symmetry and correlations" }
null
null
null
null
true
null
5755
null
Default
null
null
null
{ "abstract": " In this paper we present a novel joint approach for optimising surface\ncurvature and pose alignment. We present two implementations of this joint\noptimisation strategy, including a fast implementation that uses two frames and\nan offline multi-frame approach. We demonstrate an order of magnitude\nimprovement in simulation over state of the art dense relative point-to-plane\nIterative Closest Point (ICP) pose alignment using our dense joint\nframe-to-frame approach and show comparable pose drift to dense point-to-plane\nICP bundle adjustment using low-cost depth sensors. Additionally our improved\njoint quadric based approach can be used to more accurately estimate surface\ncurvature on noisy point clouds than previous approaches.\n", "title": "Joint Pose and Principal Curvature Refinement Using Quadrics" }
null
null
null
null
true
null
5756
null
Default
null
null
null
{ "abstract": " In this paper we show, using Deligne-Lusztig theory and Kawanaka's theory of\ngeneralised Gelfand-Graev representations, that the decomposition matrix of the\nspecial linear and unitary group in non defining characteristic can be made\nunitriangular with respect to a basic set that is stable under the action of\nautomorphisms.\n", "title": "Stable basic sets for finite special linear and unitary group" }
null
null
null
null
true
null
5757
null
Default
null
null
null
{ "abstract": " We study the formal properties of correspondences of curves without a core,\nfocusing on the case of étale correspondences. The motivating examples come\nfrom Hecke correspondences of Shimura curves. Given a correspondence without a\ncore, we construct an infinite graph $\\mathcal{G}_{gen}$ together with a large\ngroup of \"algebraic\" automorphisms $A$. The graph $\\mathcal{G}_{gen}$ measures\nthe \"generic dynamics\" of the correspondence. We construct specialization maps\n$\\mathcal{G}_{gen}\\rightarrow\\mathcal{G}_{phys}$ to the \"physical dynamics\" of\nthe correspondence. We also prove results on the number of bounded étale\norbits, in particular generalizing a recent theorem of Hallouin and Perret. We\nuse a variety of techniques: Galois theory, the theory of groups acting on\ninfinite graphs, and finite group schemes.\n", "title": "Correspondences without a Core" }
null
null
null
null
true
null
5758
null
Default
null
null
null
{ "abstract": " Oxidative stress is a pathological hallmark of neurodegenerative tauopathic\ndisorders such as Alzheimer's disease and Parkinson's disease-related dementia,\nwhich are characterized by altered forms of the microtubule-associated protein\n(MAP) tau. MAP tau is a key protein in stabilizing the microtubule architecture\nthat regulates neuron morphology and synaptic strength. The precise role of\nreactive oxygen species (ROS) in the tauopathic disease process, however, is\npoorly understood. It is known that the production of ROS by mitochondria can\nresult in ultraweak photon emission (UPE) within cells. One likely absorber of\nthese photons is the microtubule cytoskeleton, as it forms a vast network\nspanning neurons, is highly co-localized with mitochondria, and shows a high\ndensity of aromatic amino acids. Functional microtubule networks may traffic\nthis ROS-generated endogenous photon energy for cellular signaling, or they may\nserve as dissipaters/conduits of such energy. Experimentally, after in vitro\nexposure to exogenous photons, microtubules have been shown to reorient and\nreorganize in a dose-dependent manner with the greatest effect being observed\naround 280 nm, in the tryptophan and tyrosine absorption range. In this paper,\nrecent modeling efforts based on ambient temperature experiment are presented,\nshowing that tubulin polymers can feasibly absorb and channel these\nphotoexcitations via resonance energy transfer, on the order of dendritic\nlength scales. Since microtubule networks are compromised in tauopathic\ndiseases, patients with these illnesses would be unable to support effective\nchanneling of these photons for signaling or dissipation. Consequent emission\nsurplus due to increased UPE production or decreased ability to absorb and\ntransfer may lead to increased cellular oxidative damage, thus hastening the\nneurodegenerative process.\n", "title": "Oxidative species-induced excitonic transport in tubulin aromatic networks: Potential implications for neurodegenerative disease" }
null
null
null
null
true
null
5759
null
Default
null
null
null
{ "abstract": " This paper presents a thorough analysis of 1-dimensional Schroedinger\noperators whose potential is a linear combination of the Coulomb term 1/r and\nthe centrifugal term 1/r^2. We allow both coupling constants to be complex.\nUsing natural boundary conditions at 0, a two parameter holomorphic family of\nclosed operators is introduced. We call them the Whittaker operators, since in\nthe mathematical literature their eigenvalue equation is called the Whittaker\nequation. Spectral and scattering theory for Whittaker operators is studied.\nWhittaker operators appear in quantum mechanics as the radial part of the\nSchroedinger operator with a Coulomb potential.\n", "title": "On radial Schroedinger operators with a Coulomb potential" }
null
null
null
null
true
null
5760
null
Default
null
null
null
{ "abstract": " Motivation: Although there is a rich literature on methods for assessing the\nimpact of functional predictors, the focus has been on approaches for dimension\nreduction that can fail dramatically in certain applications. Examples of\nstandard approaches include functional linear models, functional principal\ncomponents regression, and cluster-based approaches, such as latent trajectory\nanalysis. This article is motivated by applications in which the dynamics in a\npredictor, across times when the value is relatively extreme, are particularly\ninformative about the response. For example, physicians are interested in\nrelating the dynamics of blood pressure changes during surgery to post-surgery\nadverse outcomes, and it is thought that the dynamics are more important when\nblood pressure is significantly elevated or lowered.\nMethods: We propose a novel class of extrema-weighted feature (XWF)\nextraction models. Key components in defining XWFs include the marginal density\nof the predictor, a function up-weighting values at high quantiles of this\nmarginal, and functionals characterizing local dynamics. Algorithms are\nproposed for fitting of XWF-based regression and classification models, and are\ncompared with current methods for functional predictors in simulations and a\nblood pressure during surgery application.\nResults: XWFs find features of intraoperative blood pressure trajectories\nthat are predictive of postoperative mortality. By their nature, most of these\nfeatures cannot be found by previous methods.\n", "title": "Extrema-weighted feature extraction for functional data" }
null
null
null
null
true
null
5761
null
Default
null
null
null
{ "abstract": " Many applications require stochastic processes specified on two- or\nhigher-dimensional domains; spatial or spatial-temporal modelling, for example.\nIn these applications it is attractive, for conceptual simplicity and\ncomputational tractability, to propose a covariance function that is separable;\ne.g., the product of a covariance function in space and one in time. This paper\npresents a representation theorem for such a proposal, and shows that all\nprocesses with continuous separable covariance functions are second-order\nidentical to the product of second-order uncorrelated processes. It discusses\nthe implications of separable or nearly separable prior covariances for the\nstatistical emulation of complicated functions such as computer codes, and\ncritically reexamines the conventional wisdom concerning emulator structure,\nand size of design.\n", "title": "A representation theorem for stochastic processes with separable covariance functions, and its implications for emulation" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
5762
null
Validated
null
null
null
{ "abstract": " Privacy and Security are two universal rights and, to ensure that in our\ndaily life we are secure, a lot of research is going on in the field of home\nsecurity, and IoT is the turning point for the industry, where we connect\neveryday objects to share data for our betterment. Facial recognition is a\nwell-established process in which the face is detected and identified out of\nthe image. We aim to create a smart door, which secures the gateway on the\nbasis of who we are. In our proof of concept of a smart door we have used a\nlive HD camera on the front side of setup attached to a display monitor\nconnected to the camera to show who is standing in front of the door, also the\nwhole system will be able to give voice outputs by processing text on the\nRaspberry Pi ARM processor used and show the answers as output on the screen.\nWe are using a set of electromagnets controlled by the microcontroller, which\nwill act as a lock. So a person can open the smart door with the help of facial\nrecognition and at the same time also be able to interact with it. The facial\nrecognition is done by Microsoft face API but our state of the art desktop\napplication operating over Microsoft Visual Studio IDE reduces the\ncomputational time by detecting the face out of the photo and giving that as\nthe output to Microsoft Face API, which is hosted over Microsoft Azure cloud\nsupport.\n", "title": "Facial Recognition Enabled Smart Door Using Microsoft Face API" }
null
null
[ "Computer Science" ]
null
true
null
5763
null
Validated
null
null
null
{ "abstract": " This paper summarizes the development of Veamy, an object-oriented C++\nlibrary for the virtual element method (VEM) on general polygonal meshes, whose\nmodular design is focused on its extensibility. The linear elastostatic and\nPoisson problems in two dimensions have been chosen as the starting stage for\nthe development of this library. The theory of the VEM, upon which Veamy is\nbuilt, is presented using a notation and a terminology that resemble the\nlanguage of the finite element method (FEM) in engineering analysis. Several\nexamples are provided to demonstrate the usage of Veamy, and in particular, one\nof them features the interaction between Veamy and the polygonal mesh generator\nPolyMesher. A computational performance comparison between VEM and FEM is also\nconducted. Veamy is free and open source software.\n", "title": "Veamy: an extensible object-oriented C++ library for the virtual element method" }
null
null
[ "Computer Science" ]
null
true
null
5764
null
Validated
null
null
null
{ "abstract": " Most musical programming languages are developed purely for coding virtual\ninstruments or algorithmic compositions. Although there has been some work in\nthe domain of musical query languages for music information retrieval, there\nhas been little attempt to unify the principles of musical programming and\nquery languages with cognitive and natural language processing models that\nwould facilitate the activity of composition by conversation. We present a\nprototype framework, called MusECI, that merges these domains, permitting\nscore-level algorithmic composition in a text editor while also supporting\nconnectivity to existing natural language processing frameworks.\n", "title": "Composition by Conversation" }
null
null
null
null
true
null
5765
null
Default
null
null
null
{ "abstract": " Experiments show that at 298~K and 1 atm pressure the transfer free energy,\n$\\mu^{\\rm ex}$, of water from its vapor to liquid normal alkanes $C_nH_{2n+2}$\n($n=5\\ldots12$) is negative. Earlier it was found that with the united-atom\nTraPPe model for alkanes and the SPC/E model for water, one had to artificially\nenhance the attractive alkane-water cross interaction to capture this behavior.\nHere we revisit the calculation of $\\mu^{\\rm ex}$ using the polarizable AMOEBA\nand the non-polarizable Charmm General (CGenFF) forcefields. We test both the\nAMOEBA03 and AMOEBA14 water models; the former has been validated with the\nAMOEBA alkane model while the latter is a revision of AMOEBA03 to better\ndescribe liquid water. We calculate $\\mu^{\\rm ex}$ using the test particle\nmethod. With CGenFF, $\\mu^{\\rm ex}$ is positive and the error relative to\nexperiments is about 1.5 $k_{\\rm B}T$. With AMOEBA, $\\mu^{\\rm ex}$ is negative\nand deviations relative to experiments are between 0.25 $k_{\\rm B}T$ (AMOEBA14)\nand 0.5 $k_{\\rm B}T$ (AMOEBA03). Quantum chemical calculations in a continuum\nsolvent suggest that zero point effects may account for some of the deviation.\nForcefield limitations notwithstanding, electrostatic and induction effects,\ncommonly ignored in considerations of water-alkane interactions, appear to be\ndecisive in the solubility of water in alkanes.\n", "title": "Electrostatic and induction effects in the solubility of water in alkanes" }
null
null
null
null
true
null
5766
null
Default
null
null
null
{ "abstract": " We investigate the impact of general conditions of theoretical stability and\ncosmological viability on dynamical dark energy models. As a powerful example,\nwe study whether minimally coupled, single field Quintessence models that are\nsafe from ghost instabilities, can source the CPL expansion history recently\nshown to be mildly favored by a combination of CMB (Planck) and Weak Lensing\n(KiDS) data. We find that in their most conservative form, the theoretical\nconditions impact the analysis in such a way that smooth single field\nQuintessence becomes significantly disfavored with respect to the standard LCDM\ncosmological model. This is due to the fact that these conditions cut a\nsignificant portion of the (w0;wa) parameter space for CPL, in particular\neliminating the region that would be favored by weak lensing data. Within the\nscenario of a smooth dynamical dark energy parametrized with CPL, weak lensing\ndata favors a region that would require multiple fields to ensure gravitational\nstability.\n", "title": "Impact of theoretical priors in cosmological analyses: the case of single field quintessence" }
null
null
null
null
true
null
5767
null
Default
null
null
null
{ "abstract": " The Main Injector (MI) at Fermilab currently produces high-intensity beams of\nprotons at energies of 120 GeV for a variety of physics experiments.\nAcceleration of polarized protons in the MI would provide opportunities for a\nrich spin physics program at Fermilab. To achieve polarized proton beams in the\nFermilab accelerator complex, detailed spin tracking simulations with realistic\nparameters based on the existing facility are required. This report presents\nstudies at the MI using a single 4-twist Siberian snake to determine the\ndepolarizing spin resonances for the relevant synchrotrons. Results will be\npresented first for a perfect MI lattice, followed by a lattice that includes\nthe real MI imperfections, such as the measured magnet field errors and\nquadrupole misalignments. The tolerances of each of these factors in\nmaintaining polarization in the Main Injector will be discussed.\n", "title": "Spin tracking of polarized protons in the Main Injector at Fermilab" }
null
null
null
null
true
null
5768
null
Default
null
null
null
{ "abstract": " There exist non-trivial stationary points of the Euclidean action for an\naxion particle minimally coupled to Einstein gravity, dubbed wormholes. They\nexplicitly break the continuous global shift symmetry of the axion in a\nnon-perturbative way, and generate an effective potential that may compete with\nQCD depending on the value of the axion decay constant. In this paper, we\nexplore both theoretical and phenomenological aspects of this issue. On the\ntheory side, we address the problem of stability of the wormhole solutions, and\nwe show that the spectrum of the quadratic action features only positive\neigenvalues. On the phenomenological side, we discuss, beside the obvious\napplication to the QCD axion, relevant consequences for models with ultralight\ndark matter, black hole superradiance, and the relaxation of the electroweak\nscale. We conclude discussing wormhole solutions for a generic coset and the\npotential they generate.\n", "title": "Wormholes and masses for Goldstone bosons" }
null
null
null
null
true
null
5769
null
Default
null
null
null
{ "abstract": " The spin transport in isotropic Heisenberg model in the sector with zero\nmagnetization is generically super-diffusive. Despite that, we here demonstrate\nthat for a specific set of domain-wall-like initial product states it can\ninstead be diffusive. We theoretically explain the time evolution of such\nstates by showing that in the limiting regime of weak spatial modulation they\nare approximately product states for very long times, and demonstrate that even\nin the case of larger spatial modulation the bipartite entanglement entropy\ngrows only logarithmically in time. In the limiting regime we derive a simple\nclosed equation governing the dynamics, which in the continuum limit and for\nthe initial step magnetization profile results in a solution expressed in terms\nof Fresnel integrals.\n", "title": "A class of states supporting diffusive spin dynamics in the isotropic Heisenberg model" }
null
null
[ "Physics" ]
null
true
null
5770
null
Validated
null
null
null
{ "abstract": " The promise of compressive sensing (CS) has been offset by two significant\nchallenges. First, real-world data is not exactly sparse in a fixed basis.\nSecond, current high-performance recovery algorithms are slow to converge,\nwhich limits CS to either non-real-time applications or scenarios where massive\nback-end computing is available. In this paper, we attack both of these\nchallenges head-on by developing a new signal recovery framework we call {\\em\nDeepInverse} that learns the inverse transformation from measurement vectors to\nsignals using a {\\em deep convolutional network}. When trained on a set of\nrepresentative images, the network learns both a representation for the signals\n(addressing challenge one) and an inverse map approximating a greedy or convex\nrecovery algorithm (addressing challenge two). Our experiments indicate that\nthe DeepInverse network closely approximates the solution produced by\nstate-of-the-art CS recovery algorithms yet is hundreds of times faster in run\ntime. The tradeoff for the ultrafast run time is a computationally intensive,\noff-line training procedure typical to deep networks. However, the training\nneeds to be completed only once, which makes the approach attractive for a host\nof sparse recovery problems.\n", "title": "Learning to Invert: Signal Recovery via Deep Convolutional Networks" }
null
null
null
null
true
null
5771
null
Default
null
null
null
{ "abstract": " In this paper, we investigate multi-message authentication to combat\nadversaries with infinite computational capacity. An authentication framework\nover a wiretap channel $(W_1,W_2)$ is proposed to achieve information-theoretic\nsecurity with the same key. The proposed framework bridges the two research\nareas in physical (PHY) layer security: secure transmission and message\nauthentication. Specifically, the sender Alice first transmits message $M$ to\nthe receiver Bob over $(W_1,W_2)$ with an error correction code; then Alice\nemploys a hash function (i.e., $\\varepsilon$-AWU$_2$ hash functions) to\ngenerate a message tag $S$ of message $M$ using key $K$, and encodes $S$ to a\ncodeword $X^n$ by leveraging an existing strongly secure channel coding with\nexponentially small (in code length $n$) average probability of error; finally,\nAlice sends $X^n$ over $(W_1,W_2)$ to Bob who authenticates the received\nmessages. We develop a theorem regarding the requirements/conditions for the\nauthentication framework to be information-theoretic secure for authenticating\na polynomial number of messages in terms of $n$. Based on this theorem, we\npropose an authentication protocol that can guarantee the security\nrequirements, and prove its authentication rate can approach infinity when $n$\ngoes to infinity. Furthermore, we design and implement an efficient and\nfeasible authentication protocol over binary symmetric wiretap channel (BSWC)\nby using \\emph{Linear Feedback Shift Register} based (LFSR-based) hash\nfunctions and strong secure polar code. Through extensive experiments, it is\ndemonstrated that the proposed protocol can achieve low time cost, high\nauthentication rate, and low authentication error rate.\n", "title": "Multi-message Authentication over Noisy Channel with Secure Channel Codes" }
null
null
null
null
true
null
5772
null
Default
null
null
null
{ "abstract": " We review topics in the theory of cellular automata and dynamical systems\nthat are related to the Moore-Myhill Garden of Eden theorem.\n", "title": "The Garden of Eden theorem: old and new" }
null
null
null
null
true
null
5773
null
Default
null
null
null
{ "abstract": " We demonstrate explicitly the correspondence between all protected operators\nin a 2+1 dimensional non-supersymmetric bosonization duality in the\nnon-relativistic limit. Roughly speaking we consider $SU(N)$ Chern-Simons field\ntheory at level $k$ with $N_f$ flavours of fundamental boson, and match its\nchiral sector to that of a $SU(k)$ theory at level $N$ with $N_f$ fundamental\nfermions. We present the matching at the level of indices and individual\noperators, seeing the mechanism of failure for $N_f > N$, and point out that\nthe non-relativistic setting is a particularly friendly setting for studying\ninteresting questions about such dualities.\n", "title": "Bosonization in Non-Relativistic CFTs" }
null
null
null
null
true
null
5774
null
Default
null
null
null
{ "abstract": " In this paper, we introduce the Variational Autoencoder (VAE) to an\nend-to-end speech synthesis model, to learn the latent representation of\nspeaking styles in an unsupervised manner. The style representation learned\nthrough VAE shows good properties such as disentangling, scaling, and\ncombination, which makes it easy for style control. Style transfer can be\nachieved in this framework by first inferring style representation through the\nrecognition network of VAE, then feeding it into TTS network to guide the style\nin synthesizing speech. To avoid Kullback-Leibler (KL) divergence collapse in\ntraining, several techniques are adopted. Finally, the proposed model shows\ngood performance of style control and outperforms Global Style Token (GST)\nmodel in ABX preference tests on style transfer.\n", "title": "Learning latent representations for style control and transfer in end-to-end speech synthesis" }
null
null
null
null
true
null
5775
null
Default
null
null
null
{ "abstract": " We consider a directed variant of the negative-weight percolation model in a\ntwo-dimensional, periodic, square lattice. The problem exhibits edge weights\nwhich are taken from a distribution that allows for both positive and negative\nvalues. Additionally, in this model variant all edges are directed. For a given\nrealization of the disorder, a minimally weighted loop/path configuration is\ndetermined by performing a non-trivial transformation of the original lattice\ninto a minimum weight perfect matching problem. For this problem, fast\npolynomial-time algorithms are available, thus we could study large systems\nwith high accuracy. Depending on the fraction of negatively and positively\nweighted edges in the lattice, a continuous phase transition can be identified,\nwhose characterizing critical exponents we have estimated by a finite-size\nscaling analyses of the numerically obtained data. We observe a strong change\nof the universality class with respect to standard directed percolation, as\nwell as with respect to undirected negative-weight percolation. Furthermore,\nthe relation to directed polymers in random media is illustrated.\n", "title": "Directed negative-weight percolation" }
null
null
null
null
true
null
5776
null
Default
null
null
null
{ "abstract": " In this article we analyze a generalized trapezoidal rule for initial value\nproblems with piecewise smooth right hand side \\(F:\\R^n\\to\\R^n\\). When applied\nto such a problem the classical trapezoidal rule suffers from a loss of\naccuracy if the solution trajectory intersects a nondifferentiability of \\(F\\).\nThe advantage of the proposed generalized trapezoidal rule is threefold:\nFirstly we can achieve a higher convergence order than with the classical\nmethod. Moreover, the method is energy preserving for piecewise linear\nHamiltonian systems. Finally, in analogy to the classical case we derive a\nthird order interpolation polynomial for the numerical trajectory. In the\nsmooth case the generalized rule reduces to the classical one. Hence, it is a\nproper extension of the classical theory. An error estimator is given and\nnumerical results are presented.\n", "title": "Integrating Lipschitzian Dynamical Systems using Piecewise Algorithmic Differentiation" }
null
null
null
null
true
null
5777
null
Default
null
null
null
{ "abstract": " A remarkable discovery of NASA's Kepler mission is the wide diversity in the\naverage densities of planets of similar mass. After gas disk dissipation, fully\nformed planets could interact with nearby planetesimals from a remnant\nplanetesimal disk. These interactions would often lead to planetesimal\naccretion due to the relatively high ratio between the planet size and the hill\nradius for typical planets. We present calculations using the open-source\nstellar evolution toolkit MESA (Modules for Experiments in Stellar\nAstrophysics) modified to include the deposition of planetesimals into the H/He\nenvelopes of sub-Neptunes (~1-20 MEarth). We show that planetesimal accretion\ncan alter the mass-radius isochrones for these planets. The same initial planet\nas a result of the same total accreted planetesimal mass can have up to ~5%\ndifference in mean densities several Gyr after the last accretion due to\ninherent stochasticity of the accretion process. During the phase of rapid\naccretion these differences are more dramatic. The additional energy deposition\nfrom the accreted planetesimals increase the ratio between the planet's radius\nto that of the core during rapid accretion, which in turn leads to enhanced\nloss of atmospheric mass. As a result, the same initial planet can end up with\nvery different envelope mass fractions. These differences manifest as\ndifferences in mean densities long after accretion stops. These effects are\nparticularly important for planets initially less massive than ~10 MEarth and\nwith envelope mass fraction less than ~10%, thought to be the most common type\nof planets discovered by Kepler.\n", "title": "Effects of Planetesimal Accretion on the Thermal and Structural Evolution of Sub-Neptunes" }
null
null
null
null
true
null
5778
null
Default
null
null
null
{ "abstract": " We obtain estimation error rates and sharp oracle inequalities for\nregularization procedures of the form \\begin{equation*}\n\\hat f \\in argmin_{f\\in\nF}\\left(\\frac{1}{N}\\sum_{i=1}^N\\ell(f(X_i), Y_i)+\\lambda \\|f\\|\\right)\n\\end{equation*} when $\\|\\cdot\\|$ is any norm, $F$ is a convex class of\nfunctions and $\\ell$ is a Lipschitz loss function satisfying a Bernstein\ncondition over $F$. We explore both the bounded and subgaussian stochastic\nframeworks for the distribution of the $f(X_i)$'s, with no assumption on the\ndistribution of the $Y_i$'s. The general results rely on two main objects: a\ncomplexity function, and a sparsity equation, that depend on the specific\nsetting in hand (loss $\\ell$ and norm $\\|\\cdot\\|$).\nAs a proof of concept, we obtain minimax rates of convergence in the\nfollowing problems: 1) matrix completion with any Lipschitz loss function,\nincluding the hinge and logistic loss for the so-called 1-bit matrix completion\ninstance of the problem, and quantile losses for the general case, which\nenables estimating any quantile on the entries of the matrix; 2) logistic\nLASSO and variants such as the logistic SLOPE; 3) kernel methods, where the\nloss is the hinge loss, and the regularization function is the RKHS norm.\n", "title": "Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions" }
null
null
null
null
true
null
5779
null
Default
null
null
null
{ "abstract": " For a simple $C^*$-algebra $A$ and any other $C^*$-algebra $B$, it is proved\nthat every closed ideal of $A \\otimes^{\\min} B$ is a product ideal if either\n$A$ is exact or $B$ is nuclear. Closed commutator of a closed ideal in a Banach\nalgebra whose every closed ideal possesses a quasi-central approximate identity\nis described in terms of the commutator of the Banach algebra. If $\\alpha$ is\neither the Haagerup norm, the operator space projective norm or the\n$C^*$-minimal norm, then this allows us to identify all closed Lie ideals of $A\n\\otimes^{\\alpha} B$, where $A$ and $B$ are simple, unital $C^*$-algebras with\none of them admitting no tracial functionals, and to deduce that every\nnon-central closed Lie ideal of $B(H) \\otimes^{\\alpha} B(H)$ contains the\nproduct ideal $K(H) \\otimes^{\\alpha} K(H)$. Closed Lie ideals of $A\n\\otimes^{\\min} C(X)$ are also determined, $A$ being any simple unital\n$C^*$-algebra with at most one tracial state and $X$ any compact Hausdorff\nspace. And, it is shown that closed Lie ideals of $A \\otimes^{\\alpha} K(H)$ are\nprecisely the product ideals, where $A$ is any unital $C^*$-algebra and\n$\\alpha$ any completely positive uniform tensor norm.\n", "title": "On closed Lie ideals of certain tensor products of $C^*$-algebras" }
null
null
null
null
true
null
5780
null
Default
null
null
null
{ "abstract": " We present the first gas-grain astrochemical model of the NGC 2264 CMM3\nprotostellar core. The chemical evolution of the core is affected by changing\nits physical parameters such as the total density and the amount of\ngas-depletion onto grain surfaces as well as the cosmic ray ionisation rate,\n$\\zeta$. We estimated $\\zeta_{\\text {CMM3}}$ = 1.6 $\\times$ 10$^{-17}$\ns$^{-1}$. This value is 1.3 times higher than the standard CR ionisation rate,\n$\\zeta_{\\text {ISM}}$ = 1.3 $\\times$ 10$^{-17}$ s$^{-1}$. Species respond\ndifferently to changes in the core physical conditions, but they are more\nsensitive to changes in the depletion percentage and CR ionisation rate than to\nvariations in the core density. Gas-phase models highlighted the importance of\nsurface reactions as factories of large molecules and showed that for sulphur\nbearing species depletion is important to reproduce observations.\nComparing the results of the reference model with the most recent millimeter\nobservations of the NGC 2264 CMM3 core showed that our model is capable of\nreproducing the observed abundances of most of the species during early stages\n($\\le$ 3$\\times$10$^4$ yrs) of their chemical evolution. Models with variations\nin the core density between 1 - 20 $\\times$ 10$^6$ cm$^{-3}$ are also in good\nagreement with observations during the early time interval 1 $\\times$ 10$^4 <$\nt (yr) $<$ 5 $\\times$ 10$^4$. In addition, models with higher CR ionisation\nrates (5 - 10) $\\times \\zeta_{\\text {ISM}}$ are often overestimating the\nfractional abundances of the species. However, models with $\\zeta_{\\text\n{CMM3}}$ = 5 $\\zeta_{\\text {ISM}}$ may best fit observations at times $\\sim$ 2\n$\\times$ 10$^4$ yrs. Our results suggest that CMM3 is (1 - 5) $\\times$ 10$^4$\nyrs old. Therefore, the core is chemically young and it may host a Class 0\nobject as suggested by previous studies.\n", "title": "On the Chemistry of the Young Massive Protostellar core NGC 2264 CMM3" }
null
null
null
null
true
null
5781
null
Default
null
null
null
{ "abstract": " \\cite{bickel2009nonparametric} developed a general framework to establish\nconsistency of community detection in stochastic block model (SBM). In most\napplications of this framework, the community label is discrete. For example,\nin \\citep{bickel2009nonparametric,zhao2012consistency} the degree corrected SBM\nis assumed to have a discrete degree parameter. In this paper, we generalize\nthe method of \\cite{bickel2009nonparametric} to give consistency analysis of\nmaximum likelihood estimator (MLE) in SBM with continuous community label. We\nshow that there is a standard procedure to transform the $||\\cdot||_2$ error\nbound to the uniform error bound. We demonstrate the application of our general\nresults by proving the uniform consistency (strong consistency) of the MLE in\nthe exponential network model with interaction effect. Unfortunately, in the\ncontinuous parameter case, the condition ensuring uniform consistency we\nobtained is much stronger than that in the discrete parameter case, namely\n$n\\mu_n^5/(\\log n)^{8}\\rightarrow\\infty$ versus $n\\mu_n/\\log\nn\\rightarrow\\infty$. Where $n\\mu_n$ represents the average degree of the\nnetwork. But continuous is the limit of discrete. So it is not surprising as we\nshow that by discretizing the community label space into sufficiently small\n(but not too small) pieces and applying the MLE on the discretized community\nlabel space, uniform consistency holds under almost the same condition as in\ndiscrete community label space. Such a phenomenon is surprising since the\ndiscretization does not depend on the data or the model. This reminds us of the\nthresholding method.\n", "title": "Uniform Consistency in Stochastic Block Model with Continuous Community Label" }
null
null
null
null
true
null
5782
null
Default
null
null
null
{ "abstract": " In the last decades, dispersal studies have benefitted from the use of\nmolecular markers for detecting patterns differing between categories of\nindividuals, and have highlighted sex-biased dispersal in several species. To\nexplain this phenomenon, sex-related handicaps such as parental care have been\nrecently proposed as a hypothesis. Herein we tested this hypothesis in\nArmadillidium vulgare, a terrestrial isopod in which females bear the totality\nof the high parental care costs. We performed a fine-scale analysis of\nsex-specific dispersal patterns, using males and females originating from five\nsampling points located within 70 meters of each other. Based on microsatellite\nmarkers and both F-statistics and spatial autocorrelation analyses, our results\nrevealed that while males did not present a significant structure at this\ngeographic scale, females were significantly more similar to each other when\nthey were collected in the same sampling point. These results support the\nsex-handicap hypothesis, and we suggest that widening dispersal studies to\nother isopods or crustaceans, displaying varying levels of parental care but\ndiffering in their ecology or mating system, might shed light on the processes\nunderlying the evolution of sex-biased dispersal.\n", "title": "Fine-scale population structure analysis in Armadillidium vulgare (Isopoda: Oniscidea) reveals strong female philopatry" }
null
null
null
null
true
null
5783
null
Default
null
null
null
{ "abstract": " An additive fast Fourier transform over a finite field of characteristic two\nefficiently evaluates polynomials at every element of an $\\mathbb{F}_2$-linear\nsubspace of the field. We view these transforms as performing a change of basis\nfrom the monomial basis to the associated Lagrange basis, and consider the\nproblem of performing the various conversions between these two bases, the\nassociated Newton basis, and the '' novel '' basis of Lin, Chung and Han (FOCS\n2014). Existing algorithms are divided between two families, those designed for\narbitrary subspaces and more efficient algorithms designed for specially\nconstructed subspaces of fields with degree equal to a power of two. We\ngeneralise techniques from both families to provide new conversion algorithms\nthat may be applied to arbitrary subspaces, but which benefit equally from the\nspecially constructed subspaces. We then construct subspaces of fields with\nsmooth degree for which our algorithms provide better performance than existing\nalgorithms.\n", "title": "Fast transforms over finite fields of characteristic two" }
null
null
[ "Computer Science" ]
null
true
null
5784
null
Validated
null
null
null
{ "abstract": " Working in the framework of Borel reducibility, we study various notions of\nembeddability between groups. We prove that the embeddability between countable\ngroups, the topological embeddability between (discrete) Polish groups, and the\nisometric embeddability between separable groups with a bounded bi-invariant\ncomplete metric are all invariantly universal analytic quasi-orders. This\nstrengthens some results from [Wil14] and [FLR09].\n", "title": "Universality of group embeddability" }
null
null
null
null
true
null
5785
null
Default
null
null
null
{ "abstract": " In order to fully function in human environments, robot perception will need\nto account for the uncertainty caused by translucent materials. Translucency\nposes several open challenges in the form of transparent objects (e.g.,\ndrinking glasses), refractive media (e.g., water), and diffuse partial\nocclusions (e.g., objects behind stained glass panels). This paper presents\nPlenoptic Monte Carlo Localization (PMCL) as a method for localizing object\nposes in the presence of translucency using plenoptic (light-field)\nobservations. We propose a new depth descriptor, the Depth Likelihood Volume\n(DLV), and its use within a Monte Carlo object localization algorithm. We\npresent results of localizing and manipulating objects with translucent\nmaterials and objects occluded by layers of translucency. Our PMCL\nimplementation uses observations from a Lytro first generation light field\ncamera to allow a Michigan Progress Fetch robot to perform grasping.\n", "title": "Plenoptic Monte Carlo Object Localization for Robot Grasping under Layered Translucency" }
null
null
null
null
true
null
5786
null
Default
null
null
null
{ "abstract": " A general framework for solving the subspace clustering problem using the CUR\ndecomposition is presented. The CUR decomposition provides a natural way to\nconstruct similarity matrices for data that come from a union of unknown\nsubspaces $\\mathscr{U}=\\underset{i=1}{\\overset{M}\\bigcup}S_i$. The similarity\nmatrices thus constructed give the exact clustering in the noise-free case.\nAdditionally, this decomposition gives rise to many distinct similarity\nmatrices from a given set of data, which allow enough flexibility to perform\naccurate clustering of noisy data. We also show that two known methods for\nsubspace clustering can be derived from the CUR decomposition. An algorithm\nbased on the theoretical construction of similarity matrices is presented, and\nexperiments on synthetic and real data are presented to test the method.\nAdditionally, an adaptation of our CUR based similarity matrices is utilized\nto provide a heuristic algorithm for subspace clustering; this algorithm yields\nthe best overall performance to date for clustering the Hopkins155 motion\nsegmentation dataset.\n", "title": "CUR Decompositions, Similarity Matrices, and Subspace Clustering" }
null
null
null
null
true
null
5787
null
Default
null
null
null
{ "abstract": " We study the two-dimensional geometric knapsack problem (2DK) in which we are\ngiven a set of n axis-aligned rectangular items, each one with an associated\nprofit, and an axis-aligned square knapsack. The goal is to find a\n(non-overlapping) packing of a maximum profit subset of items inside the\nknapsack (without rotating items). The best-known polynomial-time approximation\nfactor for this problem (even just in the cardinality case) is (2 + \\epsilon)\n[Jansen and Zhang, SODA 2004].\nIn this paper, we break the 2 approximation barrier, achieving a\npolynomial-time (17/9 + \\epsilon) < 1.89 approximation, which improves to\n(558/325 + \\epsilon) < 1.72 in the cardinality case. Essentially all prior work\non 2DK approximation packs items inside a constant number of rectangular\ncontainers, where items inside each container are packed using a simple greedy\nstrategy. We deviate for the first time from this setting: we show that there\nexists a large profit solution where items are packed inside a constant number\nof containers plus one L-shaped region at the boundary of the knapsack which\ncontains items that are high and narrow and items that are wide and thin. As a\nsecond major and the main algorithmic contribution of this paper, we present a\nPTAS for this case. We believe that this will turn out to be useful in future\nwork in geometric packing problems.\nWe also consider the variant of the problem with rotations (2DKR), where\nitems can be rotated by 90 degrees. Also, in this case, the best-known\npolynomial-time approximation factor (even for the cardinality case) is (2 +\n\\epsilon) [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery\ndeveloped for 2DK plus a few additional ideas, we obtain a polynomial-time (3/2\n+ \\epsilon)-approximation for 2DKR, which improves to (4/3 + \\epsilon) in the\ncardinality case.\n", "title": "Approximating Geometric Knapsack via L-packings" }
null
null
null
null
true
null
5788
null
Default
null
null
null
{ "abstract": " It is widely established that extreme space weather events associated with\nsolar flares are capable of causing widespread technological damage. We develop\na simple mathematical model to assess the economic losses arising from these\nphenomena over time. We demonstrate that the economic damage is characterized\nby an initial period of power-law growth, followed by exponential amplification\nand eventual saturation. We outline a mitigation strategy to protect our planet\nby setting up a magnetic shield to deflect charged particles at the Lagrange\npoint L$_1$, and demonstrate that this approach appears to be realizable in\nterms of its basic physical parameters. We conclude our analysis by arguing\nthat shielding strategies adopted by advanced civilizations will lead to\ntechnosignatures that are detectable by upcoming missions.\n", "title": "Impact and mitigation strategy for future solar flares" }
null
null
null
null
true
null
5789
null
Default
null
null
null
{ "abstract": " We develop an empirical Bayes (EB) algorithm for the matrix completion\nproblems. The EB algorithm is motivated from the singular value shrinkage\nestimator for matrix means by Efron and Morris (1972). Since the EB algorithm\nis essentially the EM algorithm applied to a simple model, it does not require\nheuristic parameter tuning other than tolerance. Numerical results demonstrated\nthat the EB algorithm achieves a good trade-off between accuracy and efficiency\ncompared to existing algorithms and that it works particularly well when the\ndifference between the number of rows and columns is large. Application to real\ndata also shows the practical utility of the EB algorithm.\n", "title": "Empirical Bayes Matrix Completion" }
null
null
null
null
true
null
5790
null
Default
null
null
null
{ "abstract": " Electron correlation effects are studied in ZrSiS using a combination of\nfirst-principles and model approaches. We show that basic electronic properties\nof ZrSiS can be described within a two-dimensional lattice model of two nested\nsquare lattices. High degree of electron-hole symmetry characteristic for ZrSiS\nis one of the key features of this model. Having determined model parameters\nfrom first-principles calculations, we then explicitly take electron-electron\ninteractions into account and show that at moderately low temperatures ZrSiS\nexhibits excitonic instability, leading to the formation of a pseudogap in the\nelectronic spectrum. The results can be understood in terms of\nCoulomb-interaction-assisted pairing of electrons and holes reminiscent to that\nof an excitonic insulator. Our finding allows us to provide a physical\ninterpretation to the unusual mass enhancement of charge carriers in ZrSiS\nrecently observed experimentally.\n", "title": "Excitonic Instability and Pseudogap Formation in Nodal Line Semimetal ZrSiS" }
null
null
null
null
true
null
5791
null
Default
null
null
null
{ "abstract": " Deep generative models learn a mapping from a low dimensional latent space to\na high-dimensional data space. Under certain regularity conditions, these\nmodels parameterize nonlinear manifolds in the data space. In this paper, we\ninvestigate the Riemannian geometry of these generated manifolds. First, we\ndevelop efficient algorithms for computing geodesic curves, which provide an\nintrinsic notion of distance between points on the manifold. Second, we develop\nan algorithm for parallel translation of a tangent vector along a path on the\nmanifold. We show how parallel translation can be used to generate analogies,\ni.e., to transport a change in one data point into a semantically similar\nchange of another data point. Our experiments on real image data show that the\nmanifolds learned by deep generative models, while nonlinear, are surprisingly\nclose to zero curvature. The practical implication is that linear paths in the\nlatent space closely approximate geodesics on the generated manifold. However,\nfurther investigation into this phenomenon is warranted, to identify if there\nare other architectures or datasets where curvature plays a more prominent\nrole. We believe that exploring the Riemannian geometry of deep generative\nmodels, using the tools developed in this paper, will be an important step in\nunderstanding the high-dimensional, nonlinear spaces these models learn.\n", "title": "The Riemannian Geometry of Deep Generative Models" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
5792
null
Validated
null
null
null
{ "abstract": " Building and deploying software on high-end computing systems is a\nchallenging task. High performance applications have to reliably run across\nmultiple platforms and environments, and make use of site-specific resources\nwhile resolving complicated software-stack dependencies. Containers are a type\nof lightweight virtualization technology that attempt to solve this problem by\npackaging applications and their environments into standard units of software\nthat are: portable, easy to build and deploy, have a small footprint, and low\nruntime overhead. In this work we present an extension to the container runtime\nof Shifter that provides containerized applications with a mechanism to access\nGPU accelerators and specialized networking from the host system, effectively\nenabling performance portability of containers across HPC resources. The\npresented extension makes possible to rapidly deploy high-performance software\non supercomputers from containerized applications that have been developed,\nbuilt, and tested in non-HPC commodity hardware, e.g. the laptop or workstation\nof a researcher.\n", "title": "Portable, high-performance containers for HPC" }
null
null
null
null
true
null
5793
null
Default
null
null
null
{ "abstract": " Adversarially trained deep neural networks have significantly improved\nperformance of single image super resolution, by hallucinating photorealistic\nlocal textures, thereby greatly reducing the perception difference between a\nreal high resolution image and its super resolved (SR) counterpart. However,\napplication to medical imaging requires preservation of diagnostically relevant\nfeatures while refraining from introducing any diagnostically confusing\nartifacts. We propose using a deep convolutional super resolution network\n(SRNet) trained for (i) minimising reconstruction loss between the real and SR\nimages, and (ii) maximally confusing learned relativistic visual Turing test\n(rVTT) networks to discriminate between (a) pair of real and SR images (T1) and\n(b) pair of patches in real and SR selected from region of interest (T2). The\nadversarial loss of T1 and T2 while backpropagated through SRNet helps it learn\nto reconstruct pathorealism in the regions of interest such as white blood\ncells (WBC) in peripheral blood smears or epithelial cells in histopathology of\ncancerous biopsy tissues, which are experimentally demonstrated here.\nExperiments performed for measuring signal distortion loss using peak signal to\nnoise ratio (pSNR) and structural similarity (SSIM) with variation of SR scale\nfactors, impact of rVTT adversarial losses, and impact on reporting using SR on\na commercially available artificial intelligence (AI) digital pathology system\nsubstantiate our claims.\n", "title": "Learning a Deep Convolution Network with Turing Test Adversaries for Microscopy Image Super Resolution" }
null
null
null
null
true
null
5794
null
Default
null
null
null
{ "abstract": " We consider a hydrogen atom confined in time-dependent trap created by a\nspherical impenetrable box with time-dependent radius. For such model we study\nthe behavior of atomic electron under the (non-adiabatic) dynamical confinement\ncaused by the rapidly moving wall of the box. The expectation values of the\ntotal and kinetic energy, average force, pressure and coordinate are analyzed\nas a function of time for linearly expanding, contracting and harmonically\nbreathing boxes. It is shown that linearly extending box leads to de-excitation\nof the atom, while the rapidly contracting box causes the creation of very high\npressure on the atom and transition of the atomic electron into the unbound\nstate. In harmonically breathing box diffusive excitation of atomic electron\nmay occur in analogy with that for atom in a microwave field.\n", "title": "Quantum dynamics of a hydrogen-like atom in a time-dependent box: non-adiabatic regime" }
null
null
null
null
true
null
5795
null
Default
null
null
null
{ "abstract": " The constant pairing Hamiltonian holds exact solutions worked out by\nRichardson in the early Sixties. This exact solution of the pairing Hamiltonian\nregained interest at the end of the Nineties. The discret complex-energy states\nhad been included in the Richardson's solutions by Hasegawa et al. [1]. In this\ncontribution we reformulate the problem of determining the exact eigenenergies\nof the pairing Hamiltonian when the continuum is included through the single\nparticle level density. The solutions with discret complex-energy states is\nrecovered by analytic continuation of the equations to the complex energy\nplane. This formulation may be applied to loosely bound system where the\ncorrelations with the continuum-spectrum of energy is really important. Some\ndetails are given to show how the many-body eigenenergy emerges as sum of the\npair-energies.\n", "title": "Richardson's solutions in the real- and complex-energy spectrum" }
null
null
null
null
true
null
5796
null
Default
null
null
null
{ "abstract": " This paper considers how to obtain MCMC quantitative convergence bounds which\ncan be translated into tight complexity bounds in high-dimensional setting. We\npropose a modified drift-and-minorization approach, which establishes a\ngeneralized drift condition defined in a subset of the state space. The subset\nis called the \"large set\", and is chosen to rule out some \"bad\" states which\nhave poor drift property when the dimension gets large. Using the \"large set\"\ntogether with a \"centered\" drift function, a quantitative bound can be obtained\nwhich can be translated into a tight complexity bound. As a demonstration, we\nanalyze a certain realistic Gibbs sampler algorithm and obtain a complexity\nupper bound for the mixing time, which shows that the number of iterations\nrequired for the Gibbs sampler to converge is constant. It is our hope that\nthis modified drift-and-minorization approach can be employed in many other\nspecific examples to obtain complexity bounds for high-dimensional Markov\nchains.\n", "title": "Complexity Results for MCMC derived from Quantitative Bounds" }
null
null
null
null
true
null
5797
null
Default
null
null
null
{ "abstract": " Let $f:{\\mathbb B}^n \\to {\\mathbb B}^N$ be a holomorphic map. We study\nsubgroups $\\Gamma_f \\subseteq {\\rm Aut}({\\mathbb B}^n)$ and $T_f \\subseteq {\\rm\nAut}({\\mathbb B}^N)$. When $f$ is proper, we show both these groups are Lie\nsubgroups. When $\\Gamma_f$ contains the center of ${\\bf U}(n)$, we show that\n$f$ is spherically equivalent to a polynomial. When $f$ is minimal we show that\nthere is a homomorphism $\\Phi:\\Gamma_f \\to T_f$ such that $f$ is equivariant\nwith respect to $\\Phi$. To do so, we characterize minimality via the triviality\nof a third group $H_f$. We relate properties of ${\\rm Ker}(\\Phi)$ to older\nresults on invariant proper maps between balls. When $f$ is proper but\ncompletely non-rational, we show that either both $\\Gamma_f$ and $T_f$ are\nfinite or both are noncompact.\n", "title": "Symmetries and regularity for holomorphic maps between balls" }
null
null
null
null
true
null
5798
null
Default
null
null
null
{ "abstract": " Energy-conserving, angular momentum-changing collisions between protons and\nhighly excited Rydberg hydrogen atoms are important for precise understanding\nof atomic recombination at the photon decoupling era, and the elemental\nabundance after primordial nucleosynthesis. Early approaches to $\\ell$-changing\ncollisions used perturbation theory for only dipole-allowed ($\\Delta \\ell=\\pm\n1$) transitions. An exact non-perturbative quantum mechanical treatment is\npossible, but it comes at computational cost for highly excited Rydberg states.\nIn this note we show how to obtain a semi-classical limit that is accurate and\nsimple, and develop further physical insights afforded by the non-perturbative\nquantum mechanical treatment.\n", "title": "On the treatment of $\\ell$-changing proton-hydrogen Rydberg atom collisions" }
null
null
null
null
true
null
5799
null
Default
null
null
null
{ "abstract": " Why do some economic activities agglomerate more than others? And, why does\nthe agglomeration of some economic activities continue to increase despite\nrecent developments in communication and transportation technologies? In this\npaper, we present evidence that complex economic activities concentrate more in\nlarge cities. We find this to be true for technologies, scientific\npublications, industries, and occupations. Using historical patent data, we\nshow that the urban concentration of complex economic activities has been\ncontinuously increasing since 1850. These findings suggest that the increasing\nurban concentration of jobs and innovation might be a consequence of the\ngrowing complexity of the economy.\n", "title": "Complex Economic Activities Concentrate in Large Cities" }
null
null
[ "Computer Science" ]
null
true
null
5800
null
Validated
null
null