Dataset schema (each record below lists these fields, in this order):

- text: null
- inputs: dict with keys "abstract" and "title"
- prediction: null
- prediction_agent: null
- annotation: list of category labels, or null
- annotation_agent: null
- multi_label: bool (1 class: true)
- explanation: null
- id: string, length 1 to 5
- metadata: null
- status: string (2 values: Default, Validated)
- event_timestamp: null
- metrics: null
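The schema above can be illustrated with a short Python sketch. It assumes the records are serialized one JSON object per line; the example field values are copied from record 5905 below, with the abstract elided.

```python
import json

# One record following the schema: "inputs" holds the paper's abstract and
# title, "annotation" the assigned category labels (null while status is
# Default, a list of labels once Validated).
record = {
    "text": None,
    "inputs": {
        "abstract": "...",  # elided here; see the full record below
        "title": "A short variational proof of equivalence between policy gradients and soft Q learning",
    },
    "prediction": None,
    "prediction_agent": None,
    "annotation": ["Computer Science"],
    "annotation_agent": None,
    "multi_label": True,
    "explanation": None,
    "id": "5905",
    "metadata": None,
    "status": "Validated",
    "event_timestamp": None,
    "metrics": None,
}

# "id" is a string of length 1 to 5; "status" takes one of two values.
assert record["status"] in ("Default", "Validated")
assert 1 <= len(record["id"]) <= 5

# Round-trip through a JSON line, as the record would appear in a dump.
line = json.dumps(record)
assert json.loads(line)["annotation"] == ["Computer Science"]
```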
{ "abstract": " In J.D. Jackson's Classical Electrodynamics textbook, the analysis of Dirac's\ncharge quantization condition in the presence of a magnetic monopole has a\nmathematical omission and an all too brief physical argument that might mislead\nsome students. This paper presents a detailed derivation of Jackson's main\nresult, explains the significance of the missing term, and highlights the close\nconnection between Jackson's findings and Dirac's original argument.\n", "title": "Comment on Jackson's analysis of electric charge quantization due to interaction with Dirac's magnetic monopole" }
id: 5901 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " The OPERA experiment was designed to search for $\\nu_{\\mu} \\rightarrow\n\\nu_{\\tau}$ oscillations in appearance mode through the direct observation of\ntau neutrinos in the CNGS neutrino beam. In this paper, we report a study of\nthe multiplicity of charged particles produced in charged-current neutrino\ninteractions in lead. We present charged hadron average multiplicities, their\ndispersion and investigate the KNO scaling in different kinematical regions.\nThe results are presented in detail in the form of tables that can be used in\nthe validation of Monte Carlo generators of neutrino-lead interactions.\n", "title": "Study of charged hadron multiplicities in charged-current neutrino-lead interactions in the OPERA detector" }
id: 5902 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " This paper explains a method to calculate the coefficients of the\nAlekseev-Torossian associator as linear combinations of iterated integrals of\nKontsevich weight forms of Lie graphs.\n", "title": "On the coefficients of the Alekseev Torossian associator" }
id: 5903 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " We determine the exact time-dependent non-idempotent one-particle reduced\ndensity matrix and its spectral decomposition for a harmonically confined\ntwo-particle correlated one-dimensional system when the interaction terms in\nthe Schrödinger Hamiltonian are changed abruptly. Based on this matrix in\ncoordinate space we derive a precise condition for the equivalence of the purity\nand the overlap-square of the correlated and non-correlated wave functions as\nthe system evolves in time. This equivalence holds only if the interparticle\ninteractions are affected, while the confinement terms are unaffected within\nthe stability range of the system. Under this condition we also analyze various\ntime-dependent measures of entanglement and demonstrate that, depending on the\nmagnitude of the changes made in the Schrödinger Hamiltonian, periodic,\nlogarithmically increasing or constant value behavior of the von Neumann entropy\ncan occur.\n", "title": "Exact spectral decomposition of a time-dependent one-particle reduced density matrix" }
id: 5904 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Two main families of reinforcement learning algorithms, Q-learning and policy\ngradients, have recently been proven to be equivalent when using a softmax\nrelaxation on one part, and an entropic regularization on the other. We relate\nthis result to the well-known convex duality of Shannon entropy and the softmax\nfunction. Such a result is also known as the Donsker-Varadhan formula. This\nprovides a short proof of the equivalence. We then interpret this duality\nfurther, and use ideas of convex analysis to prove a new policy inequality\nrelative to soft Q-learning.\n", "title": "A short variational proof of equivalence between policy gradients and soft Q learning" }
id: 5905 | status: Validated | annotation: [ "Computer Science" ] | multi_label: true | all other fields: null

{ "abstract": " Storage and transmission in big data are discussed in this paper, where\nmessage importance is taken into account. Similar to Shannon Entropy and Renyi\nEntropy, we define non-parametric message important measure (NMIM) as a measure\nfor the message importance in the scenario of big data, which can characterize\nthe uncertainty of random events. It is proved that the proposed NMIM can\nsufficiently describe two key characters of big data: rare events finding and\nlarge diversities of events. Based on NMIM, we first propose an effective\ncompressed encoding mode for data storage, and then discuss the channel\ntransmission over some typical channel models. Numerical simulation results\nshow that using our proposed strategy occupies less storage space without\nlosing too much message importance, and there are growth region and saturation\nregion for the maximum transmission, which contributes to designing of better\npractical communication system.\n", "title": "Non-parametric Message Important Measure: Storage Code Design and Transmission Planning for Big Data" }
id: 5906 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Graphene and some graphene like two dimensional materials; hexagonal boron\nnitride (hBN) and silicene have unique mechanical properties which severely\nlimit the suitability of conventional theories used for common brittle and\nductile materials to predict the fracture response of these materials. This\nstudy revealed the fracture response of graphene, hBN and silicene nanosheets\nunder different tiny crack lengths by molecular dynamics (MD) simulations using\nLAMMPS. The useful strength of these large area two dimensional materials are\ndetermined by their fracture toughness. Our study shows a comparative analysis\nof mechanical properties among the elemental analogues of graphene and\nsuggested that hBN can be a good substitute for graphene in terms of mechanical\nproperties. We have also found that the pre-cracked sheets fail in brittle\nmanner and their failure is governed by the strength of the atomic bonds at the\ncrack tip. The MD prediction of fracture toughness shows significant difference\nwith the fracture toughness determined by Griffith's theory of brittle failure\nwhich restricts the applicability of Griffith's criterion for these materials\nin case of nano-cracks. Moreover, the strengths measured in armchair and zigzag\ndirections of nanosheets of these materials implied that the bonds in armchair\ndirection has the stronger capability to resist crack propagation compared to\nzigzag direction.\n", "title": "Graphene and its elemental analogue: A molecular dynamics view of fracture phenomenon" }
id: 5907 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Skilled robotic manipulation benefits from complex synergies between\nnon-prehensile (e.g. pushing) and prehensile (e.g. grasping) actions: pushing\ncan help rearrange cluttered objects to make space for arms and fingers;\nlikewise, grasping can help displace objects to make pushing movements more\nprecise and collision-free. In this work, we demonstrate that it is possible to\ndiscover and learn these synergies from scratch through model-free deep\nreinforcement learning. Our method involves training two fully convolutional\nnetworks that map from visual observations to actions: one infers the utility\nof pushes for a dense pixel-wise sampling of end effector orientations and\nlocations, while the other does the same for grasping. Both networks are\ntrained jointly in a Q-learning framework and are entirely self-supervised by\ntrial and error, where rewards are provided from successful grasps. In this\nway, our policy learns pushing motions that enable future grasps, while\nlearning grasps that can leverage past pushes. During picking experiments in\nboth simulation and real-world scenarios, we find that our system quickly\nlearns complex behaviors amid challenging cases of clutter, and achieves better\ngrasping success rates and picking efficiencies than baseline alternatives\nafter only a few hours of training. We further demonstrate that our method is\ncapable of generalizing to novel objects. Qualitative results (videos), code,\npre-trained models, and simulation environments are available at\nthis http URL\n", "title": "Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning" }
id: 5908 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Evaluation and validation of complicated control systems are crucial to\nguarantee usability and safety. Usually, failure happens in some very rarely\nencountered situations, but once triggered, the consequence is disastrous.\nAccelerated Evaluation is a methodology that efficiently tests those\nrarely-occurring yet critical failures via smartly-sampled test cases. The\ndistribution used in sampling is pivotal to the performance of the method, but\nbuilding a suitable distribution requires case-by-case analysis. This paper\nproposes a versatile approach for constructing sampling distribution using\nkernel method. The approach uses statistical learning tools to approximate the\ncritical event sets and constructs distributions based on the unique properties\nof Gaussian distributions. We applied the method to evaluate the automated\nvehicles. Numerical experiments show proposed approach can robustly identify\nthe rare failures and significantly reduce the evaluation time.\n", "title": "A Versatile Approach to Evaluating and Testing Automated Vehicles based on Kernel Methods" }
id: 5909 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Energy-efficiency plays a significant role given the battery lifetime\nconstraints in embedded systems and hand-held devices. In this work we target\nthe ARM big.LITTLE, a heterogeneous platform that is dominant in the mobile and\nembedded market, which allows code to run transparently on different\nmicroarchitectures with individual energy and performance characteristics. It\nallows the use of more energy-efficient cores to conserve power during simple tasks\nand idle times and switch over to faster, more power hungry cores when\nperformance is needed. This proposal explores the power-savings and the\nperformance gains that can be achieved by utilizing the ARM big.LITTLE core in\ncombination with Decoupled Access-Execute (DAE). DAE is a compiler technique\nthat splits code regions into two distinct phases: a memory-bound Access phase\nand a compute-bound Execute phase. By scheduling the memory-bound phase on the\nLITTLE core, and the compute-bound phase on the big core, we conserve energy\nwhile caching data from main memory and perform computations at maximum\nperformance. Our preliminary findings show that applying DAE on ARM big.LITTLE\nhas potential. By prefetching data in Access we can achieve an IPC improvement\nof up to 37% in the Execute phase, and manage to shift more than half of the\nprogram runtime to the LITTLE core. We also provide insight into advantages and\ndisadvantages of our approach, present preliminary results and discuss\npotential solutions to overcome locking overhead.\n", "title": "Decoupled Access-Execute on ARM big.LITTLE" }
id: 5910 | status: Validated | annotation: [ "Computer Science" ] | multi_label: true | all other fields: null

{ "abstract": " We study properties of classes of closure operators and closure systems\nparameterized by systems of isotone Galois connections. The parameterizations\nexpress stronger requirements on idempotency and monotony conditions of closure\noperators. The present approach extends previous approaches to fuzzy closure\noperators which appeared in analysis of object-attribute data with graded\nattributes and reasoning with if-then rules in graded setting and is also\nrelated to analogous results developed in linear temporal logic. In the paper,\nwe present foundations of the operators and include examples of general\nproblems in data analysis where such operators appear.\n", "title": "Closure structures parameterized by systems of isotone Galois connections" }
id: 5911 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " A compact multiple-input-multiple-output (MIMO) antenna with very high\nisolation is proposed for ultrawide-band (UWB) applications. The antenna with a\ncompact size of 30.1x20.5 mm^2 (0.31${\\lambda}_0$ x0.21${\\lambda}_0$ ) consists\nof two planar-monopole antenna elements. It is found that isolation of more\nthan 25 dB can be achieved between two parallel monopole antenna elements. For\nthe low-frequency isolation, an efficient technique of bending the feed-line\nand applying a new protruded ground is introduced. To increase isolation, a\ndesign based on suppressing surface wave, near-field, and far-field coupling is\napplied. The simulation and measurement results of the proposed antenna with\nthe good agreement are presented and show a bandwidth with S 11 < -10 dB, S 12\n< -25 dB ranged from 3.1 to 10.6 GHz making the proposed antenna a good\ncandidate for UWB MIMO systems.\n", "title": "High Isolation Improvement in a Compact UWB MIMO Antenna" }
id: 5912 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Tensor decompositions are used in various data mining applications from\nsocial network to medical applications and are extremely useful in discovering\nlatent structures or concepts in the data. Many real-world applications are\ndynamic in nature and so are their data. To deal with this dynamic nature of\ndata, there exist a variety of online tensor decomposition algorithms. A\ncentral assumption in all those algorithms is that the number of latent\nconcepts remains fixed throughout the entire stream. However, this need not be\nthe case. Every incoming batch in the stream may have a different number of\nlatent concepts, and the difference in latent concepts from one tensor batch to\nanother can provide insights into how our findings in a particular application\nbehave and deviate over time. In this paper, we define \"concept\" and \"concept\ndrift\" in the context of streaming tensor decomposition, as the manifestation\nof the variability of latent concepts throughout the stream. Furthermore, we\nintroduce SeekAndDestroy, an algorithm that detects concept drift in streaming\ntensor decomposition and is able to produce results robust to that drift. To\nthe best of our knowledge, this is the first work that investigates concept\ndrift in streaming tensor decomposition. We extensively evaluate SeekAndDestroy\non synthetic datasets, which exhibit a wide variety of realistic drift. Our\nexperiments demonstrate the effectiveness of SeekAndDestroy, both in the\ndetection of concept drift and in the alleviation of its effects, producing\nresults with similar quality to decomposing the entire tensor in one shot.\nAdditionally, in real datasets, SeekAndDestroy outperforms other streaming\nbaselines, while discovering novel useful components.\n", "title": "Identifying and Alleviating Concept Drift in Streaming Tensor Decomposition" }
id: 5913 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Classifiers operating in a dynamic, real world environment, are vulnerable to\nadversarial activity, which causes the data distribution to change over time.\nThese changes are traditionally referred to as concept drift, and several\napproaches have been developed in literature to deal with the problem of drift\nhandling and detection. However, most concept drift handling techniques,\napproach it as a domain independent task, to make them applicable to a wide\ngamut of reactive systems. These techniques were developed from an adversarial\nagnostic perspective, where they are naive and assume that drift is a benign\nchange, which can be fixed by updating the model. However, this is not the case\nwhen an active adversary is trying to evade the deployed classification system.\nIn such an environment, the properties of concept drift are unique, as the\ndrift is intended to degrade the system and at the same time designed to avoid\ndetection by traditional concept drift detection techniques. This special\ncategory of drift is termed as adversarial drift, and this paper analyzes its\ncharacteristics and impact, in a streaming environment. A novel framework for\ndealing with adversarial concept drift is proposed, called the Predict-Detect\nstreaming framework. Experimental evaluation of the framework, on generated\nadversarial drifting data streams, demonstrates that this framework is able to\nprovide reliable unsupervised indication of drift, and is able to recover from\ndrifts swiftly. While traditional partially labeled concept drift detection\nmethodologies fail to detect adversarial drifts, the proposed framework is able\nto detect such drifts and operates with <6% labeled data, on average. Also, the\nframework provides benefits for active learning over imbalanced data streams,\nby innately providing for feature space honeypots, where minority class\nadversarial samples may be captured.\n", "title": "Handling Adversarial Concept Drift in Streaming Data" }
id: 5914 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " We present a definition of intersection homology for real algebraic varieties\nthat is analogous to Goresky and MacPherson's original definition of\nintersection homology for complex varieties.\n", "title": "Real intersection homology" }
id: 5915 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Starshades are a leading technology to enable the direct detection and\nspectroscopic characterization of Earth-like exoplanets. In an effort to\nadvance starshade technology through system level demonstrations, the\nMcMath-Pierce Solar Telescope was adapted to enable the suppression of\nastronomical sources with a starshade. The long baselines achievable with the\nheliostat provide measurements of starshade performance at a flight-like\nFresnel number and resolution, aspects critical to the validation of optical\nmodels. The heliostat has provided the opportunity to perform the first\nastronomical observations with a starshade and has made science accessible in a\nunique parameter space, high contrast at moderate inner working angles. On-sky\nimages are valuable for developing the experience and tools needed to extract\nscience results from future starshade observations. We report on high contrast\nobservations of nearby stars provided by a starshade. We achieve 5.6e-7\ncontrast at 30 arcseconds inner working angle on the star Vega and provide new\nphotometric constraints on background stars near Vega.\n", "title": "High Contrast Observations of Bright Stars with a Starshade" }
id: 5916 | status: Validated | annotation: [ "Physics" ] | multi_label: true | all other fields: null

{ "abstract": " The paper covers a formulation of the inverse quadratic programming problem\nin terms of unconstrained optimization where it is required to find the unknown\nparameters (the matrix of the quadratic form and the vector of the quasi-linear\npart of the quadratic form) provided that approximate estimates of the optimal\nsolution of the direct problem and those of the target function to be minimized\nin the form of pairs of values lying in the corresponding neighborhoods are\nonly known. The formulation of the inverse problem and its solution are based\non the least squares method. In the explicit form the inverse problem solution\nhas been derived in the form a system of linear equations. The parameters\nobtained can be used for reconstruction of the direct quadratic programming\nproblem and determination of the optimal solution and the extreme value of the\ntarget function, which were not known formerly. It is possible this approach\nopens new ways in over applications, for example, in neurocomputing and quadric\nsurfaces fitting. Simple numerical examples have been demonstrated. A scenario\nin the Octave/MATLAB programming language has been proposed for practical\nimplementation of the method.\n", "title": "Unconstrained inverse quadratic programming problem" }
id: 5917 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Twisting a binary form $F_0(X,Y)\\in{\\mathbb{Z}}[X,Y]$ of degree $d\\ge 3$ by\npowers $\\upsilon^a$ ($a\\in{\\mathbb{Z}}$) of an algebraic unit $\\upsilon$ gives\nrise to a binary form $F_a(X,Y)\\in{\\mathbb{Z}}[X,Y]$. More precisely, when $K$\nis a number field of degree $d$, $\\sigma_1,\\sigma_2,\\dots,\\sigma_d$ the\nembeddings of $K$ into $\\mathbb{C}$, $\\alpha$ a nonzero element in $K$,\n$a_0\\in{\\mathbb{Z}}$, $a_0>0$ and $$ F_0(X,Y)=a_0\\displaystyle\\prod_{i=1}^d\n(X-\\sigma_i(\\alpha) Y), $$ then for $a\\in{\\mathbb{Z}}$ we set $$\nF_a(X,Y)=\\displaystyle a_0\\prod_{i=1}^d (X-\\sigma_i(\\alpha\\upsilon^a) Y). $$\nGiven $m\\ge 0$, our main result is an effective upper bound for the solutions\n$(x,y,a)\\in{\\mathbb{Z}}^3$ of the Diophantine inequalities $$ 0<|F_a(x,y)|\\le m\n$$ for which $xy\\not=0$ and ${\\mathbb{Q}}(\\alpha \\upsilon^a)=K$. Our estimate\ninvolves an effectively computable constant depending only on $d$; it is\nexplicit in terms of $m$, in terms of the heights of $F_0$ and of $\\upsilon$,\nand in terms of the regulator of the number field $K$.\n", "title": "Families of Thue equations associated with a rank one subgroup of the unit group of a number field" }
id: 5918 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Defects between gapped boundaries provide a possible physical realization of\nprojective non-abelian braid statistics. A notable example is the projective\nMajorana/parafermion braid statistics of boundary defects in fractional quantum\nHall/topological insulator and superconductor heterostructures. In this paper,\nwe develop general theories to analyze the topological properties and\nprojective braiding of boundary defects of topological phases of matter in two\nspatial dimensions. We present commuting Hamiltonians to realize defects\nbetween gapped boundaries in any $(2+1)D$ untwisted Dijkgraaf-Witten theory,\nand use these to describe their topological properties such as their quantum\ndimension. By modeling the algebraic structure of boundary defects through\nmulti-fusion categories, we establish a bulk-edge correspondence between\ncertain boundary defects and symmetry defects in the bulk. Even though it is\nnot clear how to physically braid the defects, this correspondence elucidates\nthe projective braid statistics for many classes of boundary defects, both\namongst themselves and with bulk anyons. Specifically, three such classes of\nimportance to condensed matter physics/topological quantum computation are\nstudied in detail: (1) A boundary defect version of Majorana and parafermion\nzero modes, (2) a similar version of genons in bilayer theories, and (3)\nboundary defects in $\\mathfrak{D}(S_3)$.\n", "title": "On Defects Between Gapped Boundaries in Two-Dimensional Topological Phases of Matter" }
id: 5919 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " The present paper is the second part of a twofold work, whose first part is\nreported in [3], concerning a newly developed Virtual Element Method (VEM) for\n2D continuum problems. The first part of the work proposed a study for linear\nelastic problem. The aim of this part is to explore the features of the VEM\nformulation when material nonlinearity is considered, showing that the accuracy\nand easiness of implementation discovered in the analysis inherent to the first\npart of the work are still retained. Three different nonlinear constitutive\nlaws are considered in the VEM formulation. In particular, the generalized\nviscoplastic model, the classical Mises plasticity with isotropic/kinematic\nhardening and a shape memory alloy (SMA) constitutive law are implemented. The\nversatility with respect to all the considered nonlinear material constitutive\nlaws is demonstrated through several numerical examples, also remarking that\nthe proposed 2D VEM formulation can be straightforwardly implemented as in a\nstandard nonlinear structural finite element method (FEM) framework.\n", "title": "Arbitrary order 2D virtual elements for polygonal meshes: Part II, inelastic problem" }
id: 5920 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " This paper aims to develop a new and robust approach to feature\nrepresentation. Motivated by the success of Auto-Encoders, we first theoretically\nsummarize the general properties of all algorithms that are based on\ntraditional Auto-Encoders: 1) The reconstruction error of the input can not be\nlower than a lower bound, which can be viewed as a guiding principle for\nreconstructing the input. Additionally, when the input is corrupted with\nnoises, the reconstruction error of the corrupted input also can not be lower\nthan a lower bound. 2) The reconstruction of a hidden representation achieving\nits ideal situation is the necessary condition for the reconstruction of the\ninput to reach the ideal state. 3) Minimizing the Frobenius norm of the\nJacobian matrix of the hidden representation has a deficiency and may result in\na much worse local optimum value. We believe that minimizing the reconstruction\nerror of the hidden representation is more robust than minimizing the Frobenius\nnorm of the Jacobian matrix of the hidden representation. Based on the above\nanalysis, we propose a new model termed Double Denoising Auto-Encoders (DDAEs),\nwhich uses corruption and reconstruction on both the input and the hidden\nrepresentation. We demonstrate that the proposed model is highly flexible and\nextensible and has a potentially better capability to learn invariant and\nrobust feature representations. We also show that our model is more robust than\nDenoising Auto-Encoders (DAEs) for dealing with noises or inessential features.\nFurthermore, we detail how to train DDAEs with two different pre-training\nmethods by optimizing the objective function in a combined and separate manner,\nrespectively. Comparative experiments illustrate that the proposed model is\nsignificantly better for representation learning than the state-of-the-art\nmodels.\n", "title": "Reconstruction of Hidden Representation for Robust Feature Extraction" }
id: 5921 | status: Validated | annotation: [ "Computer Science", "Statistics" ] | multi_label: true | all other fields: null

{ "abstract": " This paper describes a general framework for learning Higher-Order Network\nEmbeddings (HONE) from graph data based on network motifs. The HONE framework\nis highly expressive and flexible with many interchangeable components. The\nexperimental results demonstrate the effectiveness of learning higher-order\nnetwork representations. In all cases, HONE outperforms recent embedding\nmethods that are unable to capture higher-order structures with a mean relative\ngain in AUC of $19\\%$ (and up to $75\\%$ gain) across a wide variety of networks\nand embedding methods.\n", "title": "HONE: Higher-Order Network Embeddings" }
id: 5922 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " A fourth-order theory of gravity is considered which in terms of dynamics has\nthe same degrees of freedom and number of constraints as those of scalar-tensor\ntheories. In addition it admits a canonical point-like Lagrangian description.\nWe study the critical points of the theory and we show that it can describe the\nmatter epoch of the universe and that two accelerated phases can be recovered\none of which describes a de Sitter universe. Finally for some models exact\nsolutions are presented.\n", "title": "Cosmological Evolution and Exact Solutions in a Fourth-order Theory of Gravity" }
id: 5923 | status: Validated | annotation: [ "Physics", "Mathematics" ] | multi_label: true | all other fields: null

{ "abstract": " A random walk $w_n$ on a separable, geodesic hyperbolic metric space $X$\nconverges to the boundary $\\partial X$ with probability one when the step\ndistribution supports two independent loxodromics. In particular, the random\nwalk makes positive linear progress. Progress is known to be linear with\nexponential decay when (1) the step distribution has exponential tail and (2)\nthe action on $X$ is acylindrical. We extend exponential decay to the\nnon-acylindrical case.\n", "title": "Linear Progress with Exponential Decay in Weakly Hyperbolic Groups" }
id: 5924 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Nearly two centuries ago Talbot first observed the fascinating effect whereby\nlight propagating through a periodic structure generates a `carpet' of image\nrevivals in the near field. Here we report the first observation of the spatial\nTalbot effect for light interacting with periodic Bose-Einstein condensate\ninterference fringes. The Talbot effect can lead to dramatic loss of fringe\nvisibility in images, degrading precision interferometry, however we\ndemonstrate how the effect can also be used as a tool to enhance visibility, as\nwell as extend the useful focal range of matter wave detection systems by\norders of magnitude. We show that negative optical densities arise from\nmatter-wave induced lensing of detuned imaging light -- yielding\nTalbot-enhanced single-shot interference visibility of >135% compared to the\nideal visibility for resonant light.\n", "title": "Talbot-enhanced, maximum-visibility imaging of condensate interference" }
id: 5925 | status: Validated | annotation: [ "Physics" ] | multi_label: true | all other fields: null

{ "abstract": " We perform a numerical study of the F-model with domain-wall boundary\nconditions. Various exact results are known for this particular case of the\nsix-vertex model, including closed expressions for the partition function for\nany system size as well as its asymptotics and leading finite-size corrections.\nTo complement this picture we use a full lattice multi-cluster algorithm to\nstudy equilibrium properties of this model for systems of moderate size, up to\nL=512. We compare the energy to its exactly known large-L asymptotics. We\ninvestigate the model's infinite-order phase transition by means of finite-size\nscaling for an observable derived from the staggered polarization in order to\ntest the method put forward in our recent joint work with Duine and Barkema. In\naddition we analyse local properties of the model. Our data are perfectly\nconsistent with analytical expressions for the arctic curves. We investigate\nthe structure inside the temperate region of the lattice, confirming the\noscillations in vertex densities that were first observed by Sylju{\\aa}sen and\nZvonarev, and recently studied by Lyberg et al. We point out\n'(anti)ferroelectric' oscillations close to the corresponding frozen regions as\nwell as 'higher-order' oscillations forming an intricate pattern with\nsaddle-point-like features.\n", "title": "A numerical study of the F-model with domain-wall boundaries" }
id: 5926 | status: Default | annotation: null | multi_label: true | all other fields: null

{ "abstract": " Neural network training relies on our ability to find \"good\" minimizers of\nhighly non-convex loss functions. It is well-known that certain network\narchitecture designs (e.g., skip connections) produce loss functions that train\neasier, and well-chosen training parameters (batch size, learning rate,\noptimizer) produce minimizers that generalize better. However, the reasons for\nthese differences, and their effects on the underlying loss landscape, are not\nwell understood. In this paper, we explore the structure of neural loss\nfunctions, and the effect of loss landscapes on generalization, using a range\nof visualization methods. First, we introduce a simple \"filter normalization\"\nmethod that helps us visualize loss function curvature and make meaningful\nside-by-side comparisons between loss functions. Then, using a variety of\nvisualizations, we explore how network architecture affects the loss landscape,\nand how training parameters affect the shape of minimizers.\n", "title": "Visualizing the Loss Landscape of Neural Nets" }
null
null
null
null
true
null
5927
null
Default
null
null
null
{ "abstract": " The prototypical Hydrogen bond in water dimer and Hydrogen bonds in the\nprotonated water dimer, in other small molecules, in water cyclic clusters, and\nin ice, covering a wide range of bond strengths, are theoretically investigated\nby first-principles calculations based on the Density Functional Theory,\nconsidering a standard Generalized Gradient Approximation functional but also,\nfor the water dimer, hybrid and van-der-Waals corrected functionals. We compute\nstructural, energetic, and electrostatic (induced molecular dipole moments)\nproperties. In particular, Hydrogen bonds are characterized in terms of\ndifferential electron densities distributions and profiles, and of the shifts\nof the centres of Maximally localized Wannier Functions. The information from\nthe latter quantities can be conveyed into a single geometric bonding parameter\nthat appears to be correlated to the Mayer bond order parameter and can be\ntaken as an estimate of the covalent contribution to the Hydrogen bond. By\nconsidering the cyclic water hexamer and the hexagonal phase of ice we also\nelucidate the importance of cooperative/anticooperative effects in\nHydrogen-bonding formation.\n", "title": "Hydrogen bonding characterization in water and small molecules" }
null
null
null
null
true
null
5928
null
Default
null
null
null
{ "abstract": " We make a mixture of Milner's $\\pi$-calculus and our previous work on truly\nconcurrent process algebra, which is called $\\pi_{tc}$. We introduce syntax and\nsemantics of $\\pi_{tc}$, its properties based on strongly truly concurrent\nbisimilarities. Also, we include an axiomatization of $\\pi_{tc}$. $\\pi_{tc}$\ncan be used as a formal tool in verifying mobile systems in a truly concurrent\nflavor.\n", "title": "A Calculus of Truly Concurrent Mobile Processes" }
null
null
[ "Computer Science" ]
null
true
null
5929
null
Validated
null
null
null
{ "abstract": " Using polarization-resolved transient reflection spectroscopy, we investigate\nthe ultrafast modulation of light interacting with a metasurface consisting of\ncoherently vibrating nanophotonic meta-atoms in the form of U-shaped split-ring\nresonators, that exhibit co-localized optical and mechanical resonances. With a\ntwo-dimensional square-lattice array of these resonators formed of gold on a\nglass substrate, we monitor the visible-pump-pulse induced gigahertz\noscillations in intensity of reflected linearly-polarized infrared probe light\npulses, modulated by the resonators effectively acting as miniature tuning\nforks. A multimodal vibrational response involving the opening and closing\nmotion of the split rings is detected in this way. Numerical simulations of the\nassociated transient deformations and strain fields elucidate the complex\nnanomechanical dynamics contributing to the ultrafast optical modulation, and\npoint to the role of acousto-plasmonic interactions through the opening and\nclosing motion of the SRR gaps as the dominant effect. Applications include\nultrafast acoustooptic modulator design and sensing.\n", "title": "Gigahertz optomechanical modulation by split-ring-resonator nanophotonic meta-atom arrays" }
null
null
null
null
true
null
5930
null
Default
null
null
null
{ "abstract": " This paper considers the problem of fault detection and isolation (FDI) for\nswitched affine models. We first study the model invalidation problem and its\napplication to guaranteed fault detection. Novel and intuitive\noptimization-based formulations are proposed for model invalidation and\nT-distinguishability problems, which we demonstrate to be computationally more\nefficient than an earlier formulation that required a complicated change of\nvariables. Moreover, we introduce a distinguishability index as a measure of\nseparation between the system and fault models, which offers a practical method\nfor finding the smallest receding time horizon that is required for fault\ndetection, and for finding potential design recommendations for ensuring\nT-distinguishability. Then, we extend our fault detection guarantees to the\nproblem of fault isolation with multiple fault models, i.e., the identification\nof the type and location of faults, by introducing the concept of\nI-isolability. An efficient way to implement the FDI scheme is also proposed,\nwhose run-time does not grow with the number of fault models that are\nconsidered. Moreover, we derive bounds on detection and isolation delays and\npresent an adaptive scheme for reducing isolation delays. Finally, the\neffectiveness of the proposed method is illustrated using several examples,\nincluding an HVAC system model with multiple faults.\n", "title": "Guaranteed Fault Detection and Isolation for Switched Affine Models" }
null
null
null
null
true
null
5931
null
Default
null
null
null
{ "abstract": " The majority of industrial-strength object-oriented (OO) software is written\nusing nominally-typed OO programming languages. Extant domain-theoretic models\nof OOP developed to analyze OO type systems miss, however, a crucial feature of\nthese mainstream OO languages: nominality. This paper presents the construction\nof NOOP as the first domain-theoretic model of OOP that includes full\nclass/type names information found in nominally-typed OOP. Inclusion of nominal\ninformation in objects of NOOP and asserting that type inheritance in\nstatically-typed OO programming languages is an inherently nominal notion allow\nreadily proving that type inheritance and subtyping are completely identified\nin these languages. This conclusion is in full agreement with intuitions of\ndevelopers and language designers of these OO languages, and contrary to the\nbelief that \"inheritance is not subtyping,\" which came from assuming\nnon-nominal (a.k.a., structural) models of OOP.\nTo motivate the construction of NOOP, this paper briefly presents the\nbenefits of nominal-typing to mainstream OO developers and OO language\ndesigners, as compared to structural-typing. After presenting NOOP, the paper\nfurther briefly compares NOOP to the most widely known domain-theoretic models\nof OOP. Leveraging the development of NOOP, the comparisons presented in this\npaper provide clear, brief and precise technical and mathematical accounts for\nthe relation between nominal and structural OO type systems. NOOP, thus,\nprovides a firmer semantic foundation for analyzing and progressing\nnominally-typed OO programming languages.\n", "title": "NOOP: A Domain-Theoretic Model of Nominally-Typed OOP" }
null
null
null
null
true
null
5932
null
Default
null
null
null
{ "abstract": " A model of ice floe breakup under ocean wave forcing in the marginal ice zone\n(MIZ) is proposed to investigate how floe size distribution (FSD) evolves under\nrepeated wave breakup events. A three-dimensional linear model of ocean wave\nscattering by a finite array of compliant circular ice floes is coupled to a\nflexural failure model, which breaks a floe into two floes provided the\ntwo-dimensional stress field satisfies a breakup criterion. A closed-feedback\nloop algorithm is devised, which (i)~solves the wave scattering problem for a given\nFSD under time-harmonic plane wave forcing, (ii)~computes the stress field in\nall the floes, (iii)~fractures the floes satisfying the breakup criterion and\n(iv)~generates an updated FSD, initialising the geometry for the next iteration\nof the loop. The FSD after 50 breakup events is uni-modal and near normal, or\nbi-modal. Multiple scattering is found to enhance breakup for long waves and\nthin ice, but to reduce breakup for short waves and thick ice. A breakup front\nmarches forward in the latter regime, as wave-induced fracture weakens the ice\ncover allowing waves to travel deeper into the MIZ.\n", "title": "Modelling wave-induced sea ice breakup in the marginal ice zone" }
null
null
[ "Physics" ]
null
true
null
5933
null
Validated
null
null
null
{ "abstract": " Hidden Markov models (HMMs) are popular time series models in many fields\nincluding ecology, economics and genetics. HMMs can be defined over discrete or\ncontinuous time, though here we only cover the former. In the field of movement\necology in particular, HMMs have become a popular tool for the analysis of\nmovement data because of their ability to connect observed movement data to an\nunderlying latent process, generally interpreted as the animal's unobserved\nbehavior. Further, we model the tendency to persist in a given behavior over\ntime. Notation presented here will generally follow the format of Zucchini et\nal. (2016) and cover HMMs applied in an unsupervised case to animal movement\ndata, specifically positional data. We provide Stan code to analyze movement\ndata of the wild haggis as presented first in Michelot et al. (2016).\n", "title": "An Introduction to Animal Movement Modeling with Hidden Markov Models using Stan for Bayesian Inference" }
null
null
null
null
true
null
5934
null
Default
null
null
null
{ "abstract": " The focus of the current research is to identify people of interest in social\nnetworks. We are especially interested in studying dark networks, which\nrepresent illegal or covert activity. In such networks, people are unlikely to\ndisclose accurate information when queried. We present REDLEARN, an algorithm\nfor sampling dark networks with the goal of identifying as many nodes of\ninterest as possible. We consider two realistic lying scenarios, which describe\nhow individuals in a dark network may attempt to conceal their connections. We\ntest and present our results on several real-world multilayered networks, and\nshow that REDLEARN achieves up to a 340% improvement over the next best\nstrategy.\n", "title": "Sampling a Network to Find Nodes of Interest" }
null
null
null
null
true
null
5935
null
Default
null
null
null
{ "abstract": " Recent character and phoneme-based parametric TTS systems using deep learning\nhave shown strong performance in natural speech generation. However, the choice\nbetween character or phoneme input can create serious limitations for practical\ndeployment, as direct control of pronunciation is crucial in certain cases. We\ndemonstrate a simple method for combining multiple types of linguistic\ninformation in a single encoder, named representation mixing, enabling flexible\nchoice between character, phoneme, or mixed representations during inference.\nExperiments and user studies on a public audiobook corpus show the efficacy of\nour approach.\n", "title": "Representation Mixing for TTS Synthesis" }
null
null
null
null
true
null
5936
null
Default
null
null
null
{ "abstract": " In multi-robot systems where a central decision maker is specifying the\nmovement of each individual robot, a communication failure can severely impair\nthe performance of the system. This paper develops a motion strategy that\nallows robots to safely handle critical communication failures for such\nmulti-robot architectures. For each robot, the proposed algorithm computes a\ntime horizon over which collisions with other robots are guaranteed not to\noccur. These safe time horizons are included in the commands being transmitted\nto the individual robots. In the event of a communication failure, the robots\nexecute the last received velocity commands for the corresponding safe time\nhorizons leading to a provably safe open-loop motion strategy. The resulting\nalgorithm is computationally effective and is agnostic to the task that the\nrobots are performing. The efficacy of the strategy is verified in simulation\nas well as on a team of differential-drive mobile robots.\n", "title": "Safe Open-Loop Strategies for Handling Intermittent Communications in Multi-Robot Systems" }
null
null
null
null
true
null
5937
null
Default
null
null
null
{ "abstract": " The paper addresses the stability of the co-authorship networks in time. The\nanalysis is done on the networks of Slovenian researchers in two time periods\n(1991-2000 and 2001-2010). Two researchers are linked if they published at\nleast one scientific bibliographic unit in a given time period. As proposed by\nKronegger et al. (2011), the global network structures are examined by\ngeneralized blockmodeling with the assumed\nmulti-core--semi-periphery--periphery blockmodel type. The term core denotes a\ngroup of researchers who published together in a systematic way with each\nother.\nThe obtained blockmodels are comprehensively analyzed by visualizations and\nthrough considering several statistics regarding the global network structure.\nTo measure the stability of the obtained blockmodels, different adjusted\nmodified Rand and Wallace indices are applied. These enable us to distinguish\nbetween the splitting and merging of cores when operationalizing the stability\nof cores. Also, the adjusted modified indices can be used when new researchers\noccur in the second time period (newcomers) and when some researchers are no\nlonger present in the second time period (departures). The research disciplines\nare described and clustered according to the values of these indices.\nConsidering the obtained clusters, the sources of instability of the research\ndisciplines are studied (e.g., merging or splitting of cores, newcomers or\ndepartures). Furthermore, the differences in the stability of the obtained\ncores on the level of scientific disciplines are studied by linear regression\nanalysis where some personal characteristics of the researchers (e.g., age,\ngender), are also considered.\n", "title": "Scientific co-authorship networks" }
null
null
null
null
true
null
5938
null
Default
null
null
null
{ "abstract": " Optimizing deep neural networks (DNNs) often suffers from the ill-conditioned\nproblem. We observe that the scaling-based weight space symmetry property in\nrectified nonlinear network will cause this negative effect. Therefore, we\npropose to constrain the incoming weights of each neuron to be unit-norm, which\nis formulated as an optimization problem over Oblique manifold. A simple yet\nefficient method referred to as projection based weight normalization (PBWN) is\nalso developed to solve this problem. PBWN executes standard gradient updates,\nfollowed by projecting the updated weight back to Oblique manifold. This\nproposed method has the property of regularization and collaborates well with\nthe commonly used batch normalization technique. We conduct comprehensive\nexperiments on several widely-used image datasets including CIFAR-10,\nCIFAR-100, SVHN and ImageNet for supervised learning over the state-of-the-art\nconvolutional neural networks, such as Inception, VGG and residual networks.\nThe results show that our method is able to improve the performance of DNNs\nwith different architectures consistently. We also apply our method to Ladder\nnetwork for semi-supervised learning on permutation invariant MNIST dataset,\nand our method outperforms the state-of-the-art methods: we obtain test errors\nas 2.52%, 1.06%, and 0.91% with only 20, 50, and 100 labeled samples,\nrespectively.\n", "title": "Projection Based Weight Normalization for Deep Neural Networks" }
null
null
null
null
true
null
5939
null
Default
null
null
null
{ "abstract": " It is well accepted that knowing the composition and the orbital evolution of\nasteroids may help us to understand the process of formation of the Solar\nSystem. It is also known that asteroids can represent a threat to our planet.\nSuch important role made space missions to asteroids a very popular topic in\nthe current astrodynamics and astronomy studies. By taking into account the\nincreasing interest in space missions to asteroids, especially to multiple\nsystems, we present a study aimed to characterize the stable and unstable\nregions around the triple system of asteroids (45) Eugenia. The goal is to\ncharacterize unstable and stable regions of this system and compare with the\nsystem 2001 SN263 - the target of the ASTER mission. Besides, Prado (2014) used\na new concept for mapping orbits considering the disturbance received by the\nspacecraft from all the perturbing forces individually. This method was also\napplied to (45) Eugenia. We present the stable and unstable regions for\nparticles with relative inclination between 0 and 180 degrees. We found that\n(45) Eugenia presents larger stable regions for both, prograde and retrograde\ncases. This is mainly because the satellites of this system are small when\ncompared to the primary body, and because they are not so close to each other.\nWe also present a comparison between those two triple systems, and a discussion\non how these results may guide us in the planning of future missions.\n", "title": "Mapping stable direct and retrograde orbits around the triple system of asteroids (45) Eugenia" }
null
null
[ "Physics" ]
null
true
null
5940
null
Validated
null
null
null
{ "abstract": " Microorganisms, such as bacteria, are one of the first targets of\nnanoparticles in the environment. In this study, we tested the effect of two\nnanoparticles, ZnO and TiO2, with the salt ZnSO4 as the control, on the\nGram-positive bacterium Bacillus subtilis by 2D gel electrophoresis-based\nproteomics. Despite a significant effect on viability (LD50), TiO2 NPs had no\ndetectable effect on the proteomic pattern, while ZnO NPs and ZnSO4\nsignificantly modified B. subtilis metabolism. These results allowed us to\nconclude that the effects of ZnO observed in this work were mainly attributable\nto Zn dissolution in the culture media. Proteomic analysis highlighted twelve\nmodulated proteins related to central metabolism: MetE and MccB (cysteine\nmetabolism), OdhA, AspB, IolD, AnsB, PdhB and YtsJ (Krebs cycle) and XylA,\nYqjI, Drm and Tal (pentose phosphate pathway). Biochemical assays, such as free\nsulfhydryl, CoA-SH and malate dehydrogenase assays corroborated the observed\ncentral metabolism reorientation and showed that Zn stress induced oxidative\nstress, probably as a consequence of thiol chelation stress by Zn ions. The\nother patterns affected by ZnO and ZnSO4 were the stringent response and the\ngeneral stress response. Nine proteins involved in or controlled by the\nstringent response showed a modified expression profile in the presence of ZnO\nNPs or ZnSO4: YwaC, SigH, YtxH, YtzB, TufA, RplJ, RpsB, PdhB and Mbl. An\nincrease in the ppGpp concentration confirmed the involvement of the stringent\nresponse during a Zn stress. All these metabolic reorientations in response to\nZn stress were probably the result of complex regulatory mechanisms including\nat least the stringent response via YwaC.\n", "title": "Zinc oxide induces the stringent response and major reorientations in the central metabolism of Bacillus subtilis" }
null
null
null
null
true
null
5941
null
Default
null
null
null
{ "abstract": " In the field of exploratory data mining, local structure in data can be\ndescribed by patterns and discovered by mining algorithms. Although many\nsolutions have been proposed to address the redundancy problems in pattern\nmining, most of them either provide succinct pattern sets or take the interests\nof the user into account, but not both. Consequently, the analyst has to invest\nsubstantial effort in identifying those patterns that are relevant to her\nspecific interests and goals. To address this problem, we propose a novel\napproach that combines pattern sampling with interactive data mining. In\nparticular, we introduce the LetSIP algorithm, which builds upon recent\nadvances in 1) weighted sampling in SAT and 2) learning to rank in interactive\npattern mining. Specifically, it exploits user feedback to directly learn the\nparameters of the sampling distribution that represents the user's interests.\nWe compare the performance of the proposed algorithm to the state-of-the-art in\ninteractive pattern mining by emulating the interests of a user. The resulting\nsystem allows efficient and interleaved learning and sampling, thus\nuser-specific anytime data exploration. Finally, LetSIP demonstrates favourable\ntrade-offs concerning both quality-diversity and exploitation-exploration when\ncompared to existing methods.\n", "title": "Learning what matters - Sampling interesting patterns" }
null
null
null
null
true
null
5942
null
Default
null
null
null
{ "abstract": " The automatic verification of programs that maintain unbounded low-level data\nstructures is a critical and open problem. Analyzers and verifiers developed in\nprevious work can synthesize invariants that only describe data structures of\nheavily restricted forms, or require an analyst to provide predicates over\nprogram data and structure that are used in a synthesized proof of correctness.\nIn this work, we introduce a novel automatic safety verifier of programs that\nmaintain low-level data structures, named LTTP. LTTP synthesizes proofs of\nprogram safety represented as a grammar of a given program's control paths,\nannotated with invariants that relate program state at distinct points within\nits path of execution. LTTP synthesizes such proofs completely automatically,\nusing a novel inductive-synthesis algorithm.\nWe have implemented LTTP as a verifier for JVM bytecode and applied it to\nverify the safety of a collection of verification benchmarks. Our results\ndemonstrate that LTTP can be applied to automatically verify the safety of\nprograms that are beyond the scope of previously-developed verifiers.\n", "title": "Proofs as Relational Invariants of Synthesized Execution Grammars" }
null
null
[ "Computer Science" ]
null
true
null
5943
null
Validated
null
null
null
{ "abstract": " We associate to every central simple algebra with involution of orthogonal\ntype in characteristic two a totally singular quadratic form which reflects\ncertain anisotropy properties of the involution. It is shown that this\nquadratic form can be used to classify totally decomposable algebras with\northogonal involution. Also, using this form, a criterion is obtained for an\northogonal involution on a split algebra to be conjugated to the transpose\ninvolution.\n", "title": "Orthogonal involutions and totally singular quadratic forms in characteristic two" }
null
null
null
null
true
null
5944
null
Default
null
null
null
{ "abstract": " Sea-level rise (SLR) is magnifying the frequency and severity of coastal\nflooding. The rate and amount of global mean sea-level (GMSL) rise is a\nfunction of the trajectory of global mean surface temperature (GMST).\nTherefore, temperature stabilization targets (e.g., 1.5 °C and 2.0 °C\nof warming above pre-industrial levels, as from the Paris Agreement) have\nimportant implications for coastal flood risk. Here, we assess differences in\nthe return periods of coastal floods at a global network of tide gauges between\nscenarios that stabilize GMST warming at 1.5 °C, 2.0 °C, and 2.5\n°C above pre-industrial levels. We employ probabilistic, localized SLR\nprojections and long-term hourly tide gauge records to construct estimates of\nthe return levels of current and future flood heights for the 21st and 22nd\ncenturies. By 2100, under 1.5 °C, 2.0 °C, and 2.5 °C GMST\nstabilization, median GMSL is projected to rise 47 cm with a very likely range\nof 28-82 cm (90% probability), 55 cm (very likely 30-94 cm), and 58 cm (very\nlikely 36-93 cm), respectively. As an independent comparison, a semi-empirical\nsea level model calibrated to temperature and GMSL over the past two millennia\nestimates median GMSL will rise within < 13% of these projections. By 2150,\nrelative to the 2.0 °C scenario, GMST stabilization of 1.5 °C\ninundates roughly 5 million fewer inhabitants that currently occupy lands,\nincluding 40,000 fewer individuals currently residing in Small Island\nDeveloping States. Relative to a 2.0 °C scenario, the reduction in the\namplification of the frequency of the 100-yr flood arising from a 1.5 °C\nGMST stabilization is greatest in the eastern United States and in Europe, with\nflood frequency amplification being reduced by about half.\n", "title": "Coastal flood implications of 1.5 °C, 2.0 °C, and 2.5 °C temperature stabilization targets in the 21st and 22nd century" }
null
null
null
null
true
null
5945
null
Default
null
null
null
{ "abstract": " We present a factorized hierarchical variational autoencoder, which learns\ndisentangled and interpretable representations from sequential data without\nsupervision. Specifically, we exploit the multi-scale nature of information in\nsequential data by formulating it explicitly within a factorized hierarchical\ngraphical model that imposes sequence-dependent priors and sequence-independent\npriors to different sets of latent variables. The model is evaluated on two\nspeech corpora to demonstrate, qualitatively, its ability to transform speakers\nor linguistic content by manipulating different sets of latent variables; and\nquantitatively, its ability to outperform an i-vector baseline for speaker\nverification and reduce the word error rate by as much as 35% in mismatched\ntrain/test scenarios for automatic speech recognition tasks.\n", "title": "Unsupervised Learning of Disentangled and Interpretable Representations from Sequential Data" }
null
null
null
null
true
null
5946
null
Default
null
null
null
{ "abstract": " Cosmological surveys in the far infrared are known to suffer from confusion.\nThe Bayesian de-blending tool, XID+, currently provides one of the best ways to\nde-confuse deep Herschel SPIRE images, using a flat flux density prior. This\nwork is to demonstrate that existing multi-wavelength data sets can be\nexploited to improve XID+ by providing an informed prior, resulting in more\naccurate and precise extracted flux densities. Photometric data for galaxies in\nthe COSMOS field were used to constrain spectral energy distributions (SEDs)\nusing the fitting tool CIGALE. These SEDs were used to create Gaussian prior\nestimates in the SPIRE bands for XID+. The multi-wavelength photometry and the\nextracted SPIRE flux densities were run through CIGALE again to allow us to\ncompare the performance of the two priors. Inferred ALMA flux densities\n(F$^i$), at 870$\\mu$m and 1250$\\mu$m, from the best fitting SEDs from the\nsecond CIGALE run were compared with measured ALMA flux densities (F$^m$) as an\nindependent performance validation. Similar validations were conducted with the\nSED modelling and fitting tool MAGPHYS and modified black body functions to\ntest for model dependency. We demonstrate a clear improvement in agreement\nbetween the flux densities extracted with XID+ and existing data at other\nwavelengths when using the new informed Gaussian prior over the original\nuninformed prior. The residuals between F$^m$ and F$^i$ were calculated. For\nthe Gaussian prior, these residuals, expressed as a multiple of the ALMA error\n($\\sigma$), have a smaller standard deviation, 7.95$\\sigma$ for the Gaussian\nprior compared to 12.21$\\sigma$ for the flat prior, reduced mean, 1.83$\\sigma$\ncompared to 3.44$\\sigma$, and have reduced skew to positive values, 7.97\ncompared to 11.50. These results were determined to not be significantly model\ndependent. This results in statistically more reliable SPIRE flux densities.\n", "title": "De-blending Deep Herschel Surveys: A Multi-wavelength Approach" }
null
null
null
null
true
null
5947
null
Default
null
null
null
{ "abstract": " Room-temperature ionic liquids (RTIL) are a new class of organic salts whose\nmelting temperature falls below the conventional limit of 100C. Their low vapor\npressure, moreover, has made these ionic compounds the solvents of choice of\nthe so-called green chemistry. For these and other peculiar characteristics,\nthey are increasingly used in industrial applications. However, studies of\ntheir interaction with living organisms have highlighted mild to severe health\nhazards. Since their cytotoxicity shows a positive correlation with their\nlipo-philicity, several chemical-physical studies of their interaction with\nbiomembranes have been carried out in the last few years, aiming to identify\nthe microscopic mechanisms behind their toxicity. Cation chain length and anion\nnature have been seen to affect the lipo-philicity and, in turn, the toxicity\nof RTILs. The emerging picture, however, raises new questions, points to the\nneed to assess toxicity on a case-by-case basis, but also suggests a potential\npositive role of RTILs in pharmacology, bio-medicine, and, more in general,\nbio-nano-technology. Here, we review this new subject of research, and comment\non the future and the potential importance of this new field of study.\n", "title": "Room-Temperature Ionic Liquids Meet Bio-Membranes: the State-of-the-Art" }
null
null
[ "Physics" ]
null
true
null
5948
null
Validated
null
null
null
{ "abstract": " The use of spreadsheets in industry is widespread. Companies base decisions\non information coming from spreadsheets. Unfortunately, spreadsheets are\nerror-prone and this increases the risk that companies base their decisions on\ninaccurate information, which can lead to incorrect decisions and loss of\nmoney. In general, spreadsheet research is aimed to reduce the error-proneness\nof spreadsheets. Most research is concentrated on the use of formulas. However,\nthere are other constructions in spreadsheets, like charts, pivot tables, and\narray formulas, that are also used to present decision support information to\nthe user. There is almost no research about how these constructions are used.\nTo improve spreadsheet quality it is important to understand how spreadsheets\nare used and to obtain a complete understanding, the use of charts, pivot\ntables, and array formulas should be included in research. In this paper, we\nanalyze two popular spreadsheet corpora: Enron and EUSES on the use of the\naforementioned constructions.\n", "title": "The use of Charts, Pivot Tables, and Array Formulas in two Popular Spreadsheet Corpora" }
null
null
null
null
true
null
5949
null
Default
null
null
null
{ "abstract": " This thesis presents original results in two domains of disordered\nstatistical physics: logarithmic correlated Random Energy Models (logREMs), and\nlocalization transitions in long-range random matrices.\nIn the first part devoted to logREMs, we show how to characterise their\ncommon properties and model--specific data. Then we develop their replica\nsymmetry breaking treatment, which leads to the freezing scenario of their free\nenergy distribution and the general description of their minima process, in\nterms of decorated Poisson point process. We also report a series of new\napplications of the Jack polynomials in the exact predictions of some\nobservables in the circular model and its variants. Finally, we present the\nrecent progress on the exact connection between logREMs and the Liouville\nconformal field theory.\nThe goal of the second part is to introduce and study a new class of banded\nrandom matrices, the broadly distributed class, which is characterised by an\neffective sparseness. We will first study a specific model of the class, the\nBeta Banded random matrices, inspired by an exact mapping to a recently studied\nstatistical model of long--range first--passage percolation/epidemics dynamics.\nUsing analytical arguments based on the mapping and numerics, we show the\nexistence of localization transitions with mobility edges in the\n\"stretch--exponential\" parameter--regime of the statistical models. Then, using\na block--diagonalization renormalization approach, we argue that such\nlocalization transitions occur generically in the broadly distributed class.\n", "title": "Disordered statistical physics in low dimensions: extremes, glass transition, and localization" }
null
null
null
null
true
null
5950
null
Default
null
null
null
{ "abstract": " In this extended abstract, we describe and analyze a lossy compression of\nMinHash from buckets of size $O(\\log n)$ to buckets of size $O(\\log\\log n)$ by\nencoding using floating-point notation. This new compressed sketch, which we\ncall HyperMinHash, as we build off a HyperLogLog scaffold, can be used as a\ndrop-in replacement of MinHash. Unlike comparable Jaccard index fingerprinting\nalgorithms in sub-logarithmic space (such as b-bit MinHash), HyperMinHash\nretains MinHash's features of streaming updates, unions, and cardinality\nestimation. For a multiplicative approximation error $1+ \\epsilon$ on a Jaccard\nindex $ t $, given a random oracle, HyperMinHash needs $O\\left(\\epsilon^{-2}\n\\left( \\log\\log n + \\log \\frac{1}{ t \\epsilon} \\right)\\right)$ space.\nHyperMinHash allows estimating Jaccard indices of 0.01 for set cardinalities on\nthe order of $10^{19}$ with relative error of around 10\\% using 64KiB of\nmemory; MinHash can only estimate Jaccard indices for cardinalities of\n$10^{10}$ with the same memory consumption.\n", "title": "HyperMinHash: MinHash in LogLog space" }
null
null
null
null
true
null
5951
null
Default
null
null
null
{ "abstract": " We propose a model for equity trading in a population of agents where each\nagent acts to achieve his or her target stock-to-bond ratio, and, as a feedback\nmechanism, follows a market adaptive strategy. In this model only a fraction of\nagents participates in buying and selling stock during a trading period, while\nthe rest of the group accepts the newly set price. Using numerical simulations\nwe show that the stochastic process settles on a stationary regime for the\nreturns. The mean return can be greater or less than the return on the bond and\nit is determined by the parameters of the adaptive mechanism. When the number\nof interacting agents is fixed, the distribution of the returns follows the\nlog-normal density. In this case, we give an analytic formula for the mean rate\nof return in terms of the rate of change of agents' risk levels and confirm the\nformula by numerical simulations. However, when the number of interacting\nagents per period is random, the distribution of returns can significantly\ndeviate from the log-normal, especially as the variance of the distribution for\nthe number of interacting agents increases.\n", "title": "Asynchronous stochastic price pump" }
null
null
[ "Quantitative Finance" ]
null
true
null
5952
null
Validated
null
null
null
{ "abstract": " We collect 14 representative corpora for major periods in Chinese history in\nthis study. These corpora include poetic works produced in several dynasties,\nnovels of the Ming and Qing dynasties, and essays and news reports written in\nmodern Chinese. The time span of these corpora ranges between 1046 BCE and 2007\nCE. We analyze their character and word distributions from the viewpoint of\nZipf's law, and look for factors that affect the deviations and similarities\nbetween their Zipfian curves. Genres and epochs demonstrated their influences\nin our analyses. Specifically, the character distributions for poetic works\nbetween 618 CE and 1644 CE exhibit striking similarity. In addition, although\ntexts of the same dynasty may tend to use the same set of characters, their\ncharacter distributions still deviate from each other.\n", "title": "Character Distributions of Classical Chinese Literary Texts: Zipf's Law, Genres, and Epochs" }
null
null
null
null
true
null
5953
null
Default
null
null
null
{ "abstract": " We investigate deep generative models that can exchange multiple modalities\nbi-directionally, e.g., generating images from corresponding texts and vice\nversa. A major approach to achieve this objective is to train a model that\nintegrates all the information of different modalities into a joint\nrepresentation and then to generate one modality from the corresponding other\nmodality via this joint representation. We simply applied this approach to\nvariational autoencoders (VAEs), which we call a joint multimodal variational\nautoencoder (JMVAE). However, we found that when this model attempts to\ngenerate a large dimensional modality missing at the input, the joint\nrepresentation collapses and this modality cannot be generated successfully.\nFurthermore, we confirmed that this difficulty cannot be resolved even using a\nknown solution. Therefore, in this study, we propose two models to prevent this\ndifficulty: JMVAE-kl and JMVAE-h. Results of our experiments demonstrate that\nthese methods can prevent the difficulty above and that they generate\nmodalities bi-directionally with equal or higher likelihood than conventional\nVAE methods, which generate in only one direction. Moreover, we confirm that\nthese methods can obtain the joint representation appropriately, so that they\ncan generate various variations of modality by moving over the joint\nrepresentation or changing the value of another modality.\n", "title": "Improving Bi-directional Generation between Different Modalities with Variational Autoencoders" }
null
null
[ "Statistics" ]
null
true
null
5954
null
Validated
null
null
null
{ "abstract": " Many machine learning tasks require finding per-part correspondences between\nobjects. In this work we focus on low-level correspondences - a highly\nambiguous matching problem. We propose to use a hierarchical semantic\nrepresentation of the objects, coming from a convolutional neural network, to\nsolve this ambiguity. Training it for low-level correspondence prediction\ndirectly might not be an option in some domains where the ground-truth\ncorrespondences are hard to obtain. We show how transfer from recognition can\nbe used to avoid such training. Our idea is to mark parts as \"matching\" if\ntheir features are close to each other at all the levels of convolutional\nfeature hierarchy (neural paths). Although the overall number of such paths is\nexponential in the number of layers, we propose a polynomial algorithm for\naggregating all of them in a single backward pass. The empirical validation is\ndone on the task of stereo correspondence and demonstrates that we achieve\ncompetitive results among the methods which do not use labeled target domain\ndata.\n", "title": "Matching neural paths: transfer from recognition to correspondence search" }
null
null
null
null
true
null
5955
null
Default
null
null
null
{ "abstract": " Let $\\mu$ be a borelian probability measure on\n$\\mathbf{G}:=\\mathrm{SL}_d(\\mathbb{Z}) \\ltimes \\mathbb{T}^d$. Define, for $x\\in\n\\mathbb{T}^d$, a random walk starting at $x$ denoting for $n\\in \\mathbb{N}$, \\[\n\\left\\{\\begin{array}{rcl} X_0 &=&x\\\\ X_{n+1} &=& a_{n+1} X_n + b_{n+1}\n\\end{array}\\right. \\] where $((a_n,b_n))\\in \\mathbf{G}^\\mathbb{N}$ is an iid\nsequence of law $\\mu$.\nThen, we denote by $\\mathbb{P}_x$ the measure on $(\\mathbb{T}^d)^\\mathbb{N}$\nthat is the image of $\\mu^{\\otimes \\mathbb{N}}$ by the map $\\left((g_n) \\mapsto\n(x,g_1 x, g_2 g_1 x, \\dots , g_n \\dots g_1 x, \\dots)\\right)$ and for any\n$\\varphi \\in \\mathrm{L}^1((\\mathbb{T}^d)^\\mathbb{N}, \\mathbb{P}_x)$, we set\n$\\mathbb{E}_x \\varphi((X_n)) = \\int \\varphi((X_n))\n\\mathrm{d}\\mathbb{P}_x((X_n))$.\nBourgain, Furmann, Lindenstrauss and Mozes studied this random walk when\n$\\mu$ is concentrated on $\\mathrm{SL}_d(\\mathbb{Z}) \\ltimes\\{0\\}$ and this\nallowed us to study, for any hölder-continuous function $f$ on the torus, the\nsequence $(f(X_n))$ when $x$ is not too well approximable by rational points.\nIn this article, we are interested in the case where $\\mu$ is not\nconcentrated on $\\mathrm{SL}_d(\\mathbb{Z}) \\ltimes \\mathbb{Q}^d/\\mathbb{Z}^d$\nand we prove that, under assumptions on the group spanned by the support of\n$\\mu$, the Lebesgue's measure $\\nu$ on the torus is the only stationary\nprobability measure and that for any hölder-continuous function $f$ on the\ntorus, $\\mathbb{E}_x f(X_n)$ converges exponentially fast to $\\int\nf\\mathrm{d}\\nu$.\nThen, we use this to prove the law of large numbers, a non-concentration\ninequality, the functional central limit theorem and it's almost-sure version\nfor the sequence $(f(X_n))$.\nIn the appendix, we state a non-concentration inequality for products of\nrandom matrices without any irreducibility assumption.\n", "title": "On the affine random walk on the torus" }
null
null
null
null
true
null
5956
null
Default
null
null
null
{ "abstract": " Hybrid cloud is an integrated cloud computing environment utilizing a mix of\npublic cloud, private cloud, and on-premise traditional IT infrastructures.\nWorkload awareness, defined as a detailed full range understanding of each\nindividual workload, is essential in implementing the hybrid cloud. While it is\ncritical to perform an accurate analysis to determine which workloads are\nappropriate for on-premise deployment versus which workloads can be migrated to\na cloud off-premise, the assessment is mainly performed by rule or policy based\napproaches. In this paper, we introduce StackInsights, a novel cognitive system\nto automatically analyze and predict the cloud readiness of workloads for an\nenterprise. Our system harnesses the critical metrics across the entire stack:\n1) infrastructure metrics, 2) data relevance metrics, and 3) application\ntaxonomy, to identify workloads that have characteristics of a) low sensitivity\nwith respect to business security, criticality and compliance, and b) low\nresponse time requirements and access patterns. Since the capture of the data\nrelevance metrics involves an intrusive and in-depth scanning of the content of\nstorage objects, a machine learning model is applied to perform the business\nrelevance classification by learning from the meta level metrics harnessed\nacross stack. In contrast to traditional methods, StackInsights significantly\nreduces the total time for hybrid cloud readiness assessment by orders of\nmagnitude.\n", "title": "StackInsights: Cognitive Learning for Hybrid Cloud Readiness" }
null
null
null
null
true
null
5957
null
Default
null
null
null
{ "abstract": " Risk-averse model predictive control (MPC) offers a control framework that\nallows one to account for ambiguity in the knowledge of the underlying\nprobability distribution and unifies stochastic and worst-case MPC. In this\npaper we study risk-averse MPC problems for constrained nonlinear Markovian\nswitching systems using generic cost functions, and derive Lyapunov-type\nrisk-averse stability conditions by leveraging the properties of risk-averse\ndynamic programming operators. We propose a controller design procedure to\ndesign risk-averse stabilizing terminal conditions for constrained nonlinear\nMarkovian switching systems. Lastly, we cast the resulting risk-averse optimal\ncontrol problem in a favorable form which can be solved efficiently, thus\nmaking risk-averse MPC suitable for applications.\n", "title": "Risk-averse model predictive control" }
null
null
null
null
true
null
5958
null
Default
null
null
null
{ "abstract": " Here we test Neutral models against the evolution of English word frequency\nand vocabulary at the population scale, as recorded in annual word frequencies\nfrom three centuries of English language books. Against these data, we test\nboth static and dynamic predictions of two neutral models, including the\nrelation between corpus size and vocabulary size, frequency distributions, and\nturnover within those frequency distributions. Although a commonly used Neutral\nmodel fails to replicate all these emergent properties at once, we find that a\nmodified two-stage Neutral model does replicate the static and dynamic\nproperties of the corpus data. This two-stage model is meant to represent a\nrelatively small corpus (population) of English books, analogous to a `canon',\nsampled by an exponentially increasing corpus of books in the wider population\nof authors. More broadly, this model -- a smaller neutral model within a larger\nneutral model -- could represent those situations where mass attention is\nfocused on a small subset of the cultural variants.\n", "title": "Neutral evolution and turnover over centuries of English word popularity" }
null
null
null
null
true
null
5959
null
Default
null
null
null
{ "abstract": " Homomorphic encryption is an encryption scheme that allows computations to be\nevaluated on encrypted inputs without knowledge of their raw messages. Recently\nOuyang et al. constructed a quantum homomorphic encryption (QHE) scheme for\nClifford circuits with statistical security (or information-theoretic security\n(IT-security)). It is desired to see whether an\ninformation-theoretically-secure (ITS) quantum FHE exists. If not, what other\nnontrivial class of quantum circuits can be homomorphically evaluated with\nIT-security? For the first question, we prove a limitation: an ITS quantum\nFHE necessarily incurs exponential overhead. As for the second one, we propose\na QHE scheme for the instantaneous quantum polynomial-time (IQP) circuits. Our\nQHE scheme for IQP circuits follows from the one-time pad.\n", "title": "On Statistically-Secure Quantum Homomorphic Encryption" }
null
null
[ "Computer Science" ]
null
true
null
5960
null
Validated
null
null
null
{ "abstract": " We present an extension of Monte Carlo Tree Search (MCTS) that strongly\nincreases its efficiency for trees with asymmetry and/or loops. Asymmetric\ntermination of search trees introduces a type of uncertainty for which the\nstandard upper confidence bound (UCB) formula does not account. Our first\nalgorithm (MCTS-T), which assumes a non-stochastic environment, backs up tree\nstructure uncertainty and leverages it for exploration in a modified UCB\nformula. Results show vastly improved efficiency in a well-known asymmetric\ndomain in which MCTS performs arbitrarily badly. Next, we connect the ideas\nabout asymmetric termination to the presence of loops in the tree, where the\nsame state appears multiple times in a single trace. An extension to our\nalgorithm (MCTS-T+), which in addition to non-stochasticity assumes full state\nobservability, further increases search efficiency for domains with loops as\nwell. Benchmark testing on a set of OpenAI Gym and Atari 2600 games indicates\nthat our algorithms always perform better than or at least equivalent to\nstandard MCTS, and could be first-choice tree search algorithms for\nnon-stochastic, fully-observable environments.\n", "title": "Monte Carlo Tree Search for Asymmetric Trees" }
null
null
null
null
true
null
5961
null
Default
null
null
null
{ "abstract": " The difference-to-sum power ratio was proposed and used to suppress wind\nnoise under specific acoustic conditions. In this contribution, a general\nformulation of the difference-to-sum power ratio associated with a mixture of\nspeech and wind noise is proposed and analyzed. In particular, it is assumed\nthat the complex coherence of convective turbulence can be modelled by the\nCorcos model. In contrast to the work in which the power ratio was first\npresented, the employed Corcos model holds for every possible air stream\ndirection and takes into account the lateral coherence decay rate. The obtained\nexpression is subsequently validated with real data for a dual microphone\nset-up. Finally, the difference-to-sum power ratio is exploited as a spatial\nfeature to indicate the frame-wise presence of wind noise, obtaining improved\ndetection performance when compared to an existing multi-channel wind noise\ndetection approach.\n", "title": "On the difference-to-sum power ratio of speech and wind noise based on the Corcos model" }
null
null
null
null
true
null
5962
null
Default
null
null
null
{ "abstract": " In this paper, we construct a new even constrained B(C) type Toda hierarchy\nand derive its B(C) type Block type additional symmetry. Also we generalize the\nB(C) type Toda hierarchy to the $N$-component B(C) type Toda hierarchy which is\nproved to have symmetries of a coupled $\\bigotimes^NQT_+ $ algebra ( $N$-folds\ndirect product of the positive half of the quantum torus algebra $QT$).\n", "title": "Quantum torus algebras and B(C) type Toda systems" }
null
null
null
null
true
null
5963
null
Default
null
null
null
{ "abstract": " We introduce the logic $\\sf ITL^e$, an intuitionistic temporal logic based on\nstructures $(W,\\preccurlyeq,S)$, where $\\preccurlyeq$ is used to interpret\nintuitionistic implication and $S$ is a $\\preccurlyeq$-monotone function used\nto interpret temporal modalities. Our main result is that the satisfiability\nand validity problems for $\\sf ITL^e$ are decidable. We prove this by showing\nthat the logic enjoys the strong finite model property. In contrast, we also\nconsider a `persistent' version of the logic, $\\sf ITL^p$, whose models are\nsimilar to Cartesian products. We prove that, unlike $\\sf ITL^e$, $\\sf ITL^p$\ndoes not have the finite model property.\n", "title": "A Decidable Intuitionistic Temporal Logic" }
null
null
null
null
true
null
5964
null
Default
null
null
null
{ "abstract": " High density implants such as metals often lead to serious artifacts in the\nreconstructed CT images which hamper the accuracy of image based diagnosis and\ntreatment planning. In this paper, we propose a novel wavelet frame based CT\nimage reconstruction model to reduce metal artifacts. This model is built on a\njoint spatial and Radon (projection) domain (JSR) image reconstruction\nframework with a built-in weighting and re-weighting mechanism in Radon domain\nto repair degraded projection data. The new weighting strategy used in the\nproposed model not only makes the regularization in Radon domain by wavelet\nframe transform more effective, but also makes the commonly assumed linear\nmodel for CT imaging a more accurate approximation of the nonlinear physical\nproblem. The proposed model, which will be referred to as the re-weighted JSR\nmodel, combines the ideas of the recently proposed wavelet frame based JSR\nmodel \cite{Dong2013} and the normalized metal artifact reduction model\n\cite{meyer2010normalized}, and manages to achieve noticeably better CT\nreconstruction quality than both methods. To solve the proposed re-weighted JSR\nmodel, an efficient alternating iteration algorithm is proposed with guaranteed\nconvergence. Numerical experiments on both simulated and real CT image data\ndemonstrate the effectiveness of the re-weighted JSR model and its advantage\nover some of the state-of-the-art methods.\n", "title": "A Re-weighted Joint Spatial-Radon Domain CT Image Reconstruction Model for Metal Artifact Reduction" }
null
null
null
null
true
null
5965
null
Default
null
null
null
{ "abstract": " The 15 puzzle is a classic reconfiguration puzzle with fifteen uniquely\nlabeled unit squares within a $4 \\times 4$ board in which the goal is to slide\nthe squares (without ever overlapping) into a target configuration. By\ngeneralizing the puzzle to an $n \\times n$ board with $n^2-1$ squares, we can\nstudy the computational complexity of problems related to the puzzle; in\nparticular, we consider the problem of determining whether a given end\nconfiguration can be reached from a given start configuration via at most a\ngiven number of moves. This problem was shown NP-complete in Ratner and Warmuth\n(1990). We provide an alternative simpler proof of this fact by reduction from\nthe rectilinear Steiner tree problem.\n", "title": "A simple proof that the $(n^2-1)$-puzzle is hard" }
null
null
null
null
true
null
5966
null
Default
null
null
null
{ "abstract": " We address the important question of whether the newly discovered exoplanet,\nProxima Centauri b (PCb), is capable of retaining an atmosphere over long\nperiods of time. This is done by adapting a sophisticated multi-species MHD\nmodel originally developed for Venus and Mars, and computing the ion escape\nlosses from PCb. The results suggest that the ion escape rates are about two\norders of magnitude higher than the terrestrial planets of our Solar system if\nPCb is unmagnetized. In contrast, if the planet does have an intrinsic dipole\nmagnetic field, the rates are lowered for certain values of the stellar wind\ndynamic pressure, but they are still higher than the observed values for our\nSolar system's terrestrial planets. These results must be interpreted with due\ncaution, since most of the relevant parameters for PCb remain partly or wholly\nunknown.\n", "title": "Is Proxima Centauri b habitable? -- A study of atmospheric loss" }
null
null
null
null
true
null
5967
null
Default
null
null
null
{ "abstract": " Joint analysis of multiple phenotypes can increase statistical power in\ngenetic association studies. Principal component analysis, as a popular\ndimension reduction method, especially when the number of phenotypes is\nhigh-dimensional, has been proposed to analyze multiple correlated phenotypes.\nIt has been empirically observed that the first PC, which summarizes the\nlargest amount of variance, can be less powerful than higher order PCs and\nother commonly used methods in detecting genetic association signals. In this\npaper, we investigate the properties of PCA-based multiple phenotype analysis\nfrom a geometric perspective by introducing a novel concept called principal\nangle. A particular PC is powerful if its principal angle is $0^o$ and is\npowerless if its principal angle is $90^o$. Without prior knowledge about the\ntrue principal angle, each PC can be powerless. We propose linear, non-linear\nand data-adaptive omnibus tests by combining PCs. We show that the omnibus PC\ntest is robust and powerful in a wide range of scenarios. We study the\nproperties of the proposed methods using power analysis and eigen-analysis. The\nsubtle differences and close connections between these combined PC methods are\nillustrated graphically in terms of their rejection boundaries. Our proposed\ntests have convex acceptance regions and hence are admissible. The $p$-values\nfor the proposed tests can be efficiently calculated analytically and the\nproposed tests have been implemented in a publicly available R package {\it\nMPAT}. We conduct simulation studies in both low and high dimensional settings\nwith various signal vectors and correlation structures. We apply the proposed\ntests to the joint analysis of metabolic syndrome related phenotypes with data\nsets collected from four international consortia to demonstrate the\neffectiveness of the proposed combined PC testing procedures.\n", "title": "A Geometric Perspective on the Power of Principal Component Association Tests in Multiple Phenotype Studies" }
null
null
null
null
true
null
5968
null
Default
null
null
null
{ "abstract": " Among the n-type metal oxide materials used in the planar perovskite solar\ncells, zinc oxide (ZnO) is a promising candidate to replace titanium dioxide\n(TiO2) due to its relatively high electron mobility, high transparency, and\nversatile nanostructures. Here, we present the application of low temperature\nsolution processed ZnO/Al-doped ZnO (AZO) bilayer thin film as electron\ntransport layers (ETLs) in the inverted perovskite solar cells, which provide a\nstair-case band profile. Experimental results revealed that the power\nconversion efficiency (PCE) of perovskite solar cells was significantly\nincreased from 12.25 to 16.07% by employing the AZO thin film as the buffer\nlayer. Meanwhile, the short-circuit current density (Jsc), open-circuit voltage\n(Voc), and fill factor (FF) were improved to 20.58 mA/cm2, 1.09 V, and 71.6%,\nrespectively. The enhancement in performance is attributed to the modified\ninterface in ETL with stair-case band alignment of ZnO/AZO/CH3NH3PbI3, which\nallows more efficient extraction of photogenerated electrons in the CH3NH3PbI3\nactive layer. Thus, it is demonstrated that the ZnO/AZO bilayer ETLs would\nbenefit the electron extraction and contribute to enhancing the performance of\nperovskite solar cells.\n", "title": "A Design Based on Stair-case Band Alignment of Electron Transport Layer for Improving Performance and Stability in Planar Perovskite Solar Cells" }
null
null
null
null
true
null
5969
null
Default
null
null
null
{ "abstract": " We introduce a framework for the statistical analysis of functional data in a\nsetting where these objects cannot be fully observed, but only indirect and\nnoisy measurements are available, namely an inverse problem setting. The\nproposed methodology can be applied either to the analysis of indirectly\nobserved functional data or to the associated covariance operators,\nrepresenting second-order information, and thus lying on a non-Euclidean space.\nTo deal with the ill-posedness of the inverse problem, we exploit the spatial\nstructure of the sample data by introducing a flexible regularizing term\nembedded in the model. Thanks to its efficiency, the proposed model is applied\nto MEG data, leading to a novel statistical approach to the investigation of\nfunctional connectivity.\n", "title": "Statistics on functional data and covariance operators in linear inverse problems" }
null
null
[ "Statistics" ]
null
true
null
5970
null
Validated
null
null
null
{ "abstract": " Independent component analysis (ICA) is a widely used blind source separation\n(BSS) method that can uniquely achieve source recovery, subject to only scaling\nand permutation ambiguities, through the assumption of statistical independence\non the part of the latent sources. Independent vector analysis (IVA) extends the\napplicability of ICA by jointly decomposing multiple datasets through the\nexploitation of the dependencies across datasets. Though both ICA and IVA\nalgorithms cast in the maximum likelihood (ML) framework enable the use of all\navailable statistical information in reality, they often deviate from their\ntheoretical optimality properties due to improper estimation of the probability\ndensity function (PDF). This motivates the development of flexible ICA and IVA\nalgorithms that closely adhere to the underlying statistical description of the\ndata. Although it is attractive to minimize the assumptions, important prior\ninformation about the data, such as sparsity, is usually available. If\nincorporated into the ICA model, use of this additional information can relax\nthe independence assumption, resulting in an improvement in the overall\nseparation performance. Therefore, the development of a unified mathematical\nframework that can take into account both statistical independence and sparsity\nis of great interest. In this work, we first introduce a flexible ICA algorithm\nthat uses an effective PDF estimator to accurately capture the underlying\nstatistical properties of the data. We then discuss several techniques to\naccurately estimate the parameters of the multivariate generalized Gaussian\ndistribution, and how to integrate them into the IVA model. Finally, we provide\na mathematical framework that enables direct control over the influence of\nstatistical independence and sparsity, and use this framework to develop an\neffective ICA algorithm that can jointly exploit these two forms of diversity.\n", "title": "Development of ICA and IVA Algorithms with Application to Medical Image Analysis" }
null
null
[ "Statistics" ]
null
true
null
5971
null
Validated
null
null
null
{ "abstract": " Context: A substantial fraction of protoplanetary disks forms around stellar\nbinaries. The binary system generates a time-dependent non-axisymmetric\ngravitational potential, inducing strong tidal forces on the circumbinary disk.\nThis leads to a change in basic physical properties of the circumbinary disk,\nwhich should in turn result in unique structures that are potentially\nobservable with the current generation of instruments.\nAims: The goal of this study is to identify these characteristic structures,\nto constrain the physical conditions that cause them, and to evaluate the\nfeasibility to observe them in circumbinary disks.\nMethods: To achieve this, at first two-dimensional hydrodynamic simulations\nare performed. The resulting density distributions are post-processed with a 3D\nradiative transfer code to generate re-emission and scattered light maps. Based\non these, we study the influence of various parameters, such as the mass of the\nstellar components, the mass of the disk and the binary separation on\nobservable features in circumbinary disks.\nResults: We find that the Atacama Large (sub-)Millimetre Array (ALMA) as well\nas the European Extremely Large Telescope (E-ELT) are capable of tracing\nasymmetries in the inner region of circumbinary disks which are affected most\nby the binary-disk interaction. Observations at submillimetre/millimetre\nwavelengths will allow the detection of the density waves at the inner rim of\nthe disk and the inner cavity. With the E-ELT one can partially resolve the\ninnermost parts of the disk in the infrared wavelength range, including the\ndisk's rim, accretion arms and potentially the expected circumstellar disks\naround each of the binary components.\n", "title": "Observability of characteristic binary-induced structures in circumbinary disks" }
null
null
null
null
true
null
5972
null
Default
null
null
null
{ "abstract": " As part of the 2016 public evaluation challenge on Detection and\nClassification of Acoustic Scenes and Events (DCASE 2016), the second task\nfocused on evaluating sound event detection systems using synthetic mixtures of\noffice sounds. This task, which follows the `Event Detection - Office\nSynthetic' task of DCASE 2013, studies the behaviour of tested algorithms when\nfacing controlled levels of audio complexity with respect to background noise\nand polyphony/density, with the added benefit of a very accurate ground truth.\nThis paper presents the task formulation, evaluation metrics, submitted\nsystems, and provides a statistical analysis of the results achieved, with\nrespect to various aspects of the evaluation dataset.\n", "title": "Sound Event Detection in Synthetic Audio: Analysis of the DCASE 2016 Task Results" }
null
null
null
null
true
null
5973
null
Default
null
null
null
{ "abstract": " Topological semimetal, a novel state of quantum matter hosting exotic\nemergent quantum phenomena dictated by the non-trivial band topology, has\nemerged as a new frontier in condensed-matter physics. Very recently, a\ncoexistence of triply degenerate points of band crossing and Weyl points near\nthe Fermi level was theoretically predicted and immediately experimentally\nverified in single crystalline molybdenum phosphide (MoP). Here we report for\nthis material high-pressure electronic transport and synchrotron X-ray\ndiffraction (XRD) measurements, combined with density functional theory (DFT)\ncalculations. We report the emergence of pressure-induced superconductivity in\nMoP with a critical temperature Tc of about 2 K at 27.6 GPa, rising to 3.7 K at\nthe highest pressure of 95.0 GPa studied. No structural phase transition is\ndetected up to 60.6 GPa from the XRD. Meanwhile, the Weyl points and triply\ndegenerate points topologically protected by the crystal symmetry are retained\nat high pressure as revealed by our DFT calculations. The coexistence of\nthree-component fermion and superconductivity in heavily pressurized MoP offers\nan excellent platform to study the interplay between topological phase of\nmatter and superconductivity.\n", "title": "Pressure-induced Superconductivity in the Three-component Fermion Topological Semimetal Molybdenum Phosphide" }
null
null
null
null
true
null
5974
null
Default
null
null
null
{ "abstract": " Small bodies of the Solar system, like asteroids, trans-Neptunian objects,\ncometary nuclei, planetary satellites, with diameters smaller than one thousand\nkilometers usually have irregular shapes, often resembling dumb-bells, or\ncontact binaries. The spinning of such a gravitating dumb-bell creates around\nit a zone of chaotic orbits. We determine its extent analytically and\nnumerically. We find that the chaotic zone swells significantly if the rotation\nrate is decreased, in particular, the zone swells more than twice if the\nrotation rate is decreased ten times with respect to the \"centrifugal breakup\"\nthreshold. We illustrate the properties of the chaotic orbital zones in\nexamples of the global orbital dynamics about asteroid 243 Ida (which has a\nmoon, Dactyl, orbiting near the edge of the chaotic zone) and asteroid 25143\nItokawa.\n", "title": "Chaotic zones around rotating small bodies" }
null
null
[ "Physics" ]
null
true
null
5975
null
Validated
null
null
null
{ "abstract": " We discuss the nature of symmetry breaking and the associated collective\nexcitations for a system of bosons coupled to the electromagnetic field of two\noptical cavities. For the specific configuration realized in a recent\nexperiment at ETH, we show that, in the absence of direct intercavity scattering\nand for parameters chosen such that the atoms couple symmetrically to both\ncavities, the system possesses an approximate $U(1)$ symmetry which holds\nasymptotically for vanishing cavity field intensity. It corresponds to the\ninvariance with respect to redistributing the total intensity $I=I_1+I_2$\nbetween the two cavities. The spontaneous breaking of this symmetry gives rise\nto a broken continuous translation-invariance for the atoms, creating a\nsupersolid-like order in the presence of a Bose-Einstein condensate. In\nparticular, we show that atom-mediated scattering between the two cavities,\nwhich favors the state with equal light intensities $I_1=I_2$ and reduces the\nsymmetry to $\mathbf{Z}_2\otimes \mathbf{Z}_2$, gives rise to a finite value\n$\sim \sqrt{I}$ of the effective Goldstone mass. For strong atom driving, this\nlow energy mode is clearly separated from an effective Higgs excitation\nassociated with changes of the total intensity $I$. In addition, we compute the\nspectral distribution of the cavity light field and show that both the Higgs\nand Goldstone modes acquire a finite lifetime due to Landau damping at non-zero\ntemperature.\n", "title": "Collective excitations and supersolid behavior of bosonic atoms inside two crossed optical cavities" }
null
null
null
null
true
null
5976
null
Default
null
null
null
{ "abstract": " In this paper, we introduce a generalized value iteration network (GVIN),\nwhich is an end-to-end neural network planning module. GVIN emulates the value\niteration algorithm by using a novel graph convolution operator, which enables\nGVIN to learn and plan on irregular spatial graphs. We propose three novel\ndifferentiable kernels as graph convolution operators and show that the\nembedding based kernel achieves the best performance. We further propose\nepisodic Q-learning, an improvement upon traditional n-step Q-learning that\nstabilizes training for networks that contain a planning module. Lastly, we\nevaluate GVIN on planning problems in 2D mazes, irregular graphs, and\nreal-world street networks, showing that GVIN generalizes well for both\narbitrary graphs and unseen graphs of larger scale and outperforms a naive\ngeneralization of VIN (discretizing a spatial graph into a 2D image).\n", "title": "Generalized Value Iteration Networks: Life Beyond Lattices" }
null
null
null
null
true
null
5977
null
Default
null
null
null
{ "abstract": " Hot Jupiters receive strong stellar irradiation, producing equilibrium\ntemperatures of $1000 - 2500 \\ \\mathrm{Kelvin}$. Incoming irradiation directly\nheats just their thin outer layer, down to pressures of $\\sim 0.1 \\\n\\mathrm{bars}$. In standard irradiated evolution models of hot Jupiters,\npredicted transit radii are too small. Previous studies have shown that deeper\nheating -- at a small fraction of the heating rate from irradiation -- can\nexplain observed radii. Here we present a suite of evolution models for HD\n209458b where we systematically vary both the depth and intensity of internal\nheating, without specifying the uncertain heating mechanism(s). Our models\nstart with a hot, high entropy planet whose radius decreases as the convective\ninterior cools. The applied heating suppresses this cooling. We find that very\nshallow heating -- at pressures of $1 - 10 \\ \\mathrm{bars}$ -- does not\nsignificantly suppress cooling, unless the total heating rate is $\\gtrsim 10\\%$\nof the incident stellar power. Deeper heating, at $100 \\ \\mathrm{bars}$,\nrequires heating at only $1\\%$ of the stellar irradiation to explain the\nobserved transit radius of $1.4 R_{\\rm Jup}$ after 5 Gyr of cooling. In\ngeneral, more intense and deeper heating results in larger hot Jupiter radii.\nSurprisingly, we find that heat deposited at $10^4 \\ \\mathrm{bars}$ -- which is\nexterior to $\\approx 99\\%$ of the planet's mass -- suppresses planetary cooling\nas effectively as heating at the center. In summary, we find that relatively\nshallow heating is required to explain the radii of most hot Jupiters, provided\nthat this heat is applied early and persists throughout their evolution.\n", "title": "Structure and Evolution of Internally Heated Hot Jupiters" }
null
null
null
null
true
null
5978
null
Default
null
null
null
{ "abstract": " Interference-aware resource allocation of time slots and frequency channels\nin single-antenna, halfduplex radio wireless sensor networks (WSN) is\nchallenging. Devising distributed algorithms for such task further complicates\nthe problem. This work studiesWSN joint time and frequency channel allocation\nfor a given routing tree, such that: a) allocation is performed in a fully\ndistributed way, i.e., information exchange is only performed among neighboring\nWSN terminals, within communication up to two hops, and b) detection of\npotential interfering terminals is simplified and can be practically realized.\nThe algorithm imprints space, time, frequency and radio hardware constraints\ninto a loopy factor graph and performs iterative message passing/ loopy belief\npropagation (BP) with randomized initial priors. Sufficient conditions for\nconvergence to a valid solution are offered, for the first time in the\nliterature, exploiting the structure of the proposed factor graph. Based on\ntheoretical findings, modifications of BP are devised that i) accelerate\nconvergence to a valid solution and ii) reduce computation cost. Simulations\nreveal promising throughput results of the proposed distributed algorithm, even\nthough it utilizes simplified interfering terminals set detection. Future work\ncould modify the constraints such that other disruptive wireless technologies\n(e.g., full-duplex radios or network coding) could be accommodated within the\nsame inference framework.\n", "title": "Inference-Based Distributed Channel Allocation in Wireless Sensor Networks" }
null
null
null
null
true
null
5979
null
Default
null
null
null
{ "abstract": " We define a switch function to be a function from an interval to $\\{1,-1\\}$\nwith a finite number of sign changes. (Special cases are the Walsh functions.)\nBy a topological argument, we prove that, given $n$ real-valued functions,\n$f_1, \\dots, f_n$, in $L^1[0,1]$, there exists a switch function, $\\sigma$,\nwith at most $n$ sign changes that is simultaneously orthogonal to all of them\nin the sense that $\\int_0^1 \\sigma(t)f_i(t)dt=0$, for all $i = 1, \\dots , n$.\nMoreover, we prove that, for each $\\lambda \\in (-1,1)$, there exists a unique\nswitch function, $\\sigma$, with $n$ switches such that $\\int_0^1 \\sigma(t) p(t)\ndt = \\lambda \\int_0^1 p(t)dt$ for every real polynomial $p$ of degree at most\n$n-1$. We also prove the same statement holds for every real even polynomial of\ndegree at most $2n-2$. Furthermore, for each of these latter results, we write\ndown, in terms of $\\lambda$ and $n$, a degree $n$ polynomial whose roots are\nthe switch points of $\\sigma$; we are thereby able to compute these switch\nfunctions.\n", "title": "Switch Functions" }
null
null
[ "Mathematics" ]
null
true
null
5980
null
Validated
null
null
null
{ "abstract": " We consider Schrödinger operators with periodic potentials in the positive\nquadrant for dim $>1$ with Dirichlet boundary condition. We show that for any\ninteger $N$ and any interval $I$ there exists a periodic potential such that\nthe Schrödinger operator has $N$ eigenvalues counted with the multiplicity on\nthis interval and there is no other spectrum on the interval. Furthermore, to\nthe right and to the left of it there is a essential spectrum.\nMoreover, we prove similar results for Schrödinger operators for other\ndomains. The proof is based on the inverse spectral theory for Hill operators\non the real line.\n", "title": "Schrödinger operators periodic in octants" }
null
null
null
null
true
null
5981
null
Default
null
null
null
{ "abstract": " We report on the first comparison of distant caesium fountain primary\nfrequency standards (PFSs) via an optical fiber link. The 1415 km long optical\nlink connects two PFSs at LNE-SYRTE (Laboratoire National de métrologie et\nd'Essais - SYstème de Références Temps-Espace) in Paris (France)\nwith two at PTB (Physikalisch-Technische Bundesanstalt) in Braunschweig\n(Germany). For a long time, these PFSs have been major contributors to accuracy\nof the International Atomic Time (TAI), with stated accuracies of around\n$3\\times 10^{-16}$. They have also been the references for a number of absolute\nmeasurements of clock transition frequencies in various optical frequency\nstandards in view of a future redefinition of the second. The phase coherent\noptical frequency transfer via a stabilized telecom fiber link enables far\nbetter resolution than any other means of frequency transfer based on satellite\nlinks. The agreement for each pair of distant fountains compared is well within\nthe combined uncertainty of a few 10$^{-16}$ for all the comparisons, which\nfully supports the stated PFSs' uncertainties. The comparison also includes a\nrubidium fountain frequency standard participating in the steering of TAI and\nenables a new absolute determination of the $^{87}$Rb ground state hyperfine\ntransition frequency with an uncertainty of $3.1\\times 10^{-16}$.\nThis paper is dedicated to the memory of André Clairon, who passed away\non the 24$^{th}$ of December 2015, for his pioneering and long-lasting efforts\nin atomic fountains. He also pioneered optical links from as early as 1997.\n", "title": "First international comparison of fountain primary frequency standards via a long distance optical fiber link" }
null
null
null
null
true
null
5982
null
Default
null
null
null
{ "abstract": " First the Hardy and Rellich inequalities are defined for the submarkovian\noperator associated with a local Dirichlet form. Secondly, two general\nconditions are derived which are sufficient to deduce the Rellich inequality\nfrom the Hardy inequality. In addition the Rellich constant is calculated from\nthe Hardy constant. Thirdly, we establish that the criteria for the Rellich\ninequality are verified for a large class of weighted second-order operators on\na domain $\\Omega\\subseteq \\Ri^d$. The weighting near the boundary $\\partial\n\\Omega$ can be different from the weighting at infinity. Finally these results\nare applied to weighted second-order operators on $\\Ri^d\\backslash\\{0\\}$ and to\na general class of operators of Grushin type.\n", "title": "Hardy inequalities, Rellich inequalities and local Dirichlet forms" }
null
null
null
null
true
null
5983
null
Default
null
null
null
{ "abstract": " Purpose: The analysis of optimized spin ensemble trajectories for relaxometry\nin the hybrid state.\nMethods: First, we constructed visual representations to elucidate the\ndifferential equation that governs spin dynamics in hybrid state. Subsequently,\nnumerical optimizations were performed to find spin ensemble trajectories that\nminimize the Cramér-Rao bound for $T_1$-encoding, $T_2$-encoding, and their\nweighted sum, respectively, followed by a comparison of the Cramér-Rao bounds\nobtained with our optimized spin-trajectories, as well as Look-Locker and\nmulti-spin-echo methods. Finally, we experimentally tested our optimized spin\ntrajectories with in vivo scans of the human brain.\nResults: After a nonrecurring inversion segment on the southern hemisphere of\nthe Bloch sphere, all optimized spin trajectories pursue repetitive loops on\nthe northern half of the sphere in which the beginning of the first and the end\nof the last loop deviate from the others. The numerical results obtained in\nthis work align well with intuitive insights gleaned directly from the\ngoverning equation. Our results suggest that hybrid-state sequences outperform\ntraditional methods. Moreover, hybrid-state sequences that balance $T_1$- and\n$T_2$-encoding still result in near optimal signal-to-noise efficiency. Thus,\nthe second parameter can be encoded at virtually no extra cost.\nConclusion: We provide insights regarding the optimal encoding processes of\nspin relaxation times in order to guide the design of robust and efficient\npulse sequences. We find that joint acquisitions of $T_1$ and $T_2$ in the\nhybrid state are substantially more efficient than sequential encoding\ntechniques.\n", "title": "Optimized Quantification of Spin Relaxation Times in the Hybrid State" }
null
null
null
null
true
null
5984
null
Default
null
null
null
{ "abstract": " Despite recent progress, laminar-turbulent coexistence in transitional planar\nwall-bounded shear flows is still not well understood. Contrasting with the\nprocesses by which chaotic flow inside turbulent patches is sustained at the\nlocal (minimal flow unit) scale, the mechanisms controlling the obliqueness of\nlaminar-turbulent interfaces typically observed all along the coexistence range\nare still mysterious. An extension of Waleffe's approach [Phys. Fluids 9 (1997)\n883--900] is used to show that, already at the local scale, drift flows\nbreaking the problem's spanwise symmetry are generated just by slightly\ndetuning the modes involved in the self-sustainment process. This opens\nperspectives for theorizing the formation of laminar-turbulent patterns.\n", "title": "On the generation of drift flows in wall-bounded flows transiting to turbulence" }
null
null
null
null
true
null
5985
null
Default
null
null
null
{ "abstract": " Goldbach conjecture is one of the most famous open mathematical problems. It\nstates that every even number, bigger than two, can be presented as a sum of 2\nprime numbers. % In this work we present a deep learning based model that\npredicts the number of Goldbach partitions for a given even number.\nSurprisingly, our model outperforms all state-of-the-art analytically derived\nestimations for the number of couples, while not requiring prime factorization\nof the given number. We believe that building a model that can accurately\npredict the number of couples brings us one step closer to solving one of the\nworld most famous open problems. To the best of our knowledge, this is the\nfirst attempt to consider machine learning based data-driven methods to\napproximate open mathematical problems in the field of number theory, and hope\nthat this work will encourage such attempts.\n", "title": "Goldbach's Function Approximation Using Deep Learning" }
null
null
null
null
true
null
5986
null
Default
null
null
null
{ "abstract": " For an unknown continuous distribution on a real line, we consider the\napproximate estimation by the discretization. There are two methods for the\ndiscretization. First method is to divide the real line into several intervals\nbefore taking samples (\"fixed interval method\") . Second method is dividing the\nreal line using the estimated percentiles after taking samples (\"moving\ninterval method\"). In either way, we settle down to the estimation problem of a\nmultinomial distribution. We use (symmetrized) $f$-divergence in order to\nmeasure the discrepancy of the true distribution and the estimated one. Our\nmain result is the asymptotic expansion of the risk (i.e. expected divergence)\nup to the second-order term in the sample size. We prove theoretically that the\nmoving interval method is asymptotically superior to the fixed interval method.\nWe also observe how the presupposed intervals (fixed interval method) or\npercentiles (moving interval method) affect the asymptotic risk.\n", "title": "Estimation of a Continuous Distribution on a Real Line by Discretization Methods -- Complete Version--" }
null
null
null
null
true
null
5987
null
Default
null
null
null
{ "abstract": " In this paper, we revisit the recurrent back-propagation (RBP) algorithm,\ndiscuss the conditions under which it applies as well as how to satisfy them in\ndeep neural networks. We show that RBP can be unstable and propose two variants\nbased on conjugate gradient on the normal equations (CG-RBP) and Neumann series\n(Neumann-RBP). We further investigate the relationship between Neumann-RBP and\nback propagation through time (BPTT) and its truncated version (TBPTT). Our\nNeumann-RBP has the same time complexity as TBPTT but only requires constant\nmemory, whereas TBPTT's memory cost scales linearly with the number of\ntruncation steps. We examine all RBP variants along with BPTT and TBPTT in\nthree different application domains: associative memory with continuous\nHopfield networks, document classification in citation networks using graph\nneural networks and hyperparameter optimization for fully connected networks.\nAll experiments demonstrate that RBPs, especially the Neumann-RBP variant, are\nefficient and effective for optimizing convergent recurrent neural networks.\n", "title": "Reviving and Improving Recurrent Back-Propagation" }
null
null
null
null
true
null
5988
null
Default
null
null
null
{ "abstract": " To each weighted Dirichlet space $\\mathcal{D}_p$, $0<p<1$, we associate a\nfamily of Morrey-type spaces ${\\mathcal{D}}_p^{\\lambda}$, $0< \\lambda < 1$,\nconstructed by imposing growth conditions on the norm of hyperbolic translates\nof functions. We indicate some of the properties of these spaces, mention the\ncharacterization in terms of boundary values, and study integration and\nmultiplication operators on them.\n", "title": "A family of Dirichlet-Morrey spaces" }
null
null
null
null
true
null
5989
null
Default
null
null
null
{ "abstract": " Achieving a symbiotic blending between reality and virtuality is a dream that\nhas been lying in the minds of many people for a long time. Advances in various\ndomains constantly bring us closer to making that dream come true. Augmented\nreality as well as virtual reality are in fact trending terms and are expected\nto further progress in the years to come.\nThis master's thesis aims to explore these areas and starts by defining\nnecessary terms such as augmented reality (AR) or virtual reality (VR). Usual\ntaxonomies to classify and compare the corresponding experiences are then\ndiscussed.\nIn order to enable those applications, many technical challenges need to be\ntackled, such as accurate motion tracking with 6 degrees of freedom (positional\nand rotational), that is necessary for compelling experiences and to prevent\nuser sickness. Additionally, augmented reality experiences typically rely on\nimage processing to position the superimposed content. To do so, \"paper\"\nmarkers or features extracted from the environment are often employed. Both\nsets of techniques are explored and common solutions and algorithms are\npresented.\nAfter investigating those technical aspects, I carry out an objective\ncomparison of the existing state-of-the-art and state-of-the-practice in those\ndomains, and I discuss present and potential applications in these areas. As a\npractical validation, I present the results of an application that I have\ndeveloped using Microsoft HoloLens, one of the more advanced affordable\ntechnologies for augmented reality that is available today. Based on the\nexperience and lessons learned during this development, I discuss the\nlimitations of current technologies and present some avenues of future\nresearch.\n", "title": "Merging real and virtual worlds: An analysis of the state of the art and practical evaluation of Microsoft Hololens" }
null
null
[ "Computer Science" ]
null
true
null
5990
null
Validated
null
null
null
{ "abstract": " Segmental duplications (SDs), or low-copy repeats (LCR), are segments of DNA\ngreater than 1 Kbp with high sequence identity that are copied to other regions\nof the genome. SDs are among the most important sources of evolution, a common\ncause of genomic structural variation, and several are associated with diseases\nof genomic origin. Despite their functional importance, SDs present one of the\nmajor hurdles for de novo genome assembly due to the ambiguity they cause in\nbuilding and traversing both state-of-the-art overlap-layout-consensus and de\nBruijn graphs. This causes SD regions to be misassembled, collapsed into a\nunique representation, or completely missing from assembled reference genomes\nfor various organisms. In turn, this missing or incorrect information limits\nour ability to fully understand the evolution and the architecture of the\ngenomes. Despite the essential need to accurately characterize SDs in\nassemblies, there is only one tool that has been developed for this purpose,\ncalled Whole Genome Assembly Comparison (WGAC). WGAC is comprised of several\nsteps that employ different tools and custom scripts, which makes it difficult\nand time consuming to use. Thus there is still a need for algorithms to\ncharacterize within-assembly SDs quickly, accurately, and in a user friendly\nmanner.\nHere we introduce a SEgmental Duplication Evaluation Framework (SEDEF) to\nrapidly detect SDs through sophisticated filtering strategies based on Jaccard\nsimilarity and local chaining. We show that SEDEF accurately detects SDs while\nmaintaining substantial speed up over WGAC that translates into practical run\ntimes of minutes instead of weeks. 
Notably, our algorithm captures up to 25%\npairwise error between segments, where previous studies focused on only 10%,\nallowing us to more deeply track the evolutionary history of the genome.\nSEDEF is available at this https URL\n", "title": "Fast Characterization of Segmental Duplications in Genome Assemblies" }
null
null
null
null
true
null
5991
null
Default
null
null
null
{ "abstract": " We propose an adaptive bandwidth selector via cross validation for local\nM-estimators in locally stationary processes. We prove asymptotic optimality of\nthe procedure under mild conditions on the underlying parameter curves. The\nresults are applicable to a wide range of locally stationary processes such\nlinear and nonlinear processes. A simulation study shows that the method works\nfairly well also in misspecified situations.\n", "title": "Cross validation for locally stationary processes" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
5992
null
Validated
null
null
null
{ "abstract": " We consider mesoscopic four-terminal Josephson junctions and study emergent\ntopological properties of the Andreev subgap bands. We use symmetry-constrained\nanalysis for Wigner-Dyson classes of scattering matrices to derive band\ndispersions. When scattering matrix of the normal region connecting\nsuperconducting leads is energy-independent, the determinant formula for\nAndreev spectrum can be reduced to a palindromic equation that admits a\ncomplete analytical solution. Band topology manifests with an appearance of the\nWeyl nodes which serve as monopoles of finite Berry curvature. The\ncorresponding fluxes are quantified by Chern numbers that translate into a\nquantized nonlocal conductance that we compute explicitly for the\ntime-reversal-symmetric scattering matrix. The topological regime can be also\nidentified by supercurrents as Josephson current-phase relationships exhibit\npronounced nonanalytic behavior and discontinuities near Weyl points that can\nbe controllably accessed in experiments.\n", "title": "Weyl nodes in Andreev spectra of multiterminal Josephson junctions: Chern numbers, conductances and supercurrents" }
null
null
null
null
true
null
5993
null
Default
null
null
null
{ "abstract": " Let $A$ be the inductive limit of a sequence $$A_1\\, \\xrightarrow{\\phi_{1,2}}\n\\,A_2\\,\\xrightarrow{\\phi_{2,3}} \\,A_3\\rightarrow\\cdots$$ with\n$A_n=\\oplus_{i=1}^{n_i}A_{[n,i]}$, where all the $A_{[n,i]}$ are\nElliott-Thomsen algebras and $\\phi_{n,n+1}$ are homomorphisms, in this paper,\nwe will prove that $A$ can be written as another inductive limit\n$$B_1\\,\\xrightarrow{\\psi_{1,2}} \\,B_2\\,\\xrightarrow{\\psi_{2,3}}\n\\,B_3\\rightarrow\\cdots$$ with $B_n=\\oplus_{i=1}^{n_i}B_{[n,i]}$, where all the\n$B_{[n,i]}$ are Elliott-Thomsen building blocks and with the extra condition\nthat all the $\\phi_{n,n+1}$ are injective.\n", "title": "Injectivity of the connecting homomorphisms" }
null
null
null
null
true
null
5994
null
Default
null
null
null
{ "abstract": " We study the problem of detecting change points (CPs) that are characterized\nby a subset of dimensions in a multi-dimensional sequence. A method for\ndetecting those CPs can be formulated as a two-stage method: one for selecting\nrelevant dimensions, and another for selecting CPs. It has been difficult to\nproperly control the false detection probability of these CP detection methods\nbecause selection bias in each stage must be properly corrected. Our main\ncontribution in this paper is to formulate a CP detection problem as a\nselective inference problem, and show that exact (non-asymptotic) inference is\npossible for a class of CP detection methods. We demonstrate the performances\nof the proposed selective inference framework through numerical simulations and\nits application to our motivating medical data analysis problem.\n", "title": "Selective Inference for Change Point Detection in Multi-dimensional Sequences" }
null
null
[ "Statistics" ]
null
true
null
5995
null
Validated
null
null
null
{ "abstract": " Over recent years, emerging interest has occurred in integrating computer\nvision technology into the retail industry. Automatic checkout (ACO) is one of\nthe critical problems in this area which aims to automatically generate the\nshopping list from the images of the products to purchase. The main challenge\nof this problem comes from the large scale and the fine-grained nature of the\nproduct categories as well as the difficulty for collecting training images\nthat reflect the realistic checkout scenarios due to continuous update of the\nproducts. Despite its significant practical and research value, this problem is\nnot extensively studied in the computer vision community, largely due to the\nlack of a high-quality dataset. To fill this gap, in this work we propose a new\ndataset to facilitate relevant research. Our dataset enjoys the following\ncharacteristics: (1) It is by far the largest dataset in terms of both product\nimage quantity and product categories. (2) It includes single-product images\ntaken in a controlled environment and multi-product images taken by the\ncheckout system. (3) It provides different levels of annotations for the\ncheck-out images. Comparing with the existing datasets, ours is closer to the\nrealistic setting and can derive a variety of research problems. Besides the\ndataset, we also benchmark the performance on this dataset with various\napproaches. The dataset and related resources can be found at\n\\url{this https URL}.\n", "title": "RPC: A Large-Scale Retail Product Checkout Dataset" }
null
null
null
null
true
null
5996
null
Default
null
null
null
{ "abstract": " We enquire into the quasi-many-body localization in topologically ordered\nstates of matter, revolving around the case of Kitaev toric code on ladder\ngeometry, where different types of anyonic defects carry different masses\ninduced by environmental errors. Our study verifies that random arrangement of\nanyons generates a complex energy landscape solely through braiding statistics,\nwhich suffices to suppress the diffusion of defects in such multi-component\nanyonic liquid. This non-ergodic dynamic suggests a promising scenario for\ninvestigation of quasi-many-body localization. Computing standard diagnostics\nevidences that, in such disorder-free many-body system, a typical initial\ninhomogeneity of anyons gives birth to a glassy dynamics with an exponentially\ndiverging time scale of the full relaxation. A by-product of this dynamical\neffect is manifested by the slow growth of entanglement entropy, with\ncharacteristic time scales bearing resemblance to those of inhomogeneity\nrelaxation. This setting provides a new platform which paves the way toward\nimpeding logical errors by self-localization of anyons in a generic, high\nenergy state, originated in their exotic statistics.\n", "title": "Anyonic self-induced disorder in a stabilizer code: quasi-many body localization in a translational invariant model" }
null
null
null
null
true
null
5997
null
Default
null
null
null
{ "abstract": " We present the tomographic cross-correlation between galaxy lensing measured\nin the Kilo Degree Survey (KiDS-450) with overlapping lensing measurements of\nthe cosmic microwave background (CMB), as detected by Planck 2015. We compare\nour joint probe measurement to the theoretical expectation for a flat\n$\\Lambda$CDM cosmology, assuming the best-fitting cosmological parameters from\nthe KiDS-450 cosmic shear and Planck CMB analyses. We find that our results are\nconsistent within $1\\sigma$ with the KiDS-450 cosmology, with an amplitude\nre-scaling parameter $A_{\\rm KiDS} = 0.86 \\pm 0.19$. Adopting a Planck\ncosmology, we find our results are consistent within $2\\sigma$, with $A_{\\it\nPlanck} = 0.68 \\pm 0.15$. We show that the agreement is improved in both cases\nwhen the contamination to the signal by intrinsic galaxy alignments is\naccounted for, increasing $A$ by $\\sim 0.1$. This is the first tomographic\nanalysis of the galaxy lensing -- CMB lensing cross-correlation signal, and is\nbased on five photometric redshift bins. We use this measurement as an\nindependent validation of the multiplicative shear calibration and of the\ncalibrated source redshift distribution at high redshifts. We find that\nconstraints on these two quantities are strongly correlated when obtained from\nthis technique, which should therefore not be considered as a stand-alone\ncompetitive calibration tool.\n", "title": "KiDS-450: Tomographic Cross-Correlation of Galaxy Shear with {\\it Planck} Lensing" }
null
null
null
null
true
null
5998
null
Default
null
null
null
{ "abstract": " Earthquake Early Warning (EEW) systems can effectively reduce fatalities,\ninjuries, and damages caused by earthquakes. Current EEW systems are mostly\nbased on traditional seismic and geodetic networks, and exist only in a few\ncountries due to the high cost of installing and maintaining such systems. The\nMyShake system takes a different approach and turns people's smartphones into\nportable seismic sensors to detect earthquake-like motions. However, to issue\nEEW messages with high accuracy and low latency in the real world, we need to\naddress a number of challenges related to mobile computing. In this paper, we\nfirst summarize our experience building and deploying the MyShake system, then\nfocus on two key challenges for smartphone-based EEW (sensing heterogeneity and\nuser/system dynamics) and some preliminary exploration. We also discuss other\nchallenges and new research directions associated with smartphone-based seismic\nnetwork.\n", "title": "Earthquake Early Warning and Beyond: Systems Challenges in Smartphone-based Seismic Network" }
null
null
null
null
true
null
5999
null
Default
null
null
null
{ "abstract": " In the model of gate-based quantum computation, the qubits are controlled by\na sequence of quantum gates. In superconducting qubit systems, these gates can\nbe implemented by voltage pulses. The success of implementing a particular gate\ncan be expressed by various metrics such as the average gate fidelity, the\ndiamond distance, and the unitarity. We analyze these metrics of gate pulses\nfor a system of two superconducting transmon qubits coupled by a resonator, a\nsystem inspired by the architecture of the IBM Quantum Experience. The metrics\nare obtained by numerical solution of the time-dependent Schrödinger equation\nof the transmon system. We find that the metrics reflect systematic errors that\nare most pronounced for echoed cross-resonance gates, but that none of the\nstudied metrics can reliably predict the performance of a gate when used\nrepeatedly in a quantum algorithm.\n", "title": "Gate-error analysis in simulations of quantum computers with transmon qubits" }
null
null
null
null
true
null
6000
null
Default
null
null