Dataset schema (one value per field, in record order):

    text: null
    inputs: dict with keys "abstract" and "title"
    prediction: null
    prediction_agent: null
    annotation: list of label strings, or null
    annotation_agent: null
    multi_label: bool (single observed value: true)
    explanation: null
    id: string, length 1 to 5
    metadata: null
    status: string, 2 classes ("Default" or "Validated")
    event_timestamp: null
    metrics: null
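The schema above can be sketched as a single JSON record. This is an illustrative reconstruction, not part of the dataset itself: the field names and values follow the schema and the first record (id 6601), the abstract text is elided, and the `is_validated` helper is a hypothetical convenience function.

```python
import json

# One record in the schema above; values mirror record 6601.
# The abstract text is elided ("...") for brevity.
record_line = json.dumps({
    "text": None,
    "inputs": {
        "abstract": "...",
        "title": "Analysis of nonsmooth stochastic approximation: "
                 "the differential inclusion approach",
    },
    "prediction": None,
    "prediction_agent": None,
    "annotation": None,
    "annotation_agent": None,
    "multi_label": True,
    "explanation": None,
    "id": "6601",
    "metadata": None,
    "status": "Default",
    "event_timestamp": None,
    "metrics": None,
})

def is_validated(line: str) -> bool:
    """Return True when a serialized record's status field is 'Validated'."""
    return json.loads(line)["status"] == "Validated"

print(is_validated(record_line))  # record 6601 has status "Default" -> False
```

Filtering on `status` this way separates the human-checked ("Validated") records from the untouched ("Default") ones.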
{ "abstract": " In this paper we address the convergence of stochastic approximation when the\nfunctions to be minimized are not convex and nonsmooth. We show that the\n\"mean-limit\" approach to the convergence which leads, for smooth problems, to\nthe ODE approach can be adapted to the non-smooth case. The limiting dynamical\nsystem may be shown to be, under appropriate assumption, a differential\ninclusion. Our results expand earlier works in this direction by Benaim et al.\n(2005) and provide a general framework for proving convergence for\nunconstrained and constrained stochastic approximation problems, with either\nexplicit or implicit updates. In particular, our results allow us to establish\nthe convergence of stochastic subgradient and proximal stochastic gradient\ndescent algorithms arising in a large class of deep learning and\nhigh-dimensional statistical inference with sparsity inducing penalties.\n", "title": "Analysis of nonsmooth stochastic approximation: the differential inclusion approach" }
id: 6601 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " In the paper we analyze 26 communities across the United States with the\nobjective to understand what attaches people to their community and how this\nattachment differs among communities. How different are attached people from\nunattached? What attaches people to their community? How different are the\ncommunities? What are key drivers behind emotional attachment? To address these\nquestions, graphical, supervised and unsupervised learning tools were used and\ninformation from the Census Bureau and the Knight Foundation were combined.\nUsing the same pre-processed variables as Knight (2010) most likely will drive\nthe results towards the same conclusions than the Knight foundation, so this\npaper does not use those variables.\n", "title": "Clicks and Cliques. Exploring the Soul of the Community" }
id: 6602 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The problem of quickest change detection (QCD) under transient dynamics is\nstudied, where the change from the initial distribution to the final persistent\ndistribution does not happen instantaneously, but after a series of transient\nphases. The observations within the different phases are generated by different\ndistributions. The objective is to detect the change as quickly as possible,\nwhile controlling the average run length (ARL) to false alarm, when the\ndurations of the transient phases are completely unknown. Two algorithms are\nconsidered, the dynamic Cumulative Sum (CuSum) algorithm, proposed in earlier\nwork, and a newly constructed weighted dynamic CuSum algorithm. Both algorithms\nadmit recursions that facilitate their practical implementation, and they are\nadaptive to the unknown transient durations. Specifically, their asymptotic\noptimality is established with respect to both Lorden's and Pollak's criteria\nas the ARL to false alarm and the durations of the transient phases go to\ninfinity at any relative rate. Numerical results are provided to demonstrate\nthe adaptivity of the proposed algorithms, and to validate the theoretical\nresults.\n", "title": "Quickest Change Detection under Transient Dynamics: Theory and Asymptotic Analysis" }
id: 6603 | status: Validated | annotation: ["Mathematics", "Statistics"] | multi_label: true | all other fields: null
{ "abstract": " Many application settings involve the analysis of timestamped relations or\nevents between a set of entities, e.g. messages between users of an on-line\nsocial network. Static and discrete-time network models are typically used as\nanalysis tools in these settings; however, they discard a significant amount of\ninformation by aggregating events over time to form network snapshots. In this\npaper, we introduce a block point process model (BPPM) for dynamic networks\nevolving in continuous time in the form of events at irregular time intervals.\nThe BPPM is inspired by the well-known stochastic block model (SBM) for static\nnetworks and is a simpler version of the recently-proposed Hawkes infinite\nrelational model (IRM). We show that networks generated by the BPPM follow an\nSBM in the limit of a growing number of nodes and leverage this property to\ndevelop an efficient inference procedure for the BPPM. We fit the BPPM to\nseveral real network data sets, including a Facebook network with over 3, 500\nnodes and 130, 000 events, several orders of magnitude larger than the Hawkes\nIRM and other existing point process network models.\n", "title": "The Block Point Process Model for Continuous-Time Event-Based Dynamic Networks" }
id: 6604 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " This work demonstrates the potential of deep reinforcement learning\ntechniques for transmit power control in emerging and future wireless networks.\nVarious techniques have been proposed in the literature to find near-optimal\npower allocations, often by solving a challenging optimization problem. Most of\nthese algorithms are not scalable to large networks in real-world scenarios\nbecause of their computational complexity and instantaneous cross-cell channel\nstate information (CSI) requirement. In this paper, a model-free distributively\nexecuted dynamic power allocation scheme is developed based on deep\nreinforcement learning. Each transmitter collects CSI and quality of service\n(QoS) information from several neighbors and adapts its own transmit power\naccordingly. The objective is to maximize a weighted sum-rate utility function,\nwhich can be particularized to achieve maximum sum-rate or proportionally fair\nscheduling (with weights that are changing over time). Both random variations\nand delays in the CSI are inherently addressed using deep Q-learning. For a\ntypical network architecture, the proposed algorithm is shown to achieve\nnear-optimal power allocation in real time based on delayed CSI measurements\navailable to the agents. This work indicates that deep reinforcement learning\nbased radio resource management can be very fast and deliver highly competitive\nperformance, especially in practical scenarios where the system model is\ninaccurate and CSI delay is non-negligible.\n", "title": "Multi-Agent Deep Reinforcement Learning for Dynamic Power Allocation in Wireless Networks" }
id: 6605 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " This paper provides a holistic study of how stock prices vary in their\nresponse to financial disclosures across different topics. Thereby, we\nspecifically shed light into the extensive amount of filings for which no a\npriori categorization of their content exists. For this purpose, we utilize an\napproach from data mining - namely, latent Dirichlet allocation - as a means of\ntopic modeling. This technique facilitates our task of automatically\ncategorizing, ex ante, the content of more than 70,000 regulatory 8-K filings\nfrom U.S. companies. We then evaluate the subsequent stock market reaction. Our\nempirical evidence suggests a considerable discrepancy among various types of\nnews stories in terms of their relevance and impact on financial markets. For\ninstance, we find a statistically significant abnormal return in response to\nearnings results and credit rating, but also for disclosures regarding business\nstrategy, the health sector, as well as mergers and acquisitions. Our results\nyield findings that benefit managers, investors and policy-makers by indicating\nhow regulatory filings should be structured and the topics most likely to\nprecede changes in stock valuations.\n", "title": "Investor Reaction to Financial Disclosures Across Topics: An Application of Latent Dirichlet Allocation" }
id: 6606 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Inference and learning for probabilistic generative networks is often very\nchallenging and typically prevents scalability to as large networks as used for\ndeep discriminative approaches. To obtain efficiently trainable, large-scale\nand well performing generative networks for semi-supervised learning, we here\ncombine two recent developments: a neural network reformulation of hierarchical\nPoisson mixtures (Neural Simpletrons), and a novel truncated variational EM\napproach (TV-EM). TV-EM provides theoretical guarantees for learning in\ngenerative networks, and its application to Neural Simpletrons results in\nparticularly compact, yet approximately optimal, modifications of learning\nequations. If applied to standard benchmarks, we empirically find, that\nlearning converges in fewer EM iterations, that the complexity per EM iteration\nis reduced, and that final likelihood values are higher on average. For the\ntask of classification on data sets with few labels, learning improvements\nresult in consistently lower error rates if compared to applications without\ntruncation. Experiments on the MNIST data set herein allow for comparison to\nstandard and state-of-the-art models in the semi-supervised setting. Further\nexperiments on the NIST SD19 data set show the scalability of the approach when\na manifold of additional unlabeled data is available.\n", "title": "Truncated Variational EM for Semi-Supervised Neural Simpletrons" }
id: 6607 | status: Validated | annotation: ["Statistics"] | multi_label: true | all other fields: null
{ "abstract": " Modern social media platforms facilitate the rapid spread of information\nonline. Modelling phenomena such as social contagion and information diffusion\nare contingent upon a detailed understanding of the information-sharing\nprocesses. In Twitter, an important aspect of this occurs with retweets, where\nusers rebroadcast the tweets of other users. To improve our understanding of\nhow these distributions arise, we analyse the distribution of retweet times. We\nshow that a power law with exponential cutoff provides a better fit than the\npower laws previously suggested. We explain this fit through the burstiness of\nhuman behaviour and the priorities individuals place on different tasks.\n", "title": "The nature and origin of heavy tails in retweet activity" }
id: 6608 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We explore a new mechanism to explain polarization phenomena in opinion\ndynamics in which agents evaluate alternative views on the basis of the social\nfeedback obtained on expressing them. High support of the favored opinion in\nthe social environment, is treated as a positive feedback which reinforces the\nvalue associated to this opinion. In connected networks of sufficiently high\nmodularity, different groups of agents can form strong convictions of competing\nopinions. Linking the social feedback process to standard equilibrium concepts\nwe analytically characterize sufficient conditions for the stability of\nbi-polarization. While previous models have emphasized the polarization effects\nof deliberative argument-based communication, our model highlights an affective\nexperience-based route to polarization, without assumptions about negative\ninfluence or bounded confidence.\n", "title": "Opinion Polarization by Learning from Social Feedback" }
id: 6609 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Many real-world applications are characterized by a number of conflicting\nperformance measures. As optimizing in a multi-objective setting leads to a set\nof non-dominated solutions, a preference function is required for selecting the\nsolution with the appropriate trade-off between the objectives. The question\nis: how good do estimations of these objectives have to be in order for the\nsolution maximizing the preference function to remain unchanged? In this paper,\nwe introduce the concept of preference radius to characterize the robustness of\nthe preference function and provide guidelines for controlling the quality of\nestimations in the multi-objective setting. More specifically, we provide a\ngeneral formulation of multi-objective optimization under the bandits setting.\nWe show how the preference radius relates to the optimal gap and we use this\nconcept to provide a theoretical analysis of the Thompson sampling algorithm\nfrom multivariate normal priors. We finally present experiments to support the\ntheoretical results and highlight the fact that one cannot simply scalarize\nmulti-objective problems into single-objective problems.\n", "title": "Estimating Quality in Multi-Objective Bandits Optimization" }
id: 6610 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We consider an adaptive algorithm for finite element methods for the\nisogeometric analysis (IGAFEM) of elliptic (possibly non-symmetric)\nsecond-order partial differential equations in arbitrary space dimension\n$d\\ge2$. We employ hierarchical B-splines of arbitrary degree and different\norder of smoothness. We propose a refinement strategy to generate a sequence of\nlocally refined meshes and corresponding discrete solutions. Adaptivity is\ndriven by some weighted residual a posteriori error estimator. We prove linear\nconvergence of the error estimator (resp. the sum of energy error plus data\noscillations) with optimal algebraic rates. Numerical experiments underpin the\ntheoretical findings.\n", "title": "Adaptive IGAFEM with optimal convergence rates: Hierarchical B-splines" }
id: 6611 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Photoluminescence polarization is experimentally studied for samples with\n(In,Ga)As/GaAs selfassembled quantum dots in transverse magnetic field (Hanle\neffect) under slow modulation of the excitation light polarization from\nfractions of Hz to tens of kHz. The polarization reflects the evolution of\nstrongly coupled electron-nuclear spin system in the quantum dots. Strong\nmodification of the Hanle curves under variation of the modulation period is\nattributed to the peculiarities of the spin dynamics of quadrupole nuclei,\nwhich states are split due to deformation of the crystal lattice in the quantum\ndots. Analysis of the Hanle curves is fulfilled in the framework of a\nphenomenological model considering a separate dynamics of a nuclear field BNd\ndetermined by the +/- 12 nuclear spin states and of a nuclear field BNq\ndetermined by the split-off states +/- 3/2, +/- 5/2, etc. It is found that the\ncharacteristic relaxation time for the nuclear field BNd is of order of 0.5 s,\nwhile the relaxation of the field BNq is faster by three orders of magnitude.\n", "title": "Spin dynamics of quadrupole nuclei in InGaAs quantum dots" }
id: 6612 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The weak variance-alpha-gamma process is a multivariate Lévy process\nconstructed by weakly subordinating Brownian motion, possibly with correlated\ncomponents with an alpha-gamma subordinator. It generalises the\nvariance-alpha-gamma process of Semeraro constructed by traditional\nsubordination. We compare three calibration methods for the weak\nvariance-alpha-gamma process, method of moments, maximum likelihood estimation\n(MLE) and digital moment estimation (DME). We derive a condition for Fourier\ninvertibility needed to apply MLE and show in our simulations that MLE produces\na better fit when this condition holds, while DME produces a better fit when it\nis violated. We also find that the weak variance-alpha-gamma process exhibits a\nwider range of dependence and produces a significantly better fit than the\nvariance-alpha-gamma process on an S&P500-FTSE100 data set, and that DME\nproduces the best fit in this situation.\n", "title": "Calibration for Weak Variance-Alpha-Gamma Processes" }
id: 6613 | status: Validated | annotation: ["Quantitative Finance"] | multi_label: true | all other fields: null
{ "abstract": " A generalization of the coordinated transaction scheduling (CTS)---the\nstate-of-the-art interchange scheduling---is proposed. Referred to as\ngeneralized coordinated transaction scheduling (GCTS), the proposed approach\naddresses major seams issues of CTS: the ad hoc use of proxy buses, the\npresence of loop flow as a result of proxy bus approximation, and difficulties\nin dealing with multiple interfaces. By allowing market participants to submit\nbids across market boundaries, GCTS also generalizes the joint economic\ndispatch that achieves seamless interchange without market participants. It is\nshown that GCTS asymptotically achieves seamless interface under certain\nconditions. GCTS is also shown to be revenue adequate in that each regional\nmarket has a non-negative net revenue that is equal to its congestion rent.\nNumerical examples are presented to illustrate the quantitative improvement of\nthe proposed approach.\n", "title": "Generalized Coordinated Transaction Scheduling: A Market Approach to Seamless Interfaces" }
id: 6614 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " In this paper, we study the receiver performance with physical layer security\nin a Poisson field of interferers. We compare the performance in two deployment\nscenarios: (i) the receiver is located at the corner of a quadrant, (ii) the\nreceiver is located in the infinite plane. When the channel state information\n(CSI) of the eavesdropper is not available at the transmitter, we calculate the\nprobability of secure connectivity using the Wyner coding scheme, and we show\nthat hiding the receiver at the corner is beneficial at high rates of the\ntransmitted codewords and detrimental at low transmission rates. When the CSI\nis available, we show that the average secrecy capacity is higher when the\nreceiver is located at the corner, even if the intensity of interferers in this\ncase is four times higher than the intensity of interferers in the bulk.\nTherefore boundaries can also be used as a secrecy enhancement technique for\nhigh data rate applications.\n", "title": "Boundaries as an Enhancement Technique for Physical Layer Security" }
id: 6615 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The tetragonal copper oxide Bi$_2$CuO$_4$ has an unusual crystal structure\nwith a three-dimensional network of well separated CuO$_4$ plaquettes. This\nmaterial was recently predicted to host electronic excitations with an\nunconventional spectrum and the spin structure of its magnetically ordered\nstate appearing at T$_N$ $\\sim$43 K remains controversial. Here we present the\nresults of detailed studies of specific heat, magnetic and dielectric\nproperties of Bi$_2$CuO$_4$ single crystals grown by the floating zone\ntechnique, combined with the polarized neutron scattering and high-resolution\nX-ray measurements. Our polarized neutron scattering data show Cu spins are\nparallel to the $ab$ plane. Below the onset of the long range antiferromagnetic\nordering we observe an electric polarization induced by an applied magnetic\nfield, which indicates inversion symmetry breaking by the ordered state of Cu\nspins. For the magnetic field applied perpendicular to the tetragonal axis, the\nspin-induced ferroelectricity is explained in terms of the linear\nmagnetoelectric effect that occurs in a metastable magnetic state. A relatively\nsmall electric polarization induced by the field parallel to the tetragonal\naxis may indicate a more complex magnetic ordering in Bi$_2$CuO$_4$.\n", "title": "Magnetically induced Ferroelectricity in Bi$_2$CuO$_4$" }
id: 6616 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " In order to achieve state-of-the-art performance, modern machine learning\ntechniques require careful data pre-processing and hyperparameter tuning.\nMoreover, given the ever increasing number of machine learning models being\ndeveloped, model selection is becoming increasingly important. Automating the\nselection and tuning of machine learning pipelines consisting of data\npre-processing methods and machine learning models, has long been one of the\ngoals of the machine learning community. In this paper, we tackle this\nmeta-learning task by combining ideas from collaborative filtering and Bayesian\noptimization. Using probabilistic matrix factorization techniques and\nacquisition functions from Bayesian optimization, we exploit experiments\nperformed in hundreds of different datasets to guide the exploration of the\nspace of possible pipelines. In our experiments, we show that our approach\nquickly identifies high-performing pipelines across a wide range of datasets,\nsignificantly outperforming the current state-of-the-art.\n", "title": "Probabilistic Matrix Factorization for Automated Machine Learning" }
id: 6617 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Design optimization of engineering systems with multiple competing objectives\nis a painstakingly tedious process especially when the objective functions are\nexpensive-to-evaluate computer codes with parametric uncertainties. The\neffectiveness of the state-of-the-art techniques is greatly diminished because\nthey require a large number of objective evaluations, which makes them\nimpractical for problems of the above kind. Bayesian global optimization (BGO),\nhas managed to deal with these challenges in solving single-objective\noptimization problems and has recently been extended to multi-objective\noptimization (MOO). BGO models the objectives via probabilistic surrogates and\nuses the epistemic uncertainty to define an information acquisition function\n(IAF) that quantifies the merit of evaluating the objective at new designs.\nThis iterative data acquisition process continues until a stopping criterion is\nmet. The most commonly used IAF for MOO is the expected improvement over the\ndominated hypervolume (EIHV) which in its original form is unable to deal with\nparametric uncertainties or measurement noise. In this work, we provide a\nsystematic reformulation of EIHV to deal with stochastic MOO problems. The\nprimary contribution of this paper lies in being able to filter out the noise\nand reformulate the EIHV without having to observe or estimate the stochastic\nparameters. An addendum of the probabilistic nature of our methodology is that\nit enables us to characterize our confidence about the predicted Pareto front.\nWe verify and validate the proposed methodology by applying it to synthetic\ntest problems with known solutions. We demonstrate our approach on an\nindustrial problem of die pass design for a steel wire drawing process.\n", "title": "Stochastic Multi-objective Optimization on a Budget: Application to multi-pass wire drawing with quantified uncertainties" }
id: 6618 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We present a method of generating high resolution 3D shapes from natural\nlanguage descriptions. To achieve this goal, we propose two steps that\ngenerating low resolution shapes which roughly reflect texts and generating\nhigh resolution shapes which reflect the detail of texts. In a previous paper,\nthe authors have shown a method of generating low resolution shapes. We improve\nit to generate 3D shapes more faithful to natural language and test the\neffectiveness of the method. To generate high resolution 3D shapes, we use the\nframework of Conditional Wasserstein GAN. We propose two roles of Critic\nseparately, which calculate the Wasserstein distance between two probability\ndistribution, so that we achieve generating high quality shapes or acceleration\nof learning speed of model. To evaluate our approach, we performed quantitive\nevaluation with several numerical metrics for Critic models. Our method is\nfirst to realize the generation of high quality model by propagating text\nembedding information to high resolution task when generating 3D model.\n", "title": "Generation High resolution 3D model from natural language by Generative Adversarial Network" }
id: 6619 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Most policy search algorithms require thousands of training episodes to find\nan effective policy, which is often infeasible with a physical robot. This\nsurvey article focuses on the extreme other end of the spectrum: how can a\nrobot adapt with only a handful of trials (a dozen) and a few minutes? By\nanalogy with the word \"big-data\", we refer to this challenge as \"micro-data\nreinforcement learning\". We show that a first strategy is to leverage prior\nknowledge on the policy structure (e.g., dynamic movement primitives), on the\npolicy parameters (e.g., demonstrations), or on the dynamics (e.g.,\nsimulators). A second strategy is to create data-driven surrogate models of the\nexpected reward (e.g., Bayesian optimization) or the dynamical model (e.g.,\nmodel-based policy search), so that the policy optimizer queries the model\ninstead of the real system. Overall, all successful micro-data algorithms\ncombine these two strategies by varying the kind of model and prior knowledge.\nThe current scientific challenges essentially revolve around scaling up to\ncomplex robots (e.g., humanoids), designing generic priors, and optimizing the\ncomputing time.\n", "title": "A survey on policy search algorithms for learning robot controllers in a handful of trials" }
id: 6620 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We prove that there exist non-linear binary cyclic codes that attain the\nGilbert-Varshamov bound.\n", "title": "Non-linear Cyclic Codes that Attain the Gilbert-Varshamov Bound" }
id: 6621 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We study the consistency of Lipschitz learning on graphs in the limit of\ninfinite unlabeled data and finite labeled data. Previous work has conjectured\nthat Lipschitz learning is well-posed in this limit, but is insensitive to the\ndistribution of the unlabeled data, which is undesirable for semi-supervised\nlearning. We first prove that this conjecture is true in the special case of a\nrandom geometric graph model with kernel-based weights. Then we go on to show\nthat on a random geometric graph with self-tuning weights, Lipschitz learning\nis in fact highly sensitive to the distribution of the unlabeled data, and we\nshow how the degree of sensitivity can be adjusted by tuning the weights. In\nboth cases, our results follow from showing that the sequence of learned\nfunctions converges to the viscosity solution of an $\\infty$-Laplace type\nequation, and studying the structure of the limiting equation.\n", "title": "Consistency of Lipschitz learning with infinite unlabeled data and finite labeled data" }
id: 6622 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " In manufacturing, the increasing involvement of autonomous robots in\nproduction processes poses new challenges on the production management. In this\npaper we report on the usage of Optimization Modulo Theories (OMT) to solve\ncertain multi-robot scheduling problems in this area. Whereas currently\nexisting methods are heuristic, our approach guarantees optimality for the\ncomputed solution. We do not only present our final method but also its\nchronological development, and draw some general observations for the\ndevelopment of OMT-based approaches.\n", "title": "On the Synthesis of Guaranteed-Quality Plans for Robot Fleets in Logistics Scenarios via Optimization Modulo Theories" }
id: 6623 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We explore inflectional morphology as an example of the relationship of the\ndiscrete and the continuous in linguistics. The grammar requests a form of a\nlexeme by specifying a set of feature values, which corresponds to a corner M\nof a hypercube in feature value space. The morphology responds to that request\nby providing a morpheme, or a set of morphemes, whose vector sum is\ngeometrically closest to the corner M. In short, the chosen morpheme $\\mu$ is\nthe morpheme (or set of morphemes) that maximizes the inner product of $\\mu$\nand M.\n", "title": "Geometrical morphology" }
id: 6624 | status: Validated | annotation: ["Computer Science"] | multi_label: true | all other fields: null
{ "abstract": " The study of genome rearrangement has many flavours, but they all are somehow\ntied to edit distances on variations of a multi-graph called the breakpoint\ngraph. We study a weighted 2-break distance on Eulerian 2-edge-colored\nmulti-graphs, which generalizes weighted versions of several Double Cut and\nJoin problems, including those on genomes with unequal gene content. We affirm\nthe connection between cycle decompositions and edit scenarios first discovered\nwith the Sorting By Reversals problem. Using this we show that the problem of\nfinding a parsimonious scenario of minimum cost on an Eulerian 2-edge-colored\nmulti-graph - with a general cost function for 2-breaks - can be solved by\ndecomposing the problem into independent instances on simple alternating\ncycles. For breakpoint graphs, and a more constrained cost function, based on\ncoloring the vertices, we give a polynomial-time algorithm for finding a\nparsimonious 2-break scenario of minimum cost, while showing that finding a\nnon-parsimonious 2-break scenario of minimum cost is NP-Hard.\n", "title": "A framework for cost-constrained genome rearrangement under Double Cut and Join" }
id: 6625 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " In this paper, a novel scheme for synchronizing four drive and four response\nsystems is proposed by the authors. The idea of multi switching and dual\ncombination synchronization is extended to dual combination-combination multi\nswitching synchronization involving eight chaotic systems and is a first of its\nkind. Due to the multiple combination of chaotic systems and multi switching\nthe resultant dynamic behaviour is so complex that, in communication theory,\ntransmission and security of the resultant signal is more effective. Using\nLyapunov stability theory, sufficient conditions are achieved and suitable\ncontrollers are designed to realise the desired synchronization. Corresponding\ntheoretical analysis is presented and numerical simulations performed to\ndemonstrate the effectiveness of the proposed scheme.\n", "title": "Dual combination combination multi switching synchronization of eight chaotic systems" }
id: 6626 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " A whole-body torque control framework adapted for balancing and walking tasks\nis presented in this paper. In the proposed approach, centroidal momentum terms\nare excluded in favor of a hierarchy of high-priority position and orientation\ntasks and a low-priority postural task. More specifically, the controller\nstabilizes the position of the center of mass, the orientation of the pelvis\nframe, as well as the position and orientation of the feet frames. The\nlow-priority postural task provides reference positions for each joint of the\nrobot. Joint torques and contact forces to stabilize tasks are obtained through\nquadratic programming optimization. Besides the exclusion of centroidal\nmomentum terms, part of the novelty of the approach lies in the definition of\ncontrol laws in SE(3) which do not require the use of Euler parameterization.\nValidation of the framework was achieved in a scenario where the robot kept\nbalance while walking in place. Experiments have been conducted with the iCub\nrobot, in simulation and in real-world experiments.\n", "title": "An Optimization Based Control Framework for Balancing and Walking: Implementation on the iCub Robot" }
id: 6627 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We construct new classes of self-similar groups : S-aritmetic groups, affine\ngroups and metabelian groups. Most of the soluble ones are finitely presented\nand of type FP_{n} for appropriate n.\n", "title": "Self-similar groups of type FP_{n}" }
null
null
[ "Mathematics" ]
null
true
null
6628
null
Validated
null
null
null
{ "abstract": " Estimation of parameters is a crucial part of model development. When models\nare deterministic, one can minimise the fitting error; for stochastic systems\none must be more careful. Broadly parameterisation methods for stochastic\ndynamical systems fit into maximum likelihood estimation- and method of\nmoment-inspired techniques. We propose a method where one matches a finite\ndimensional approximation of the Koopman operator with the implied Koopman\noperator as generated by an extended dynamic mode decomposition approximation.\nOne advantage of this approach is that the objective evaluation cost can be\nindependent the number of samples for some dynamical systems. We test our\napproach on two simple systems in the form of stochastic differential\nequations, compare to benchmark techniques, and consider limited\neigen-expansions of the operators being approximated. Other small variations on\nthe technique are also considered, and we discuss the advantages to our\nformulation.\n", "title": "Operator Fitting for Parameter Estimation of Stochastic Differential Equations" }
null
null
null
null
true
null
6629
null
Default
null
null
null
{ "abstract": " Given two infinite sequences with known binomial transforms, we compute the\nbinomial transform of the product sequence. Various identities are obtained and\nnumerous examples are given involving sequences of special numbers: Harmonic\nnumbers, Bernoulli numbers, Fibonacci numbers, and also Laguerre polynomials.\n", "title": "Binomial transform of products" }
null
null
null
null
true
null
6630
null
Default
null
null
null
{ "abstract": " We present a new system S for handling uncertainty in a quantified modal\nlogic (first-order modal logic). The system is based on both probability theory\nand proof theory. The system is derived from Chisholm's epistemology. We\nconcretize Chisholm's system by grounding his undefined and primitive (i.e.\nfoundational) concept of reasonablenes in probability and proof theory. S can\nbe useful in systems that have to interact with humans and provide\njustifications for their uncertainty. As a demonstration of the system, we\napply the system to provide a solution to the lottery paradox. Another\nadvantage of the system is that it can be used to provide uncertainty values\nfor counterfactual statements. Counterfactuals are statements that an agent\nknows for sure are false. Among other cases, counterfactuals are useful when\nsystems have to explain their actions to users. Uncertainties for\ncounterfactuals fall out naturally from our system.\nEfficient reasoning in just simple first-order logic is a hard problem.\nResolution-based first-order reasoning systems have made significant progress\nover the last several decades in building systems that have solved non-trivial\ntasks (even unsolved conjectures in mathematics). We present a sketch of a\nnovel algorithm for reasoning that extends first-order resolution.\nFinally, while there have been many systems of uncertainty for propositional\nlogics, first-order logics and propositional modal logics, there has been very\nlittle work in building systems of uncertainty for first-order modal logics.\nThe work described below is in progress; and once finished will address this\nlack.\n", "title": "Strength Factors: An Uncertainty System for a Quantified Modal Logic" }
null
null
[ "Computer Science" ]
null
true
null
6631
null
Validated
null
null
null
{ "abstract": " We search for runaway former companions of the progenitors of nearby Galactic\ncore-collapse supernova remnants (SNRs) in the Tycho-Gaia astrometric solution\n(TGAS). We look for candidates for a sample of ten SNRs with distances less\nthan $2\\;\\mathrm{kpc}$, taking astrometry and $G$ magnitude from TGAS and $B,V$\nmagnitudes from the AAVSO Photometric All-Sky Survey (APASS). A simple method\nof tracking back stars and finding the closest point to the SNR centre is shown\nto have several failings when ranking candidates. In particular, it neglects\nour expectation that massive stars preferentially have massive companions. We\nevolve a grid of binary stars to exploit these covariances in the distribution\nof runaway star properties in colour - magnitude - ejection velocity space. We\nconstruct an analytic model which predicts the properties of a runaway star, in\nwhich the model parameters are the properties of the progenitor binary and the\nproperties of the SNR. Using nested sampling we calculate the Bayesian evidence\nfor each candidate to be the runaway and simultaneously constrain the\nproperties of that runaway and of the SNR itself. We identify four likely\nrunaway companions of the Cygnus Loop, HB 21, S147 and the Monoceros Loop. HD\n37424 has previously been suggested as the companion of S147, however the other\nthree stars are new candidates. The favoured companion of HB 21 is the Be star\nBD+50 3188 whose emission-line features could be explained by pre-supernova\nmass transfer from the primary. There is a small probability that the\n$2\\;\\mathrm{M}_{\\odot}$ candidate runaway TYC 2688-1556-1 associated with the\nCygnus Loop is a hypervelocity star. 
If the Monoceros Loop is related to the\non-going star formation in the Mon OB2 association, the progenitor of the\nMonoceros Loop is required to be more massive than $40\\;\\mathrm{M}_{\\odot}$\nwhich is in tension with the posterior for HD 261393.\n", "title": "Binary companions of nearby supernova remnants found with Gaia" }
null
null
null
null
true
null
6632
null
Default
null
null
null
{ "abstract": " We introduce some natural families of distributions on rooted binary ranked\nplane trees with a view toward unifying ideas from various fields, including\nmacroevolution, epidemiology, computational group theory, search algorithms and\nother fields. In the process we introduce the notions of split-exchangeability\nand plane-invariance of a general Markov splitting model in order to readily\nobtain probabilities over various equivalence classes of trees that arise in\nstatistics, phylogenetics, epidemiology and group theory.\n", "title": "Some Distributions on Finite Rooted Binary Trees" }
null
null
null
null
true
null
6633
null
Default
null
null
null
{ "abstract": " DNA-mediated computing is a novel technology that seeks to capitalize on the\nenormous informational capacity of DNA and has tremendous computational ability\nto compete with the current silicon-mediated computing, due to massive\nparallelism and unique characteristics inherent in DNA interaction. In this\npaper, the methodology of DNA-mediated computing is utilized to enrich decision\ntheory, by demonstrating how a novel programmable DNA-mediated normative\ndecision-making apparatus is able to capture rational choice under uncertainty.\n", "title": "Programmable DNA-mediated decision maker" }
null
null
null
null
true
null
6634
null
Default
null
null
null
{ "abstract": " We discuss the effect of ram pressure on the cold clouds in the centers of\ncool-core galaxy clusters, and in particular, how it reduces cloud velocity and\nsometimes causes an offset between the cold gas and young stars. The velocities\nof the molecular gas in both observations and our simulations fall in the range\nof $100-400$ km/s, much lower than expected if they fall from a few tens of kpc\nballistically. If the intra-cluster medium (ICM) is at rest, the ram pressure\nof the ICM only slightly reduces the velocity of the clouds. When we assume\nthat the clouds are actually \"fluffier\" because they are co-moving with a\nwarm-hot layer, the velocity becomes smaller. If we also consider the AGN wind\nin the cluster center by adding a wind profile measured from the simulation,\nthe clouds are further slowed down at small radii, and the resulting velocities\nare in general agreement with the observations and simulations. Because ram\npressure only affects gas but not stars, it can cause a separation between a\nfilament and young stars that formed in the filament as they move through the\nICM together. This separation has been observed in Perseus and also exists in\nour simulations. We show that the star-filament offset combined with\nline-of-sight velocity measurements can help determine the true motion of the\ncold gas, and thus distinguish between inflows and outflows.\n", "title": "The Effects of Ram Pressure on the Cold Clouds in the Centers of Galaxy Clusters" }
null
null
null
null
true
null
6635
null
Default
null
null
null
{ "abstract": " This paper analyses the dynamics of infectious disease with a concurrent\nspread of disease awareness. The model includes local awareness due to contacts\nwith aware individuals, as well as global awareness due to reported cases of\ninfection and awareness campaigns. We investigate the effects of time delay in\nresponse of unaware individuals to available information on the epidemic\ndynamics by establishing conditions for the Hopf bifurcation of the endemic\nsteady state of the model. Analytical results are supported by numerical\nbifurcation analysis and simulations.\n", "title": "Time-delayed SIS epidemic model with population awareness" }
null
null
null
null
true
null
6636
null
Default
null
null
null
{ "abstract": " The task of determining item similarity is a crucial one in a recommender\nsystem. This constitutes the base upon which the recommender system will work\nto determine which items are more likely to be enjoyed by a user, resulting in\nmore user engagement. In this paper we tackle the problem of determining song\nsimilarity based solely on song metadata (such as the performer, and song\ntitle) and on tags contributed by users. We evaluate our approach under a\nseries of different machine learning algorithms. We conclude that tf-idf\nachieves better results than Word2Vec to model the dataset to feature vectors.\nWe also conclude that k-NN models have better performance than SVMs and Linear\nRegression for this problem.\n", "title": "Determining Song Similarity via Machine Learning Techniques and Tagging Information" }
null
null
null
null
true
null
6637
null
Default
null
null
null
{ "abstract": " In this paper, we provide an analytical framework to analyze the uplink\nperformance of device-to-device (D2D)-enabled millimeter wave (mmWave) cellular\nnetworks. Signal-to- interference-plus-noise ratio (SINR) outage probabilities\nare derived for both cellular and D2D links using tools from stochastic\ngeometry. The distinguishing features of mmWave communications such as\ndirectional beamforming and having different path loss laws for line-of-sight\n(LOS) and non-line-of-sight (NLOS) links are incorporated into the outage\nanalysis by employing a flexible mode selection scheme and Nakagami fading.\nAlso, the effect of beamforming alignment errors on the outage probability is\ninvestigated to get insight on the performance in practical scenarios.\n", "title": "Uplink Performance Analysis in D2D-Enabled mmWave Cellular Networks" }
null
null
null
null
true
null
6638
null
Default
null
null
null
{ "abstract": " In this paper, we explain a sharp phase transition phenomenon which occurs\nfor $L^p$-Carleman classes with exponents $0<p<1$. In principle, these classes\nare defined as usual, only the traditional $L^\\infty$-bounds are replaced by\ncorresponding $L^p$-bounds. To mirror the classical definition, we add the\nfeature of dilatation invariance as well, and consider a larger soft-topology\nspace, the $L^p$-Carleman class. A particular degenerate instance is when we\nobtain the $L^p$-Sobolev spaces, analyzed previously by Peetre, following an\ninitial insight by Douady. Peetre found that these $L^p$-Sobolev spaces are\nhighly degenerate for $0<p<1$. Essentially, the contact is lost between the\nfunction and its derivatives. Here, we analyze this degeneracy for the more\ngeneral $L^p$-Carleman classes defined by a weight sequence. Under some\nreasonable growth and regularity properties, and a condition on the collection\nof test functions, we find that there is a sharp boundary, defined in terms of\nthe weight sequence: on the one side, we get Douady-Peetre's phenomenon of\n\"disconnexion\" between the function and its derivatives, while on the other, we\nobtain a collection of highly smooth functions. We also look at the more\nstandard second phase transition, between non-quasianalyticity and\nquasianalyticity, in the $L^p$ setting, with $0<p<1$.\n", "title": "A critical topology for $L^p$-Carleman classes with $0<p<1$" }
null
null
null
null
true
null
6639
null
Default
null
null
null
{ "abstract": " Generative adversarial networks (GANs) are a class of deep generative models\nwhich aim to learn a target distribution in an unsupervised fashion. While they\nwere successfully applied to many problems, training a GAN is a notoriously\nchallenging task and requires a significant amount of hyperparameter tuning,\nneural architecture engineering, and a non-trivial amount of \"tricks\". The\nsuccess in many practical applications coupled with the lack of a measure to\nquantify the failure modes of GANs resulted in a plethora of proposed losses,\nregularization and normalization schemes, and neural architectures. In this\nwork we take a sober view of the current state of GANs from a practical\nperspective. We reproduce the current state of the art and go beyond fairly\nexploring the GAN landscape. We discuss common pitfalls and reproducibility\nissues, open-source our code on Github, and provide pre-trained models on\nTensorFlow Hub.\n", "title": "The GAN Landscape: Losses, Architectures, Regularization, and Normalization" }
null
null
null
null
true
null
6640
null
Default
null
null
null
{ "abstract": " We consider an energy-based boundary condition to impose an equilibrium\nwetting angle for the Cahn-Hilliard-Navier-Stokes phase-field model on\nvoxel-set-type computational domains. These domains typically stem from the\nmicro-CT imaging of porous rock and approximate a (on {\\mu}m scale) smooth\ndomain with a certain resolution. Planar surfaces that are perpendicular to the\nmain axes are naturally approximated by a layer of voxels. However, planar\nsurfaces in any other directions and curved surfaces yield a jagged/rough\nsurface approximation by voxels. For the standard Cahn-Hilliard formulation,\nwhere the contact angle between the diffuse interface and the domain boundary\n(fluid-solid interface/wall) is 90 degrees, jagged surfaces have no impact on\nthe contact angle. However, a prescribed contact angle smaller or larger than\n90 degrees on jagged voxel surfaces is amplified in either direction. As a\nremedy, we propose the introduction of surface energy correction factors for\neach fluid-solid voxel face that counterbalance the difference of the voxel-set\nsurface area with the underlying smooth one. The discretization of the model\nequations is performed with the discontinuous Galerkin method, however, the\npresented semi-analytical approach of correcting the surface energy is equally\napplicable to other direct numerical methods such as finite elements, finite\nvolumes, or finite differences, since the correction factors appear in the\nstrong formulation of the model.\n", "title": "An energy-based equilibrium contact angle boundary condition on jagged surfaces for phase-field methods" }
null
null
[ "Physics" ]
null
true
null
6641
null
Validated
null
null
null
{ "abstract": " Period polynomials have long been fruitful tools for the study of values of\n$L$-functions in the context of major outstanding conjectures. In this paper,\nwe survey some facets of this study from the perspective of Eichler cohomology.\nWe discuss ways to incorporate non-cuspidal modular forms and values of\nderivatives of $L$-functions into the same framework. We further review\ninvestigations of the location of zeros of the period polynomial as well as of\nits analogue for $L$-derivatives.\n", "title": "Period polynomials, derivatives of $L$-functions, and zeros of polynomials" }
null
null
null
null
true
null
6642
null
Default
null
null
null
{ "abstract": " Context: In a series of papers, we study the major merger of two disk\ngalaxies in order to establish whether or not such a merger can produce a disc\ngalaxy. Aims: Our aim here is to describe in detail the technical aspects of\nour numerical experiments. Methods: We discuss the initial conditions of our\nmajor merger, which consist of two protogalaxies on a collision orbit. We show\nthat such merger simulations can produce a non-realistic central mass\nconcentration, and we propose simple, parametric, AGN-like feedback as a\nsolution to this problem. Our AGN-like feedback algorithm is very simple: at\neach time-step we take all particles whose local volume density is above a\ngiven threshold value and increase their temperature to a preset value. We also\ncompare the GADGET3 and GIZMO codes, by applying both of them to the same\ninitial conditions. Results: We show that the evolution of isolated\nprotogalaxies resembles the evolution of disk galaxies, thus arguing that our\nprotogalaxies are well suited for our merger simulations. We demonstrate that\nthe problem with the unphysical central mass concentration in our merger\nsimulations is further aggravated when we increase the resolution. We show that\nour AGN-like feedback removes this non-physical central mass concentration, and\nthus allows the formation of realistic bars. Note that our AGN-like feedback\nmainly affects the central region of a model, without significantly modifying\nthe rest of the galaxy. We demonstrate that, in the context of our kind of\nsimulation, GADGET3 gives results which are very similar to those obtained with\nthe PSPH (density independent SPH) flavor of GIZMO. 
Moreover, in the examples\nwe tried, the differences between the results of the two flavors of GIZMO,\nnamely PSPH, and MFM (mesh-less algorithm) are similar to and, in some\ncomparisons, larger than the differences between the results of GADGET3 and\nPSPH.\n", "title": "Forming disc galaxies in major mergers II. The central mass concentration problem and a comparison of GADGET3 with GIZMO" }
null
null
null
null
true
null
6643
null
Default
null
null
null
{ "abstract": " In this paper, we measure systematic risk with a new nonparametric factor\nmodel, the neural network factor model. The suitable factors for systematic\nrisk can be naturally found by inserting daily returns on a wide range of\nassets into the bottleneck network. The network-based model does not stick to a\nprobabilistic structure unlike parametric factor models, and it does not need\nfeature engineering because it selects notable features by itself. In addition,\nwe compare performance between our model and the existing models using 20-year\ndata of S&P 100 components. Although the new model can not outperform the best\nones among the parametric factor models due to limitations of the variational\ninference, the estimation method used for this study, it is still noteworthy in\nthat it achieves the performance as best the comparable models could without\nany prior knowledge.\n", "title": "Measuring Systematic Risk with Neural Network Factor Model" }
null
null
null
null
true
null
6644
null
Default
null
null
null
{ "abstract": " In this paper we present a novel method for obstacle avoidance using the\nstereo camera. The conventional obstacle avoidance methods and their\nlimitations are discussed. A new algorithm is developed for the real-time\nobstacle avoidance which responds faster to unexpected obstacles. In this\napproach the depth map is divided into optimized number of regions and the\nminimum depth at each section is assigned as the depth of that region. A fuzzy\ncontroller is designed to create the drive commands for the robot/quadcopter.\nThe system was tested on multiple paths with different obstacles and the\nresults demonstrated the high accuracy of the developed system.\n", "title": "Obstacle Avoidance Using Stereo Camera" }
null
null
null
null
true
null
6645
null
Default
null
null
null
{ "abstract": " Advances in sensor technology have enabled the collection of large-scale\ndatasets. Such datasets can be extremely noisy and often contain a significant\namount of outliers that result from sensor malfunction or human operation\nfaults. In order to utilize such data for real-world applications, it is\ncritical to detect outliers so that models built from these datasets will not\nbe skewed by outliers.\nIn this paper, we propose a new outlier detection method that utilizes the\ncorrelations in the data (e.g., taxi trip distance vs. trip time). Different\nfrom existing outlier detection methods, we build a robust regression model\nthat explicitly models the outliers and detects outliers simultaneously with\nthe model fitting.\nWe validate our approach on real-world datasets against methods specifically\ndesigned for each dataset as well as the state of the art outlier detectors.\nOur outlier detection method achieves better performances, demonstrating the\nrobustness and generality of our method. Last, we report interesting case\nstudies on some outliers that result from atypical events.\n", "title": "Detecting Outliers in Data with Correlated Measures" }
null
null
null
null
true
null
6646
null
Default
null
null
null
{ "abstract": " The hexagonal structure of graphene gives rise to the property of gas\nimpermeability, motivating its investigation for a new application: protection\nof semiconductor photocathodes in electron accelerators. These materials are\nextremely susceptible to degradation in efficiency through multiple mechanisms\nrelated to contamination from the local imperfect vacuum environment of the\nhost photoinjector. Few-layer graphene has been predicted to permit a modified\nphotoemission response of protected photocathode surfaces, and recent\nexperiments of single-layer graphene on copper have begun to confirm these\npredictions for single crystal metallic photocathodes. Unlike metallic\nphotoemitters, the integration of an ultra-thin graphene barrier film with\nconventional semiconductor photocathode growth processes is not\nstraightforward. A first step toward addressing this challenge is the growth\nand characterization of technologically relevant, high quantum efficiency\nbialkali photocathodes grown on ultra-thin free-standing graphene substrates.\nPhotocathode growth on free-standing graphene provides the opportunity to\nintegrate these two materials and study their interaction. Specifically,\nspectral response features and photoemission stability of cathodes grown on\ngraphene substrates are compared to those deposited on established substrates.\nIn addition we observed an increase of work function for the graphene\nencapsulated bialkali photocathode surfaces, which is predicted by our\ncalculations. The results provide a unique demonstration of bialkali\nphotocathodes on free-standing substrates, and indicate promise towards our\ngoal of fabricating high-performance graphene encapsulated photocathodes with\nenhanced lifetime for accelerator applications.\n", "title": "Active bialkali photocathodes on free-standing graphene substrates" }
null
null
[ "Physics" ]
null
true
null
6647
null
Validated
null
null
null
{ "abstract": " Given a smooth non-trapping compact manifold with strictly con- vex boundary,\nwe consider an inverse problem of reconstructing the manifold from the\nscattering data initiated from internal sources. This data consist of the exit\ndirections of geodesics that are emaneted from interior points of the manifold.\nWe show that under certain generic assumption of the metric, one can\nreconstruct an isometric copy of the manifold from such scattering data\nmeasured on the boundary.\n", "title": "Reconstruction of a compact Riemannian manifold from the scattering data of internal sources" }
null
null
null
null
true
null
6648
null
Default
null
null
null
{ "abstract": " The correlation of weak lensing and Cosmic Microwave Anisotropy (CMB) data\ntraces the pressure distribution of the hot, ionized gas and the underlying\nmatter density field. The measured correlation is dominated by baryons residing\nin halos. Detecting the contribution from unbound gas by measuring the residual\ncross-correlation after masking all known halos requires a theoretical\nunderstanding of this correlation and its dependence with model parameters. Our\nmodel assumes that the gas in filaments is well described by a log-normal\nprobability distribution function, with temperatures $10^{5-7}$K and\noverdensities $\\xi\\le 100$. The lensing-comptonization cross-correlation is\ndominated by gas with overdensities in the range $\\xi\\approx[3-33]$; the signal\nis generated at redshifts $z\\le 1$. If only 10\\% of the measured\ncross-correlation is due to unbound gas, then the most recent measurements set\nan upper limit of $\\bar{T}_e\\lesssim 10^6$K on the mean temperature of Inter\nGalactic Medium. The amplitude is proportional to the baryon fraction stored in\nfilaments. The lensing-comptonization power spectrum peaks at a different scale\nthan the gas in halos making it possible to distinguish both contributions. To\ntrace the distribution of the low density and low temperature plasma on\ncosmological scales, the effect of halos will have to be subtracted from the\ndata, requiring observations with larger signal-to-noise ratio than currently\navailable.\n", "title": "Lensing and the Warm Hot Intergalactic Medium" }
null
null
[ "Physics" ]
null
true
null
6649
null
Validated
null
null
null
{ "abstract": " We consider the asymmetric orthogonal tensor decomposition problem, and\npresent an orthogonalized alternating least square algorithm that converges to\nrank-$r$ of the true tensor factors simultaneously in\n$O(\\log(\\log(\\frac{1}{\\epsilon})))$ steps under our proposed Trace Based\nInitialization procedure. Trace Based Initialization requires $O(1/{\\log\n(\\frac{\\lambda_{r}}{\\lambda_{r+1}})})$ number of matrix subspace iterations to\nguarantee a \"good\" initialization for the simultaneous orthogonalized ALS\nmethod, where $\\lambda_r$ is the $r$-th largest singular value of the tensor.\nWe are the first to give a theoretical guarantee on orthogonal asymmetric\ntensor decomposition using Trace Based Initialization procedure and the\northogonalized alternating least squares. Our Trace Based Initialization also\nimproves convergence for symmetric orthogonal tensor decomposition.\n", "title": "Guaranteed Simultaneous Asymmetric Tensor Decomposition via Orthogonalized Alternating Least Squares" }
null
null
[ "Statistics" ]
null
true
null
6650
null
Validated
null
null
null
{ "abstract": " We prove a generalization of a result of Bhargava regarding the average size\n$\\mathrm{Cl}(K)[2]$ as $K$ varies among cubic fields. For a fixed set of\nrational primes $S$, we obtain a formula for the average size of\n$\\mathrm{Cl}(K)/\\langle S \\rangle[2]$ as $K$ varies among cubic fields with a\nfixed signature, where $\\langle S \\rangle$ is the subgroup of $\\mathrm{Cl}(K)$\ngenerated by the classes of primes of $K$ above primes in $S$.\nAs a consequence, we are able to calculate the average sizes of\n$K_{2n}(\\mathcal{O}_K)[2]$ for $n > 0$ and for the relaxed Selmer group\n$\\mathrm{Sel}_2^S(K)$ as $K$ varies in these same families.\n", "title": "The average sizes of two-torsion subgroups in quotients of class groups of cubic fields" }
null
null
null
null
true
null
6651
null
Default
null
null
null
{ "abstract": " We construct and analyze a strongly consistent second-order finite difference\nscheme for the steady two-dimensional Stokes flow. The pressure Poisson\nequation is explicitly incorporated into the scheme. Our approach suggested by\nthe first two authors is based on a combination of the finite volume method,\ndifference elimination, and numerical integration. We make use of the\ntechniques of the differential and difference Janet/Groebner bases. In order to\nprove strong consistency of the generated scheme we correlate the differential\nideal generated by the polynomials in the Stokes equations with the difference\nideal generated by the polynomials in the constructed difference scheme.\nAdditionally, we compute the modified differential system of the obtained\nscheme and analyze the scheme's accuracy and strong consistency by considering\nthis system. An evaluation of our scheme against the established\nmarker-and-cell method is carried out.\n", "title": "A Strongly Consistent Finite Difference Scheme for Steady Stokes Flow and its Modified Equations" }
null
null
null
null
true
null
6652
null
Default
null
null
null
{ "abstract": " It was proven in [B.-Y. Chen, F. Dillen, J. Van der Veken and L. Vrancken,\nCurvature inequalities for Lagrangian submanifolds: the final solution, Differ.\nGeom. Appl. 31 (2013), 808-819] that every Lagrangian submanifold $M$ of a\ncomplex space form $\\tilde M^{n}(4c)$ of constant holomorphic sectional\ncurvature $4c$ satisfies the following optimal inequality: \\begin{align*}\n\\delta(2,n-2) \\leq \\frac{n^2(n-2)}{4(n-1)} H^2 + 2(n-2) c, \\end{align*} where\n$H^2$ is the squared mean curvature and $\\delta(2,n-2)$ is a $\\delta$-invariant\non $M$. In this paper we classify Lagrangian submanifolds of complex space\nforms $\\tilde M^{n}(4c)$, $n \\geq 5$, which satisfy the equality case of this\ninequality at every point.\n", "title": "Classification of $δ(2,n-2)$-ideal Lagrangian submanifolds in $n$-dimensional complex space forms" }
null
null
null
null
true
null
6653
null
Default
null
null
null
{ "abstract": " This paper studies stability analysis of DC microgrids with uncertain\nconstant power loads (CPLs). It is well known that CPLs have negative impedance\neffects, which may cause instability in a DC microgrid. Existing works often\nstudy the stability around a given equilibrium based on some nominal values of\nCPLs. However, in real applications, the equilibrium of a DC microgrid depends\non the loading condition that often changes over time. Different from many\nprevious results, this paper develops a framework that can analyze the DC\nmicrogrid stability for a given range of CPLs. The problem is formulated as a\nrobust stability problem of a polytopic uncertain linear system. By exploiting\nthe structure of the problem, we derive a set of sufficient conditions that can\nguarantee robust stability. The conditions can be efficiently checked by\nsolving a convex optimization problem whose complexity does not grow with the\nnumber of buses in the microgrid. The effectiveness and non-conservativeness of\nthe proposed framework are demonstrated using simulation examples.\n", "title": "Robust stability analysis of DC microgrids with constant power loads" }
null
null
null
null
true
null
6654
null
Default
null
null
null
{ "abstract": " The fundamental purpose of the present research article is to introduce the\nbasic principles of Dimensional Analysis in the context of the neoclassical\neconomic theory, in order to apply such principles to the fundamental relations\nthat underlay most models of economic growth. In particular, basic instruments\nfrom Dimensional Analysis are used to evaluate the analytical consistency of\nthe Neoclassical economic growth model. The analysis shows that an adjustment\nto the model is required in such a way that the principle of dimensional\nhomogeneity is satisfied.\n", "title": "Dimensional Analysis in Economics: A Study of the Neoclassical Economic Growth Model" }
null
null
null
null
true
null
6655
null
Default
null
null
null
{ "abstract": " In this paper, extending past works of Del Popolo, we show how a high\nprecision mass function (MF) can be obtained using the excursion set approach\nand an improved barrier taking implicitly into account a non-zero cosmological\nconstant, the angular momentum acquired by tidal interaction of\nproto-structures and dynamical friction. In the case of the $\\Lambda$CDM\nparadigm, we find that our MF is in agreement at the 3\\% level to Klypin's\nBolshoi simulation, in the mass range $M_{\\rm vir} = 5 \\times 10^9 h^{-1}\nM_{\\odot} -- 5 \\times 10^{14} h^{-1} M_{\\odot}$ and redshift range $0 \\lesssim\nz \\lesssim 10$. For $z=0$ we also compared our MF to several fitting formulae,\nand found in particular agreement with Bhattacharya's within 3\\% in the mass\nrange $10^{12}-10^{16} h^{-1} M_{\\odot}$. Moreover, we discuss our MF validity\nfor different cosmologies.\n", "title": "A high precision semi-analytic mass function" }
null
null
null
null
true
null
6656
null
Default
null
null
null
{ "abstract": " We introduce a compressed suffix array representation that, on a text $T$ of\nlength $n$ over an alphabet of size $\\sigma$, can be built in $O(n)$\ndeterministic time, within $O(n\\log\\sigma)$ bits of working space, and counts\nthe number of occurrences of any pattern $P$ in $T$ in time $O(|P| + \\log\\log_w\n\\sigma)$ on a RAM machine of $w=\\Omega(\\log n)$-bit words. This new index\noutperforms all the other compressed indexes that can be built in linear\ndeterministic time, and some others. The only faster indexes can be built in\nlinear time only in expectation, or require $\\Theta(n\\log n)$ bits. We also\nshow that, by using $O(n\\log\\sigma)$ bits, we can build in linear time an index\nthat counts in time $O(|P|/\\log_\\sigma n + \\log n(\\log\\log n)^2)$, which is\nRAM-optimal for $w=\\Theta(\\log n)$ and sufficiently long patterns.\n", "title": "Fast Compressed Self-Indexes with Deterministic Linear-Time Construction" }
null
null
null
null
true
null
6657
null
Default
null
null
null
{ "abstract": " We consider a bounded block operator matrix of the form $$\nL=\\left(\\begin{array}{cc} A & B \\\\ C & D \\end{array} \\right), $$ where the\nmain-diagonal entries $A$ and $D$ are self-adjoint operators on Hilbert spaces\n$H_{_A}$ and $H_{_D}$, respectively; the coupling $B$ maps $H_{_D}$ to $H_{_A}$\nand $C$ is an operator from $H_{_A}$ to $H_{_D}$. It is assumed that the\nspectrum $\\sigma_{_D}$ of $D$ is absolutely continuous and uniform, being\npresented by a single band $[\\alpha,\\beta]\\subset\\mathbb{R}$, $\\alpha<\\beta$,\nand the spectrum $\\sigma_{_A}$ of $A$ is embedded into $\\sigma_{_D}$, that is,\n$\\sigma_{_A}\\subset(\\alpha,\\beta)$. We formulate conditions under which there\nare bounded solutions to the operator Riccati equations associated with the\ncomplexly deformed block operator matrix $L$; in such a case the deformed\noperator matrix $L$ admits a block diagonalization. The same conditions also\nensure the Markus-Matsaev-type factorization of the Schur complement\n$M_{_A}(z)=A-z-B(D-z)^{-1}C$ analytically continued onto the unphysical\nsheet(s) of the complex $z$ plane adjacent to the band $[\\alpha,\\beta]$. We\nprove that the operator roots of the continued Schur complement $M_{_A}$ are\nexplicitly expressed through the respective solutions to the deformed Riccati\nequations.\n", "title": "Solvability of the operator Riccati equation in the Feshbach case" }
null
null
null
null
true
null
6658
null
Default
null
null
null
{ "abstract": " This work is devoted to the study of the first order operator\n$x'(t)+m\\,x(-t)$ coupled with periodic boundary value conditions. We describe\nthe eigenvalues of the operator and obtain the expression of its related\nGreen's function in the non resonant case. We also obtain the range of the\nvalues of the real parameter $m$ for which the integral kernel, which provides\nthe unique solution, has constant sign. In this way, we automatically establish\nmaximum and anti-maximum principles for the equation. Some applications to the\nexistence of nonlinear periodic boundary value problems are showed.\n", "title": "Comparison results for first order linear operators with reflection and periodic boundary value conditions" }
null
null
[ "Mathematics" ]
null
true
null
6659
null
Validated
null
null
null
{ "abstract": " We prove that the $L^2$ bound of an oscillatory integral associated with a\npolynomial depends only on the number of monomials that this polynomial\nconsists of.\n", "title": "A remark on oscillatory integrals associated with fewnomials" }
null
null
[ "Mathematics" ]
null
true
null
6660
null
Validated
null
null
null
{ "abstract": " We develop a quantitative theory of stochastic homogenization for linear,\nuniformly parabolic equations with coefficients depending on space and time.\nInspired by recent works in the elliptic setting, our analysis is focused on\ncertain subadditive quantities derived from a variational interpretation of\nparabolic equations. These subadditive quantities are intimately connected to\nspatial averages of the fluxes and gradients of solutions. We implement a\nrenormalization-type scheme to obtain an algebraic rate for their convergence,\nwhich is essentially a quantification of the weak convergence of the gradients\nand fluxes of solutions to their homogenized limits. As a consequence, we\nobtain estimates of the homogenization error for the Cauchy-Dirichlet problem\nwhich are optimal in stochastic integrability. We also develop a higher\nregularity theory for solutions of the heterogeneous equation, including a\nuniform $C^{0,1}$-type estimate and a Liouville theorem of every finite order.\n", "title": "Quantitative stochastic homogenization and regularity theory of parabolic equations" }
null
null
null
null
true
null
6661
null
Default
null
null
null
{ "abstract": " For any positive integer $r$, the $r$-Fubini number with parameter $n$,\ndenoted by $F_{n,r}$, is equal to the number of ways that the elements of a set\nwith $n+r$ elements can be weak ordered such that the $r$ least elements are in\ndistinct orders. In this article we focus on the sequence of residues of the\n$r$-Fubini numbers modulo a positive integer $s$ and show that this sequence is\nperiodic and then, exhibit how to calculate its period length. As an extra\nresult, an explicit formula for the $r$-Stirling numbers is obtained which is\nfrequently used in calculations.\n", "title": "On the periodicity problem of residual r-Fubini sequences" }
null
null
[ "Mathematics" ]
null
true
null
6662
null
Validated
null
null
null
{ "abstract": " In this paper we solve a problem posed by H. Bommier-Hato, M. Engliš and\nE.H. Youssfi in [3] on the boundedness of the Bergman-type projections in\ngeneralized Fock spaces. It will be a consequence of two facts: a full\ndescription of the embeddings between generalized Fock-Sobolev spaces and a\ncomplete characterization of the boundedness of the above Bergman type\nprojections between weighted $L^p$-spaces related to generalized Fock-Sobolev\nspaces.\n", "title": "Boundedness of the Bergman projection on generalized Fock-Sobolev spaces on ${\\mathbb C}^n$" }
null
null
null
null
true
null
6663
null
Default
null
null
null
{ "abstract": " We review the concept of Support Vector Machines (SVMs) and discuss examples\nof their use in a number of scenarios. Several SVM implementations have been\nused in HEP and we exemplify this algorithm using the Toolkit for Multivariate\nAnalysis (TMVA) implementation. We discuss examples relevant to HEP including\nbackground suppression for $H\\to\\tau^+\\tau^-$ at the LHC with several different\nkernel functions. Performance benchmarking leads to the issue of generalisation\nof hyper-parameter selection. The avoidance of fine tuning (over training or\nover fitting) in MVA hyper-parameter optimisation, i.e. the ability to ensure\ngeneralised performance of an MVA that is independent of the training,\nvalidation and test samples, is of utmost importance. We discuss this issue and\ncompare and contrast performance of hold-out and k-fold cross-validation. We\nhave extended the SVM functionality and introduced tools to facilitate cross\nvalidation in TMVA and present results based on these improvements.\n", "title": "Support Vector Machines and generalisation in HEP" }
null
null
[ "Physics" ]
null
true
null
6664
null
Validated
null
null
null
{ "abstract": " We introduce a sequent calculus with a simple restriction of Lambek's product\nrules that precisely captures the classical Tamari order, i.e., the partial\norder on fully-bracketed words (equivalently, binary trees) induced by a\nsemi-associative law (equivalently, tree rotation). We establish a focusing\nproperty for this sequent calculus (a strengthening of cut-elimination), which\nyields the following coherence theorem: every valid entailment in the Tamari\norder has exactly one focused derivation. One combinatorial application of this\ncoherence theorem is a new proof of the Tutte-Chapoton formula for the number\nof intervals in the Tamari lattice $Y_n$. We also apply the sequent calculus\nand the coherence theorem to build a surprising bijection between intervals of\nthe Tamari order and a certain fragment of lambda calculus, consisting of the\n$\\beta$-normal planar lambda terms with no closed proper subterms.\n", "title": "A sequent calculus for the Tamari order" }
null
null
null
null
true
null
6665
null
Default
null
null
null
{ "abstract": " Clusters of galaxies gravitationally lens the cosmic microwave background\n(CMB) radiation, resulting in a distinct imprint in the CMB on arcminute\nscales. Measurement of this effect offers a promising way to constrain the\nmasses of galaxy clusters, particularly those at high redshift. We use CMB maps\nfrom the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB\nlensing signal around galaxy clusters identified in optical imaging from first\nyear observations of the Dark Energy Survey. The cluster catalog used in this\nanalysis contains 3697 members with mean redshift of $\\bar{z} = 0.45$. We\ndetect lensing of the CMB by the galaxy clusters at $8.1\\sigma$ significance.\nUsing the measured lensing signal, we constrain the amplitude of the relation\nbetween cluster mass and optical richness to roughly $17\\%$ precision, finding\ngood agreement with recent constraints obtained with galaxy lensing. The error\nbudget is dominated by statistical noise but includes significant contributions\nfrom systematic biases due to the thermal SZ effect and cluster miscentering.\n", "title": "A Measurement of CMB Cluster Lensing with SPT and DES Year 1 Data" }
null
null
null
null
true
null
6666
null
Default
null
null
null
{ "abstract": " We present a theory of the Seebeck effect in nanoscale ferromagnets with\ndimensions smaller than the spin diffusion length. The spin accumulation\ngenerated by a temperature gradient strongly affects the thermopower. We also\nidentify a correction arising from the transverse temperature gradient induced\nby the anomalous Ettingshausen effect. The effect of an induced spin-heat accu-\nmulation gradient is considered as well. The importance of these effects for\nnanoscale ferromagnets is illustrated by ab initio calculations for dilute\nferromagnetic alloys.\n", "title": "Seebeck Effect in Nanoscale Ferromagnets" }
null
null
[ "Physics" ]
null
true
null
6667
null
Validated
null
null
null
{ "abstract": " In this paper, we introduce a generalized asymmetric fronts propagation model\nbased on the geodesic distance maps and the Eikonal partial differential\nequations. One of the key ingredients for the computation of the geodesic\ndistance map is the geodesic metric, which can govern the action of the\ngeodesic distance level set propagation. We consider a Finsler metric with the\nRanders form, through which the asymmetry and anisotropy enhancements can be\ntaken into account to prevent the fronts leaking problem during the fronts\npropagation. These enhancements can be derived from the image edge-dependent\nvector field such as the gradient vector flow. The numerical implementations\nare carried out by the Finsler variant of the fast marching method, leading to\nvery efficient interactive segmentation schemes. We apply the proposed Finsler\nfronts propagation model to image segmentation applications. Specifically, the\nforeground and background segmentation is implemented by the Voronoi index map.\nIn addition, for the application of tubularity segmentation, we exploit the\nlevel set lines of the geodesic distance map associated to the proposed Finsler\nmetric providing that a thresholding value is given.\n", "title": "Fast Asymmetric Fronts Propagation for Image Segmentation" }
null
null
null
null
true
null
6668
null
Default
null
null
null
{ "abstract": " Photonic technologies offer numerous advantages for astronomical instruments\nsuch as spectrographs and interferometers owing to their small footprints and\ndiverse range of functionalities. Operating at the diffraction-limit, it is\nnotoriously difficult to efficiently couple such devices directly with large\ntelescopes. We demonstrate that with careful control of both the non-ideal\npupil geometry of a telescope and residual wavefront errors, efficient coupling\nwith single-mode devices can indeed be realised. A fibre injection was built\nwithin the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument.\nLight was coupled into a single-mode fibre operating in the near-IR (J-H bands)\nwhich was downstream of the extreme adaptive optics system and the pupil\napodising optics. A coupling efficiency of 86% of the theoretical maximum limit\nwas achieved at 1550 nm for a diffraction-limited beam in the laboratory, and\nwas linearly correlated with Strehl ratio. The coupling efficiency was constant\nto within <30% in the range 1250-1600 nm. Preliminary on-sky data with a Strehl\nratio of 60% in the H-band produced a coupling efficiency into a single-mode\nfibre of ~50%, consistent with expectations. The coupling was >40% for 84% of\nthe time and >50% for 41% of the time. The laboratory results allow us to\nforecast that extreme adaptive optics levels of correction (Strehl ratio >90%\nin H-band) would allow coupling of >67% (of the order of coupling to multimode\nfibres currently). For Strehl ratios <20%, few-port photonic lanterns become a\nsuperior choice but the signal-to-noise must be considered. 
These results\nillustrate a clear path to efficient on-sky coupling into a single-mode fibre,\nwhich could be used to realise modal-noise-free radial velocity machines,\nvery-long-baseline optical/near-IR interferometers and/or simply exploit\nphotonic technologies in future instrument design.\n", "title": "Efficient injection from large telescopes into single-mode fibres: Enabling the era of ultra-precision astronomy" }
null
null
null
null
true
null
6669
null
Default
null
null
null
{ "abstract": " In this article, we attempted to develop an upwind scheme based on Flux\nDifference Splitting using Jordan canonical forms to simulate genuine weakly\nhyperbolic systems. Theory of Jordan Canonical Forms is being used to complete\ndefective set of linear independent eigenvectors. Proposed FDS-J scheme is\ncapable of recognizing various shocks accurately.\n", "title": "An upwind method for genuine weakly hyperbolic systems" }
null
null
null
null
true
null
6670
null
Default
null
null
null
{ "abstract": " Gating is a key technique used for integrating information from multiple\nsources by long short-term memory (LSTM) models and has recently also been\napplied to other models such as the highway network. Although gating is\npowerful, it is rather expensive in terms of both computation and storage as\neach gating unit uses a separate full weight matrix. This issue can be severe\nsince several gates can be used together in e.g. an LSTM cell. This paper\nproposes a semi-tied unit (STU) approach to solve this efficiency issue, which\nuses one shared weight matrix to replace those in all the units in the same\nlayer. The approach is termed \"semi-tied\" since extra parameters are used to\nseparately scale each of the shared output values. These extra scaling factors\nare associated with the network activation functions and result in the use of\nparameterised sigmoid, hyperbolic tangent, and rectified linear unit functions.\nSpeech recognition experiments using British English multi-genre broadcast data\nshowed that using STUs can reduce the calculation and storage cost by a factor\nof three for highway networks and four for LSTMs, while giving similar word\nerror rates to the original models.\n", "title": "Semi-tied Units for Efficient Gating in LSTM and Highway Networks" }
null
null
null
null
true
null
6671
null
Default
null
null
null
{ "abstract": " Under the assumption that a defining graph of a Coxeter group admits only\ntwists in $\\mathbb{Z}_2$ and is of type FC, we prove Mühlherr's Twist\nConjecture.\n", "title": "A step towards Twist Conjecture" }
null
null
null
null
true
null
6672
null
Default
null
null
null
{ "abstract": " User Datagram Protocol (UDP) is a commonly used protocol for data\ntransmission in small embedded systems. UDP as such is unreliable and packet\nlosses can occur. The achievable data rates can suffer if optimal packet sizes\nare not used. The alternative, Transmission Control Protocol (TCP) guarantees\nthe ordered delivery of data and automatically adjusts transmission to match\nthe capability of the transmission link. Nevertheless UDP is often favored over\nTCP due to its simplicity, small memory and instruction footprints. Both UDP\nand TCP are implemented in all larger operating systems and commercial embedded\nframeworks. In addition UDP also supported on a variety of small hardware\nplatforms such as Digital Signal Processors (DSP) Field Programmable Gate\nArrays (FPGA). This is not so common for TCP. This paper describes how high\nspeed UDP based data transmission with very low packet error ratios was\nachieved. The near-reliable communications link is used in a data acquisition\n(DAQ) system for the next generation of extremely intense neutron source,\nEuropean Spallation Source. This paper presents measurements of UDP performance\nand reliability as achieved by employing several optimizations. The\nmeasurements were performed on Xeon E5 based CentOS (Linux) servers. The\nmeasured data rates are very close to the 10 Gb/s line rate, and zero packet\nloss was achieved. The performance was obtained utilizing a single processor\ncore as transmitter and a single core as receiver. The results show that\nsupport for transmitting large data packets is a key parameter for good\nperformance.\nOptimizations for throughput are: MTU, packet sizes, tuning Linux kernel\nparameters, thread affinity, core locality and efficient timers.\n", "title": "Achieveing reliable UDP transmission at 10 Gb/s using BSD socket for data acquisition systems" }
null
null
null
null
true
null
6673
null
Default
null
null
null
{ "abstract": " Current understanding of the critical outbreak condition on temporal networks\nrelies on approximations (time scale separation, discretization) that may bias\nthe results. We propose a theoretical framework to compute the epidemic\nthreshold in continuous time through the infection propagator approach. We\nintroduce the {\\em weak commutation} condition allowing the interpretation of\nannealed networks, activity-driven networks, and time scale separation into one\nformalism. Our work provides a coherent connection between discrete and\ncontinuous time representations applicable to realistic scenarios.\n", "title": "Epidemic Threshold in Continuous-Time Evolving Networks" }
null
null
null
null
true
null
6674
null
Default
null
null
null
{ "abstract": " With the vision of deployment of massive Internet-of-Things (IoTs) in 5G\nnetwork, existing 4G network and protocols are inefficient to handle sporadic\nIoT traffic with requirements of low-latency, low control overhead and low\npower. To suffice these requirements, we propose a design of a PHY/MAC layer\nusing Software Defined Radios (SDRs) that is backward compatible with existing\nOFDM based LTE protocols and supports CDMA based transmissions for low power\nIoT devices as well. This demo shows our implemented system based on that\ndesign and the viability of the proposal under different network scenarios.\n", "title": "Demo Abstract: CDMA-based IoT Services with Shared Band Operation of LTE in 5G" }
null
null
null
null
true
null
6675
null
Default
null
null
null
{ "abstract": " We consider attacks on two-way quantum key distribution protocols in which an\nundetectable eavesdropper copies all messages in the message mode. We show that\nunder the attacks there is no disturbance in the message mode and that the\nmutual information between the sender and the receiver is always constant and\nequal to one. It follows that recent proofs of security for two-way protocols\ncannot be considered complete since they do not cover the considered attacks.\n", "title": "Can Two-Way Direct Communication Protocols Be Considered Secure?" }
null
null
null
null
true
null
6676
null
Default
null
null
null
{ "abstract": " We present a new method to approximate posterior probabilities of Bayesian\nNetwork using Deep Neural Network. Experiment results on several public\nBayesian Network datasets shows that Deep Neural Network is capable of learning\njoint probability distri- bution of Bayesian Network by learning from a few\nobservation and posterior probability distribution pairs with high accuracy.\nCompared with traditional approximate method likelihood weighting sampling\nalgorithm, our method is much faster and gains higher accuracy in medium sized\nBayesian Network. Another advantage of our method is that our method can be\nparallelled much easier in GPU without extra effort. We also ex- plored the\nconnection between the accuracy of our model and the number of training\nexamples. The result shows that our model saturate as the number of training\nexamples grow and we don't need many training examples to get reasonably good\nresult. Another contribution of our work is that we have shown discriminative\nmodel like Deep Neural Network can approximate generative model like Bayesian\nNetwork.\n", "title": "Using Deep Neural Network Approximate Bayesian Network" }
null
null
[ "Statistics" ]
null
true
null
6677
null
Validated
null
null
null
{ "abstract": " Binary, or one-bit, representations of data arise naturally in many\napplications, and are appealing in both hardware implementations and algorithm\ndesign. In this work, we study the problem of data classification from binary\ndata and propose a framework with low computation and resource costs. We\nillustrate the utility of the proposed approach through stylized and realistic\nnumerical experiments, and provide a theoretical analysis for a simple case. We\nhope that our framework and analysis will serve as a foundation for studying\nsimilar types of approaches.\n", "title": "Simple Classification using Binary Data" }
null
null
null
null
true
null
6678
null
Default
null
null
null
{ "abstract": " For the stationary nonlinear Schrödinger equation $-\\Delta u+ V(x)u- f(u) =\n\\lambda u$ with periodic potential $V$ we study the existence and stability\nproperties of multibump solutions with prescribed $L^2$-norm. To this end we\nintroduce a new nondegeneracy condition and develop new superposition\ntechniques which allow to match the $L^2$-constraint. In this way we obtain the\nexistence of infinitely many geometrically distinct solutions to the stationary\nproblem. We then calculate the Morse index of these solutions with respect to\nthe restriction of the underlying energy functional to the associated\n$L^2$-sphere, and we show their orbital instability with respect to the\nSchrödinger flow. Our results apply in both, the mass-subcritical and the\nmass-supercritical regime.\n", "title": "Unstable normalized standing waves for the space periodic NLS" }
null
null
null
null
true
null
6679
null
Default
null
null
null
{ "abstract": " Parameter reduction can enable otherwise infeasible design and uncertainty\nstudies with modern computational science models that contain several input\nparameters. In statistical regression, techniques for sufficient dimension\nreduction (SDR) use data to reduce the predictor dimension of a regression\nproblem. A computational scientist hoping to use SDR for parameter reduction\nencounters a problem: a computer prediction is best represented by a\ndeterministic function of the inputs, so data comprised of computer simulation\nqueries fail to satisfy the SDR assumptions. To address this problem, we\ninterpret SDR methods sliced inverse regression (SIR) and sliced average\nvariance estimation (SAVE) as estimating the directions of a ridge function,\nwhich is a composition of a low-dimensional linear transformation with a\nnonlinear function. Within this interpretation, SIR and SAVE estimate matrices\nof integrals whose column spaces are contained in the ridge directions' span;\nwe analyze and numerically verify convergence of these column spaces as the\nnumber of computer model queries increases. Moreover, we show example functions\nthat are not ridge functions but whose inverse conditional moment matrices are\nlow-rank. Consequently, the computational scientist should beware when using\nSIR and SAVE for parameter reduction, since SIR and SAVE may mistakenly suggest\nthat truly important directions are unimportant.\n", "title": "Inverse regression for ridge recovery: A data-driven approach for parameter reduction in computer experiments" }
null
null
null
null
true
null
6680
null
Default
null
null
null
{ "abstract": " In this paper, we consider higher order correction of the entropy and study\nthe thermodynamical properties of recently proposed Schwarzschild-Beltrami-de\nSitter black hole, which is indeed an exact solution of Einstein equation with\na positive cosmological constant. By using the corrected entropy and Hawking\ntemperature we extract some thermodynamical quantities like Gibbs and Helmholtz\nfree energies and heat capacity. We also investigate the first and second laws\nof thermodynamics. We find that presence of higher order corrections, which\ncome from thermal fluctuations, may remove some instabilities of the black\nhole. Also unstable to stable phase transition is possible in presence of the\nfirst and second order corrections.\n", "title": "Thermodynamics of Higher Order Entropy Corrected Schwarzschild-Beltrami-de Sitter Black Hole" }
null
null
null
null
true
null
6681
null
Default
null
null
null
{ "abstract": " We report on a compact, simple and robust high brightness entangled photon\nsource at room temperature. Based on a 30 mm long periodically poled potassium\ntitanyl phosphate (PPKTP), the source produces non-collinear, type0 phase\nmatched, degenerate photons at 810 nm with pair production rate as high 39.13\nMHz per mW at room temperature. To the best of our knowledge, this is the\nhighest photon pair rate generated using bulk crystals pump with\ncontinuous-wave laser. Combined with the inherently stable polarization Sagnac\ninterferometer, the source produces entangled state violating the Bells\ninequality by nearly 10 standard deviations and a Bell state fidelity of 0.96.\nThe compact footprint, simple and robust experimental design and room\ntemperature operation, make our source ideal for various quantum communication\nexperiments including long distance free space and satellite communications.\n", "title": "Robust, high brightness, degenerate entangled photon source at room temperature" }
null
null
[ "Physics" ]
null
true
null
6682
null
Validated
null
null
null
{ "abstract": " A pivotal step toward understanding unconventional superconductors would be\nto decipher how superconductivity emerges from the unusual normal state upon\ncooling. In the cuprates, traces of superconducting pairing appear above the\nmacroscopic transition temperature $T_c$, yet extensive investigation has led\nto disparate conclusions. The main difficulty has been the separation of\nsuperconducting contributions from complex normal state behaviour. Here we\navoid this problem by measuring the nonlinear conductivity, an observable that\nis zero in the normal state. We uncover for several representative cuprates\nthat the nonlinear conductivity vanishes exponentially above $T_c$, both with\ntemperature and magnetic field, and exhibits temperature-scaling characterized\nby a nearly universal scale $T_0$. Attempts to model the response with the\nfrequently evoked Ginzburg-Landau theory are unsuccessful. Instead, our\nfindings are captured by a simple percolation model that can also explain other\nproperties of the cuprates. We thus resolve a long-standing conundrum by\nshowing that the emergence of superconductivity in the cuprates is dominated by\ntheir inherent inhomogeneity.\n", "title": "Emergence of superconductivity in the cuprates via a universal percolation process" }
null
null
null
null
true
null
6683
null
Default
null
null
null
{ "abstract": " We propose and demonstrate an ultrasonic communication link using spatial\ndegrees of freedom to increase data rates for deeply implantable medical\ndevices. Low attenuation and millimeter wavelengths make ultrasound an ideal\ncommunication medium for miniaturized low-power implants. While small spectral\nbandwidth has drastically limited achievable data rates in conventional\nultrasonic implants, large spatial bandwidth can be exploited by using multiple\ntransducers in a multiple-input/multiple-output system to provide spatial\nmultiplexing gain without additional power, larger bandwidth, or complicated\npackaging. We experimentally verify the communication link in mineral oil with\na transmitter and receiver 5 cm apart, each housing two custom-designed\nmm-sized piezoelectric transducers operating at the same frequency. Two streams\nof data modulated with quadrature phase-shift keying at 125 kbps are\nsimultaneously transmitted and received on both channels, effectively doubling\nthe data rate to 250 kbps with a measured bit error rate below 1e-4. We also\nevaluate the performance and robustness of the channel separation network by\ntesting the communication link after introducing position offsets. These\nresults demonstrate the potential of spatial multiplexing to enable more\ncomplex implant applications requiring higher data rates.\n", "title": "Exploiting Spatial Degrees of Freedom for High Data Rate Ultrasound Communication with Implantable Devices" }
null
null
null
null
true
null
6684
null
Default
null
null
null
{ "abstract": " Global sensitivity analysis aims at determining which uncertain input\nparameters of a computational model primarily drives the variance of the output\nquantities of interest. Sobol' indices are now routinely applied in this\ncontext when the input parameters are modelled by classical probability theory\nusing random variables. In many practical applications however, input\nparameters are affected by both aleatory and epistemic (so-called polymorphic)\nuncertainty, for which imprecise probability representations have become\npopular in the last decade. In this paper, we consider that the uncertain input\nparameters are modelled by parametric probability boxes (p-boxes). We propose\ninterval-valued (so-called imprecise) Sobol' indices as an extension of their\nclassical definition. An original algorithm based on the concepts of augmented\nspace, isoprobabilistic transforms and sparse polynomial chaos expansions is\ndevised to allow for the computation of these imprecise Sobol' indices at\nextremely low cost. In particular, phantoms points are introduced to build an\nexperimental design in the augmented space (necessary for the calibration of\nthe sparse PCE) which leads to a smart reuse of runs of the original\ncomputational model. The approach is illustrated on three analytical and\nengineering examples which allows one to validate the proposed algorithms\nagainst brute-force double-loop Monte Carlo simulation.\n", "title": "Global sensitivity analysis in the context of imprecise probabilities (p-boxes) using sparse polynomial chaos expansions" }
null
null
[ "Statistics" ]
null
true
null
6685
null
Validated
null
null
null
{ "abstract": " We report inelastic neutron scattering measurements of low energy ($\\hbar\n\\omega < 10$ meV) magnetic excitations in the \"11\" system\nFe$_{1+y}$Te$_{1-x}$Se$_{x}$. The spin correlations are two-dimensional (2D) in\nthe superconducting samples at low temperature, but appear much more\nthree-dimensional when the temperature rises well above $T_c \\sim 15$ K, with a\nclear increase of the (dynamic) spin correlation length perpendicular to the Fe\nplanes. The spontaneous change of dynamic spin correlations from 2D to 3D on\nwarming is unexpected and cannot be naturally explained when only the spin\ndegree of freedom is considered. Our results suggest that the low temperature\nphysics in the \"11\" system, in particular the evolution of low energy spin\nexcitations towards %better satisfying the nesting condition for mediating\nsuperconducting pairing, is driven by changes in orbital correlations.\n", "title": "Unexpected Enhancement of Three-Dimensional Low-Energy Spin Correlations in Quasi-Two-Dimensional Fe$_{1+y}$Te$_{1-x}$Se$_{x}$ System at High Temperature" }
null
null
null
null
true
null
6686
null
Default
null
null
null
{ "abstract": " Agile - denoting \"the quality of being agile, readiness for motion,\nnimbleness, activity, dexterity in motion\" - software development methods are\nattempting to offer an answer to the eager business community asking for\nlighter weight along with faster and nimbler software development processes.\nThis is especially the case with the rapidly growing and volatile Internet\nsoftware industry as well as for the emerging mobile application environment.\nThe new agile methods have evoked substantial amount of literature and debates.\nHowever, academic research on the subject is still scarce, as most of existing\npublications are written by practitioners or consultants. The aim of this\npublication is to begin filling this gap by systematically reviewing the\nexisting literature on agile software development methodologies. This\npublication has three purposes. First, it proposes a definition and a\nclassification of agile software development approaches. Second, it analyses\nten software development methods that can be characterized as being \"agile\"\nagainst the defined criterion. Third, it compares these methods and highlights\ntheir similarities and differences. Based on this analysis, future research\nneeds are identified and discussed.\n", "title": "Agile Software Development Methods: Review and Analysis" }
null
null
[ "Computer Science" ]
null
true
null
6687
null
Validated
null
null
null
{ "abstract": " Deep learning is a form of machine learning for nonlinear high dimensional\npattern matching and prediction. By taking a Bayesian probabilistic\nperspective, we provide a number of insights into more efficient algorithms for\noptimisation and hyper-parameter tuning. Traditional high-dimensional data\nreduction techniques, such as principal component analysis (PCA), partial least\nsquares (PLS), reduced rank regression (RRR), projection pursuit regression\n(PPR) are all shown to be shallow learners. Their deep learning counterparts\nexploit multiple deep layers of data reduction which provide predictive\nperformance gains. Stochastic gradient descent (SGD) training optimisation and\nDropout (DO) regularization provide estimation and variable selection. Bayesian\nregularization is central to finding weights and connections in networks to\noptimize the predictive bias-variance trade-off. To illustrate our methodology,\nwe provide an analysis of international bookings on Airbnb. Finally, we\nconclude with directions for future research.\n", "title": "Deep Learning: A Bayesian Perspective" }
null
null
null
null
true
null
6688
null
Default
null
null
null
{ "abstract": " We report on tunnel-injected deep ultraviolet light emitting diodes (UV LEDs)\nconfigured with a polarization engineered Al0.75Ga0.25N/ In0.2Ga0.8N tunnel\njunction structure. Tunnel-injected UV LED structure enables n-type contacts\nfor both bottom and top contact layers. However, achieving Ohmic contact to\nwide bandgap n-AlGaN layers is challenging and typically requires high\ntemperature contact metal annealing. In this work, we adopted a compositionally\ngraded top contact layer for non-alloyed metal contact, and obtained a low\ncontact resistance of Rc=4.8x10-5 Ohm cm2 on n-Al0.75Ga0.25N. We also observed\na significant reduction in the forward operation voltage from 30.9 V to 19.2 V\nat 1 kA/cm2 by increasing the Mg doping concentration from 6.2x1018 cm-3 to\n1.5x1019 cm-3. Non-equilibrium hole injection into wide bandgap Al0.75Ga0.25N\nwith Eg>5.2 eV was confirmed by light emission at 257 nm. This work\ndemonstrates the feasibility of tunneling hole injection into deep UV LEDs, and\nprovides a novel structural design towards high power deep-UV emitters.\n", "title": "Tunnel-injected sub-260 nm ultraviolet light emitting diodes" }
null
null
[ "Physics" ]
null
true
null
6689
null
Validated
null
null
null
{ "abstract": " We present a method and preliminary results of the image reconstruction in\nthe Jagiellonian PET tomograph. Using GATE (Geant4 Application for Tomographic\nEmission), interactions of the 511 keV photons with a cylindrical detector were\ngenerated. Pairs of such photons, flying back-to-back, originate from e+e-\nannihilations inside a 1-mm spherical source. Spatial and temporal coordinates\nof hits were smeared using experimental resolutions of the detector. We\nincorporated the algorithm of the 3D Filtered Back Projection, implemented in\nthe STIR and TomoPy software packages, which differ in approximation methods.\nConsistent results for the Point Spread Functions of ~5/7,mm and ~9/20, mm were\nobtained, using STIR, for transverse and longitudinal directions, respectively,\nwith no time of flight information included.\n", "title": "Three-dimensional image reconstruction in J-PET using Filtered Back Projection method" }
null
null
null
null
true
null
6690
null
Default
null
null
null
{ "abstract": " An iteration-free method of domain decomposition is considered for\napproximate solving a boundary value problem for a second-order parabolic\nequation. A standard approach to constructing domain decomposition schemes is\nbased on a partition of unity for the domain under the consideration. Here a\nnew general approach is proposed for constructing domain decomposition schemes\nwith overlapping subdomains based on indicator functions of subdomains. The\nbasic peculiarity of this method is connected with a representation of the\nproblem operator as the sum of two operators, which are constructed for two\nseparate subdomains with the subtraction of the operator that is associated\nwith the intersection of the subdomains. There is developed a two-component\nfactorized scheme, which can be treated as a generalization of the standard\nAlternating Direction Implicit (ADI) schemes to the case of a special\nthree-component splitting. There are obtained conditions for the unconditional\nstability of regionally additive schemes constructed using indicator functions\nof subdomains. Numerical results are presented for a model two-dimensional\nproblem.\n", "title": "Two-component domain decomposition scheme with overlapping subdomains for parabolic equations" }
null
null
null
null
true
null
6691
null
Default
null
null
null
{ "abstract": " Very recently Richter and Rogers proved that any convex geometry can be\nrepresented by a family of convex polygons in the plane. We shall generalize\ntheir construction and obtain a wide variety of convex shapes for representing\nconvex geometries. We present an Erdos-Szekeres type obstruction, which answers\na question of Czedli negatively, that is general convex geometries cannot be\nrepresented with ellipses in the plane. Moreover, we shall prove that one\ncannot even bound the number of common supporting lines of the pairs of the\nrepresenting convex sets. In higher dimensions we prove that all convex\ngeometries can be represented with ellipsoids.\n", "title": "On the representation of finite convex geometries with convex sets" }
null
null
[ "Mathematics" ]
null
true
null
6692
null
Validated
null
null
null
{ "abstract": " The effectiveness of molecular-based light harvesting relies on transport of\noptical excitations, excitons, to charg-transfer sites. Measuring exciton\nmigration has, however, been challenging because of the mismatch between\nnanoscale migration lengths and the diffraction limit. In organic\nsemiconductors, common bulk methods employ a series of films terminated at\nquenching substrates, altering the spatioenergetic landscape for migration.\nHere we instead define quenching boundaries all-optically with sub-diffraction\nresolution, thus characterizing spatiotemporal exciton migration on its native\nnanometer and picosecond scales without disturbing morphology. By transforming\nstimulated emission depletion microscopy into a time-resolved ultrafast\napproach, we measure a 16-nm migration length in CN-PPV conjugated polymer\nfilms. Combining these experiments with Monte Carlo exciton hopping simulations\nshows that migration in CN-PPV films is essentially diffusive because intrinsic\nchromophore energetic disorder is comparable to inhomogeneous broadening among\nchromophores. This framework also illustrates general trends across materials.\nOur new approach's sub-diffraction resolution will enable previously\nunattainable correlations of local material structure to the nature of exciton\nmigration, applicable not only to photovoltaic or display-destined organic\nsemiconductors but also to explaining the quintessential exciton migration\nexhibited in photosynthesis.\n", "title": "Resolving ultrafast exciton migration in organic solids at the nanoscale" }
null
null
null
null
true
null
6693
null
Default
null
null
null
{ "abstract": " We present a simple method to improve neural translation of a low-resource\nlanguage pair using parallel data from a related, also low-resource, language\npair. The method is based on the transfer method of Zoph et al., but whereas\ntheir method ignores any source vocabulary overlap, ours exploits it. First, we\nsplit words using Byte Pair Encoding (BPE) to increase vocabulary overlap.\nThen, we train a model on the first language pair and transfer its parameters,\nincluding its source word embeddings, to another model and continue training on\nthe second language pair. Our experiments show that transfer learning helps\nword-based translation only slightly, but when used on top of a much stronger\nBPE baseline, it yields larger improvements of up to 4.3 BLEU.\n", "title": "Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation" }
null
null
null
null
true
null
6694
null
Default
null
null
null
{ "abstract": " Using local density approximation plus dynamical mean-field theory\n(LDA+DMFT), we have computed the valence band photoelectron spectra of highly\npopular multiferroic BiFeO$_{3}$. Within DMFT, the local impurity problem is\ntackled by exact diagonalization (ED) solver. For comparison, we also present\nresult from LDA+U approach, which is commonly used to compute physical\nproperties of this compound. Our LDA+DMFT derived spectra match adequately with\nthe experimental hard X-ray photoelectron spectroscopy (HAXPES) and resonant\nphotoelectron spectroscopy (RPES) for Fe 3$d$ states, whereas the other\ntheoretical method that we employed failed to capture the features of the\nmeasured spectra. Thus, our investigation shows the importance of accurately\nincorporating the dynamical aspects of electron-electron interaction among the\nFe 3$d$ orbitals in calculations to produce the experimental excitation\nspectra, which establishes BiFeO$_{3}$ as a strongly correlated electron\nsystem. The LDA+DMFT derived density of states (DOSs) exhibit significant\namount of Fe 3$d$ states at the energy of Bi lone-pairs, implying that the\nlatter is not as alone as previously thought in the spectral scenario. Our\nstudy also demonstrates that the combination of orbital cross-sections for the\nconstituent elements and broadening schemes for the calculated spectral\nfunction are pivotal to explain the detailed structures of the experimental\nspectra.\n", "title": "Dynamical correlations in the electronic structure of BiFeO$_{3}$, as revealed by dynamical mean field theory" }
null
null
null
null
true
null
6695
null
Default
null
null
null
{ "abstract": " In this work maximum entropy distributions in the space of steady states of\nmetabolic networks are defined upon constraining the first and second moment of\nthe growth rate. Inherent bistability of fast and slow phenotypes, akin to a\nVan-Der Waals picture, emerges upon considering control on the average growth\n(optimization/repression) and its fluctuations (heterogeneity). This is applied\nto the carbon catabolic core of E.coli where it agrees with some stylized facts\non the persisters phenotype and it provides a quantitative map with metabolic\nfluxes, opening for the possibility to detect coexistence from flux data.\nPreliminary analysis on data for E.Coli cultures in standard conditions shows,\non the other hand, degeneracy for the inferred parameters that extend in the\ncoexistence region.\n", "title": "A Van-Der-Waals picture for metabolic networks from MaxEnt modeling: inherent bistability and elusive coexistence" }
null
null
null
null
true
null
6696
null
Default
null
null
null
{ "abstract": " In this paper, we consider a linear regression model with AR(p) error terms\nwith the assumption that the error terms have a t distribution as a heavy\ntailed alternative to the normal distribution. We obtain the estimators for the\nmodel parameters by using the conditional maximum likelihood (CML) method. We\nconduct an iteratively reweighting algorithm (IRA) to find the estimates for\nthe parameters of interest. We provide a simulation study and three real data\nexamples to illustrate the performance of the proposed robust estimators based\non t distribution.\n", "title": "Robust Parameter Estimation of Regression Model with AR(p) Error Terms" }
null
null
null
null
true
null
6697
null
Default
null
null
null
{ "abstract": " The relative orientation between filamentary structures in molecular clouds\nand the ambient magnetic field provides insight into filament formation and\nstability. To calculate the relative orientation, a measurement of filament\norientation is first required. We propose a new method to calculate the\norientation of the one pixel wide filament skeleton that is output by filament\nidentification algorithms such as \\textsc{filfinder}. We derive the local\nfilament orientation from the direction of the intensity gradient in the\nskeleton image using the Sobel filter and a few simple post-processing steps.\nWe call this the `Sobel-gradient method'. The resulting filament orientation\nmap can be compared quantitatively on a local scale with the magnetic field\norientation map to then find the relative orientation of the filament with\nrespect to the magnetic field at each point along the filament. It can also be\nused in constructing radial profiles for filament width fitting. The proposed\nmethod facilitates automation in analysis of filament skeletons, which is\nimperative in this era of `big data'.\n", "title": "Measuring filament orientation: a new quantitative, local approach" }
null
null
[ "Physics" ]
null
true
null
6698
null
Validated
null
null
null
{ "abstract": " We report a combined theoretical/experimental study of dynamic screening of\nexcitons in media with frequency-dependent dielectric functions. We develop an\nanalytical model showing that interparticle interactions in an exciton are\nscreened in the range of frequencies from zero to the characteristic binding\nenergy depending on the symmetries and transition energies of that exciton. The\nproblem of the dynamic screening is then reduced to simply solving the\nSchrodinger equation with an effectively frequency-independent potential.\nQuantitative predictions of the model are experimentally verified using a test\nsystem: neutral, charged and defect-bound excitons in two-dimensional monolayer\nWS2, screened by metallic, liquid, and semiconducting environments. The\nscreening-induced shifts of the excitonic peaks in photoluminescence spectra\nare in good agreement with our model.\n", "title": "Controlled dynamic screening of excitonic complexes in 2D semiconductors" }
null
null
null
null
true
null
6699
null
Default
null
null
null
{ "abstract": " We study four problems in the dynamics of a body moving about a fixed point,\nproviding a non-complex, analytical solution for all of them. For the first\ntwo, we will work on the motion first integrals. For the symmetrical heavy\nbody, that is the Lagrange-Poisson case, we compute the second and third Euler\nangles in explicit and real forms by means of multiple hypergeometric functions\n(Lauricella, functions). Releasing the weight load but adding the complication\nof the asymmetry, by means of elliptic integrals of third kind, we provide the\nprecession angle completing some previous treatments of the Euler-Poinsot case.\nIntegrating then the relevant differential equation, we reach the finite polar\nequation of a special trajectory named the {\\it herpolhode}. In the last\nproblem we keep the symmetry of the first problem, but without the weight, and\ntake into account a viscous dissipation. The approach of first integrals is no\nlonger practicable in this situation and the Euler equations are faced directly\nleading to dumped goniometric functions obtained as particular occurrences of\nBessel functions of order $-1/2$.\n", "title": "Motions about a fixed point by hypergeometric functions: new non-complex analytical solutions and integration of the herpolhode" }
null
null
null
null
true
null
6700
null
Default
null
null