Dataset schema (one record per row):

text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1-5)
metadata: null
status: string (2 classes)
event_timestamp: null
metrics: null
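A record under this schema can be sketched in Python as follows. This is a minimal illustration only: the field names come from the schema above, while the `abstract`/`title` values and the consistency checks are assumptions for the example, not dataset entries.

```python
import json

# Minimal sketch of one record under the schema above.
# The abstract/title values here are placeholders, not actual dataset rows.
record_json = '''
{
  "text": null,
  "inputs": {"abstract": "Example abstract text.", "title": "Example Title"},
  "prediction": null,
  "prediction_agent": null,
  "annotation": ["Mathematics"],
  "annotation_agent": null,
  "multi_label": true,
  "explanation": null,
  "id": "3609",
  "metadata": null,
  "status": "Validated",
  "event_timestamp": null,
  "metrics": null
}
'''

record = json.loads(record_json)

# Basic consistency checks implied by the schema: id is a string of
# length 1-5, status takes one of the two observed classes, and
# multi_label is a boolean.
assert isinstance(record["id"], str) and 1 <= len(record["id"]) <= 5
assert record["status"] in {"Default", "Validated"}
assert isinstance(record["multi_label"], bool)
```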
text: null
{ "abstract": " We study the statistical properties of an estimator derived by applying a\ngradient ascent method with multiple initializations to a multi-modal\nlikelihood function. We derive the population quantity that is the target of\nthis estimator and study the properties of confidence intervals (CIs)\nconstructed from asymptotic normality and the bootstrap approach. In\nparticular, we analyze the coverage deficiency due to finite number of random\ninitializations. We also investigate the CIs by inverting the likelihood ratio\ntest, the score test, and the Wald test, and we show that the resulting CIs may\nbe very different. We provide a summary of the uncertainties that we need to\nconsider while making inference about the population. Note that we do not\nprovide a solution to the problem of multiple local maxima; instead, our goal\nis to investigate the effect from local maxima on the behavior of our\nestimator. In addition, we analyze the performance of the EM algorithm under\nrandom initializations and derive the coverage of a CI with a finite number of\ninitializations. Finally, we extend our analysis to a nonparametric mode\nhunting problem.\n", "title": "Statistical Inference with Local Optima" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3601", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " In this paper, we investigate the multi-variate sequence classification\nproblem from a multi-instance learning perspective. Real-world sequential data\ncommonly show discriminative patterns only at specific time periods. For\ninstance, we can identify a cropland during its growing season, but it looks\nsimilar to a barren land after harvest or before planting. Besides, even within\nthe same class, the discriminative patterns can appear in different periods of\nsequential data. Due to such property, these discriminative patterns are also\nreferred to as shifting patterns. The shifting patterns in sequential data\nseverely degrade the performance of traditional classification methods without\nsufficient training data.\nWe propose a novel sequence classification method by automatically mining\nshifting patterns from multi-variate sequence. The method employs a\nmulti-instance learning approach to detect shifting patterns while also\nmodeling temporal relationships within each multi-instance bag by an LSTM model\nto further improve the classification performance. We extensively evaluate our\nmethod on two real-world applications - cropland mapping and affective state\nrecognition. The experiments demonstrate the superiority of our proposed method\nin sequence classification performance and in detecting discriminative shifting\npatterns.\n", "title": "Discovery of Shifting Patterns in Sequence Classification" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3602", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " A virtual network (VN) contains a collection of virtual nodes and links\nassigned to underlying physical resources in a network substrate. VN migration\nis the process of remapping a VN's logical topology to a new set of physical\nresources to provide failure recovery, energy savings, or defense against\nattack. Providing VN migration that is transparent to running applications is a\nsignificant challenge. Efficient migration mechanisms are highly dependent on\nthe technology deployed in the physical substrate. Prior work has considered\nmigration in data centers and in the PlanetLab infrastructure. However, there\nhas been little effort targeting an SDN-enabled wide-area networking\nenvironment - an important building block of future networking infrastructure.\nIn this work, we are interested in the design, implementation and evaluation of\nVN migration in GENI as a working example of such a future network. We identify\nand propose techniques to address key challenges: the dynamic allocation of\nresources during migration, managing hosts connected to the VN, and flow table\nmigration sequences to minimize packet loss. We find that GENI's virtualization\narchitecture makes transparent and efficient migration challenging. We suggest\nalternatives that might be adopted in GENI and are worthy of adoption by\nvirtual network providers to facilitate migration.\n", "title": "Virtual Network Migration on the GENI Wide-Area SDN-Enabled Infrastructure" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3603", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " We propose a communicationally and computationally efficient algorithm for\nhigh-dimensional distributed sparse learning. At each iteration, local machines\ncompute the gradient on local data and the master machine solves one shifted\n$l_1$ regularized minimization problem. The communication cost is reduced from\nconstant times of the dimension number for the state-of-the-art algorithm to\nconstant times of the sparsity number via Two-way Truncation procedure.\nTheoretically, we prove that the estimation error of the proposed algorithm\ndecreases exponentially and matches that of the centralized method under mild\nassumptions. Extensive experiments on both simulated data and real data verify\nthat the proposed algorithm is efficient and has performance comparable with\nthe centralized method on solving high-dimensional sparse learning problems.\n", "title": "Communication-efficient Algorithm for Distributed Sparse Learning via Two-way Truncation" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3604", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " The extraction of natural gas from the earth has been shown to be governed by\ndifferential equations concerning flow through a porous material. Recently,\nmodels such as fractional differential equations have been developed to model\nthis phenomenon. One key issue with these models is estimating the fraction of\nthe differential equation. Traditional methods such as maximum likelihood,\nleast squares and even method of moments are not available to estimate this\nparameter as traditional calculus methods do not apply. We develop a Bayesian\napproach to estimate the fraction of the order of the differential equation\nthat models transport in unconventional hydrocarbon reservoirs. In this paper,\nwe use this approach to adequately quantify the uncertainties associated with\nthe error and predictions. A simulation study is presented as well to assess\nthe utility of the modeling approach.\n", "title": "A Bayesian Estimation for the Fractional Order of the Differential Equation that Models Transport in Unconventional Hydrocarbon Reservoirs" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3605", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " We present numerical simulations of magnetic billiards inside a convex domain\nin the plane.\n", "title": "Numerical simulations of magnetic billiards in a convex domain in $\\mathbb{R}^2$" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3606", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " A well-known drawback of l_1-penalized estimators is the systematic shrinkage\nof the large coefficients towards zero. A simple remedy is to treat Lasso as a\nmodel-selection procedure and to perform a second refitting step on the\nselected support. In this work we formalize the notion of refitting and provide\noracle bounds for arbitrary refitting procedures of the Lasso solution. One of\nthe most widely used refitting techniques which is based on Least-Squares may\nbring a problem of interpretability, since the signs of the refitted estimator\nmight be flipped with respect to the original estimator. This problem arises\nfrom the fact that the Least-Squares refitting considers only the support of\nthe Lasso solution, avoiding any information about signs or amplitudes. To this\nend we define a sign consistent refitting as an arbitrary refitting procedure,\npreserving the signs of the first step Lasso solution and provide Oracle\ninequalities for such estimators. Finally, we consider special refitting\nstrategies: Bregman Lasso and Boosted Lasso. Bregman Lasso has a fruitful\nproperty to converge to the Sign-Least-Squares refitting (Least-Squares with\nsign constraints), which provides greater interpretability. We\nadditionally study the Bregman Lasso refitting in the case of orthogonal\ndesign, providing simple intuition behind the proposed method. Boosted\nLasso, in contrast, considers information about magnitudes of the first Lasso\nstep and allows to develop better oracle rates for prediction. Finally, we\nconduct an extensive numerical study to show advantages of one approach over\nothers in different synthetic and semi-real scenarios.\n", "title": "On Lasso refitting strategies" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3607", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " Using supercharacter theory, we identify the matrices that are diagonalized\nby the discrete cosine and discrete sine transforms, respectively. Our method\naffords a combinatorial interpretation for the matrix entries.\n", "title": "Supercharacters and the discrete Fourier, cosine, and sine transforms" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3608", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " We study the asymptotic behavior of the Max $\\kappa$-cut on a family of\nsparse, inhomogeneous random graphs. In the large degree limit, the leading\nterm is a variational problem, involving the ground state of a constrained\ninhomogeneous Potts spin glass. We derive a Parisi type formula for the free\nenergy of this model, with possible constraints on the proportions, and derive\nthe limiting ground state energy by a suitable zero temperature limit.\n", "title": "A connection between MAX $κ$-CUT and the inhomogeneous Potts spin glass in the large degree limit" }
prediction: null, prediction_agent: null, annotation: [ "Mathematics" ], annotation_agent: null, multi_label: true, explanation: null, id: "3609", metadata: null, status: "Validated", event_timestamp: null, metrics: null
text: null
{ "abstract": " Topological data analysis is an emerging mathematical concept for\ncharacterizing shapes in multi-scale data. In this field, persistence diagrams\nare widely used as a descriptor of the input data, and can distinguish robust\nand noisy topological properties. Nowadays, it is highly desired to develop a\nstatistical framework on persistence diagrams to deal with practical data. This\npaper proposes a kernel method on persistence diagrams. A theoretical\ncontribution of our method is that the proposed kernel allows one to control\nthe effect of persistence, and, if necessary, noisy topological properties can\nbe discounted in data analysis. Furthermore, the method provides a fast\napproximation technique. The method is applied into several problems including\npractical data in physics, and the results show the advantage compared to the\nexisting kernel method on persistence diagrams.\n", "title": "Kernel method for persistence diagrams via kernel embedding and weight factor" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3610", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " The exponential-time hypothesis (ETH) states that 3-SAT is not solvable in\nsubexponential time, i.e. not solvable in O(c^n) time for arbitrary c > 1,\nwhere n denotes the number of variables. Problems like k-SAT can be viewed as\nspecial cases of the constraint satisfaction problem (CSP), which is the\nproblem of determining whether a set of constraints is satisfiable. In this\npaper we study the worst-case time complexity of NP-complete CSPs. Our main\ninterest is in the CSP problem parameterized by a constraint language Gamma\n(CSP(Gamma)), and how the choice of Gamma affects the time complexity. It is\nbelieved that CSP(Gamma) is either tractable or NP-complete, and the algebraic\nCSP dichotomy conjecture gives a sharp delineation of these two classes based\non algebraic properties of constraint languages. Under this conjecture and the\nETH, we first rule out the existence of subexponential algorithms for\nfinite-domain NP-complete CSP(Gamma) problems. This result also extends to\ncertain infinite-domain CSPs and structurally restricted CSP(Gamma) problems.\nWe then begin a study of the complexity of NP-complete CSPs where one is\nallowed to arbitrarily restrict the values of individual variables, which is a\nvery well-studied subclass of CSPs. For such CSPs with finite domain D, we\nidentify a relation SD such that (1) CSP({SD}) is NP-complete and (2) if\nCSP(Gamma) over D is NP-complete and solvable in O(c^n) time, then CSP({SD}) is\nsolvable in O(c^n) time, too. Hence, the time complexity of CSP({SD}) is a\nlower bound for all CSPs of this particular kind. We also prove that the\ncomplexity of CSP({SD}) is decreasing when |D| increases, unless the ETH is\nfalse. This implies, for instance, that for every c>1 there exists a\nfinite-domain Gamma such that CSP(Gamma) is NP-complete and solvable in O(c^n)\ntime.\n", "title": "Time Complexity of Constraint Satisfaction via Universal Algebra" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3611", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " We propose a structured low rank matrix completion algorithm to recover a\ntime series of images consisting of linear combination of exponential\nparameters at every pixel, from under-sampled Fourier measurements. The spatial\nsmoothness of these parameters is exploited along with the exponential\nstructure of the time series at every pixel, to derive an annihilation relation\nin the $k-t$ domain. This annihilation relation translates into a structured\nlow rank matrix formed from the $k-t$ samples. We demonstrate the algorithm in\nthe parameter mapping setting and show significant improvement over state of\nthe art methods.\n", "title": "Novel Structured Low-rank algorithm to recover spatially smooth exponential image time series" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3612", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " This article lays the foundations for the study of modular forms transforming\nwith respect to representations of Fuchsian groups of genus zero. More\nprecisely, we define geometrically weighted graded modules of such modular\nforms, where the graded structure comes from twisting with all isomorphism\nclasses of line bundles on the corresponding compactified modular curve, and we\nstudy their structure by relating it to the structure of vector bundles over\norbifold curves of genus zero. We prove that these modules are free whenever\nthe Fuchsian group has at most two elliptic points. For three or more elliptic\npoints, we give explicit constructions of indecomposable vector bundles of rank\ntwo over modular orbifold curves, which give rise to non-free modules of\ngeometrically weighted modular forms.\n", "title": "Vector bundles and modular forms for Fuchsian groups of genus zero" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3613", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " Research in UAV scheduling has obtained an emerging interest from scientists\nin the optimization field. When the scheduling itself has established a strong\nroot since the 19th century, works on UAV scheduling in indoor environment has\ncome forth in the latest decade. Several works on scheduling UAV operations in\nindoor (two and three dimensional) and outdoor environments are reported. In\nthis paper, a further study on UAV scheduling in three dimensional indoor\nenvironment is investigated. Dealing with indoor environment, where\nhumans, UAVs, and other elements or infrastructures are likely to coexist in\nthe same space, draws attention towards the safety of the operations.\nIn relation to the battery level, a preserved battery level leads to safer\noperations, promoting the UAV to have a decent remaining power level. A\nmethodology which consists of a heuristic approach based on Restful Task\nAssignment Algorithm, incorporated with Particle Swarm Optimization Algorithm,\nis proposed. The motivation is to preserve the battery level throughout the\noperations, which promotes less possibility in having failed UAVs on duty. This\nmethodology is tested with 54 benchmark datasets stressing on 4 different\naspects: geographical distance, number of tasks, number of predecessors, and\nslack time. The test results and their characteristics in regard to the\nproposed methodology are discussed and presented.\n", "title": "Indoor UAV scheduling with Restful Task Assignment Algorithm" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3614", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " We report temperature (T) dependence of dc magnetization, electrical\nresistivity (rho(T)), and heat-capacity of rare-earth (R) compounds, Gd3RuSn6\nand Tb3RuSn6, which are found to crystallize in the Yb3CoSn6-type orthorhombic\nstructure (space group: Cmcm). The results establish that there is an onset of\nantiferromagnetic order near (T_N) 19 and 25 K respectively. In addition, we\nfind that there is another magnetic transition for both the cases around 14 and\n17 K respectively. In the case of the Gd compound, the spin-scattering\ncontribution to rho is found to increase below 75 K as the material is cooled\ntowards T_N, thereby resulting in a minimum in the plot of rho(T) unexpected\nfor Gd based systems. Isothermal magnetization at 1.8 K reveals an upward\ncurvature around 50 kOe. Isothermal magnetoresistance plots show interesting\nanomalies in the magnetically ordered state. There are sign reversals in the\nplot of isothermal entropy change versus T in the magnetically ordered state,\nindicating subtle changes in the spin reorientation with T. The results reveal\nthat these compounds exhibit interesting magnetic properties.\n", "title": "Magnetic behavior of new compounds, Gd3RuSn6 and Tb3RuSn6" }
prediction: null, prediction_agent: null, annotation: [ "Physics" ], annotation_agent: null, multi_label: true, explanation: null, id: "3615", metadata: null, status: "Validated", event_timestamp: null, metrics: null
text: null
{ "abstract": " We develop an assume-guarantee contract framework for the design of\ncyber-physical systems, modeled as closed-loop control systems, under\nprobabilistic requirements. We use a variant of signal temporal logic, namely,\nStochastic Signal Temporal Logic (StSTL) to specify system behaviors as well as\ncontract assumptions and guarantees, thus enabling automatic reasoning about\nrequirements of stochastic systems. Given a stochastic linear system\nrepresentation and a set of requirements captured by bounded StSTL contracts,\nwe propose algorithms that can check contract compatibility, consistency, and\nrefinement, and generate a controller to guarantee that a contract is\nsatisfied, following a stochastic model predictive control approach. Our\nalgorithms leverage encodings of the verification and control synthesis tasks\ninto mixed integer optimization problems, and conservative approximations of\nprobabilistic constraints that produce both sound and tractable problem\nformulations. We illustrate the effectiveness of our approach on a few\nexamples, including the design of embedded controllers for aircraft power\ndistribution networks.\n", "title": "Stochastic Assume-Guarantee Contracts for Cyber-Physical System Design Under Probabilistic Requirements" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: "3616", metadata: null, status: "Validated", event_timestamp: null, metrics: null
text: null
{ "abstract": " This paper presents an active search trajectory synthesis technique for\nautonomous mobile robots with nonlinear measurements and dynamics. The\npresented approach uses the ergodicity of a planned trajectory with respect to\nan expected information density map to close the loop during search. The\nergodic control algorithm does not rely on discretization of the search or\naction spaces, and is well posed for coverage with respect to the expected\ninformation density whether the information is diffuse or localized, thus\ntrading off between exploration and exploitation in a single objective\nfunction. As a demonstration, we use a robotic electrolocation platform to\nestimate location and size parameters describing static targets in an\nunderwater environment. Our results demonstrate that the ergodic exploration of\ndistributed information (EEDI) algorithm outperforms commonly used\ninformation-oriented controllers, particularly when distractions are present.\n", "title": "Ergodic Exploration of Distributed Information" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: "3617", metadata: null, status: "Validated", event_timestamp: null, metrics: null
text: null
{ "abstract": " In recent years, the optical control of exchange interactions has emerged as\nan exciting new direction in the study of the ultrafast optical control of\nmagnetic order. Here we review recent theoretical works on antiferromagnetic\nsystems, devoted to i) simulating the ultrafast control of exchange\ninteractions, ii) modeling the strongly nonequilibrium response of the magnetic\norder and iii) the relation with relevant experimental works developed in\nparallel. In addition to the excitation of spin precession, we discuss examples\nof rapid cooling and the control of ultrafast coherent longitudinal spin\ndynamics in response to femtosecond optically induced perturbations of exchange\ninteractions. These elucidate the potential for exploiting the control of\nexchange interactions to find new scenarios for both faster and more\nenergy-efficient manipulation of magnetism.\n", "title": "Manipulating magnetism by ultrafast control of the exchange interaction" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3618", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " The predictive power of neural networks often costs model interpretability.\nSeveral techniques have been developed for explaining model outputs in terms of\ninput features; however, it is difficult to translate such interpretations into\nactionable insight. Here, we propose a framework to analyze predictions in\nterms of the model's internal features by inspecting information flow through\nthe network. Given a trained network and a test image, we select neurons by two\nmetrics, both measured over a set of images created by perturbations to the\ninput image: (1) magnitude of the correlation between the neuron activation and\nthe network output and (2) precision of the neuron activation. We show that the\nformer metric selects neurons that exert large influence over the network\noutput while the latter metric selects neurons that activate on generalizable\nfeatures. By comparing the sets of neurons selected by these two metrics, our\nframework suggests a way to investigate the internal attention mechanisms of\nconvolutional neural networks.\n", "title": "Towards Visual Explanations for Convolutional Neural Networks via Input Resampling" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3619", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " Tracking and controlling the shape of continuum dexterous manipulators (CDM)\nin constraint environments is a challenging task. The imposed constraints and\ninteraction with unknown obstacles may conform the CDM's shape and therefore\ndemands for shape sensing methods which do not rely on direct line of sight. To\naddress these issues, we integrate a novel Fiber Bragg Grating (FBG) shape\nsensing unit into a CDM, reconstruct the shape in real-time, and develop an\noptimization-based control algorithm using FBG tip position feedback. The CDM\nis designed for less-invasive treatment of osteolysis (bone degradation). To\nevaluate the performance of the feedback control algorithm when the CDM\ninteracts with obstacles, we perform a set of experiments similar to the real\nscenario of the CDM interaction with soft and hard lesions during the treatment\nof osteolysis. In addition, we propose methods for identification of the CDM\ncollisions with soft or hard obstacles using the jacobian information. Results\ndemonstrate successful control of the CDM tip based on the FBG feedback and\nindicate repeatability and robustness of the proposed method when interacting\nwith unknown obstacles.\n", "title": "FBG-Based Control of a Continuum Manipulator Interacting With Obstacles" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3620", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " In this paper we present some new results on the existence of solutions of\ngeneralized variational inequalities in real reflexive Banach spaces with\nFréchet differentiable norms. Moreover, we also give some theorems about the\nstructure of solution sets. The results obtained in this paper improve and\nextend the ones announced by Fang and Peterson [1] to infinite dimensional\nspaces.\n", "title": "Generalized variational inequalities for maximal monotone operators" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3621", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " The increasing accuracy of automatic chord estimation systems, the\navailability of vast amounts of heterogeneous reference annotations, and\ninsights from annotator subjectivity research make chord label personalization\nincreasingly important. Nevertheless, automatic chord estimation systems are\nhistorically exclusively trained and evaluated on a single reference\nannotation. We introduce a first approach to automatic chord label\npersonalization by modeling subjectivity through deep learning of a harmonic\ninterval-based chord label representation. After integrating these\nrepresentations from multiple annotators, we can accurately personalize chord\nlabels for individual annotators from a single model and the annotators' chord\nlabel vocabulary. Furthermore, we show that chord personalization using\nmultiple reference annotations outperforms using a single reference annotation.\n", "title": "Chord Label Personalization through Deep Learning of Integrated Harmonic Interval-based Representations" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3622", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " Blooms Taxonomy (BT) has been used to classify the objectives of learning\noutcome by dividing the learning into three different domains; the cognitive\ndomain, the affective domain and the psychomotor domain. In this paper, we are\nintroducing a new approach to classify the questions and learning outcome\nstatements (LOS) into Blooms taxonomy (BT) and to verify BT verb lists, which\nare being cited and used by academicians to write questions and (LOS). An\nexperiment was designed to investigate the semantic relationship between the\naction verbs used in both questions and LOS to obtain more accurate\nclassification of the levels of BT. A sample of 775 different action verbs\ncollected from different universities allows us to measure an accurate and\nclear-cut cognitive level for the action verb. It is worth mentioning that\nnatural language processing techniques were used to develop our rules as to\ninduce the questions into chunks in order to extract the action verbs. Our\nproposed solution was able to classify the action verb into a precise level of\nthe cognitive domain. We, on our side, have tested and evaluated our proposed\nsolution using confusion matrix. The results of evaluation tests yielded 97%\nfor the macro average of precision and 90% for F1. Thus, the outcome of the\nresearch suggests that it is crucial to analyse and verify the action verbs\ncited and used by academicians to write LOS and classify their questions based\non blooms taxonomy in order to obtain a definite and more accurate\nclassification.\n", "title": "Classification of Questions and Learning Outcome Statements (LOS) Into Blooms Taxonomy (BT) By Similarity Measurements Towards Extracting Of Learning Outcome from Learning Material" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: "3623", metadata: null, status: "Validated", event_timestamp: null, metrics: null
text: null
{ "abstract": " The need to test whether two random vectors are independent has spawned a\nlarge number of competing measures of dependence. We are interested in\nnonparametric measures that are invariant under strictly increasing\ntransformations, such as Kendall's tau, Hoeffding's D, and the more recently\ndiscovered Bergsma--Dassios sign covariance. Each of these measures exhibits\nsymmetries that are not readily apparent from their definitions. Making these\nsymmetries explicit, we define a new class of multivariate nonparametric\nmeasures of dependence that we refer to as Symmetric Rank Covariances. This new\nclass generalises all of the above measures and leads naturally to multivariate\nextensions of the Bergsma--Dassios sign covariance. Symmetric Rank Covariances\nmay be estimated unbiasedly using U-statistics for which we prove results on\ncomputational efficiency and large-sample behavior. The algorithms we develop\nfor their computation include, to the best of our knowledge, the first\nefficient algorithms for the well-known Hoeffding's D statistic in the\nmultivariate setting.\n", "title": "Symmetric Rank Covariances: a Generalised Framework for Nonparametric Measures of Dependence" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3624", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " We report measurements of the magnetoresistance in the charge density wave\n(CDW) state of rare-earth tritellurides, namely TbTe$_3$ and HoTe$_3$. The\nmagnetic field dependence of magnetoresistance exhibits a temperature dependent\ncrossover between a conventional quadratic law at high $T$ and low $B$ and an\nunusual linear dependence at low $T$ and high $B$. We present a quite general\nmodel to explain the linear magnetoresistance taking into account the strong\nscattering of quasiparticles on CDW fluctuations in the vicinity of \"hot spots\"\nof the Fermi surface (FS) where the FS reconstruction is the strongest.\n", "title": "Linear magnetoresistance in the charge density wave state of quasi-two-dimensional rare-earth tritellurides" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3625", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " This paper introduces two classes of totally real quartic number fields, one\nof biquadratic extensions and one of cyclic extensions, each of which has a\nnon-principal Euclidean ideal. It generalizes techniques of Graves used to\nprove that the number field $\\mathbb{Q}(\\sqrt{2},\\sqrt{35})$ has a\nnon-principal Euclidean ideal.\n", "title": "Two classes of number fields with a non-principal Euclidean ideal" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3626", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " We present some elementary but foundational results concerning\ndiamond-colored modular and distributive lattices and connect these structures\nto certain one-player combinatorial \"move-minimizing games,\" in particular, a\nso-called \"domino game.\" The objective of this game is to find, if possible,\nthe least number of \"domino moves\" to get from one partition to another, where\na domino move is, with one exception, the addition or removal of a\ndomino-shaped pair of tiles. We solve this domino game by demonstrating the\nsomewhat surprising fact that the associated \"game graphs\" coincide with a\nwell-known family of diamond-colored distributive lattices which shall be\nreferred to as the \"type $\\mathsf{A}$ fundamental lattices.\" These lattices\narise as supporting graphs for the fundamental representations of the special\nlinear Lie algebras and as splitting posets for type $\\mathsf{A}$ fundamental\nsymmetric functions, connections which are further explored in sequel papers\nfor types $\\mathsf{A}$, $\\mathsf{C}$, and $\\mathsf{B}$. In this paper, this\nconnection affords a solution to the proposed domino game as well as new\ndescriptions of the type $\\mathsf{A}$ fundamental lattices.\n", "title": "Diamond-colored distributive lattices, move-minimizing games, and fundamental Weyl symmetric functions: The type $\\mathsf{A}$ case" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: "3627", metadata: null, status: "Default", event_timestamp: null, metrics: null
text: null
{ "abstract": " A hidden truncation hyperbolic (HTH) distribution is introduced and finite\nmixtures thereof are applied for clustering. A stochastic representation of the\nHTH distribution is given and a density is derived. A hierarchical\nrepresentation is described, which aids in parameter estimation. Finite\nmixtures of HTH distributions are presented and their identifiability is\nproved. The convexity of the HTH distribution is discussed, which is important\nin clustering applications, and some theoretical results in this direction are\npresented. The relationship between the HTH distribution and other skewed\ndistributions in the literature is discussed. Illustrations are provided ---\nboth of the HTH distribution and application of finite mixtures thereof for\nclustering.\n", "title": "Hidden Truncation Hyperbolic Distributions, Finite Mixtures Thereof, and Their Application for Clustering" }
null
null
null
null
true
null
3628
null
Default
null
null
null
{ "abstract": " We reinvestigate a claimed sample of 22 X-ray detected active galactic nuclei\n(AGN) at redshifts z > 4, which has reignited the debate as to whether young\ngalaxies or AGN reionized the Universe. These sources lie within the\nGOODS-S/CANDELS field, and we examine both the robustness of the claimed X-ray\ndetections (within the Chandra 4Ms imaging) and perform an independent analysis\nof the photometric redshifts of the optical/infrared counterparts. We confirm\nthe reality of only 15 of the 22 reported X-ray detections, and moreover find\nthat only 12 of the 22 optical/infrared counterpart galaxies actually lie\nrobustly at z > 4. Combining these results we find convincing evidence for only\n7 X-ray AGN at z > 4 in the GOODS-S field, of which only one lies at z > 5. We\nrecalculate the evolving far-UV (1500 Angstrom) luminosity density produced by\nAGN at high redshift, and find that it declines rapidly from z = 4 to z = 6, in\nagreement with several other recent studies of the evolving AGN luminosity\nfunction. The associated rapid decline in inferred hydrogen-ionizing emissivity\ncontributed by AGN falls an order-of-magnitude short of the level required to\nmaintain hydrogen ionization at z ~ 6. We conclude that all available evidence\ncontinues to favour a scenario in which young galaxies reionized the Universe,\nwith AGN making, at most, a very minor contribution to cosmic hydrogen\nreionization.\n", "title": "No evidence for a significant AGN contribution to cosmic hydrogen reionization" }
null
null
null
null
true
null
3629
null
Default
null
null
null
{ "abstract": " We present a new approach to testing file-system crash consistency: bounded\nblack-box crash testing (B3). B3 tests the file system in a black-box manner\nusing workloads of file-system operations. Since the space of possible\nworkloads is infinite, B3 bounds this space based on parameters such as the\nnumber of file-system operations or which operations to include, and\nexhaustively generates workloads within this bounded space. Each workload is\ntested on the target file system by simulating power-loss crashes while the\nworkload is being executed, and checking if the file system recovers to a\ncorrect state after each crash. B3 builds upon insights derived from our study\nof crash-consistency bugs reported in Linux file systems in the last five\nyears. We observed that most reported bugs can be reproduced using small\nworkloads of three or fewer file-system operations on a newly-created file\nsystem, and that all reported bugs result from crashes after fsync() related\nsystem calls. We build two tools, CrashMonkey and ACE, to demonstrate the\neffectiveness of this approach. Our tools are able to find 24 out of the 26\ncrash-consistency bugs reported in the last five years. Our tools also revealed\n10 new crash-consistency bugs in widely-used, mature Linux file systems, seven\nof which existed in the kernel since 2014. Our tools also found a\ncrash-consistency bug in a verified file system, FSCQ. The new bugs result in\nsevere consequences like broken rename atomicity and loss of persisted files.\n", "title": "Finding Crash-Consistency Bugs with Bounded Black-Box Crash Testing" }
null
null
null
null
true
null
3630
null
Default
null
null
null
{ "abstract": " In this paper we construct entire solutions to the Cahn-Hilliard equation\n$-\\Delta(-\\Delta u+W^{'}(u))+W^{\"}(u)(-\\Delta u+W^{'}(u))=0$ in the Euclidean\nplane, where $W(u)$ is the standard double-well potential $\\frac{1}{4}\n(1-u^2)^2$. Such solutions have a non-trivial profile that shadows a Willmore\nplanar curve, and converge uniformly to $\\pm 1$ as $x_2 \\to \\pm \\infty$. These\nsolutions give a counterexample to the counterpart of Gibbons' conjecture for\nthe fourth-order counterpart of the Allen-Cahn equation. We also study the\n$x_2$-derivative of these solutions using the special structure of Willmore's\nequation.\n", "title": "Periodic solutions to the Cahn-Hilliard equation in the plane" }
null
null
null
null
true
null
3631
null
Default
null
null
null
{ "abstract": " The wave turbulence equation is an effective kinetic equation that describes
the dynamics of the wave spectrum in weakly nonlinear and dispersive media. Such a
kinetic model has been derived by physicists in the sixties, though the
well-posedness theory remains open, due to the complexity of resonant
interaction kernels. In this paper, we provide a global unique radial strong
solution, the first such result, to the wave turbulence equation for
capillary waves.
", "title": "On the kinetic equation in Zakharov's wave turbulence theory for capillary waves" }
null
null
null
null
true
null
3632
null
Default
null
null
null
{ "abstract": " Multi-label text classification is a popular machine learning task where each
document is assigned multiple relevant labels. This task is challenging
due to high dimensional features and correlated labels. Multi-label text
classifiers need to be carefully regularized to prevent the severe over-fitting
in the high dimensional space, and also need to take into account label
dependencies in order to make accurate predictions under uncertainty. We
demonstrate significant and practical improvement by carefully regularizing the
model complexity during training phase, and also regularizing the label search
space during prediction phase. Specifically, we regularize the classifier
training using Elastic-net (L1+L2) penalty for reducing model complexity/size,
and employ early stopping to prevent overfitting. At prediction time, we apply
support inference to restrict the search space to label sets encountered in the
training set, and F-optimizer GFM to make optimal predictions for the F1
metric. We show that although support inference only provides density
estimations on existing label combinations, when combined with GFM predictor,
the algorithm can output unseen label combinations. Taken collectively, our
experiments show state-of-the-art results on many benchmark datasets. Beyond
performance and practical contributions, we make some interesting observations.
Contrary to prior belief, which deems support inference purely an
approximate inference procedure, we show that support inference acts as a
strong regularizer on the label prediction structure. It allows the classifier
to take into account label dependencies during prediction even if the
classifiers had not modeled any label dependencies during training.
", "title": "Regularizing Model Complexity and Label Structure for Multi-Label Text Classification" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3633
null
Validated
null
null
null
{ "abstract": " We consider a network design problem with random arc capacities and give a\nformulation with a probabilistic capacity constraint on each cut of the\nnetwork. To handle the exponentially-many probabilistic constraints a\nseparation procedure that solves a nonlinear minimum cut problem is introduced.\nFor the case with independent arc capacities, we exploit the supermodularity of\nthe set function defining the constraints and generate cutting planes based on\nthe supermodular covering knapsack polytope. For the general correlated case,\nwe give a reformulation of the constraints that allows to uncover and utilize\nthe submodularity of a related function. The computational results indicate\nthat exploiting the underlying submodularity and supermodularity arising with\nthe probabilistic constraints provides significant advantages over the\nclassical approaches.\n", "title": "Network Design with Probabilistic Capacities" }
null
null
[ "Mathematics" ]
null
true
null
3634
null
Validated
null
null
null
{ "abstract": " Consider an infinite chain of masses, each connected to its nearest neighbors
by a (nonlinear) spring. This is a Fermi-Pasta-Ulam-Tsingou lattice. We prove
the existence of traveling waves in the setting where the masses alternate in
size. In particular we address the limit where the mass ratio tends to zero.
The problem is inherently singular and we find that the traveling waves are not
true solitary waves but rather \"nanopterons\", which is to say, waves which
are asymptotic at spatial infinity to very small amplitude periodic waves.
Moreover, we can only find solutions when the mass ratio lies in a certain open
set. The difficulties in the problem all revolve around understanding Jost
solutions of a nonlocal Schrödinger operator in its semi-classical limit.
", "title": "Nanopteron solutions of diatomic Fermi-Pasta-Ulam-Tsingou lattices with small mass-ratio" }
null
null
[ "Mathematics" ]
null
true
null
3635
null
Validated
null
null
null
{ "abstract": " The general procedure underlying Hartree-Fock and Kohn-Sham density\nfunctional theory calculations consists in optimizing orbitals for a\nself-consistent solution of the Roothaan-Hall equations in an iterative\nprocess. It is often ignored that multiple self-consistent solutions can exist,\nseveral of which may correspond to minima of the energy functional. In addition\nto the difficulty sometimes encountered to converge the calculation to a\nself-consistent solution, one must ensure that the correct self-consistent\nsolution was found, typically the one with the lowest electronic energy.\nConvergence to an unwanted solution is in general not trivial to detect and\nwill deliver incorrect energy and molecular properties, and accordingly a\nmisleading description of chemical reactivity. Wrong conclusions based on\nincorrect self-consistent field convergence are particularly cumbersome in\nautomated calculations met in high-throughput virtual screening, structure\noptimizations, ab initio molecular dynamics, and in real-time explorations of\nchemical reactivity, where the vast amount of data can hardly be manually\ninspected. Here, we introduce a fast and automated approach to detect and cure\nincorrect orbital convergence, which is especially suited for electronic\nstructure calculations on sequences of molecular structures. Our approach\nconsists of a randomized perturbation of the converged electron density\n(matrix) intended to push orbital convergence to solutions that correspond to\nanother stationary point (of potentially lower electronic energy) in the\nvariational parameter space of an electronic wave function approximation.\n", "title": "Steering Orbital Optimization out of Local Minima and Saddle Points Toward Lower Energy" }
null
null
null
null
true
null
3636
null
Default
null
null
null
{ "abstract": " Recent studies suggest that the quenching properties of galaxies are\ncorrelated over several mega-parsecs. The large-scale \"galactic conformity\"\nphenomenon around central galaxies has been regarded as a potential signature\nof \"galaxy assembly bias\" or \"pre-heating\", both of which interpret conformity\nas a result of direct environmental effects acting on galaxy formation.\nBuilding on the iHOD halo quenching framework developed in Zu & Mandelbaum\n(2015, 2016), we discover that our fiducial halo mass quenching model, without\nany galaxy assembly bias, can successfully explain the overall environmental\ndependence and the conformity of galaxy colours in SDSS, as measured by the\nmark correlation functions of galaxy colours and the red galaxy fractions\naround isolated primaries, respectively. Our fiducial iHOD halo quenching mock\nalso correctly predicts the differences in the spatial clustering and\ngalaxy-galaxy lensing signals between the more vs. less red galaxy subsamples,\nsplit by the red-sequence ridge-line at fixed stellar mass. Meanwhile, models\nthat tie galaxy colours fully or partially to halo assembly bias have\ndifficulties in matching all these observables simultaneously. Therefore, we\ndemonstrate that the observed environmental dependence of galaxy colours can be\nnaturally explained by the combination of 1) halo quenching and 2) the\nvariation of halo mass function with environment --- an indirect environmental\neffect mediated by two separate physical processes.\n", "title": "Mapping stellar content to dark matter halos - III.Environmental dependence and conformity of galaxy colours" }
null
null
null
null
true
null
3637
null
Default
null
null
null
{ "abstract": " In young starburst galaxies, the X-ray population is expected to be dominated\nby the relics of the most massive and short-lived stars, black-hole and\nneutron-star high mass X-ray binaries (XRBs). In the closest such galaxy, IC\n10, we have made a multi-wavelength census of these objects. Employing a novel\nstatistical correlation technique, we have matched our list of 110 X-ray point\nsources, derived from a decade of Chandra observations, against published\nphotometric data. We report an 8 sigma correlation between the celestial\ncoordinates of the two catalogs, with 42 X-ray sources having an optical\ncounterpart. Applying an optical color-magnitude selection to isolate blue\nsupergiant (SG) stars in IC 10, we find 16 matches. Both cases show a\nstatistically significant overabundance versus the expectation value for chance\nalignments. The blue objects also exhibit systematically higher fx/fv ratios\nthan other stars in the same magnitude range. Blue SG-XRBs include a major\nclass of progenitors of double-degenerate binaries, hence their numbers are an\nimportant factor in modeling the rate of gravitational wave sources. We suggest\nthat the anomalous features of the IC 10 stellar population are explained if\nthe age of the IC 10 starburst is close to the time of the peak of interaction\nfor massive binaries.\n", "title": "Blue Supergiant X-ray Binaries in the Nearby Dwarf Galaxy IC 10" }
null
null
[ "Physics" ]
null
true
null
3638
null
Validated
null
null
null
{ "abstract": " In this paper, we present a concolic execution technique for detecting SQL
injection vulnerabilities in Android apps, with a new tool we call
ConsiDroid. We extend the source code of apps with a mocking technique, such that
the execution of original source code is not affected. The extended source
codes can be treated as Java applications and may be executed by SPF with
concolic execution. We automatically produce a DummyMain class out of static
analysis such that the essential functions are called sequentially and the
events leading to vulnerable functions are triggered. We extend SPF with taint
analysis in ConsiDroid. For making taint analysis possible, we introduce a new
technique of symbolic mock classes in order to ease the propagation of tainted
values in the code. An SQL injection vulnerability is detected through
receiving a tainted value by a vulnerable function. Besides, ConsiDroid takes
advantage of static analysis to adjust SPF in order to inspect only suspicious
paths. To illustrate the applicability of ConsiDroid, we have inspected
140 randomly selected apps from the F-Droid repository. From these apps, we found
three apps vulnerable to SQL injection. To verify their vulnerability, we
analyzed the apps manually based on ConsiDroid's reports by using Robolectric.
", "title": "ConsiDroid: A Concolic-based Tool for Detecting SQL Injection Vulnerability in Android Apps" }
null
null
null
null
true
null
3639
null
Default
null
null
null
{ "abstract": " Innovation and entrepreneurship have a very special role to play in creating
sustainable development in the world, and engineering design plays a major role
in innovation. These are not new facts. However, they are compounded by the
fact that knowledge today appears to increase at an exponential rate, doubling
every few months. This creates a need for newer methods of innovation, with
very little scope to fall short of customer expectations. In terms of
reliable design, system design tools and methodologies have been very
helpful and have been in use in most engineering industries for decades now,
but traditional system design is rigorous and rigid. We therefore need an
innovation system that is rigorous and flexible at the same time. We
take our inspiration from the biosphere, where some of the most rugged yet
flexible plants are creepers, which grow to create a mesh. In this thematic
paper we explain our approach to system engineering, which we call MeMo (Mesh
Model), that fuses the rigor of system engineering with the flexibility of
agile methods to create a scheme that can give rise to reliable innovation in
today's high-risk market.
", "title": "Mesh Model (MeMo): A Systematic Approach to Agile System Engineering" }
null
null
null
null
true
null
3640
null
Default
null
null
null
{ "abstract": " The exact law for fully developed homogeneous compressible\nmagnetohydrodynamics (CMHD) turbulence is derived. For an isothermal plasma,\nwithout the assumption of isotropy, the exact law is expressed as a function of\nthe plasma velocity field, the compressible Alfvén velocity and the scalar\ndensity, instead of the Elsässer variables used in previous works. The\ntheoretical results show four different types of terms that are involved in the\nnonlinear cascade of the total energy in the inertial range. Each category is\nexamined in detail, in particular those that can be written either as source or\nflux terms. Finally, the role of the background magnetic field $B_0$ is\nhighlighted and comparison with the incompressible MHD (IMHD) model is\ndiscussed. This point is particularly important when testing the exact law on\nnumerical simulations and in situ observations in space plasmas.\n", "title": "Alternative derivation of exact law for compressible and isothermal magnetohydrodynamics turbulence" }
null
null
null
null
true
null
3641
null
Default
null
null
null
{ "abstract": " CMS-HF Calorimeters have been undergoing a major upgrade for the last couple
of years to alleviate the problems encountered during Run I, especially in the
PMT and the readout systems. In this poster, the problems caused by the old
PMTs installed in the detectors and their solutions will be explained.
Initially, regular PMTs with thicker windows, causing large Cherenkov
radiation, were used. Instead of the light coming through the fibers from the
detector, stray muons passing through the PMT itself produce Cherenkov
radiation in the PMT window, resulting in erroneously large signals. Usually,
large signals are the result of very high-energy particles in the calorimeter
and are tagged as important. As a result, these so-called window events
generate false triggers. Four-anode PMTs with thinner windows were selected to
reduce these window events. Additional channels also help eliminate such
remaining events through algorithms comparing the output of different PMT
channels. During the EYETS 16/17 period in the LHC operations, the final
components of the modifications to the readout system, namely the two-channel
front-end electronics cards, were installed. The complete upgrade of the HF
Calorimeter, including the preparations for Run II, will be discussed in
this poster, with possible effects on the eventual data taking.
", "title": "CMS-HF Calorimeter Upgrade for Run II" }
null
null
null
null
true
null
3642
null
Default
null
null
null
{ "abstract": " The ambitious goals of precision cosmology with wide-field optical surveys
such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope
(LSST) demand, as their foundation, precision CCD astronomy. This in turn
requires an understanding of previously uncharacterized sources of systematic
error in CCD sensors, many of which manifest themselves as static effective
variations in pixel area. Such variation renders a critical assumption behind
the traditional procedure of flat fielding--that a sensor's pixels comprise a
uniform grid--invalid. In this work, we present a method to infer a curl-free
model of a sensor's underlying pixel grid from flat field images, incorporating
the superposition of all electrostatic sensor effects--both known and
unknown--present in flat field data. We use these pixel grid models to estimate
the overall impact of sensor systematics on photometry, astrometry, and PSF
shape measurements in a representative sensor from the Dark Energy Camera
(DECam) and a prototype LSST sensor. Applying the method to DECam data recovers
known significant sensor effects for which corrections are currently being
developed within DES. For an LSST prototype CCD with pixel-response
non-uniformity (PRNU) of 0.4%, we find the impact of \"improper\" flat-fielding
on these observables is negligible in nominal 0.7\" seeing conditions. These
errors scale linearly with the PRNU, so for future LSST production sensors,
which may have larger PRNU, our method provides a way to assess whether
pixel-level calibration beyond flat fielding will be required.
", "title": "Is Flat Fielding Safe for Precision CCD Astronomy?" }
null
null
[ "Physics" ]
null
true
null
3643
null
Validated
null
null
null
{ "abstract": " We present results from a non-cosmological, three-dimensional hydrodynamical\nsimulation of the gas in the dwarf spheroidal galaxy Ursa Minor. Assuming an\ninitial baryonic-to-dark-matter ratio derived from the cosmic microwave\nbackground radiation, we evolved the galactic gas distribution over 3 Gyr,\ntaking into account the effects of the types Ia and II supernovae. For the\nfirst time, we used in our simulation the instantaneous supernovae rates\nderived from a chemical evolution model applied to spectroscopic observational\ndata of Ursa Minor. We show that the amount of gas that is lost in this process\nis variable with time and radius, being the highest rates observed during the\ninitial 600 Myr in our simulation. Our results indicate that types Ia and II\nsupernovae must be essential drivers of the gas loss in Ursa Minor galaxy (and\nprobably in other similar dwarf galaxies), but it is ultimately the combination\nof galactic winds powered by these supernovae and environmental effects (e.g.,\nram-pressure stripping) that results in the complete removal of the gas\ncontent.\n", "title": "Gas removal in the Ursa Minor galaxy: linking hydrodynamics and chemical evolution models" }
null
null
null
null
true
null
3644
null
Default
null
null
null
{ "abstract": " Causal inference in multivariate time series is challenging due to the fact\nthat the sampling rate may not be as fast as the timescale of the causal\ninteractions. In this context, we can view our observed series as a subsampled\nversion of the desired series. Furthermore, due to technological and other\nlimitations, series may be observed at different sampling rates, representing a\nmixed frequency setting. To determine instantaneous and lagged effects between\ntime series at the true causal scale, we take a model-based approach based on\nstructural vector autoregressive (SVAR) models. In this context, we present a\nunifying framework for parameter identifiability and estimation under both\nsubsampling and mixed frequencies when the noise, or shocks, are non-Gaussian.\nImportantly, by studying the SVAR case, we are able to both provide\nidentifiability and estimation methods for the causal structure of both lagged\nand instantaneous effects at the desired time scale. We further derive an exact\nEM algorithm for inference in both subsampled and mixed frequency settings. We\nvalidate our approach in simulated scenarios and on two real world data sets.\n", "title": "Identifiability and Estimation of Structural Vector Autoregressive Models for Subsampled and Mixed Frequency Time Series" }
null
null
null
null
true
null
3645
null
Default
null
null
null
{ "abstract": " Differential privacy promises to enable general data analytics while\nprotecting individual privacy, but existing differential privacy mechanisms do\nnot support the wide variety of features and databases used in real-world\nSQL-based analytics systems.\nThis paper presents the first practical approach for differential privacy of\nSQL queries. Using 8.1 million real-world queries, we conduct an empirical\nstudy to determine the requirements for practical differential privacy, and\ndiscuss limitations of previous approaches in light of these requirements. To\nmeet these requirements we propose elastic sensitivity, a novel method for\napproximating the local sensitivity of queries with general equijoins. We prove\nthat elastic sensitivity is an upper bound on local sensitivity and can\ntherefore be used to enforce differential privacy using any local\nsensitivity-based mechanism.\nWe build FLEX, a practical end-to-end system to enforce differential privacy\nfor SQL queries using elastic sensitivity. We demonstrate that FLEX is\ncompatible with any existing database, can enforce differential privacy for\nreal-world SQL queries, and incurs negligible (0.03%) performance overhead.\n", "title": "Towards Practical Differential Privacy for SQL Queries" }
null
null
null
null
true
null
3646
null
Default
null
null
null
{ "abstract": " Industrie 4.0 is originally a future vision described in the high-tech\nstrategy of the German government that is conceived upon the information and\ncommunication technologies like Cyber-Physical Systems, Internet of Things,\nPhysical Internet and Internet of Services to achieve a high degree of\nflexibility in production, higher productivity rates through real-time\nmonitoring and diagnosis, and a lower wastage rate of material in production.\nAn important part of the tasks in the preparation for Industrie 4.0 is the\nadaption of the higher education to the requirements of this vision, in\nparticular the engineering education. In this work, we introduce a road map\nconsisting of three pillars describing the changes/enhancements to be conducted\nin the areas of curriculum development, lab concept, and student club\nactivities. We also report our current application of this road map at the\nTurkish-German University, Istanbul.\n", "title": "Adapting Engineering Education to Industrie 4.0 Vision" }
null
null
null
null
true
null
3647
null
Default
null
null
null
{ "abstract": " Future wireless systems are expected to provide a wide range of services to\nmore and more users. Advanced scheduling strategies thus arise not only to\nperform efficient radio resource management, but also to provide fairness among\nthe users. On the other hand, the users' perceived quality, i.e., Quality of\nExperience (QoE), is becoming one of the main drivers within the schedulers\ndesign. In this context, this paper starts by providing a comprehension of what\nis QoE and an overview of the evolution of wireless scheduling techniques.\nAfterwards, a survey on the most recent QoE-based scheduling strategies for\nwireless systems is presented, highlighting the application/service of the\ndifferent approaches reported in the literature, as well as the parameters that\nwere taken into account for QoE optimization. Therefore, this paper aims at\nhelping readers interested in learning the basic concepts of QoE-oriented\nwireless resources scheduling, as well as getting in touch with the present\ntime research frontier.\n", "title": "A Survey on QoE-oriented Wireless Resources Scheduling" }
null
null
null
null
true
null
3648
null
Default
null
null
null
{ "abstract": " We show that there is no C^{k+1} diffeomorphism of the k-torus which is\nsemiconjugate to a minimal translation and has a wandering domain all of whose\niterates are Euclidean balls.\n", "title": "Wandering domains for diffeomorphisms of the k-torus: a remark on a theorem by Norton and Sullivan" }
null
null
null
null
true
null
3649
null
Default
null
null
null
{ "abstract": " We provide excess risk guarantees for statistical learning in the presence of\nan unknown nuisance component. We analyze a two-stage sample splitting\nmeta-algorithm that takes as input two arbitrary estimation algorithms: one for\nthe target model and one for the nuisance model. We show that if the population\nrisk satisfies a condition called Neyman orthogonality, the impact of the first\nstage error on the excess risk bound achieved by the meta-algorithm is of\nsecond order. Our general theorem is agnostic to the particular algorithms used\nfor the target and nuisance and only makes an assumption on their individual\nperformance. This enables the use of a plethora of existing results from\nstatistical learning and machine learning literature to give new guarantees for\nlearning with a nuisance component. Moreover, by focusing on excess risk rather\nthan parameter estimation, we can give guarantees under weaker assumptions than\nin previous works and accommodate the case where the target parameter belongs\nto a complex nonparametric class. When the nuisance and target parameters\nbelong to arbitrary classes, we characterize conditions on the metric entropy\nsuch that oracle rates---rates of the same order as if we knew the nuisance\nmodel---are achieved. We also analyze the rates achieved by specific estimation\nalgorithms such as variance-penalized empirical risk minimization, neural\nnetwork estimation and sparse high-dimensional linear model estimation. We\nhighlight the applicability of our results via four applications of primary\nimportance: 1) heterogeneous treatment effect estimation, 2) offline policy\noptimization, 3) domain adaptation, and 4) learning with missing data.\n", "title": "Orthogonal Statistical Learning" }
null
null
null
null
true
null
3650
null
Default
null
null
null
{ "abstract": " We study the inference of a model of dynamic networks in which both\ncommunities and links keep memory of previous network states. By considering\nmaximum likelihood inference from single snapshot observations of the network,\nwe show that link persistence makes the inference of communities harder,\ndecreasing the detectability threshold, while community persistence tends to\nmake it easier. We analytically show that communities inferred from single\nnetwork snapshot can share a maximum overlap with the underlying communities of\na specific previous instant in time. This leads to time-lagged inference: the\nidentification of past communities rather than present ones. Finally we compute\nthe time lag and propose a corrected algorithm, the Lagged Snapshot Dynamic\n(LSD) algorithm, for community detection in dynamic networks. We analytically\nand numerically characterize the detectability transitions of such algorithm as\na function of the memory parameters of the model and we make a comparison with\na full dynamic inference.\n", "title": "Disentangling group and link persistence in Dynamic Stochastic Block models" }
null
null
null
null
true
null
3651
null
Default
null
null
null
{ "abstract": " Available possibilities to prevent a biped robot from falling down in the
presence of severe disturbances are mainly Center of Pressure (CoP)
modulation, step location and timing adjustment, and angular momentum
regulation. In this paper, we aim at designing a walking pattern generator
which employs an optimal combination of these tools to generate robust gaits.
In this approach, first, the next step location and timing are decided
consistent with the commanded walking velocity and based on the Divergent
Component of Motion (DCM) measurement. This stage, which is done by a very
small-size Quadratic Program (QP), uses the Linear Inverted Pendulum Model
(LIPM) dynamics to adapt the switching contact location and time. Then,
consistent with the first stage, the LIPM with flywheel dynamics is used to
regenerate the DCM and angular momentum trajectories at each control cycle.
This is done by modulating the CoP and Centroidal Momentum Pivot (CMP) to
realize a desired DCM at the end of the current step. Simulation results show the
merit of this reactive approach in generating robust and dynamically
consistent walking patterns.
", "title": "A Reactive and Efficient Walking Pattern Generator for Robust Bipedal Locomotion" }
null
null
null
null
true
null
3652
null
Default
null
null
null
{ "abstract": " A key challenge for modern Bayesian statistics is how to perform scalable\ninference of posterior distributions. To address this challenge, variational\nBayes (VB) methods have emerged as a popular alternative to the classical\nMarkov chain Monte Carlo (MCMC) methods. VB methods tend to be faster while\nachieving comparable predictive performance. However, there are few theoretical\nresults around VB. In this paper, we establish frequentist consistency and\nasymptotic normality of VB methods. Specifically, we connect VB methods to\npoint estimates based on variational approximations, called frequentist\nvariational approximations, and we use the connection to prove a variational\nBernstein-von Mises theorem. The theorem leverages the theoretical\ncharacterizations of frequentist variational approximations to understand\nasymptotic properties of VB. In summary, we prove that (1) the VB posterior\nconverges to the Kullback-Leibler (KL) minimizer of a normal distribution,\ncentered at the truth and (2) the corresponding variational expectation of the\nparameter is consistent and asymptotically normal. As applications of the\ntheorem, we derive asymptotic properties of VB posteriors in Bayesian mixture\nmodels, Bayesian generalized linear mixed models, and Bayesian stochastic block\nmodels. We conduct a simulation study to illustrate these theoretical results.\n", "title": "Frequentist Consistency of Variational Bayes" }
null
null
null
null
true
null
3653
null
Default
null
null
null
{ "abstract": " A method for determining quantum variance asymptotics on compact quotients\nattached to non-split quaternion algebras is developed in general and applied\nto \"microlocal lifts\" in the non-archimedean setting. The results obtained are\nin the spirit of recent work of Sarnak--Zhao.\nThe arguments involve a careful analytic study of the theta correspondence,\nthe interplay between additive and multiplicative harmonic analysis on\nquaternion algebras, the equidistribution of translates of elementary theta\nfunctions, and the Rallis inner product formula.\n", "title": "Quantum variance on quaternion algebras, II" }
null
null
null
null
true
null
3654
null
Default
null
null
null
{ "abstract": " Virtual heart models have been proposed to enhance the safety of implantable\ncardiac devices through closed loop validation. To communicate with a virtual\nheart, devices have been driven by cardiac signals at specific sites. As a\nresult, only the action potentials of these sites are sensed. However, the real\ndevice implanted in the heart will sense a complex combination of near and\nfar-field extracellular potential signals. Therefore many device functions,\nsuch as blanking periods and refractory periods, are designed to handle these\nunexpected signals. To represent these signals, we develop an intracardiac\nelectrogram (IEGM) model as an interface between the virtual heart and the\ndevice. The model can capture not only the local excitation but also far-field\nsignals and pacing afterpotentials. Moreover, the sensing controller can\nspecify unipolar or bipolar electrogram (EGM) sensing configurations and\nintroduce various oversensing and undersensing modes. The simulation results\nshow that the model is able to reproduce clinically observed sensing problems,\nwhich significantly extends the capabilities of the virtual heart model in the\ncontext of device validation.\n", "title": "An intracardiac electrogram model to bridge virtual hearts and implantable cardiac devices" }
null
null
null
null
true
null
3655
null
Default
null
null
null
{ "abstract": " The consistency of a bootstrap or resampling scheme is classically validated\nby weak convergence of conditional laws. However, when working with stochastic\nprocesses in the space of bounded functions and their weak convergence in the\nHoffmann-J{\\o}rgensen sense, an obstacle occurs: due to possible\nnon-measurability, neither laws nor conditional laws are well-defined. Starting\nfrom an equivalent formulation of weak convergence based on the bounded\nLipschitz metric, a classical circumvent is to formulate bootstrap consistency\nin terms of the latter distance between what might be called a\n\\emph{conditional law} of the (non-measurable) bootstrap process and the law of\nthe limiting process. The main contribution of this note is to provide an\nequivalent formulation of bootstrap consistency in the space of bounded\nfunctions which is more intuitive and easy to work with. Essentially, the\nequivalent formulation consists of (unconditional) weak convergence of the\noriginal process jointly with two bootstrap replicates. As a by-product, we\nprovide two equivalent formulations of bootstrap consistency for statistics\ntaking values in separable metric spaces: the first in terms of (unconditional)\nweak convergence of the statistic jointly with its bootstrap replicates, the\nsecond in terms of convergence in probability of the empirical distribution\nfunction of the bootstrap replicates. Finally, the asymptotic validity of\nbootstrap-based confidence intervals and tests is briefly revisited, with\nparticular emphasis on the, in practice unavoidable, Monte Carlo approximation\nof conditional quantiles.\n", "title": "A note on conditional versus joint unconditional weak convergence in bootstrap consistency results" }
null
null
null
null
true
null
3656
null
Default
null
null
null
{ "abstract": " Dynamic scaling analyses of linear and nonlinear ac susceptibilities in a\nmodel magnet of the long-rang RKKY Ising spin glass (SG)\nDy$_{x}$Y$_{1-x}$Ru$_{2}$Si$_{2}$ were examined. The obtained set of the\ncritical exponents, $\\gamma$ $\\sim$ 1, $\\beta$ $\\sim$ 1, $\\delta$ $\\sim$ 2, and\n$z\\nu$ $\\sim$ 3.4, indicates the SG phase transition belongs to a different\nuniversality class from either the canonical (Heisenberg) or the short-range\nIsing SGs. The analyses also reveal a finite-temperature SG transition with the\nsame critical exponents under a magnetic field and the phase transition line\n$T_{\\mbox{g}}(H)$ described by $T_{\\mbox{g}}(H)$ $=$\n$T_{\\mbox{g}}(0)(1-AH^{2/\\phi})$ with $\\phi$ $\\sim$ 2. The crossover exponent\n$\\phi$ obeys the scaling relation $\\phi$ $=$ $\\gamma + \\beta$ within the margin\nof errors. These results strongly suggest the spontaneous\nreplica-symmetry-breaking (RSB) with a {\\it non- or marginal-mean-field\nuniversality class} in the long-range RKKY Ising SG.\n", "title": "Dynamic scaling analysis of the long-range RKKY Ising spin glass Dy$_{x}$Y$_{1-x}$Ru$_{2}$Si$_{2}$" }
null
null
null
null
true
null
3657
null
Default
null
null
null
{ "abstract": " Bayesian hierarchical models are used to share information between related\nsamples and obtain more accurate estimates of sample-level parameters, common\nstructure, and variation between samples. When the parameter of interest is the\ndistribution or density of a continuous variable, a hierarchical model for\ndistributions is required. A number of such models have been described in the\nliterature using extensions of the Dirichlet process and related processes,\ntypically as a distribution on the parameters of a mixing kernel. We propose a\nnew hierarchical model based on the Polya tree, which allows direct modeling of\ndensities and enjoys some computational advantages over the Dirichlet process.\nThe Polya tree also allows more flexible modeling of the variation between\nsamples, providing more informed shrinkage and permitting posterior inference\non the dispersion function, which quantifies the variation among sample\ndensities. We also show how the model can be extended to cluster samples in\nsituations where the observed samples are believed to have been drawn from\nseveral latent populations.\n", "title": "A Bayesian hierarchical model for related densities using Polya trees" }
null
null
null
null
true
null
3658
null
Default
null
null
null
{ "abstract": " We adapt the Ping-Pong Lemma, which historically was used to study free\nproducts of groups, to the setting of the homeomorphism group of the unit\ninterval. As a consequence, we isolate a large class of generating sets for\nsubgroups of $\\mathrm{Homeo}_+(I)$ for which certain finite dynamical data can\nbe used to determine the marked isomorphism type of the groups which they\ngenerate. As a corollary, we will obtain a criteria for embedding subgroups of\n$\\mathrm{Homeo}_+(I)$ into Richard Thompson's group $F$. In particular, every\nmember of our class of generating sets generates a group which embeds into $F$\nand in particular is not a free product. An analogous abstract theory is also\ndeveloped for groups of permutations of an infinite set.\n", "title": "Groups of fast homeomorphisms of the interval and the ping-pong argument" }
null
null
null
null
true
null
3659
null
Default
null
null
null
{ "abstract": " CodRep is a machine learning competition on source code data. It is carefully\ndesigned so that anybody can enter the competition, whether professional\nresearchers, students or independent scholars, without specific knowledge in\nmachine learning or program analysis. In particular, it aims at being a common\nplayground on which the machine learning and the software engineering research\ncommunities can interact. The competition has started on April 14th 2018 and\nhas ended on October 14th 2018. The CodRep data is hosted at\nthis https URL.\n", "title": "The CodRep Machine Learning on Source Code Competition" }
null
null
null
null
true
null
3660
null
Default
null
null
null
{ "abstract": " Mixed volumes $V(K_1,\\dots, K_d)$ of convex bodies $K_1,\\dots ,K_d$ in\nEuclidean space $\\mathbb{R}^d$ are of central importance in the Brunn-Minkowski\ntheory. Representations for mixed volumes are available in special cases, for\nexample as integrals over the unit sphere with respect to mixed area measures.\nMore generally, in Hug-Rataj-Weil (2013) a formula for $V(K [n], M[d-n])$,\n$n\\in \\{1,\\dots ,d-1\\}$, as a double integral over flag manifolds was\nestablished which involved certain flag measures of the convex bodies $K$ and\n$M$ (and required a general position of the bodies). In the following, we\ndiscuss the general case $V(K_1[n_1],\\dots , K_k[n_k])$, $n_1+\\cdots +n_k=d$,\nand show a corresponding result involving the flag measures\n$\\Omega_{n_1}(K_1;\\cdot),\\dots, \\Omega_{n_k}(K_k;\\cdot)$. For this purpose, we\nfirst establish a curvature representation of mixed volumes over the normal\nbundles of the bodies involved.\nWe also obtain a corresponding flag representation for the mixed functionals\nfrom translative integral geometry and a local version, for mixed (translative)\ncurvature measures.\n", "title": "Flag representations of mixed volumes and mixed functionals of convex bodies" }
null
null
null
null
true
null
3661
null
Default
null
null
null
{ "abstract": " There has recently been significant interest in hard attention models for\ntasks such as object recognition, visual captioning and speech recognition.\nHard attention can offer benefits over soft attention such as decreased\ncomputational cost, but training hard attention models can be difficult because\nof the discrete latent variables they introduce. Previous work used REINFORCE\nand Q-learning to approach these issues, but those methods can provide\nhigh-variance gradient estimates and be slow to train. In this paper, we tackle\nthe problem of learning hard attention for a sequential task using variational\ninference methods, specifically the recently introduced VIMCO and NVIL.\nFurthermore, we propose a novel baseline that adapts VIMCO to this setting. We\ndemonstrate our method on a phoneme recognition task in clean and noisy\nenvironments and show that our method outperforms REINFORCE, with the\ndifference being greater for a more complicated task.\n", "title": "Learning Hard Alignments with Variational Inference" }
null
null
null
null
true
null
3662
null
Default
null
null
null
{ "abstract": " Many problems in signal processing require finding sparse solutions to\nunder-determined, or ill-conditioned, linear systems of equations. When dealing\nwith real-world data, the presence of outliers and impulsive noise must also be\naccounted for. In past decades, the vast majority of robust linear regression\nestimators has focused on robustness against rowwise contamination. Even so\ncalled `high breakdown' estimators rely on the assumption that a majority of\nrows of the regression matrix is not affected by outliers. Only very recently,\nthe first cellwise robust regression estimation methods have been developed. In\nthis paper, we define robust oracle properties, which an estimator must have in\norder to perform robust model selection for under-determined, or\nill-conditioned linear regression models that are contaminated by cellwise\noutliers in the regression matrix. We propose and analyze a robustly weighted\nand adaptive Lasso type regularization term which takes into account cellwise\noutliers for model selection. The proposed regularization term is integrated\ninto the objective function of the MM-estimator, which yields the proposed\nMM-Robust Weighted Adaptive Lasso (MM-RWAL), for which we prove that at least\nthe weak robust oracle properties hold. A performance comparison to existing\nrobust Lasso estimators is provided using Monte Carlo experiments. Further, the\nMM-RWAL is applied to determine the temporal releases of the European Tracer\nExperiment (ETEX) at the source location. This ill-conditioned linear inverse\nproblem contains cellwise and rowwise outliers and is sparse both in the\nregression matrix and the parameter vector. The proposed RWAL penalty is not\nlimited to the MM-estimator but can easily be integrated into the objective\nfunction of other robust estimators.\n", "title": "A New Sparse and Robust Adaptive Lasso Estimator for the Independent Contamination Model" }
null
null
null
null
true
null
3663
null
Default
null
null
null
{ "abstract": " Hybrid quantum mechanical-molecular mechanical (QM/MM) simulations are widely\nused in enzyme simulation. Over ten convergence studies of QM/MM methods have\nrevealed over the past several years that key energetic and structural\nproperties approach asymptotic limits with only very large (ca. 500-1000 atom)\nQM regions. This slow convergence has been observed to be due in part to\nsignificant charge transfer between the core active site and surrounding\nprotein environment, which cannot be addressed by improvement of MM force\nfields or the embedding method employed within QM/MM. Given this slow\nconvergence, it becomes essential to identify strategies for the most\natom-economical determination of optimal QM regions and to gain insight into\nthe crucial interactions captured only in large QM regions. Here, we extend and\ndevelop two methods for quantitative determination of QM regions. First, in the\ncharge shift analysis (CSA) method, we probe the reorganization of electron\ndensity when core active site residues are removed completely, as determined by\nlarge-QM region QM/MM calculations. Second, we introduce the\nhighly-parallelizable Fukui shift analysis (FSA), which identifies how\ncore/substrate frontier states are altered by the presence of an additional QM\nresidue on smaller initial QM regions. We demonstrate that the FSA and CSA\napproaches are complementary and consistent on three test case enzymes:\ncatechol O-methyltransferase, cytochrome P450cam, and hen eggwhite lysozyme. We\nalso introduce validation strategies and test sensitivities of the two methods\nto geometric structure, basis set size, and electronic structure methodology.\nBoth methods represent promising approaches for the systematic, unbiased\ndetermination of quantum mechanical effects in enzymes and large systems that\nnecessitate multi-scale modeling.\n", "title": "Systematic Quantum Mechanical Region Determination in QM/MM Simulation" }
null
null
null
null
true
null
3664
null
Default
null
null
null
{ "abstract": " In this study, we develop a theoretical model of strategic equilibrium\nbidding and price-setting behaviour by heterogeneous and boundedly rational\nelectricity producers and a grid operator in a single electricity market under\nuncertain information about production capabilities and electricity demand.\nWe compare eight different market design variants and several levels of\ncentralized electricity production that influence the spatial distribution of\nproducers in the grid, their unit production and curtailment costs, and the\nmean and standard deviation of their production capabilities.\nOur market design variants differ in three aspects. Producers are either paid\ntheir individual bid price (\"pay as bid\") or the (higher) market price set by\nthe grid operator (\"uniform pricing\"). They are either paid for their bid\nquantity (\"pay requested\") or for their actual supply (\"pay supplied\") which\nmay differ due to production uncertainty. Finally, excess production is either\nrequired to be curtailed or may be supplied to the grid.\nOverall, we find the combination of uniform pricing, paying for requested\namounts, and required curtailment to perform best or second best in many\nrespects, and to provide the best compromise between the goals of low economic\ncosts, low consumer costs, positive profits, low balancing, low workload, and\nhonest bidding behaviour.\n", "title": "Comparison of electricity market designs for stable decentralized power grids" }
null
null
null
null
true
null
3665
null
Default
null
null
null
{ "abstract": " Multi-core CPUs are a standard component in many modern embedded systems.\nTheir virtualisation extensions enable the isolation of services, and gain\npopularity to implement mixed-criticality or otherwise split systems. We\npresent Jailhouse, a Linux-based, OS-agnostic partitioning hypervisor that uses\nnovel architectural approaches to combine Linux, a powerful general-purpose\nsystem, with strictly isolated special-purpose components. Our design goals\nfavour simplicity over features, establish a minimal code base, and minimise\nhypervisor activity.\nDirect assignment of hardware to guests, together with a deferred\ninitialisation scheme, offloads any complex hardware handling and bootstrapping\nissues from the hypervisor to the general purpose OS. The hypervisor\nestablishes isolated domains that directly access physical resources without\nthe need for emulation or paravirtualisation. This retains, with negligible\nsystem overhead, Linux's feature-richness in uncritical parts, while frugal\nsafety and real-time critical workloads execute in isolated, safe domains.\n", "title": "Look Mum, no VM Exits! (Almost)" }
null
null
null
null
true
null
3666
null
Default
null
null
null
{ "abstract": " Low-rank modeling plays a pivotal role in signal processing and machine\nlearning, with applications ranging from collaborative filtering, video\nsurveillance, medical imaging, to dimensionality reduction and adaptive\nfiltering. Many modern high-dimensional data and interactions thereof can be\nmodeled as lying approximately in a low-dimensional subspace or manifold,\npossibly with additional structures, and its proper exploitations lead to\nsignificant reduction of costs in sensing, computation and storage. In recent\nyears, there is a plethora of progress in understanding how to exploit low-rank\nstructures using computationally efficient procedures in a provable manner,\nincluding both convex and nonconvex approaches. On one side, convex relaxations\nsuch as nuclear norm minimization often lead to statistically optimal\nprocedures for estimating low-rank matrices, where first-order methods are\ndeveloped to address the computational challenges; on the other side, there is\nemerging evidence that properly designed nonconvex procedures, such as\nprojected gradient descent, often provide globally optimal solutions with a\nmuch lower computational cost in many problems. This survey article will\nprovide a unified overview of these recent advances on low-rank matrix\nestimation from incomplete measurements. Attention is paid to rigorous\ncharacterization of the performance of these algorithms, and to problems where\nthe low-rank matrix have additional structural properties that require new\nalgorithmic designs and theoretical analysis.\n", "title": "Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation" }
null
null
null
null
true
null
3667
null
Default
null
null
null
{ "abstract": " Accurate delineation of the left ventricle (LV) is an important step in\nevaluation of cardiac function. In this paper, we present an automatic method\nfor segmentation of the LV in cardiac CT angiography (CCTA) scans. Segmentation\nis performed in two stages. First, a bounding box around the LV is detected\nusing a combination of three convolutional neural networks (CNNs).\nSubsequently, to obtain the segmentation of the LV, voxel classification is\nperformed within the defined bounding box using a CNN. The study included CCTA\nscans of sixty patients, fifty scans were used to train the CNNs for the LV\nlocalization, five scans were used to train LV segmentation and the remaining\nfive scans were used for testing the method. Automatic segmentation resulted in\nthe average Dice coefficient of 0.85 and mean absolute surface distance of 1.1\nmm. The results demonstrate that automatic segmentation of the LV in CCTA scans\nusing voxel classification with convolutional neural networks is feasible.\n", "title": "Automatic Segmentation of the Left Ventricle in Cardiac CT Angiography Using Convolutional Neural Network" }
null
null
[ "Computer Science", "Physics" ]
null
true
null
3668
null
Validated
null
null
null
{ "abstract": " The Jacobian-based Saliency Map Attack is a family of adversarial attack\nmethods for fooling classification models, such as deep neural networks for\nimage classification tasks. By saturating a few pixels in a given image to\ntheir maximum or minimum values, JSMA can cause the model to misclassify the\nresulting adversarial image as a specified erroneous target class. We propose\ntwo variants of JSMA, one which removes the requirement to specify a target\nclass, and another that additionally does not need to specify whether to only\nincrease or decrease pixel intensities. Our experiments highlight the\ncompetitive speeds and qualities of these variants when applied to datasets of\nhand-written digits and natural scenes.\n", "title": "Maximal Jacobian-based Saliency Map Attack" }
null
null
[ "Statistics" ]
null
true
null
3669
null
Validated
null
null
null
{ "abstract": " Reinforcement learning (RL) has been successfully used to solve many\ncontinuous control tasks. Despite its impressive results however, fundamental\nquestions regarding the sample complexity of RL on continuous problems remain\nopen. We study the performance of RL in this setting by considering the\nbehavior of the Least-Squares Temporal Difference (LSTD) estimator on the\nclassic Linear Quadratic Regulator (LQR) problem from optimal control. We give\nthe first finite-time analysis of the number of samples needed to estimate the\nvalue function for a fixed static state-feedback policy to within\n$\\varepsilon$-relative error. In the process of deriving our result, we give a\ngeneral characterization for when the minimum eigenvalue of the empirical\ncovariance matrix formed along the sample path of a fast-mixing stochastic\nprocess concentrates above zero, extending a result by Koltchinskii and\nMendelson in the independent covariates setting. Finally, we provide\nexperimental evidence indicating that our analysis correctly captures the\nqualitative behavior of LSTD on several LQR instances.\n", "title": "Least-Squares Temporal Difference Learning for the Linear Quadratic Regulator" }
null
null
null
null
true
null
3670
null
Default
null
null
null
{ "abstract": " BL Lacertae is the prototype of the blazar subclass known as BL Lac type\nobjects. BL Lacertae object itself is a low-frequency-peaked BL Lac(LBL). Very\nhigh energy (VHE) gamma ray emission from this source was discovered in 2005 by\nMAGIC observatory while the source was at a flaring state. Since then, VHE\ngamma rays from this source has been detected several times. However, all of\nthose times the source was in a high activity state. Former studies suggest\nseveral non-thermal zones emitting in gamma-rays, then gamma-ray flare should\nbe composed of a convolution. Observing the BL Lacertae object at quiescent\nstates and active states is the key to disentangle these two components.\nVERITAS is monitoring the BL Lacertae object since 2011. The archival data set\nincludes observations during flaring and quiescent states. This presentation\nreports on the preliminary results of the VERITAS observation between January\n2013 - December 2015, and simultaneous multiwavelength observations.\n", "title": "VERITAS long term monitoring of Gamma-Ray emission from the BL Lacertae object" }
null
null
null
null
true
null
3671
null
Default
null
null
null
{ "abstract": " Experiment and theory indicate that UPt3 is a topological superconductor in\nan odd-parity state, based in part from temperature independence of the NMR\nKnight shift. However, quasiparticle spin-flip scattering near a surface, where\nthe Knight shift is measured, might be responsible. We use polarized neutron\nscattering to measure the bulk susceptibility with H||c, finding consistency\nwith the Knight shift but inconsistent with theory for this field orientation.\nWe infer that neither spin susceptibility nor Knight shift are a reliable\nindication of odd-parity.\n", "title": "Spin Susceptibility of the Topological Superconductor UPt3 from Polarized Neutron Diffraction" }
null
null
null
null
true
null
3672
null
Default
null
null
null
{ "abstract": " The appearance of a Nano-jet in the micro-sphere optical experiments is\nanalyzed by relating this effect to non-diffracting Bessel beams. By inserting\na circular aperture with a radius which is in the order of subwavelength in the\nEM waist, and sending the transmitted light into a confocal microscope, EM\nfluctuations by the different Bessel beams are avoided. On this constant EM\nfield evanescent waves are superposed. While this effect improves the\noptical-depth of the imaging process, the object fine-structures are obtained,\nfrom the modulation of the EM fields by the evanescent waves. The use of a\ncombination of the micro-sphere optical system with an interferometer for phase\ncontrast measurements is described.\n", "title": "Nano-jet Related to Bessel Beams and to Super-Resolutions in Micro-sphere Optical Experiments" }
null
null
null
null
true
null
3673
null
Default
null
null
null
{ "abstract": " We define triangulated factorization systems on triangulated categories, and\nprove that a suitable subclass thereof (the normal triangulated torsion\ntheories) corresponds bijectively to $t$-structures on the same category. This\nresult is then placed in the framework of derivators regarding a triangulated\ncategory as the base of a stable derivator. More generally, we define derivator\nfactorization systems in the 2-category $\\mathrm{PDer}$, describing them as\nalgebras for a suitable strict 2-monad (this result is of independent\ninterest), and prove that a similar characterization still holds true: for a\nstable derivator $\\mathbb D$, a suitable class of derivator factorization\nsystems (the normal derivator torsion theories) correspond bijectively with\n$t$-structures on the base $\\mathbb{D}(\\mathbb{1})$ of the derivator. These two\nresult can be regarded as the triangulated- and derivator- analogues,\nrespectively, of the theorem that says that `$t$-structures are normal torsion\ntheories' in the setting of stable $\\infty$-categories, showing how the result\nremains true whatever the chosen model for stable homotopy theory is.\n", "title": "Factorization systems on (stable) derivators" }
null
null
null
null
true
null
3674
null
Default
null
null
null
{ "abstract": " Cloud storage services have become accessible and used by everyone.\nNevertheless, stored data are dependable on the behavior of the cloud servers,\nand losses and damages often occur. One solution is to regularly audit the\ncloud servers in order to check the integrity of the stored data. The Dynamic\nProvable Data Possession scheme with Public Verifiability and Data Privacy\npresented in ACISP'15 is a straightforward design of such solution. However,\nthis scheme is threatened by several attacks. In this paper, we carefully\nrecall the definition of this scheme as well as explain how its security is\ndramatically menaced. Moreover, we proposed two new constructions for Dynamic\nProvable Data Possession scheme with Public Verifiability and Data Privacy\nbased on the scheme presented in ACISP'15, one using Index Hash Tables and one\nbased on Merkle Hash Trees. We show that the two schemes are secure and\nprivacy-preserving in the random oracle model.\n", "title": "Dynamic Provable Data Possession Protocols with Public Verifiability and Data Privacy" }
null
null
null
null
true
null
3675
null
Default
null
null
null
{ "abstract": " We consider two-dimensional (2D) Dirac quantum ring systems formed by the\ninfinite mass constraint. When an Aharonov-Bohm magnetic flux is present, e.g.,\nthrough the center of the ring domain, persistent currents, i.e., permanent\ncurrents without dissipation, can arise. In real materials, impurities and\ndefects are inevitable, raising the issue of robustness of the persistent\ncurrents. Using localized random potential to simulate the disorders, we\ninvestigate how the ensemble averaged current magnitude varies with the\ndisorder density. For comparison, we study the nonrelativistic quantum\ncounterpart by analyzing the solutions of the Schrödinger equation under\nthe same geometrical and disorder settings. We find that, for the Dirac ring\nsystem, as the disorder density is systematically increased, the average\ncurrent decreases slowly initially and then plateaus at a finite nonzero value,\nindicating remarkable robustness of the persistent currents. The physical\nmechanism responsible for the robustness is the emergence of a class of\nboundary states - whispering gallery modes. In contrast, in the Schrödinger\nring system, such boundary states cannot form and the currents diminish rapidly\nto zero with increase in the disorder density. We develop a physical theory\nbased on a quasi one-dimensional approximation to understand the striking\ncontrast in the behaviors of the persistent currents in the Dirac and\nSchrödinger rings. Our 2D Dirac ring systems with disorders can be\nexperimentally realized, e.g., on the surface of a topological insulator with\nnatural or deliberately added impurities from the fabrication process.\n", "title": "Robustness of persistent currents in two-dimensional Dirac systems with disorders" }
null
null
null
null
true
null
3676
null
Default
null
null
null
{ "abstract": " We study the arithmetically Cohen-Macaulay (ACM) property for finite sets of\npoints in multiprojective spaces, especially $(\\mathbb P^1)^n$. A combinatorial\ncharacterization, the $(\\star)$-property, is known in $\\mathbb P^1 \\times\n\\mathbb P^1$. We propose a combinatorial property, $(\\star_n)$, that directly\ngeneralizes the $(\\star)$-property to $(\\mathbb P^1)^n$ for larger $n$. We show\nthat $X$ is ACM if and only if it satisfies the $(\\star_n)$-property. The main\ntool for several of our results is an extension to the multiprojective setting\nof certain liaison methods in projective space.\n", "title": "On the arithmetically Cohen-Macaulay property for sets of points in multiprojective spaces" }
null
null
null
null
true
null
3677
null
Default
null
null
null
{ "abstract": " Point clouds obtained from photogrammetry are noisy and incomplete models of\nreality. We propose an evolutionary optimization methodology that is able to\napproximate the underlying object geometry on such point clouds. This approach\nassumes a priori knowledge on the 3D structure modeled and enables the\nidentification of a collection of primitive shapes approximating the scene.\nBuilt-in mechanisms that enforce high shape diversity and adaptive population\nsize make this method suitable to modeling both simple and complex scenes. We\nfocus here on the case of cylinder approximations and we describe, test, and\ncompare a set of mutation operators designed for optimal exploration of their\nsearch space. We assess the robustness and limitations of this algorithm\nthrough a series of synthetic examples, and we finally demonstrate its general\napplicability on two real-life cases in vegetation and industrial settings.\n", "title": "Fitting 3D Shapes from Partial and Noisy Point Clouds with Evolutionary Computing" }
null
null
null
null
true
null
3678
null
Default
null
null
null
{ "abstract": " The unique information ($UI$) is an information measure that quantifies a\ndeviation from the Blackwell order. We have recently shown that this quantity\nis an upper bound on the one-way secret key rate. In this paper, we prove a\ntriangle inequality for the $UI$, which implies that the $UI$ is never greater\nthan one of the best known upper bounds on the two-way secret key rate. We\nconjecture that the $UI$ lower bounds the two-way rate and discuss implications\nof the conjecture.\n", "title": "Unique Information and Secret Key Decompositions" }
null
null
null
null
true
null
3679
null
Default
null
null
null
{ "abstract": " We study distributionally robust optimization (DRO) problems where the
ambiguity set is defined using the Wasserstein metric. We show that this class
of DRO problems can be reformulated as semi-infinite programs. We give an
exchange method to solve the reformulated problem for the general nonlinear
model, and a central cutting-surface method for the convex case, assuming that
we have a separation oracle. We use a distributionally robust generalization
of the logistic regression model to test our algorithm. Numerical experiments
on the distributionally robust logistic regression models show that the number
of oracle calls is typically 20-50 to achieve 5-digit precision. The
solution found by the model is generally better in its ability to predict with
a smaller standard error.\n", "title": "Decomposition Algorithm for Distributionally Robust Optimization using Wasserstein Metric" }
null
null
null
null
true
null
3680
null
Default
null
null
null
{ "abstract": " Privacy is a major issue in learning from distributed data. Recently the
cryptographic literature has provided several tools for this task. However,
these tools either reduce the quality/accuracy of the learning
algorithm---e.g., by adding noise---or they incur a high performance penalty
and/or involve trusting external authorities.
We propose a methodology for {\\sl private distributed machine learning from
light-weight cryptography} (in short, PD-ML-Lite). We apply our methodology to
two major ML algorithms, namely non-negative matrix factorization (NMF) and
singular value decomposition (SVD). Our resulting protocols are communication
optimal, achieve the same accuracy as their non-private counterparts, and
satisfy a notion of privacy---which we define---that is both intuitive and
measurable. Our approach is to use lightweight cryptographic protocols (secure
sum and normalized secure sum) to build learning algorithms rather than wrap
complex learning algorithms in a heavy-cost MPC framework.
We showcase our algorithms' utility and privacy on several applications: for
NMF we consider topic modeling and recommender systems, and for SVD, principal
component regression, and low rank approximation.\n", "title": "PD-ML-Lite: Private Distributed Machine Learning from Lightweight Cryptography" }
null
null
null
null
true
null
3681
null
Default
null
null
null
{ "abstract": " This paper deals with the establishment of Input-to-State Stability (ISS)\nproperties for infinite dimensional systems with respect to both boundary and\ndistributed disturbances. First, an ISS estimate is established with respect to\nfinite dimensional boundary disturbances for a class of Riesz-spectral boundary\ncontrol systems satisfying certain eigenvalue constraints. Second, a concept of\nweak solutions is introduced in order to relax the disturbances regularity\nassumptions required to ensure the existence of strong solutions. The proposed\nconcept of weak solutions, that applies to a large class of boundary control\nsystems which is not limited to the Riesz-spectral ones, provides a natural\nextension of the concept of both strong and mild solutions. Assuming that an\nISS estimate holds true for strong solutions, we show the existence, the\nuniqueness, and the ISS property of the weak solutions.\n", "title": "ISS Property with Respect to Boundary Disturbances for a Class of Riesz-Spectral Boundary Control Systems" }
null
null
[ "Computer Science" ]
null
true
null
3682
null
Validated
null
null
null
{ "abstract": " Neural machine translation (NMT) has recently become popular in the field of
machine translation. However, NMT suffers from the problem of repeating or
missing words in the translation. To address this problem, Tu et al. (2017)
proposed an encoder-decoder-reconstructor framework for NMT using
back-translation. In this method, they selected the best forward translation
model in the same manner as Bahdanau et al. (2015), and then trained a
bi-directional translation model as fine-tuning. Their experiments show that it
offers significant improvement in BLEU scores in the Chinese-English translation
task. We confirm that our re-implementation also shows the same tendency and
alleviates the problem of repeating and missing words in the translation on an
English-Japanese task. In addition, we evaluate the effectiveness of
pre-training by comparing it with a jointly-trained model of forward
translation and back-translation.\n", "title": "English-Japanese Neural Machine Translation with Encoder-Decoder-Reconstructor" }
null
null
null
null
true
null
3683
null
Default
null
null
null
{ "abstract": " Bayesian networks, or directed acyclic graph (DAG) models, are widely used to\nrepresent complex causal systems. Since the basic task of learning a Bayesian\nnetwork from data is NP-hard, a standard approach is greedy search over the\nspace of DAGs or Markov equivalent DAGs. Since the space of DAGs on p nodes and\nthe associated space of Markov equivalence classes are both much larger than\nthe space of permutations, it is desirable to consider permutation-based\nsearches. We here provide the first consistency guarantees, both uniform and\nhigh-dimensional, of a permutation-based greedy search. Geometrically, this\nsearch corresponds to a simplex-type algorithm on a sub-polytope of the\npermutohedron, the DAG associahedron. Every vertex in this polytope is\nassociated with a DAG, and hence with a collection of permutations that are\nconsistent with the DAG ordering. A walk is performed on the edges of the\npolytope maximizing the sparsity of the associated DAGs. We show based on\nsimulations that this permutation search is competitive with standard\napproaches.\n", "title": "Consistency Guarantees for Permutation-Based Causal Inference Algorithms" }
null
null
null
null
true
null
3684
null
Default
null
null
null
{ "abstract": " We present TriviaQA, a challenging reading comprehension dataset containing
over 650K question-answer-evidence triples. TriviaQA includes 95K
question-answer pairs authored by trivia enthusiasts and independently gathered
evidence documents, six per question on average, that provide high quality
distant supervision for answering the questions. We show that, in comparison to
other recently introduced large-scale datasets, TriviaQA (1) has relatively
complex, compositional questions, (2) has considerable syntactic and lexical
variability between questions and corresponding answer-evidence sentences, and
(3) requires more cross sentence reasoning to find answers. We also present two
baseline algorithms: a feature-based classifier and a state-of-the-art neural
network that performs well on SQuAD reading comprehension. Neither approach
comes close to human performance (23% and 40% vs. 80%), suggesting that
TriviaQA is a challenging testbed that is worth significant future study. Data
and code available at -- this http URL\n", "title": "TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension" }
null
null
null
null
true
null
3685
null
Default
null
null
null
{ "abstract": " Many people dream of becoming famous, YouTube video makers also wish their
videos to have a large audience, and product retailers always hope to expose
their products to as many customers as possible. Do these seemingly different
phenomena share a common structure? We find that fame, popularity, or exposure
could be modeled as a node's discoverability on some properly defined network,
and all of the previously mentioned phenomena can be commonly stated as follows:
a target node wants to be discovered easily by the other nodes in the network. In
this work, we explicitly define a node's discoverability in a network, and
formulate a general node discoverability optimization problem, where the goal
is to create a budgeted set of incoming edges to the target node so as to
optimize the target node's discoverability in the network. Although the
optimization problem is proven to be NP-hard, we find that the defined
discoverability measures have good properties that enable us to use a greedy
algorithm to find provably near-optimal solutions. The computational complexity
of a greedy algorithm is dominated by the time cost of an oracle call, i.e.,
calculating the marginal gain of a node. To scale up the oracle call over large
networks, we propose an estimation-and-refinement approach that provides a
good trade-off between estimation accuracy and computational efficiency.
Experiments conducted on real-world networks demonstrate that our method is
thousands of times faster than an exact method using dynamic programming,
thereby allowing us to solve the node discoverability optimization problem on
large networks.\n", "title": "Optimizing Node Discovery on Networks: Problem Definitions, Fast Algorithms, and Observations" }
null
null
[ "Computer Science" ]
null
true
null
3686
null
Validated
null
null
null
{ "abstract": " In this work, we introduce the class of $h$-${\\rm{MN}}$-convex functions by
generalizing the concept of ${\\rm{MN}}$-convexity and combining it with
$h$-convexity. Namely, let $I,J$ be two subintervals of
$\\left(0,\\infty\\right)$ such that $\\left(0,1\\right)\\subseteq J$ and
$\\left[a,b\\right]\\subseteq I$. Consider a non-negative function $h:
(0,\\infty)\\to \\left(0,\\infty\\right)$ and let ${\\rm{M}}:\\left[0,1\\right]\\to
\\left[a,b\\right] $ $(0<a<b)$ be a Mean function given by
${\\rm{\\rm{M}}}\\left(t\\right)={\\rm{\\rm{M}}}\\left( {h(t);a,b} \\right)$; where
by ${\\rm{\\rm{M}}}\\left( {h(t);a,b} \\right)$ we mean one of the following
functions: $A_h\\left( {a,b} \\right):=h\\left( {1 - t} \\right)a + h(t) b$,
$G_h\\left( {a,b} \\right)=a^{h(1-t)} b^{h(t)}$ and $H_h\\left( {a,b}
\\right):=\\frac{ab}{h(t) a + h\\left( {1 - t} \\right)b} = \\frac{1}{A_h\\left(
{\\frac{1}{a},\\frac{1}{b}} \\right)}$; with the property that
${\\rm{\\rm{M}}}\\left( {h(0);a,b} \\right)=a$ and ${\\rm{M}}\\left( {h(1);a,b}
\\right)=b$.
A function $f : I \\to \\left(0,\\infty\\right)$ is said to be
$h$-${\\rm{\\rm{MN}}}$-convex (concave) if the inequality \\begin{align*} f
\\left({\\rm{M}}\\left(t;x, y\\right)\\right) \\le (\\ge) \\, {\\rm{N}}\\left(h(t);f (x),
f (y)\\right), \\end{align*} holds for all $x,y \\in I$ and $t\\in [0,1]$, where M
and N are two mean functions. In this way, nine classes of
$h$-${\\rm{MN}}$-convex functions are established and some of their analytic
properties are explored and investigated. Characterizations of each type are
given. Various Jensen's type inequalities and their converses are proved.\n", "title": "Some properties of h-MN-convexity and Jensen's type inequalities" }
null
null
null
null
true
null
3687
null
Default
null
null
null
{ "abstract": " Achieving high performance for sparse applications is challenging due to\nirregular access patterns and weak locality. These properties preclude many\nstatic optimizations and degrade cache performance on traditional systems. To\naddress these challenges, novel systems such as the Emu architecture have been\nproposed. The Emu design uses light-weight migratory threads, narrow memory,\nand near-memory processing capabilities to address weak locality and reduce the\ntotal load on the memory system. Because the Emu architecture is fundamentally\ndifferent than cache based hierarchical memory systems, it is crucial to\nunderstand the cost-benefit tradeoffs of standard sparse algorithm\noptimizations on Emu hardware. In this work, we explore sparse matrix-vector\nmultiplication (SpMV) on the Emu architecture. We investigate the effects of\ndifferent sparse optimizations such as dense vector data layouts, work\ndistributions, and matrix reorderings. Our study finds that initially\ndistributing work evenly across the system is inadequate to maintain load\nbalancing over time due to the migratory nature of Emu threads. In severe\ncases, matrix sparsity patterns produce hot-spots as many migratory threads\nconverge on a single resource. We demonstrate that known matrix reordering\ntechniques can improve SpMV performance on the Emu architecture by as much as\n70% by encouraging more consistent load balancing. This can be compared with a\nperformance gain of no more than 16% on a cache-memory based system.\n", "title": "Impact of Traditional Sparse Optimizations on a Migratory Thread Architecture" }
null
null
null
null
true
null
3688
null
Default
null
null
null
{ "abstract": " With pressure to increase graduation rates and reduce time to degree in\nhigher education, it is important to identify at-risk students early. Automated\nearly warning systems are therefore highly desirable. In this paper, we use\nunsupervised clustering techniques to predict the graduation status of declared\nmajors in five departments at California State University Northridge (CSUN),\nbased on a minimal number of lower division courses in each major. In addition,\nwe use the detected clusters to identify hidden bottleneck courses.\n", "title": "Finding Bottlenecks: Predicting Student Attrition with Unsupervised Classifier" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3689
null
Validated
null
null
null
{ "abstract": " Meteoritic abundances of r-process elements are analyzed to deduce the
history of chemical enrichment by the r-process from the beginning of disk
formation to the present time in the solar vicinity, by combining the abundance
information from short-lived radioactive nuclei such as 244Pu with that from
stable r-process nuclei such as Eu. These two types of nuclei can be associated
with one r-process event and the cumulation of events till formation of the solar
system, respectively. With the help of the observed local star formation history,
we deduce the chemical evolution of 244Pu and obtain three main results: (i)
the last r-process event occurred 130-140 Myr before formation of the solar
system, (ii) the present-day low 244Pu abundance as measured in deep sea
reservoirs results from the low recent star formation rate compared to ~4.5 - 5
Gyr ago, and (iii) there were ~15 r-process events in the solar vicinity from
formation of the Galaxy to the time of solar system formation and ~30 r-process
events to the present time. Then, adopting the reasonable hypothesis that a
neutron star merger is the r-process production site, we find that the ejected
r-process elements are extensively spread out and mixed with interstellar
matter with a mass of ~3.5 million solar masses, which is about 100 times
larger than that for supernova ejecta. In addition, the event frequency of
r-process production is estimated to be one per about 1400 core-collapse
supernovae, which is identical to the frequency of neutron star mergers
estimated from the analysis of stellar abundances.\n", "title": "Chemical evolution of 244Pu in the solar vicinity and its implication for the properties of r-process production" }
null
null
null
null
true
null
3690
null
Default
null
null
null
{ "abstract": " We present SAVITR, a system that leverages the information posted on the\nTwitter microblogging site to monitor and analyse emergency situations. Given\nthat only a very small percentage of microblogs are geo-tagged, it is essential\nfor such a system to extract locations from the text of the microblogs. We\nemploy natural language processing techniques to infer the locations mentioned\nin the microblog text, in an unsupervised fashion and display it on a map-based\ninterface. The system is designed for efficient performance, achieving an\nF-score of 0.79, and is approximately two orders of magnitude faster than other\navailable tools for location extraction.\n", "title": "SAVITR: A System for Real-time Location Extraction from Microblogs during Emergencies" }
null
null
null
null
true
null
3691
null
Default
null
null
null
{ "abstract": " The resonances associated with a fractional damped oscillator which is driven\nby an oscillatory external force are studied. It is shown that such resonances\ncan be manipulated by tuning up either the coefficient of the fractional\ndamping or the order of the corresponding fractional derivatives.\n", "title": "Fractional Driven Damped Oscillator" }
null
null
null
null
true
null
3692
null
Default
null
null
null
{ "abstract": " Let $X_{1},\\ldots,X_{n}$ be i.i.d. sample in $\\mathbb{R}^{p}$ with zero mean\nand the covariance matrix $\\mathbf{\\Sigma}$. The problem of recovering the\nprojector onto an eigenspace of $\\mathbf{\\Sigma}$ from these observations\nnaturally arises in many applications. Recent technique from [Koltchinskii,\nLounici, 2015] helps to study the asymptotic distribution of the distance in\nthe Frobenius norm $\\| \\mathbf{P}_r - \\widehat{\\mathbf{P}}_r \\|_{2}$ between\nthe true projector $\\mathbf{P}_r$ on the subspace of the $r$-th eigenvalue and\nits empirical counterpart $\\widehat{\\mathbf{P}}_r$ in terms of the effective\nrank of $\\mathbf{\\Sigma}$. This paper offers a bootstrap procedure for building\nsharp confidence sets for the true projector $\\mathbf{P}_r$ from the given\ndata. This procedure does not rely on the asymptotic distribution of $\\|\n\\mathbf{P}_r - \\widehat{\\mathbf{P}}_r \\|_{2}$ and its moments. It could be\napplied for small or moderate sample size $n$ and large dimension $p$. The main\nresult states the validity of the proposed procedure for finite samples with an\nexplicit error bound for the error of bootstrap approximation. This bound\ninvolves some new sharp results on Gaussian comparison and Gaussian\nanti-concentration in high-dimensional spaces. Numeric results confirm a good\nperformance of the method in realistic examples.\n", "title": "Bootstrap confidence sets for spectral projectors of sample covariance" }
null
null
null
null
true
null
3693
null
Default
null
null
null
{ "abstract": " A locally recoverable code is a code over a finite alphabet such that the\nvalue of any single coordinate of a codeword can be recovered from the values\nof a small subset of other coordinates. Building on work of Barg, Tamo, and\nVlăduţ, we present several constructions of locally recoverable codes\nfrom algebraic curves and surfaces.\n", "title": "Locally recoverable codes from algebraic curves and surfaces" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
3694
null
Validated
null
null
null
{ "abstract": " In the following text we introduce the specification property (stroboscopical
property) for dynamical systems on uniform spaces. We focus on two classes of
dynamical systems: generalized shifts and dynamical systems with the Alexandroff
compactification of a discrete space as phase space. We prove that for a
discrete finite topological space $X$ with at least two elements, a nonempty
set $\\Gamma$ and a self-map $\\varphi:\\Gamma\\to\\Gamma$ the generalized shift
dynamical system $(X^\\Gamma,\\sigma_\\varphi)$: \\begin{itemize} \\item has
the (almost) weak specification property if and only if $\\varphi:\\Gamma\\to\\Gamma$
does not have any periodic point,
\\item has the (uniform) stroboscopical property if and only if
$\\varphi:\\Gamma\\to\\Gamma$
is one-to-one. \\end{itemize}\n", "title": "Specification properties on uniform spaces" }
null
null
null
null
true
null
3695
null
Default
null
null
null
{ "abstract": " We report the discovery of tidal tails around the two outer halo globular\nclusters, Eridanus and Palomar 15, based on $gi$-band images obtained with\nDECam at the CTIO 4-m Blanco Telescope. The tidal tails are among the most\nremote stellar streams presently known in the Milky Way halo. Cluster members\nhave been determined from the color-magnitude diagrams and used to establish\nthe radial density profiles, which show, in both cases, a strong departure in\nthe outer regions from the best-fit King profile. Spatial density maps reveal\ntidal tails stretching out on opposite sides of both clusters, extending over a\nlength of $\\sim$760 pc for Eridanus and $\\sim$1160 pc for Palomar 15. The great\ncircle projected from the Palomar 15 tidal tails encompasses the Galactic\nCenter, while that for Eridanus passes close to four dwarf satellite galaxies,\none of which (Sculptor) is at a comparable distance to that of Eridanus.\n", "title": "Tidal tails around the outer halo globular clusters Eridanus and Palomar 15" }
null
null
null
null
true
null
3696
null
Default
null
null
null
{ "abstract": " We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in\nthe context of Reinforcement Learning (RL). SAC-X enables learning of complex\nbehaviors - from scratch - in the presence of multiple sparse reward signals.\nTo this end, the agent is equipped with a set of general auxiliary tasks, that\nit attempts to learn simultaneously via off-policy RL. The key idea behind our\nmethod is that active (learned) scheduling and execution of auxiliary policies\nallows the agent to efficiently explore its environment - enabling it to excel\nat sparse reward RL. Our experiments in several challenging robotic\nmanipulation settings demonstrate the power of our approach.\n", "title": "Learning by Playing - Solving Sparse Reward Tasks from Scratch" }
null
null
null
null
true
null
3697
null
Default
null
null
null
{ "abstract": " We describe a generalization of the Hierarchical Dirichlet Process Hidden\nMarkov Model (HDP-HMM) which is able to encode prior information that state\ntransitions are more likely between \"nearby\" states. This is accomplished by\ndefining a similarity function on the state space and scaling transition\nprobabilities by pair-wise similarities, thereby inducing correlations among\nthe transition distributions. We present an augmented data representation of\nthe model as a Markov Jump Process in which: (1) some jump attempts fail, and\n(2) the probability of success is proportional to the similarity between the\nsource and destination states. This augmentation restores conditional conjugacy\nand admits a simple Gibbs sampler. We evaluate the model and inference method\non a speaker diarization task and a \"harmonic parsing\" task using four-part\nchorale data, as well as on several synthetic datasets, achieving favorable\ncomparisons to existing models.\n", "title": "An Infinite Hidden Markov Model With Similarity-Biased Transitions" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3698
null
Validated
null
null
null
{ "abstract": " In the framework of shape constrained estimation, we review methods and works\ndone in convex set estimation. These methods mostly build on stochastic and\nconvex geometry, empirical process theory, functional analysis, linear\nprogramming, extreme value theory, etc. The statistical problems that we review\ninclude density support estimation, estimation of the level sets of densities\nor depth functions, nonparametric regression, etc. We focus on the estimation\nof convex sets under the Nikodym and Hausdorff metrics, which require different\ntechniques and, quite surprisingly, lead to very different results, in\nparticular in density support estimation. Finally, we discuss computational\nissues in high dimensions.\n", "title": "Methods for Estimation of Convex Sets" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
3699
null
Validated
null
null
null
{ "abstract": " In this paper, we have explored the effects of dissipation on the dynamics of
a charged bulk viscous collapsing cylindrical source which allows the outflow
of heat flux in the form of radiation. The Misner-Sharp formalism has been
implemented to derive the dynamical equations in terms of proper time and radial
derivatives. We have investigated the effects of charge and bulk viscosity on
the dynamics of the collapsing cylinder. To determine the effects of radial heat
flux, we have formulated the heat transport equations in the context of the
M$\\ddot{u}$ller-Israel-Stewart theory by assuming that the thermodynamic
viscous/heat coupling coefficients can be neglected within some approximations.
In our discussion, we have introduced the viscosity by the standard
(non-causal) thermodynamics approach. The dynamical equations have been coupled
with the heat transport equation, and the consequences of the resulting coupled
equations have been analyzed in detail.\n", "title": "Dynamics of Charged Bulk Viscous Collapsing Cylindrical Source With Heat Flux" }
null
null
null
null
true
null
3700
null
Default
null
null