Dataset fields:
  text: null
  inputs: dict ({"abstract": ..., "title": ...})
  prediction: null
  prediction_agent: null
  annotation: list (category labels, e.g. ["Computer Science"])
  annotation_agent: null
  multi_label: bool (1 distinct value)
  explanation: null
  id: string (length 1 to 5)
  metadata: null
  status: string (2 distinct values: Default, Validated)
  event_timestamp: null
  metrics: null
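To make the record layout above concrete, here is a minimal sketch, not part of the dataset's own tooling, of how one row with these fields might be represented and filtered using only the Python standard library. The Record class, the parse_record helper, and the abridged sample literal are hypothetical names and values introduced purely for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    """One dataset row, using the field names from the schema above."""
    id: str
    inputs: dict                 # {"abstract": "...", "title": "..."}
    annotation: Optional[list]   # e.g. ["Computer Science"], or None when unlabelled
    multi_label: bool
    status: str                  # "Default" or "Validated"

def parse_record(raw: dict) -> Record:
    """Build a Record from one raw row; absent or null fields get safe defaults."""
    return Record(
        id=str(raw["id"]),
        inputs=raw["inputs"],
        annotation=raw.get("annotation"),
        multi_label=bool(raw.get("multi_label", False)),
        status=raw.get("status", "Default"),
    )

# Abridged literal mirroring the first preview row below (values shortened here).
sample = {
    "inputs": {"title": "Linear-Size Hopsets with Small Hopbound, ...",
               "abstract": "For a positive parameter beta, ..."},
    "annotation": None,
    "multi_label": True,
    "id": "19601",
    "status": "Default",
}

record = parse_record(sample)
# Only rows whose status is "Validated" carry a human-confirmed label list.
if record.status == "Validated" and record.annotation:
    print(record.inputs["title"], record.annotation)
```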
Records (each record is shown as its inputs JSON followed by a line with its id, annotation, multi_label, and status; text, prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, and metrics are null in every record below):
{ "abstract": " For a positive parameter $\\beta$, the $\\beta$-bounded distance between a pair\nof vertices $u,v$ in a weighted undirected graph $G = (V,E,\\omega)$ is the\nlength of the shortest $u-v$ path in $G$ with at most $\\beta$ edges, aka {\\em\nhops}. For $\\beta$ as above and $\\epsilon>0$, a {\\em $(\\beta,\\epsilon)$-hopset}\nof $G = (V,E,\\omega)$ is a graph $G' =(V,H,\\omega_H)$ on the same vertex set,\nsuch that all distances in $G$ are $(1+\\epsilon)$-approximated by\n$\\beta$-bounded distances in $G\\cup G'$.\nHopsets are a fundamental graph-theoretic and graph-algorithmic construct,\nand they are widely used for distance-related problems in a variety of\ncomputational settings. Currently existing constructions of hopsets produce\nhopsets either with $\\Omega(n \\log n)$ edges, or with a hopbound\n$n^{\\Omega(1)}$. In this paper we devise a construction of {\\em linear-size}\nhopsets with hopbound $(\\log n)^{\\log^{(3)}n+O(1)}$. This improves the previous\nbound almost exponentially.\nWe also devise efficient implementations of our construction in PRAM and\ndistributed settings. The only existing PRAM algorithm \\cite{EN16} for\ncomputing hopsets with a constant (i.e., independent of $n$) hopbound requires\n$n^{\\Omega(1)}$ time. We devise a PRAM algorithm with polylogarithmic running\ntime for computing hopsets with a constant hopbound, i.e., our running time is\nexponentially better than the previous one. Moreover, these hopsets are also\nsignificantly sparser than their counterparts from \\cite{EN16}.\nWe use our hopsets to devise a distributed routing scheme that exhibits\nnear-optimal tradeoff between individual memory requirement\n$\\tilde{O}(n^{1/k})$ of vertices throughout preprocessing and routing phases of\nthe algorithm, and stretch $O(k)$, along with a near-optimal construction time\n$\\approx D + n^{1/2 + 1/k}$, where $D$ is the hop-diameter of the input graph.\n", "title": "Linear-Size Hopsets with Small Hopbound, and Distributed Routing with Low Memory" }
id: 19601 | annotation: null | multi_label: true | status: Default

{ "abstract": " Pipelines are used in a huge range of industrial processes involving fluids,\nand the ability to accurately predict properties of the flow through a pipe is\nof fundamental engineering importance. Armed with parallel MPI, Arnoldi and\nNewton--Krylov solvers, the Openpipeflow code can be used in a range of\nsettings, from large-scale simulation of highly turbulent flow, to the detailed\nanalysis of nonlinear invariant solutions (equilibria and periodic orbits) and\ntheir influence on the dynamics of the flow.\n", "title": "The Openpipeflow Navier--Stokes Solver" }
id: 19602 | annotation: null | multi_label: true | status: Default

{ "abstract": " The quadratic unconstrained binary optimization (QUBO) problem arises in\ndiverse optimization applications ranging from Ising spin problems to classical\nproblems in graph theory and binary discrete optimization. The use of\npreprocessing to transform the graph representing the QUBO problem into a\nsmaller equivalent graph is important for improving solution quality and time\nfor both exact and metaheuristic algorithms and is a step towards mapping large\nscale QUBO to hardware graphs used in quantum annealing computers. In an\nearlier paper (Lewis and Glover, 2016) a set of rules was introduced that\nachieved significant QUBO reductions as verified through computational testing.\nHere this work is extended with additional rules that provide further\nreductions that succeed in exactly solving 10% of the benchmark QUBO problems.\nAn algorithm and associated data structures to efficiently implement the entire\nset of rules is detailed and computational experiments are reported that\ndemonstrate their efficacy.\n", "title": "Logical and Inequality Implications for Reducing the Size and Complexity of Quadratic Unconstrained Binary Optimization Problems" }
id: 19603 | annotation: null | multi_label: true | status: Default

{ "abstract": " In order to address the economical dispatch problem in islanded microgrid,\nthis letter proposes an optimal criterion and two decentralized\neconomical-sharing schemes. The criterion is to judge whether global optimal\neconomical-sharing can be realized via a decentralized manner. On the one hand,\nif the system cost functions meet this criterion, the corresponding\ndecentralized droop method is proposed to achieve the global optimal dispatch.\nOtherwise, if the system does not meet this criterion, a modified method to\nachieve suboptimal dispatch is presented. The advantages of these methods are\nconvenient,effective and communication-less.\n", "title": "Optimal Decentralized Economical-sharing Criterion and Scheme for Microgrid" }
id: 19604 | annotation: ["Computer Science"] | multi_label: true | status: Validated

{ "abstract": " We develop an importance sampling (IS) type estimator for Bayesian joint\ninference on the model parameters and latent states of a class of hidden Markov\nmodels. The hidden state dynamics is a diffusion process and noisy observations\nare obtained at discrete points in time. We suppose that the diffusion dynamics\ncan not be simulated exactly and hence one must time-discretise the diffusion.\nOur approach is based on particle marginal Metropolis--Hastings, particle\nfilters, and multilevel Monte Carlo. The resulting IS type estimator leads to\ninference without a bias from the time-discretisation. We give convergence\nresults and recommend allocations for algorithm inputs. In contrast to existing\nunbiased methods requiring strong conditions on the diffusion and tailored\nsolutions, our method relies on standard Euler approximations of the diffusion.\nOur method is parallelisable, and can be computationally efficient. The\nuser-friendly approach is illustrated with two examples.\n", "title": "Unbiased inference for discretely observed hidden Markov model diffusions" }
id: 19605 | annotation: null | multi_label: true | status: Default

{ "abstract": " With a few hundred spacecraft launched to date with electric propulsion (EP),\nit is possible to conduct an epidemiological study of EP on orbit reliability.\nThe first objective of the present work was to undertake such a study and\nanalyze EP track record of on orbit anomalies and failures by different\ncovariates. The second objective was to provide a comparative analysis of EP\nfailure rates with those of chemical propulsion. After a thorough data\ncollection, 162 EP-equipped satellites launched between January 1997 and\nDecember 2015 were included in our dataset for analysis. Several statistical\nanalyses were conducted, at the aggregate level and then with the data\nstratified by severity of the anomaly, by orbit type, and by EP technology.\nMean Time To Anomaly (MTTA) and the distribution of the time to anomaly were\ninvestigated, as well as anomaly rates. The important findings in this work\ninclude the following: (1) Post-2005, EP reliability has outperformed that of\nchemical propulsion; (2) Hall thrusters have robustly outperformed chemical\npropulsion, and they maintain a small but shrinking reliability advantage over\ngridded ion engines. Other results were also provided, for example the\ndifferentials in MTTA of minor and major anomalies for gridded ion engines and\nHall thrusters. It was shown that: (3) Hall thrusters exhibit minor anomalies\nvery early on orbit, which might be indicative of infant anomalies, and thus\nwould benefit from better ground testing and acceptance procedures; (4) Strong\nevidence exists that EP anomalies (onset and likelihood) and orbit type are\ndependent, a dependence likely mediated by either the space environment or\ndifferences in thrusters duty cycles; (5) Gridded ion thrusters exhibit both\ninfant and wear-out failures, and thus would benefit from a reliability growth\nprogram that addresses both these types of problems.\n", "title": "Electric propulsion reliability: statistical analysis of on-orbit anomalies and comparative analysis of electric versus chemical propulsion failure rates" }
id: 19606 | annotation: null | multi_label: true | status: Default

{ "abstract": " There exists various proposals to detect cosmic strings from Cosmic Microwave\nBackground (CMB) or 21 cm temperature maps. Current proposals do not aim to\nfind the location of strings on sky maps, all of these approaches can be\nthought of as a statistic on a sky map. We propose a Bayesian interpretation of\ncosmic string detection and within that framework, we derive a connection\nbetween estimates of cosmic string locations and cosmic string tension $G\\mu$.\nWe use this Bayesian framework to develop a machine learning framework for\ndetecting strings from sky maps and outline how to implement this framework\nwith neural networks. The neural network we trained was able to detect and\nlocate cosmic strings on noiseless CMB temperature map down to a string tension\nof $G\\mu=5 \\times10^{-9}$ and when analyzing a CMB temperature map that does\nnot contain strings, the neural network gives a 0.95 probability that\n$G\\mu\\leq2.3\\times10^{-9}$.\n", "title": "A Bayesian Framework for Cosmic String Searches in CMB Maps" }
id: 19607 | annotation: null | multi_label: true | status: Default

{ "abstract": " Recent works on planetary migration show that the orbital structure of the\nKuiper belt can be very well reproduced if before the onset of the planetary\ninstability Neptune underwent a long-range planetesimal-driven migration up to\n$\\sim$28 au. However, considering that all giant planets should have been\ncaptured in mean motion resonances among themselves during the gas-disk phase,\nit is not clear whether such a very specific evolution for Neptune is possible,\nnor whether the instability could have happened at late times. Here, we first\ninvestigate which initial resonant configuration of the giant planets can be\ncompatible with Neptune being extracted from the resonant chain and migrating\nto $\\sim$28 au before that the planetary instability happened. We address the\nlate instability issue by investigating the conditions where the planets can\nstay in resonance for about 400 My. Our results indicate that this can happen\nonly in the case where the planetesimal disk is beyond a specific minimum\ndistance $\\delta_{stab}$ from Neptune. Then, if there is a sufficient amount of\ndust produced in the planetesimal disk, that drifts inwards, Neptune can enter\nin a slow dust-driven migration phase for hundreds of Mys until it reaches a\ncritical distance $\\delta_{mig}$ from the disk. From that point, faster\nplanetesimal-driven migration takes over and Neptune continues migrating\noutward until the instability happens. We conclude that, although an early\ninstability reproduces more easily the evolution of Neptune required to explain\nthe structure of the Kuiper belt, such evolution is also compatible with a late\ninstability.\n", "title": "Constraining the giant planets' initial configuration from their evolution: implications for the timing of the planetary instability" }
id: 19608 | annotation: null | multi_label: true | status: Default

{ "abstract": " The General AI Challenge is an initiative to encourage the wider artificial\nintelligence community to focus on important problems in building intelligent\nmachines with more general scope than is currently possible. The challenge\ncomprises of multiple rounds, with the first round focusing on gradual\nlearning, i.e. the ability to re-use already learned knowledge for efficiently\nlearning to solve subsequent problems. In this article, we will present details\nof the first round of the challenge, its inspiration and aims. We also outline\na more formal description of the challenge and present a preliminary analysis\nof its curriculum, based on ideas from computational mechanics. We believe,\nthat such formalism will allow for a more principled approach towards\ninvestigating tasks in the challenge, building new curricula and for\npotentially improving consequent challenge rounds.\n", "title": "General AI Challenge - Round One: Gradual Learning" }
id: 19609 | annotation: ["Computer Science"] | multi_label: true | status: Validated

{ "abstract": " A key enabler for optimizing business processes is accurately estimating the\nprobability distribution of a time series future given its past. Such\nprobabilistic forecasts are crucial for example for reducing excess inventory\nin supply chains. In this paper we propose DeepAR, a novel methodology for\nproducing accurate probabilistic forecasts, based on training an\nauto-regressive recurrent network model on a large number of related time\nseries. We show through extensive empirical evaluation on several real-world\nforecasting data sets that our methodology is more accurate than\nstate-of-the-art models, while requiring minimal feature engineering.\n", "title": "DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks" }
id: 19610 | annotation: null | multi_label: true | status: Default

{ "abstract": " Lorenzen's \"Algebraische und logistische Untersuchungen über freie\nVerbände\" appeared in 1951 in The journal of symbolic logic. These\n\"Investigations\" have immediately been recognised as a landmark in the history\nof infinitary proof theory, but their approach and method of proof have not\nbeen incorporated into the corpus of proof theory. More precisely, Lorenzen\nproves the admissibility of cut by double induction, on the cut formula and on\nthe complexity of the derivations, without using any ordinal assignment,\ncontrary to the presentation of cut elimination in most standard texts on proof\ntheory. This translation has the intent of giving a new impetus to their\nreception.\nThe \"Investigations\" are best known for providing a constructive proof of\nconsistency for ramified type theory without axiom of reducibility. They do so\nby showing that it is a part of a trivially consistent \"inductive calculus\"\nthat describes our knowledge of arithmetic without detour. The proof resorts\nonly to the inductive definition of formulas and theorems.\nThey propose furthermore a definition of a semilattice, of a distributive\nlattice, of a pseudocomplemented semilattice, and of a countably complete\nboolean lattice as deductive calculuses, and show how to present them for\nconstructing the respective free object over a given preordered set.\nThis translation is published with the kind permission of Lorenzen's\ndaughter, Jutta Reinhardt.\n", "title": "Algebraic and logistic investigations on free lattices" }
id: 19611 | annotation: null | multi_label: true | status: Default

{ "abstract": " Online interactive recommender systems strive to promptly suggest to\nconsumers appropriate items (e.g., movies, news articles) according to the\ncurrent context including both the consumer and item content information.\nHowever, such context information is often unavailable in practice for the\nrecommendation, where only the users' interaction data on items can be\nutilized. Moreover, the lack of interaction records, especially for new users\nand items, worsens the performance of recommendation further. To address these\nissues, collaborative filtering (CF), one of the recommendation techniques\nrelying on the interaction data only, as well as the online multi-armed bandit\nmechanisms, capable of achieving the balance between exploitation and\nexploration, are adopted in the online interactive recommendation settings, by\nassuming independent items (i.e., arms). Nonetheless, the assumption rarely\nholds in reality, since the real-world items tend to be correlated with each\nother (e.g., two articles with similar topics). In this paper, we study online\ninteractive collaborative filtering problems by considering the dependencies\namong items. We explicitly formulate the item dependencies as the clusters on\narms, where the arms within a single cluster share the similar latent topics.\nIn light of the topic modeling techniques, we come up with a generative model\nto generate the items from their underlying topics. Furthermore, an efficient\nonline algorithm based on particle learning is developed for inferring both\nlatent parameters and states of our model. Additionally, our inferred model can\nbe naturally integrated with existing multi-armed selection strategies in the\nonline interactive collaborating setting. Empirical studies on two real-world\napplications, online recommendations of movies and news, demonstrate both the\neffectiveness and efficiency of the proposed approach.\n", "title": "Online Interactive Collaborative Filtering Using Multi-Armed Bandit with Dependent Arms" }
id: 19612 | annotation: null | multi_label: true | status: Default

{ "abstract": " Low-rank matrix completion (MC) has achieved great success in many real-world\ndata applications. A latent feature model formulation is usually employed and,\nto improve prediction performance, the similarities between latent variables\ncan be exploited by pairwise learning, e.g., the graph regularized matrix\nfactorization (GRMF) method. However, existing GRMF approaches often use a\nsquared L2 norm to measure the pairwise difference, which may be overly\ninfluenced by dissimilar pairs and lead to inferior prediction. To fully\nempower pairwise learning for matrix completion, we propose a general\noptimization framework that allows a rich class of (non-)convex pairwise\npenalty functions. A new and efficient algorithm is further developed to\nuniformly solve the optimization problem, with a theoretical convergence\nguarantee. In an important situation where the latent variables form a small\nnumber of subgroups, its statistical guarantee is also fully characterized. In\nparticular, we theoretically characterize the complexity-regularized maximum\nlikelihood estimator, as a special case of our framework. It has a better error\nbound when compared to the standard trace-norm regularized matrix completion.\nWe conduct extensive experiments on both synthetic and real datasets to\ndemonstrate the superior performance of this general framework.\n", "title": "Learning Latent Features with Pairwise Penalties in Matrix Completion" }
id: 19613 | annotation: null | multi_label: true | status: Default

{ "abstract": " We obtain results on mixing for a large class of (not necessarily Markov)\ninfinite measure semiflows and flows. Erickson proved, amongst other things, a\nstrong renewal theorem in the corresponding i.i.d. setting. Using operator\nrenewal theory, we extend Erickson's methods to the deterministic (i.e.\nnon-i.i.d.) continuous time setting and obtain results on mixing as a\nconsequence.\nOur results apply to intermittent semiflows and flows of Pomeau-Manneville\ntype (both Markov and nonMarkov), and to semiflows and flows over\nCollet-Eckmann maps with nonintegrable roof function.\n", "title": "Renewal theorems and mixing for non Markov flows with infinite measure" }
id: 19614 | annotation: null | multi_label: true | status: Default

{ "abstract": " Many cognitive, sensory and motor processes have correlates in oscillatory\nneural sources, which are embedded as a subspace into the recorded brain\nsignals. Decoding such processes from noisy\nmagnetoencephalogram/electroencephalogram (M/EEG) signals usually requires the\nuse of data-driven analysis methods. The objective evaluation of such decoding\nalgorithms on experimental raw signals, however, is a challenge: the amount of\navailable M/EEG data typically is limited, labels can be unreliable, and raw\nsignals often are contaminated with artifacts. The latter is specifically\nproblematic, if the artifacts stem from behavioral confounds of the oscillatory\nneural processes of interest.\nTo overcome some of these problems, simulation frameworks have been\nintroduced for benchmarking decoding methods. Generating artificial brain\nsignals, however, most simulation frameworks make strong and partially\nunrealistic assumptions about brain activity, which limits the generalization\nof obtained results to real-world conditions.\nIn the present contribution, we thrive to remove many shortcomings of current\nsimulation frameworks and propose a versatile alternative, that allows for\nobjective evaluation and benchmarking of novel data-driven decoding methods for\nneural signals. Its central idea is to utilize post-hoc labelings of arbitrary\nM/EEG recordings. This strategy makes it paradigm-agnostic and allows to\ngenerate comparatively large datasets with noiseless labels. Source code and\ndata of the novel simulation approach are made available for facilitating its\nadoption.\n", "title": "Post-hoc labeling of arbitrary EEG recordings for data-efficient evaluation of neural decoding methods" }
id: 19615 | annotation: null | multi_label: true | status: Default

{ "abstract": " In this paper, we sharpen earlier work of the first author, Luca and\nMulholland, showing that the Diophantine equation $$ A^3+B^3 = q^\\alpha C^p, \\,\n\\, ABC \\neq 0, \\, \\, \\gcd (A,B) =1, $$ has, for \"most\" primes $q$ and suitably\nlarge prime exponents $p$, no solutions. We handle a number of (presumably\ninfinite) families where no such conclusion was hitherto known. Through further\napplication of certain {\\it symplectic criteria}, we are able to make some\nconditional statements about still more values of $q$, a sample such result is\nthat, for all but $O(\\sqrt{x}/\\log x)$ primes $q$ up to $x$, the equation $$\nA^3 + B^3 = q C^p. $$ has no solutions in coprime, nonzero integers $A, B$\nand $C$, for a positive proportion of prime exponents $p$.\n", "title": "Sums of two cubes as twisted perfect powers, revisited" }
id: 19616 | annotation: null | multi_label: true | status: Default

{ "abstract": " Bilevel optimization is defined as a mathematical program, where an\noptimization problem contains another optimization problem as a constraint.\nThese problems have received significant attention from the mathematical\nprogramming community. Only limited work exists on bilevel problems using\nevolutionary computation techniques; however, recently there has been an\nincreasing interest due to the proliferation of practical applications and the\npotential of evolutionary algorithms in tackling these problems. This paper\nprovides a comprehensive review on bilevel optimization from the basic\nprinciples to solution strategies; both classical and evolutionary. A number of\npotential application problems are also discussed. To offer the readers\ninsights on the prominent developments in the field of bilevel optimization, we\nhave performed an automated text-analysis of an extended list of papers\npublished on bilevel optimization to date. This paper should motivate\nevolutionary computation researchers to pay more attention to this practical\nyet challenging area.\n", "title": "A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications" }
id: 19617 | annotation: null | multi_label: true | status: Default

{ "abstract": " This paper proposes a speaker recognition (SRE) task with trivial speech\nevents, such as cough and laugh. These trivial events are ubiquitous in\nconversations and less subjected to intentional change, therefore offering\nvaluable particularities to discover the genuine speaker from disguised speech.\nHowever, trivial events are often short and idiocratic in spectral patterns,\nmaking SRE extremely difficult. Fortunately, we found a very powerful deep\nfeature learning structure that can extract highly speaker-sensitive features.\nBy employing this tool, we studied the SRE performance on three types of\ntrivial events: cough, laugh and \"Wei\" (a short Chinese \"Hello\"). The results\nshow that there is rich speaker information within these trivial events, even\nfor cough that is intuitively less speaker distinguishable. With the deep\nfeature approach, the EER can reach 10%-14% with the three trivial events,\ndespite their extremely short durations (0.2-1.0 seconds).\n", "title": "Speaker Recognition with Cough, Laugh and \"Wei\"" }
id: 19618 | annotation: null | multi_label: true | status: Default

{ "abstract": " Governing equations for two-dimensional inviscid free-surface flows with\nconstant vorticity over arbitrary non-uniform bottom profile are presented in\nexact and compact form using conformal variables. An efficient and very\naccurate numerical method for this problem is developed.\n", "title": "Explicit equations for two-dimensional water waves with constant vorticity" }
id: 19619 | annotation: null | multi_label: true | status: Default

{ "abstract": " We prove a global limiting absorption principle on the entire real line for\nfree, massless Dirac operators $H_0 = \\alpha \\cdot (-i \\nabla)$ for all space\ndimensions $n \\in \\mathbb{N}$, $n \\geq 2$. This is a new result for all\ndimensions other than three, in particular, it applies to the two-dimensional\ncase which is known to be of some relevance in applications to graphene.\nWe also prove an essential self-adjointness result for first-order\nmatrix-valued differential operators with Lipschitz coefficients.\n", "title": "On the Global Limiting Absorption Principle for Massless Dirac Operators" }
id: 19620 | annotation: null | multi_label: true | status: Default

{ "abstract": " In this paper we explore the role of duality principles within the problem of\nrotation averaging, a fundamental task in a wide range of computer vision\napplications. In its conventional form, rotation averaging is stated as a\nminimization over multiple rotation constraints. As these constraints are\nnon-convex, this problem is generally considered challenging to solve globally.\nWe show how to circumvent this difficulty through the use of Lagrangian\nduality. While such an approach is well-known it is normally not guaranteed to\nprovide a tight relaxation. Based on spectral graph theory, we analytically\nprove that in many cases there is no duality gap unless the noise levels are\nsevere. This allows us to obtain certifiably global solutions to a class of\nimportant non-convex problems in polynomial time.\nWe also propose an efficient, scalable algorithm that out-performs general\npurpose numerical solvers and is able to handle the large problem instances\ncommonly occurring in structure from motion settings. The potential of this\nproposed method is demonstrated on a number of different problems, consisting\nof both synthetic and real-world data.\n", "title": "Rotation Averaging and Strong Duality" }
id: 19621 | annotation: null | multi_label: true | status: Default

{ "abstract": " A looming question that must be solved before robotic plant phenotyping\ncapabilities can have significant impact to crop improvement programs is\nscalability. High Throughput Phenotyping (HTP) uses robotic technologies to\nanalyze crops in order to determine species with favorable traits, however, the\ncurrent practices rely on exhaustive coverage and data collection from the\nentire crop field being monitored under the breeding experiment. This works\nwell in relatively small agricultural fields but can not be scaled to the\nlarger ones, thus limiting the progress of genetics research. In this work, we\npropose an active learning algorithm to enable an autonomous system to collect\nthe most informative samples in order to accurately learn the distribution of\nphenotypes in the field with the help of a Gaussian Process model. We\ndemonstrate the superior performance of our proposed algorithm compared to the\ncurrent practices on sorghum phenotype data collection.\n", "title": "Active Learning with Gaussian Processes for High Throughput Phenotyping" }
id: 19622 | annotation: null | multi_label: true | status: Default

{ "abstract": " Autoignition experiments of stoichiometric mixtures of s-, t-, and i-butanol\nin air have been performed using a heated rapid compression machine (RCM). At\ncompressed pressures of 15 and 30 bar and for compressed temperatures in the\nrange of 715-910 K, no evidence of a negative temperature coefficient region in\nterms of ignition delay response is found. The present experimental results are\nalso compared with previously reported RCM data of n-butanol in air. The order\nof reactivity of the butanols is\nn-butanol>s-butanol$\\approx$i-butanol>t-butanol at the lower pressure, but\nchanges to n-butanol>t-butanol>s-butanol>i-butanol at higher pressure. In\naddition, t-butanol shows pre-ignition heat release behavior, which is\nespecially evident at higher pressures. To help identify the controlling\nchemistry leading to this pre-ignition heat release, off-stoichiometric\nexperiments are further performed at 30 bar compressed pressure, for t-butanol\nat $\\phi$ = 0.5 and $\\phi$ = 2.0 in air. For these experiments, higher fuel\nloading (i.e. $\\phi$ = 2.0) causes greater pre-ignition heat release (as\nindicated by greater pressure rise) than the stoichiometric or $\\phi$ = 0.5\ncases. Comparison of the experimental ignition delays with the simulated\nresults using two literature kinetic mechanisms shows generally good agreement,\nand one mechanism is further used to explore and compare the fuel decomposition\npathways of the butanol isomers. Using this mechanism, the importance of peroxy\nchemistry in the autoignition of the butanol isomers is highlighted and\ndiscussed.\n", "title": "Comparative Autoignition Trends in the Butanol Isomers at Elevated Pressure" }
id: 19623 | annotation: null | multi_label: true | status: Default

{ "abstract": " Crowdsourced GPS probe data has become a major source of real-time traffic\ninformation applications. In addition to traditional traveler advisory systems\nsuch as dynamic message signs (DMS) and 511 systems, probe data is being used\nfor automatic incident detection, Integrated Corridor Management (ICM), end of\nqueue warning systems, and mobility-related smartphone applications. Several\nprivate sector vendors offer minute by minute network-wide travel time and\nspeed probe data. The quality of such data in terms of deviation of the\nreported travel time and speeds from ground-truth has been extensively studied\nin recent years, and as a result concerns over the accuracy of probe data has\nmostly faded away. However, the latency of probe data, defined as the lag\nbetween the time that disturbance in traffic speed is reported in the\noutsourced data feed, and the time that the traffic is perturbed, has become a\nsubject of interest. The extent of latency of probe data for real-time\napplications is critical, so it is important to have a good understanding of\nthe amount of latency and its influencing factors. This paper uses high-quality\nindependent Bluetooth/Wi-Fi re-identification data collected on multiple\nfreeway segments in three different states, to measure the latency of the\nvehicle probe data provided by three major vendors. The statistical\ndistribution of the latency and its sensitivity to speed slowdown and recovery\nperiods are discussed.\n", "title": "A cross-vendor and cross-state analysis of the GPS-probe data latency" }
id: 19624 | annotation: null | multi_label: true | status: Default

{ "abstract": " Generating and detection coherent high-frequency heat-carrying phonons has\nbeen a great topic of interest in recent years. While there have been\nsuccessful attempts in generating and observing coherent phonons, rigorous\ntechniques to characterize and detect these phonon coherence in a crystalline\nmaterial have been lagging compared to what has been achieved for photons. One\nmain challenge is a lack of detailed understanding of how detection signals for\nphonons can be related to coherence. The quantum theory of photoelectric\ndetection has greatly advanced the ability to characterize photon coherence in\nthe last century and a similar theory for phonon detection is necessary. Here,\nwe re-examine the optical sideband fluorescence technique that has been used\ndetect high frequency phonons in materials with optically active defects. We\napply the quantum theory of photodetection to the sideband technique and\npropose signatures in sideband photon-counting statistics and second-order\ncorrelation measurement of sideband signals that indicates the degree of phonon\ncoherence. Our theory can be implemented in recently performed experiments to\nbridge the gap of determining phonon coherence to be on par with that of\nphotons.\n", "title": "Determining Phonon Coherence Using Photon Sideband Detection" }
id: 19625 | annotation: null | multi_label: true | status: Default

{ "abstract": " For classical many-body systems, our recent study reveals that expectation\nvalue of internal energy, structure, and free energy can be well characterized\nby a single specially-selected microscopic structure. This finding relies on\nthe fact that configurational density of states (CDOS) for typical classical\nsystem before applying interatomic interaction can be well characterized by\nmultidimensional gaussian distribution. Although gaussian distribution is an\nwell-known and widely-used function in diverse fields, it is quantitatively\nunclear why the CDOS takes gaussian when system size gets large, even for\nprojected CDOS onto a single chosen coordination. Here we demonstrate that for\nequiatomic binary system, one-dimensional CDOS along coordination of pair\ncorrelation can be reasonably described by gaussian distribution under an\nappropriate condition, whose deviation from real CDOS mainly reflects the\nexistence of triplet closed link consisting of the pair figure considered. The\npresent result thus significantly makes advance in analytic determination of\nthe special microscopic states to characterized macroscopic physical property\nin equilibrium state.\n", "title": "Landscape of Configurational Density of States for Discrete Large Systems" }
id: 19626 | annotation: null | multi_label: true | status: Default

{ "abstract": " We consider a generalized Dirac operator on a compact stratified space with\nan iterated cone-edge metric. Assuming a spectral Witt condition, we prove its\nessential self-adjointness and identify its domain and the domain of its square\nwith weighted edge Sobolev spaces. This sharpens previous results where the\nminimal domain is shown only to be a subset of an intersection of weighted edge\nSobolev spaces. Our argument does not rely on microlocal techniques and is very\nexplicit. The novelty of our approach is the use of an abstract functional\nanalytic notion of interpolation scales. Our results hold for the Gauss-Bonnet\nand spin Dirac operators satisfying a spectral Witt condition.\n", "title": "On the domain of Dirac and Laplace type operators on stratified spaces" }
id: 19627 | annotation: null | multi_label: true | status: Default

{ "abstract": " The discovery of influential entities in all kinds of networks (e.g. social,\ndigital, or computer) has always been an important field of study. In recent\nyears, Online Social Networks (OSNs) have been established as a basic means of\ncommunication and often influencers and opinion makers promote politics,\nevents, brands or products through viral content. In this work, we present a\nsystematic review across i) online social influence metrics, properties, and\napplications and ii) the role of semantic in modeling OSNs information. We end\nup with the conclusion that both areas can jointly provide useful insights\ntowards the qualitative assessment of viral user-generated content, as well as\nfor modeling the dynamic properties of influential content and its flow\ndynamics.\n", "title": "Modeling Influence with Semantics in Social Networks: a Survey" }
id: 19628 | annotation: null | multi_label: true | status: Default

{ "abstract": " We consider learning of submodular functions from data. These functions are\nimportant in machine learning and have a wide range of applications, e.g. data\nsummarization, feature selection and active learning. Despite their\ncombinatorial nature, submodular functions can be maximized approximately with\nstrong theoretical guarantees in polynomial time. Typically, learning the\nsubmodular function and optimization of that function are treated separately,\ni.e. the function is first learned using a proxy objective and subsequently\nmaximized. In contrast, we show how to perform learning and optimization\njointly. By interpreting the output of greedy maximization algorithms as\ndistributions over sequences of items and smoothening these distributions, we\nobtain a differentiable objective. In this way, we can differentiate through\nthe maximization algorithms and optimize the model to work well with the\noptimization algorithm. We theoretically characterize the error made by our\napproach, yielding insights into the tradeoff of smoothness and accuracy. We\ndemonstrate the effectiveness of our approach for jointly learning and\noptimizing on synthetic maximum cut data, and on real world applications such\nas product recommendation and image collection summarization.\n", "title": "Differentiable Submodular Maximization" }
id: 19629 | annotation: null | multi_label: true | status: Default

{ "abstract": " An irreducible weight module of an affine Kac-Moody algebra $\\mathfrak{g}$ is\ncalled dense if its support is equal to a coset in $\\mathfrak{h}^{*}/Q$.\nFollowing a conjecture of V. Futorny about affine Kac-Moody algebras\n$\\mathfrak{g}$, an irreducible weight $\\mathfrak{g}$-module is dense if and\nonly if it is cuspidal (i.e. not a quotient of an induced module). The\nconjecture is confirmed for $\\mathfrak{g}=A_{2}^{\\left(1\\right)}$,\n$A_{3}^{\\left(1\\right)}$ and$A_{4}^{\\left(1\\right)}$ and a classification of\nthe supports of the irreducible weight $\\mathfrak{g}$-modules obtained. For all\n$A_{n}^{\\left(1\\right)}$ the problem is reduced to finding primitive elements\nfor only finitely many cases, all lying below a certain bound. For the\nleft-over finitely many cases an algorithm is proposed, which leads to the\nsolution of Futorny's conjecture for the cases $A_{2}^{\\left(1\\right)}$ and\n$A_{3}^{\\left(1\\right)}$. Yet, the solution of the case\n$A_{4}^{\\left(1\\right)}$ required additional combinatorics.\nFor the proofs, a new category of hypoabelian Lie subalgebras,\npre-prosolvable subalgebras, and a subclass thereof, quasicone subalgebras, is\nintroduced and its tropical matrix algebra structure outlined.\n", "title": "On the Support of Weight Modules for Affine Kac-Moody-Algebras" }
id: 19630 | annotation: null | multi_label: true | status: Default

{ "abstract": " We study the behavior of a real $p$-dimensional Wishart random matrix with\n$n$ degrees of freedom when $n,p\\rightarrow\\infty$ but $p/n\\rightarrow 0$. We\nestablish the existence of phase transitions when $p$ grows at the order\n$n^{(K+1)/(K+3)}$ for every $k\\in\\mathbb{N}$, and derive expressions for\napproximating densities between every two phase transitions. To do this, we\nmake use of a novel tool we call the G-transform of a distribution, which is\nclosely related to the characteristic function. We also derive an extension of\nthe $t$-distribution to the real symmetric matrices, which naturally appears as\nthe conjugate distribution to the Wishart under a G-transformation, and show\nits empirical spectral distribution obeys a semicircle law when $p/n\\rightarrow\n0$. Finally, we discuss how the phase transitions of the Wishart distribution\nmight originate from changes in rates of convergence of symmetric $t$\nstatistics.\n", "title": "The middle-scale asymptotics of Wishart matrices" }
id: 19631 | annotation: null | multi_label: true | status: Default

{ "abstract": " In this paper, we propose an online learning algorithm based on a\nRao-Blackwellized particle filter for spatial concept acquisition and mapping.\nWe have proposed a nonparametric Bayesian spatial concept acquisition model\n(SpCoA). We propose a novel method (SpCoSLAM) integrating SpCoA and FastSLAM in\nthe theoretical framework of the Bayesian generative model. The proposed method\ncan simultaneously learn place categories and lexicons while incrementally\ngenerating an environmental map. Furthermore, the proposed method has scene\nimage features and a language model added to SpCoA. In the experiments, we\ntested online learning of spatial concepts and environmental maps in a novel\nenvironment of which the robot did not have a map. Then, we evaluated the\nresults of online learning of spatial concepts and lexical acquisition. The\nexperimental results demonstrated that the robot was able to more accurately\nlearn the relationships between words and the place in the environmental map\nincrementally by using the proposed method.\n", "title": "Online Spatial Concept and Lexical Acquisition with Simultaneous Localization and Mapping" }
id: 19632 | annotation: null | multi_label: true | status: Default

{ "abstract": " We device a new method to calculate a large number of Mellin moments of\nsingle scale quantities using the systems of differential and/or difference\nequations obtained by integration-by-parts identities between the corresponding\nFeynman integrals of loop corrections to physical quantities. These scalar\nquantities have a much simpler mathematical structure than the complete\nquantity. A sufficiently large set of moments may even allow the analytic\nreconstruction of the whole quantity considered, holding in case of first order\nfactorizing systems. In any case, one may derive highly precise numerical\nrepresentations in general using this method, which is otherwise completely\nanalytic.\n", "title": "The Method of Arbitrarily Large Moments to Calculate Single Scale Processes in Quantum Field Theory" }
id: 19633 | annotation: null | multi_label: true | status: Default

{ "abstract": " The escape mechanism of orbits in a star cluster rotating around its parent\ngalaxy in a circular orbit is investigated. A three degrees of freedom model is\nused for describing the dynamical properties of the Hamiltonian system. The\ngravitational field of the star cluster is represented by a smooth and\nspherically symmetric Plummer potential. We distinguish between ordered and\nchaotic orbits as well as between trapped and escaping orbits, considering only\nunbounded motion for several energy levels. The Smaller Alignment Index (SALI)\nmethod is used for determining the regular or chaotic nature of the orbits. The\nbasins of escape are located and they are also correlated with the\ncorresponding escape time of the orbits. Areas of bounded regular or chaotic\nmotion and basins of escape were found to coexist in the $(x,z)$ plane. The\nproperties of the normally hyperbolic invariant manifolds (NHIMs), located in\nthe vicinity of the index-1 Lagrange points $L_1$ and $L_2$, are also explored.\nThese manifolds are of paramount importance as they control the flow of stars\nover the saddle points, while they also trigger the formation of tidal tails\nobserved in star clusters. Bifurcation diagrams of the Lyapunov periodic orbits\nas well as restrictions of the Poincaré map to the NHIMs are deployed for\nelucidating the dynamics in the neighbourhood of the saddle points. The\nextended tidal tails, or tidal arms, formed by stars with low velocity which\nescape through the Lagrange points are monitored. The numerical results of this\nwork are also compared with previous related work.\n", "title": "Unraveling the escape dynamics and the nature of the normally hyperbolic invariant manifolds in tidally limited star clusters" }
id: 19634 | annotation: null | multi_label: true | status: Default

{ "abstract": " We present Atacama Large Millimeter/ sub-millimeter Array (ALMA) observations\nof V883 Ori, an FU Ori object. We describe the molecular outflow and envelope\nof the system based on the $^{12}$CO and $^{13}$CO emissions, which together\ntrace a bipolar molecular outflow. The C$^{18}$O emission traces the rotational\nmotion of the circumstellar disk. From the $^{12}$CO blue-shifted emission, we\nestimate a wide opening angle of $\\sim$ 150$^{^{\\circ}}$ for the outflow\ncavities. Also, we find that the outflow is very slow (characteristic velocity\nof only 0.65 km~s$^{-1}$), which is unique for an FU Ori object. We calculate\nthe kinematic properties of the outflow in the standard manner using the\n$^{12}$CO and $^{13}$CO emissions. In addition, we present a P Cygni profile\nobserved in the high-resolution optical spectrum, evidence of a wind driven by\nthe accretion and being the cause for the particular morphology of the\noutflows. We discuss the implications of our findings and the rise of these\nslow outflows during and/or after the formation of a rotationally supported\ndisk.\n", "title": "The ALMA Early Science View of FUor/EXor objects. III. The Slow and Wide Outflow of V883 Ori" }
id: 19635 | annotation: null | multi_label: true | status: Default

{ "abstract": " We have modelled the evolution of cometary HII regions produced by zero-age\nmain-sequence stars of O and B spectral types, which are driving strong winds\nand are born off-centre from spherically symmetric cores with power-law\n($\\alpha = 2$) density slopes. A model parameter grid was produced that spans\nstellar mass, age and core density. Exploring this parameter space we\ninvestigated limb-brightening, a feature commonly seen in cometary HII regions.\nWe found that stars with mass $M_\\star \\geq 12\\, \\mathrm{M}_\\odot$ produce this\nfeature. Our models have a cavity bounded by a contact discontinuity separating\nhot shocked wind and ionised ambient gas that is similar in size to the\nsurrounding HII region. Due to early pressure confinement we did not see shocks\noutside of the contact discontinuity for stars with $M_\\star \\leq 40\\,\n\\mathrm{M}_\\odot$, but the cavities were found to continue to grow. The cavity\nsize in each model plateaus as the HII region stagnates. The spectral energy\ndistributions of our models are similar to those from identical stars evolving\nin uniform density fields. The turn-over frequency is slightly lower in our\npower-law models due to a higher proportion of low density gas covered by the\nHII regions.\n", "title": "Hydrodynamical models of cometary HII regions" }
id: 19636 | annotation: ["Physics"] | multi_label: true | status: Validated

{ "abstract": " This paper uses a classical approach to feature selection: minimization of a\ncost function applied on estimated joint distributions. However, the search\nspace in which such minimization is performed is extended. In the original\nformulation, the search space is the Boolean lattice of features sets (BLFS),\nwhile, in the present formulation, it is a collection of Boolean lattices of\nordered pairs (features, associated value) (CBLOP), indexed by the elements of\nthe BLFS. In this approach, we may not only select the features that are most\nrelated to a variable Y, but also select the values of the features that most\ninfluence the variable or that are most prone to have a specific value of Y. A\nlocal formulation of Shanon's mutual information is applied on a CBLOP to\nselect features, namely, the Local Lift Dependence Scale, an scale for\nmeasuring variable dependence in multiple resolutions. The main contribution of\nthis paper is to define and apply this local measure, which permits to analyse\nlocal properties of joint distributions that are neglected by the classical\nShanon's global measure. The proposed approach is applied to a dataset\nconsisting of student performances on a university entrance exam, as well as on\nundergraduate courses. The approach is also applied to two datasets of the UCI\nMachine Learning Repository.\n", "title": "Feature Selection based on the Local Lift Dependence Scale" }
id: 19637 | annotation: null | multi_label: true | status: Default

{ "abstract": " We match analytic results to numerical calculations to provide a detailed\npicture of the metal-insulator and topological transitions found in density\nfunctional plus cluster dynamical mean-field calculations of pyrochlore\niridates. We discuss the transition from Weyl metal to Weyl semimetal regimes,\nand then analyse in detail the properties of the Weyl semimetal phase and its\nevolution into the topologically trivial insulator. The energy scales in the\nWeyl semimetal phase are found to be very small, as are the anisotropy\nparameters. The electronic structure can to a good approximation be described\nas `Weyl rings' and one of the two branches that contributes to the Weyl bands\nis essentially flat, leading to enhanced susceptibilities. The optical\nlongitudinal and Hall conductivities are determined; the frequency dependence\nincludes pronounced features that reveal the basic energy scales of the Weyl\nsemimetal phase.\n", "title": "Weyl Rings and enhanced susceptibilities in Pyrochlore Iridates: $k\\cdot p$ Analysis of Cluster Dynamical Mean-Field Theory Results" }
id: 19638 | annotation: null | multi_label: true | status: Default

{ "abstract": " We tackle the problem of template estimation when data have been randomly\ndeformed under a group action in the presence of noise. In order to estimate\nthe template, one often minimizes the variance when the influence of the\ntransformations have been removed (computation of the Fr{é}chet mean in the\nquotient space). The consistency bias is defined as the distance (possibly\nzero) between the orbit of the template and the orbit of one element which\nminimizes the variance. In the first part, we restrict ourselves to isometric\ngroup action, in this case the Hilbertian distance is invariant under the group\naction. We establish an asymptotic behavior of the consistency bias which is\nlinear with respect to the noise level. As a result the inconsistency is\nunavoidable as soon as the noise is enough. In practice, template estimation\nwith a finite sample is often done with an algorithm called \"max-max\". In the\nsecond part, also in the case of isometric group finite, we show the\nconvergence of this algorithm to an empirical Karcher mean. Our numerical\nexperiments show that the bias observed in practice can not be attributed to\nthe small sample size or to a convergence problem but is indeed due to the\npreviously studied inconsistency. In a third part, we also present some\ninsights of the case of a non invariant distance with respect to the group\naction. We will see that the inconsistency still holds as soon as the noise\nlevel is large enough. Moreover we prove the inconsistency even when a\nregularization term is added.\n", "title": "Inconsistency of Template Estimation by Minimizing of the Variance/Pre-Variance in the Quotient Space" }
id: 19639 | annotation: null | multi_label: true | status: Default

{ "abstract": " Bayesian inverse modeling is important for a better understanding of\nhydrological processes. However, this approach can be computationally demanding\nas it usually requires a large number of model evaluations. To address this\nissue, one can take advantage of surrogate modeling techniques. Nevertheless,\nwhen approximation error of the surrogate model is neglected in inverse\nmodeling, the inversion result will be biased. In this paper, we develop a\nsurrogate-based Bayesian inversion framework that explicitly quantifies and\ngradually reduces the approximation error of the surrogate. Specifically, two\nstrategies are proposed and compared. The first strategy works by obtaining an\nensemble of sparse polynomial chaos expansion (PCE) surrogates with Markov\nchain Monte Carlo sampling, while the second one uses Gaussian process (GP) to\nsimulate the approximation error of a single sparse PCE surrogate. The two\nstrategies can also be applied with other surrogates, thus they have general\napplicability. By adaptively refining the surrogate over the posterior\ndistribution, we can gradually reduce the surrogate approximation error to a\nsmall level. Demonstrated with three case studies involving\nhigh-dimensionality, multi-modality and a real-world application, respectively,\nit is found that both strategies can reduce the bias introduced by surrogate\nmodeling, while the second strategy has a better performance as it integrates\ntwo methods (i.e., sparse PCE and GP) that complement each other.\n", "title": "Surrogate-Based Bayesian Inverse Modeling of the Hydrological System: An Adaptive Approach Considering Surrogate Approximation Erro" }
id: 19640 | annotation: null | multi_label: true | status: Default

{ "abstract": " One-Class Classification (OCC) has been prime concern for researchers and\neffectively employed in various disciplines. But, traditional methods based\none-class classifiers are very time consuming due to its iterative process and\nvarious parameters tuning. In this paper, we present six OCC methods based on\nextreme learning machine (ELM) and Online Sequential ELM (OSELM). Our proposed\nclassifiers mainly lie in two categories: reconstruction based and boundary\nbased, which supports both types of learning viz., online and offline learning.\nOut of various proposed methods, four are offline and remaining two are online\nmethods. Out of four offline methods, two methods perform random feature\nmapping and two methods perform kernel feature mapping. Kernel feature mapping\nbased approaches have been tested with RBF kernel and online version of\none-class classifiers are tested with both types of nodes viz., additive and\nRBF. It is well known fact that threshold decision is a crucial factor in case\nof OCC, so, three different threshold deciding criteria have been employed so\nfar and analyses the effectiveness of one threshold deciding criteria over\nanother. Further, these methods are tested on two artificial datasets to check\nthere boundary construction capability and on eight benchmark datasets from\ndifferent discipline to evaluate the performance of the classifiers. Our\nproposed classifiers exhibit better performance compared to ten traditional\none-class classifiers and ELM based two one-class classifiers. Through proposed\none-class classifiers, we intend to expand the functionality of the most used\ntoolbox for OCC i.e. DD toolbox. All of our methods are totally compatible with\nall the present features of the toolbox.\n", "title": "On The Construction of Extreme Learning Machine for Online and Offline One-Class Classification - An Expanded Toolbox" }
id: 19641 | annotation: ["Computer Science", "Statistics"] | multi_label: true | status: Validated

{ "abstract": " INTRODUCTION\nThis papers deals with partial differential equations of second order,\nlinear, with constant and not constant coefficients, in two variables, which\nadmit real characteristics. I face the study of PDEs with the mentality of the\napplied physicist, but with a weakness for formalization: look inside the black\nbox of the formulas, try to compact them (for example, proceeding from an\ninverse transformation of coordinates) and make them smart (in the context,\nreformulating the theory by means of differential operators and related\ninvariants), applying them with awareness and then connecting them to geometry\nor to spatial categories, which are in mathematics what is closest to the\nsensible reality. Finally, proposing examples that are exercise and\ncorroborating for theory.\nTOPICS\nThe geometric meaning of invariant to a differential operator. Operator\nPrincipal Part and its factorization: commutativity and product with and\nwithout residues(first order terms). Related conditions by operators and\ninvariants derivatives. Coordinate transformation by invariants and expression\nof the hyperbolic and parabolic operators in the new coordinates. Properties of\nthe Jacobian Matrix and relations between invariants derivatives and inverse\ncoordinates transformation or the initial variables derivatives. Commutativity\nconditions and product without residues in terms of inverse coordinate\ntransformations that allow to build commutative differential operators or whose\nproduct is without residues (or both). Diffeomorphisms and plane\ntransformations: new operators and invariants in the new coordinate space which\nlead to the chain rule in compact form. Conclusive considerations and examples\nwho compares different methods of solution.\n", "title": "2nd order PDEs: geometric and functional considerations" }
id: 19642 | annotation: null | multi_label: true | status: Default

{ "abstract": " Machine Learning models have been shown to be vulnerable to adversarial\nexamples, ie. the manipulation of data by a attacker to defeat a defender's\nclassifier at test time. We present a novel probabilistic definition of\nadversarial examples in perfect or limited knowledge setting using prior\nprobability distributions on the defender's classifier. Using the asymptotic\nproperties of the logistic regression, we derive a closed-form expression of\nthe intensity of any adversarial perturbation, in order to achieve a given\nexpected misclassification rate. This technique is relevant in a threat model\nof known model specifications and unknown training data. To our knowledge, this\nis the first method that allows an attacker to directly choose the probability\nof attack success. We evaluate our approach on two real-world datasets.\n", "title": "Adversarial Perturbation Intensity Achieving Chosen Intra-Technique Transferability Level for Logistic Regression" }
null
null
null
null
true
null
19643
null
Default
null
null
null
{ "abstract": " In cellular massive Machine-Type Communications (MTC), a device can transmit\ndirectly to the base station (BS) or through an aggregator (intermediate node).\nWhile direct device-BS communication has recently been in the focus of 5G/3GPP\nresearch and standardization efforts, the use of aggregators remains a less\nexplored topic. In this paper we analyze the deployment scenarios in which\naggregators can perform cellular access on behalf of multiple MTC devices. We\nstudy the effect of packet bundling at the aggregator, which alleviates\noverhead and resource waste when sending small packets. The aggregators give\nrise to a tradeoff between access congestion and resource starvation and we\nshow that packet bundling can minimize resource starvation, especially for\nsmaller numbers of aggregators. Under the limitations of the considered model,\nwe investigate the optimal settings of the network parameters, in terms of\nnumber of aggregators and packet-bundle size. Our results show that, in\ngeneral, data aggregation can benefit the uplink massive MTC in LTE, by\nreducing the signalling overhead.\n", "title": "Data Aggregation and Packet Bundling of Uplink Small Packets for Monitoring Applications in LTE" }
null
null
null
null
true
null
19644
null
Default
null
null
null
{ "abstract": " We report the first detection of sodium absorption in the atmosphere of the\nhot Jupiter WASP-52b. We observed one transit of WASP-52b with the\nlow-resolution Optical System for Imaging and low-Intermediate-Resolution\nIntegrated Spectroscopy (OSIRIS) at the 10.4 m Gran Telescopio Canarias (GTC).\nThe resulting transmission spectrum, covering the wavelength range from 522 nm\nto 903 nm, is flat and featureless, except for the significant narrow\nabsorption signature at the sodium doublet, which can be explained by an\natmosphere in solar composition with clouds at 1 mbar. A cloud-free atmosphere\nis stringently ruled out. By assessing the absorption depths of sodium in\nvarious bin widths, we find that temperature increases towards lower\natmospheric pressure levels, with a positive temperature gradient of 0.88 +/-\n0.65 K/km, possibly indicative of upper atmospheric heating and a temperature\ninversion.\n", "title": "The GTC exoplanet transit spectroscopy survey. VII. Detection of sodium in WASP-52b's cloudy atmosphere" }
null
null
[ "Physics" ]
null
true
null
19645
null
Validated
null
null
null
{ "abstract": " Choreographies are widely used for the specification of concurrent and\ndistributed software architectures. Since asynchronous communications are\nubiquitous in real-world systems, previous works have proposed different\napproaches for the formal modelling of asynchrony in choreographies. Such\napproaches typically rely on ad-hoc syntactic terms or semantics for capturing\nthe concept of messages in transit, yielding different formalisms that have to\nbe studied separately.\nIn this work, we take a different approach, and show that such extensions are\nnot needed to reason about asynchronous communications in choreographies.\nRather, we demonstrate how a standard choreography calculus already has all the\nneeded expressive power to encode messages in transit (and thus asynchronous\ncommunications) through the primitives of process spawning and name mobility.\nThe practical consequence of our results is that we can reason about real-world\nsystems within a choreography formalism that is simpler than those hitherto\nproposed.\n", "title": "That's Enough: Asynchrony with Standard Choreography Primitives" }
null
null
null
null
true
null
19646
null
Default
null
null
null
{ "abstract": " The \"reproducibility crisis\" has been a highly visible source of scientific\ncontroversy and dispute. Here, I propose and review several avenues for\nidentifying and prioritizing research studies for the purpose of targeted\nvalidation. Of the various proposals discussed, I identify scientific data\nscience as being a strategy that merits greater attention among those\ninterested in reproducibility. I argue that the tremendous potential of\nscientific data science for uncovering high-value research studies is a\nsignificant and rarely discussed benefit of the transition to a fully\nopen-access publishing model.\n", "title": "Doing Things Twice (Or Differently): Strategies to Identify Studies for Targeted Validation" }
null
null
null
null
true
null
19647
null
Default
null
null
null
{ "abstract": " SciSports is a Dutch startup company specializing in football analytics. This\npaper describes a joint research effort with SciSports, during the Study Group\nMathematics with Industry 2018 at Eindhoven, the Netherlands. The main\nchallenge that we addressed was to automatically process empirical football\nplayers' trajectories, in order to extract useful information from them. The\ndata provided to us was two-dimensional positional data during entire matches.\nWe developed methods based on Newtonian mechanics and the Kalman filter,\nGenerative Adversarial Nets and Variational Autoencoders. In addition, we\ntrained a discriminator network to recognize and discern different movement\npatterns of players. The Kalman-filter approach yields an interpretable model,\nin which a small number of player-dependent parameters can be fit; in theory\nthis could be used to distinguish among players. The\nGenerative-Adversarial-Nets approach appears promising in theory, and some\ninitial tests showed an improvement with respect to the baseline, but the\nlimits in time and computational power meant that we could not fully explore\nit. We also trained a Discriminator network to distinguish between two players\nbased on their trajectories; after training, the network managed to distinguish\nbetween some pairs of players, but not between others. After training, the\nVariational Autoencoders generated trajectories that are difficult to\ndistinguish, visually, from the data. These experiments provide an indication\nthat deep generative models can learn the underlying structure and statistics\nof football players' trajectories. This can serve as a starting point for\ndetermining player qualities based on such trajectory data.\n", "title": "SciSports: Learning football kinematics through two-dimensional tracking data" }
null
null
null
null
true
null
19648
null
Default
null
null
null
{ "abstract": " We consider the problem of segmenting a large population of customers into\nnon-overlapping groups with similar preferences, using diverse preference\nobservations such as purchases, ratings, clicks, etc. over subsets of items. We\nfocus on the setting where the universe of items is large (ranging from\nthousands to millions) and unstructured (lacking well-defined attributes) and\neach customer provides observations for only a few items. These data\ncharacteristics limit the applicability of existing techniques in marketing and\nmachine learning. To overcome these limitations, we propose a model-based\nprojection technique, which transforms the diverse set of observations into a\nmore comparable scale and deals with missing data by projecting the transformed\ndata onto a low-dimensional space. We then cluster the projected data to obtain\nthe customer segments. Theoretically, we derive precise necessary and\nsufficient conditions that guarantee asymptotic recovery of the true customer\nsegments. Empirically, we demonstrate the speed and performance of our method\nin two real-world case studies: (a) 84% improvement in the accuracy of new\nmovie recommendations on the MovieLens data set and (b) 6% improvement in the\nperformance of similar item recommendations algorithm on an offline dataset at\neBay. We show that our method outperforms standard latent-class and\ndemographic-based techniques.\n", "title": "A Model-based Projection Technique for Segmenting Customers" }
null
null
null
null
true
null
19649
null
Default
null
null
null
{ "abstract": " In this report, it is shown that Cr doped into the bulk and Cr deposited on\nthe surface of Bi2Se3 films produced by molecular beam epitaxy (MBE) have\nstrikingly different effects on both the electronic structure and chemical\nenvironment.\n", "title": "Distinct Effects of Cr Bulk Doping and Surface Deposition on the Chemical Environment and Electronic Structure of the Topological Insulator Bi2Se3" }
null
null
null
null
true
null
19650
null
Default
null
null
null
{ "abstract": " We present a technique for automatically transforming kernel-based\ncomputations in disparate, nested loops into a fused, vectorized form that can\nreduce intermediate storage needs and lead to improved performance on\ncontemporary hardware.\nWe introduce representations for the abstract relationships and data\ndependencies of kernels in loop nests and algorithms for manipulating them into\nmore efficient form; we similarly introduce techniques for determining data\naccess patterns for stencil-like array accesses and show how this can be used\nto elide storage and improve vectorization.\nWe discuss our prototype implementation of these ideas---named HFAV---and its\nuse of a declarative, inference-based front-end to drive transformations, and\nwe present results for some prominent codes in HPC.\n", "title": "High-Performance Code Generation though Fusion and Vectorization" }
null
null
null
null
true
null
19651
null
Default
null
null
null
{ "abstract": " Hierarchical neural architectures are often used to capture long-distance\ndependencies and have been applied to many document-level tasks such as\nsummarization, document segmentation, and sentiment analysis. However,\neffective usage of such a large context can be difficult to learn, especially\nin the case where there is limited labeled data available. Building on the\nrecent success of language model pretraining methods for learning flat\nrepresentations of text, we propose algorithms for pre-training hierarchical\ndocument representations from unlabeled data. Unlike prior work, which has\nfocused on pre-training contextual token representations or context-independent\n{sentence/paragraph} representations, our hierarchical document representations\ninclude fixed-length sentence/paragraph representations which integrate\ncontextual information from the entire documents. Experiments on document\nsegmentation, document-level question answering, and extractive document\nsummarization demonstrate the effectiveness of the proposed pre-training\nalgorithms.\n", "title": "Language Model Pre-training for Hierarchical Document Representations" }
null
null
null
null
true
null
19652
null
Default
null
null
null
{ "abstract": " Research in analysis of microblogging platforms is experiencing a renewed\nsurge with a large number of works applying representation learning models for\napplications like sentiment analysis, semantic textual similarity computation,\nhashtag prediction, etc. Although the performance of the representation\nlearning models has been better than the traditional baselines for such tasks,\nlittle is known about the elementary properties of a tweet encoded within these\nrepresentations, or why particular representations work better for certain\ntasks. Our work presented here constitutes the first step in opening the\nblack-box of vector embeddings for tweets. Traditional feature engineering\nmethods for high-level applications have exploited various elementary\nproperties of tweets. We believe that a tweet representation is effective for\nan application because it meticulously encodes the application-specific\nelementary properties of tweets. To understand the elementary properties\nencoded in a tweet representation, we evaluate the representations on the\naccuracy to which they can model each of those properties such as tweet length,\npresence of particular words, hashtags, mentions, capitalization, etc. Our\nsystematic extensive study of nine supervised and four unsupervised tweet\nrepresentations against most popular eight textual and five social elementary\nproperties reveal that Bi-directional LSTMs (BLSTMs) and Skip-Thought Vectors\n(STV) best encode the textual and social properties of tweets respectively.\nFastText is the best model for low resource settings, providing very little\ndegradation with reduction in embedding size. Finally, we draw interesting\ninsights by correlating the model performance obtained for elementary property\nprediction tasks with the highlevel downstream applications.\n", "title": "Interpretation of Semantic Tweet Representations" }
null
null
null
null
true
null
19653
null
Default
null
null
null
{ "abstract": " Logic-based paradigms are nowadays widely used in many different fields, also\nthank to the availability of robust tools and systems that allow the\ndevelopment of real-world and industrial applications.\nIn this work we present LoIDE, an advanced and modular web-editor for\nlogic-based languages that also integrates with state-of-the-art solvers.\n", "title": "LoIDE: a web-based IDE for Logic Programming - Preliminary Technical Report" }
null
null
[ "Computer Science" ]
null
true
null
19654
null
Validated
null
null
null
{ "abstract": " Mixture models have been around for over 150 years, as an intuitively simple\nand practical tool for enriching the collection of probability distributions\navailable for modelling data. In this chapter we describe the basic ideas of\nthe subject, present several alternative representations and perspectives on\nthese models, and discuss some of the elements of inference about the unknowns\nin the models. Our focus is on the simplest set-up, of finite mixture models,\nbut we discuss also how various simplifying assumptions can be relaxed to\ngenerate the rich landscape of modelling and inference ideas traversed in the\nrest of this book.\n", "title": "Introduction to finite mixtures" }
null
null
null
null
true
null
19655
null
Default
null
null
null
{ "abstract": " In this work we study the pointwise and ergodic iteration-complexity of a\nfamily of projective splitting methods proposed by Eckstein and Svaiter, for\nfinding a zero of the sum of two maximal monotone operators. As a consequence\nof the complexity analysis of the projective splitting methods, we obtain\ncomplexity bounds for the two-operator case of Spingarn's partial inverse\nmethod. We also present inexact variants of two specific instances of this\nfamily of algorithms, and derive corresponding convergence rate results.\n", "title": "On the complexity of the projective splitting and Spingarn's methods for the sum of two maximal monotone operators" }
null
null
[ "Mathematics" ]
null
true
null
19656
null
Validated
null
null
null
{ "abstract": " Latent features learned by deep learning approaches have proven to be a\npowerful tool for machine learning. They serve as a data abstraction that makes\nlearning easier by capturing regularities in data explicitly. Their benefits\nmotivated their adaptation to relational learning context. In our previous\nwork, we introduce an approach that learns relational latent features by means\nof clustering instances and their relations. The major drawback of latent\nrepresentations is that they are often black-box and difficult to interpret.\nThis work addresses these issues and shows that (1) latent features created by\nclustering are interpretable and capture interesting properties of data; (2)\nthey identify local regions of instances that match well with the label, which\npartially explains their benefit; and (3) although the number of latent\nfeatures generated by this approach is large, often many of them are highly\nredundant and can be removed without hurting performance much.\n", "title": "Demystifying Relational Latent Representations" }
null
null
null
null
true
null
19657
null
Default
null
null
null
{ "abstract": " In the near future, cosmology will enter the wide and deep galaxy survey area\nallowing high-precision studies of the large scale structure of the universe in\nthree dimensions. To test cosmological models and determine their parameters\naccurately, it is natural to confront data with exact theoretical expectations\nexpressed in the observational parameter space (angles and redshift). The\ndata-driven galaxy number count fluctuations on redshift shells, can be used to\nbuild correlation functions $C(\\theta; z_1, z_2)$ on and between shells which\ncan probe the baryonic acoustic oscillations, the distance-redshift distortions\nas well as gravitational lensing and other relativistic effects. Transforming\nthe model to the data space usually requires the computation of the angular\npower spectrum $C_\\ell(z_1, z_2)$ but this appears as an artificial and\ninefficient step plagued by apodization issues. In this article we show that it\nis not necessary and present a compact expression for $C(\\theta; z_1, z_2)$\nthat includes directly the leading density and redshift space distortions terms\nfrom the full linear theory. It can be evaluated using a fast integration\nmethod based on Clenshaw-Curtis quadrature and Chebyshev polynomial series.\nThis new method to compute the correlation functions without any Limber\napproximation, allows us to produce and discuss maps of the correlation\nfunction directly in the observable space and is a significant step towards\ndisentangling the data from the tested models.\n", "title": "A direct method to compute the galaxy count angular correlation function including redshift-space distortions" }
null
null
[ "Physics" ]
null
true
null
19658
null
Validated
null
null
null
{ "abstract": " The crowdsourcing consists in the externalisation of tasks to a crowd of\npeople remunerated to execute this ones. The crowd, usually diversified, can\ninclude users without qualification and/or motivation for the tasks. In this\npaper we will introduce a new method of user expertise modelization in the\ncrowdsourcing platforms based on the theory of belief functions in order to\nidentify serious and qualificated users.\n", "title": "Contributors profile modelization in crowdsourcing platforms" }
null
null
[ "Computer Science" ]
null
true
null
19659
null
Validated
null
null
null
{ "abstract": " We study positive solutions to the heat equation on graphs. We prove variants\nof the Li-Yau gradient estimate and the differential Harnack inequality. For\nsome graphs, we can show the estimates to be sharp. We establish new\ncomputation rules for differential operators on discrete spaces and introduce a\nrelaxation function that governs the time dependency in the differential\nHarnack estimate.\n", "title": "Discrete versions of the Li-Yau gradient estimate" }
null
null
null
null
true
null
19660
null
Default
null
null
null
{ "abstract": " This paper considers an alternative method for fitting CARR models using\ncombined estimating functions (CEF) by showing its usefulness in applications\nin economics and quantitative finance. The associated information matrix for\ncorresponding new estimates is derived to calculate the standard errors. A\nsimulation study is carried out to demonstrate its superiority relative to\nother two competitors: linear estimating functions (LEF) and the maximum\nlikelihood (ML). Results show that CEF estimates are more efficient than LEF\nand ML estimates when the error distribution is mis-specified. Taking a real\ndata set from financial economics, we illustrate the usefulness and\napplicability of the CEF method in practice and report reliable forecast values\nto minimize the risk in the decision making process.\n", "title": "Efficient Modelling & Forecasting with range based volatility models and application" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
19661
null
Validated
null
null
null
{ "abstract": " Virtual network services that span multiple data centers are important to\nsupport emerging data-intensive applications in fields such as bioinformatics\nand retail analytics. Successful virtual network service composition and\nmaintenance requires flexible and scalable 'constrained shortest path\nmanagement' both in the management plane for virtual network embedding (VNE) or\nnetwork function virtualization service chaining (NFV-SC), as well as in the\ndata plane for traffic engineering (TE). In this paper, we show analytically\nand empirically that leveraging constrained shortest paths within recent VNE,\nNFV-SC and TE algorithms can lead to network utilization gains (of up to 50%)\nand higher energy efficiency. The management of complex VNE, NFV-SC and TE\nalgorithms can be, however, intractable for large scale substrate networks due\nto the NP-hardness of the constrained shortest path problem. To address such\nscalability challenges, we propose a novel, exact constrained shortest path\nalgorithm viz., 'Neighborhoods Method' (NM). Our NM uses novel search space\nreduction techniques and has a theoretical quadratic speed-up making it\npractically faster (by an order of magnitude) than recent branch-and-bound\nexhaustive search solutions. Finally, we detail our NM-based SDN controller\nimplementation in a real-world testbed to further validate practical NM\nbenefits for virtual network services.\n", "title": "A Constrained Shortest Path Scheme for Virtual Network Service Management" }
null
null
null
null
true
null
19662
null
Default
null
null
null
{ "abstract": " Using the semiclassical WKB approximation and Hamilton-Jacobi method, we\nsolve an equation of motion for the Glashow-Weinberg-Salam model, which is\nimportant for understanding the unified gauge-theory of weak and\nelectromagnetic interactions. We calculate the tunneling rate of the massive\ncharged W-bosons in a background of electromagnetic field to investigate the\nHawking temperature of black holes surrounded by perfect fluid in Rastall\ntheory. Then, we study the quantum gravity effects on the generalized Proca\nequation with generalized uncertainty principle (GUP) on this background. We\nshow that quantum gravity effects leave the remnants on the Hawking temperature\nand the Hawking radiation becomes nonthermal.\n", "title": "Tunneling of Glashow-Weinberg-Salam model particles from Black Hole Solutions in Rastall Theory" }
null
null
null
null
true
null
19663
null
Default
null
null
null
{ "abstract": " Natural language and symbols are intimately correlated. Recent advances in\nmachine learning (ML) and in natural language processing (NLP) seem to\ncontradict the above intuition: symbols are fading away, erased by vectors or\ntensors called distributed and distributional representations. However, there\nis a strict link between distributed/distributional representations and\nsymbols, being the first an approximation of the second. A clearer\nunderstanding of the strict link between distributed/distributional\nrepresentations and symbols will certainly lead to radically new deep learning\nnetworks. In this paper we make a survey that aims to draw the link between\nsymbolic representations and distributed/distributional representations. This\nis the right time to revitalize the area of interpreting how symbols are\nrepresented inside neural networks.\n", "title": "Symbolic, Distributed and Distributional Representations for Natural Language Processing in the Era of Deep Learning: a Survey" }
null
null
null
null
true
null
19664
null
Default
null
null
null
{ "abstract": " Feature extraction and dimension reduction for networks is critical in a wide\nvariety of domains. Efficiently and accurately learning features for multiple\ngraphs has important applications in statistical inference on graphs. We\npropose a method to jointly embed multiple undirected graphs. Given a set of\ngraphs, the joint embedding method identifies a linear subspace spanned by rank\none symmetric matrices and projects adjacency matrices of graphs into this\nsubspace. The projection coefficients can be treated as features of the graphs.\nWe also propose a random graph model which generalizes classical random graph\nmodel and can be used to model multiple graphs. We show through theory and\nnumerical experiments that under the model, the joint embedding method produces\nestimates of parameters with small errors. Via simulation experiments, we\ndemonstrate that the joint embedding method produces features which lead to\nstate of the art performance in classifying graphs. Applying the joint\nembedding method to human brain graphs, we find it extract interpretable\nfeatures that can be used to predict individual composite creativity index.\n", "title": "Joint Embedding of Graphs" }
null
null
null
null
true
null
19665
null
Default
null
null
null
{ "abstract": " Let $q$ be a prime power. We estimate the number of tuples of degree bounded\nmonic polynomials $(Q_1,\\ldots,Q_v) \\in (\\mathbb{F}_q[z])^v$ that satisfy given\npairwise coprimality conditions. We show how this generalises from monic\npolynomials in finite fields to Dedekind domains with finite norms.\n", "title": "Tuples of polynomials over finite fields with pairwise coprimality conditions" }
null
null
null
null
true
null
19666
null
Default
null
null
null
{ "abstract": " In the present paper, using a replica analysis, we examine the portfolio\noptimization problem handled in previous work and discuss the minimization of\ninvestment risk under constraints of budget and expected return for the case\nthat the distribution of the hyperparameters of the mean and variance of the\nreturn rate of each asset are not limited to a specific probability family.\nFindings derived using our proposed method are compared with those in previous\nwork to verify the effectiveness of our proposed method. Further, we derive a\nPythagorean theorem of the Sharpe ratio and macroscopic relations of\nopportunity loss. Using numerical experiments, the effectiveness of our\nproposed method is demonstrated for a specific situation.\n", "title": "Pythagorean theorem of Sharpe ratio" }
null
null
null
null
true
null
19667
null
Default
null
null
null
{ "abstract": " Researchers often summarize their work in the form of scientific posters.\nPosters provide a coherent and efficient way to convey core ideas expressed in\nscientific papers. Generating a good scientific poster, however, is a complex\nand time consuming cognitive task, since such posters need to be readable,\ninformative, and visually aesthetic. In this paper, for the first time, we\nstudy the challenging problem of learning to generate posters from scientific\npapers. To this end, a data-driven framework, that utilizes graphical models,\nis proposed. Specifically, given content to display, the key elements of a good\nposter, including attributes of each panel and arrangements of graphical\nelements are learned and inferred from data. During the inference stage, an MAP\ninference framework is employed to incorporate some design principles. In order\nto bridge the gap between panel attributes and the composition within each\npanel, we also propose a recursive page splitting algorithm to generate the\npanel layout for a poster. To learn and validate our model, we collect and\nrelease a new benchmark dataset, called NJU-Fudan Paper-Poster dataset, which\nconsists of scientific papers and corresponding posters with exhaustively\nlabelled panels and attributes. Qualitative and quantitative results indicate\nthe effectiveness of our approach.\n", "title": "Learning to Generate Posters of Scientific Papers by Probabilistic Graphical Models" }
null
null
null
null
true
null
19668
null
Default
null
null
null
{ "abstract": " Liquid metal (LM) is of current core interest for a wide variety of newly\nemerging areas. However, the functional materials thus made so far by LM only\ncould display a single silver-white appearance. Here in this study, the new\nconceptual colorful LM marbles working like transformable biomimetic chameleons\nwere proposed and fabricated from LM droplets through encasing them with\nfluorescent nano-particles. We demonstrated that this unique LM marble can be\nmanipulated into various stable magnificent appearances as one desires. And it\ncan also splitt and merge among different colors. Such multifunctional LM\nchameleon is capable of responding to the outside electric-stimulus and\nrealizing shape transformation and discoloration behaviors as well. Further\nmore, the electric-stimuli has been disclosed to be an easy going way to\ntrigger the release of nano/micro-particles from the LM. The present\nfluorescent biomimetic liquid metal chameleon is expected to offer important\nopportunities for diverse unconventional applications, especially in a wide\nvariety of functional smart material and color changeable soft robot areas.\n", "title": "Transformable Biomimetic Liquid Metal Chameleon" }
null
null
null
null
true
null
19669
null
Default
null
null
null
{ "abstract": " The local model for differential privacy is emerging as the reference model\nfor practical applications collecting and sharing sensitive information while\nsatisfying strong privacy guarantees. In the local model, there is no trusted\nentity which is allowed to have each individual's raw data as is assumed in the\ntraditional curator model for differential privacy. So, individuals' data are\nusually perturbed before sharing them.\nWe explore the design of private hypothesis tests in the local model, where\neach data entry is perturbed to ensure the privacy of each participant.\nSpecifically, we analyze locally private chi-square tests for goodness of fit\nand independence testing, which have been studied in the traditional, curator\nmodel for differential privacy.\n", "title": "Local Private Hypothesis Testing: Chi-Square Tests" }
null
null
[ "Computer Science", "Mathematics", "Statistics" ]
null
true
null
19670
null
Validated
null
null
null
{ "abstract": " A low-cost, robust, and simple mechanism to measure hemoglobin would play a\ncritical role in the modern health infrastructure. Consistent sample\nacquisition has been a long-standing technical hurdle for photometer-based\nportable hemoglobin detectors which rely on micro cuvettes and dry chemistry.\nAny particulates (e.g. intact red blood cells (RBCs), microbubbles, etc.) in a\ncuvette's sensing area drastically impact optical absorption profile, and\ncommercial hemoglobinometers lack the ability to automatically detect faulty\nsamples. We present the ground-up development of a portable, low-cost and open\nplatform with equivalent accuracy to medical-grade devices, with the addition\nof CNN-based image processing for rapid sample viability prechecks. The\ndeveloped platform has demonstrated precision to the nearest $0.18[g/dL]$ of\nhemoglobin, an R^2 = 0.945 correlation to hemoglobin absorption curves reported\nin literature, and a 97% detection accuracy of poorly-prepared samples. We see\nthe developed hemoglobin device/ML platform having massive implications in\nrural medicine, and consider it an excellent springboard for robust deep\nlearning optical spectroscopy: a currently untapped source of data for\ndetection of countless analytes.\n", "title": "Rapid point-of-care Hemoglobin measurement through low-cost optics and Convolutional Neural Network based validation" }
null
null
null
null
true
null
19671
null
Default
null
null
null
{ "abstract": " In this paper, we study the following classical question of extremal set\ntheory: what is the maximum size of a family of subsets of $[n]$ such that no\n$s$ sets from the family are pairwise disjoint? This problem was first posed by\nErd\\H os and resolved for $n\\equiv 0, -1\\ (\\mathrm{mod }\\ s)$ by Kleitman in\nthe 60s. Very little progress was made on the problem until recently. The only\nresult was a very lengthy resolution of the case $s=3,\\ n\\equiv 1\\ (\\mathrm{mod\n}\\ 3)$ by Quinn, which was written in his PhD thesis and never published in a\nrefereed journal. In this paper, we give another, much shorter proof of Quinn's\nresult, as well as resolve the case $s=4,\\ n\\equiv 2\\ (\\mathrm{mod }\\ 4)$. This\ncomplements the results in our recent paper, where, in particular, we answered\nthe question in the case $n\\equiv -2\\ (\\mathrm{mod }\\ s)$ for $s\\ge 5$.\n", "title": "Families of sets with no matchings of sizes 3 and 4" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
19672
null
Validated
null
null
null
{ "abstract": " The newly emerging field of wave front shaping in complex media has recently\nseen enormous progress. The driving force behind these advances has been the\nexperimental accessibility of the information stored in the scattering matrix\nof a disordered medium, which can nowadays routinely be exploited to focus\nlight as well as to image or to transmit information even across highly turbid\nscattering samples. We will provide an overview of these new techniques, of\ntheir experimental implementations as well as of the underlying theoretical\nconcepts following from mesoscopic scattering theory. In particular, we will\nhighlight the intimate connections between quantum transport phenomena and the\nscattering of light fields in disordered media, which can both be described by\nthe same theoretical concepts. We also put particular emphasis on how the above\ntopics relate to application-oriented research fields such as optical imaging,\nsensing and communication.\n", "title": "Light fields in complex media: mesoscopic scattering meets wave control" }
null
null
null
null
true
null
19673
null
Default
null
null
null
{ "abstract": " In large-scale agile projects, product owners undertake a range of\nchallenging and varied activities beyond those conventionally associated with\nthat role. Using in-depth research interviews from 93 practitioners working in\ncross-border teams, from 21 organisations, our rich empirical data offers a\nunique international perspective into product owner activities. We found that\nthe leaders of large-scale agile projects create product owner teams. Product\nowner team members undertake sponsor, intermediary and release plan master\nactivities to manage scale. They undertake communicator and traveller\nactivities to manage distance and technical architect, governor and risk\nassessor activities to manage governance. Based on our findings, we describe\nproduct owner behaviors that are valued by experienced product owners and their\nline managers.\n", "title": "Tailoring Product Ownership in Large-Scale Agile" }
null
null
[ "Computer Science" ]
null
true
null
19674
null
Validated
null
null
null
{ "abstract": " We analyse a simple extension of the SM with just an additional scalar\nsinglet coupled to the Higgs boson. We discuss the possible probes for\nelectroweak baryogenesis in this model including collider searches,\ngravitational wave and direct dark matter detection signals. We show that a\nlarge portion of the model parameter space exists where the observation of\ngravitational waves would allow detection while the indirect collider searches\nwould not.\n", "title": "Gravitational wave, collider and dark matter signals from a scalar singlet electroweak baryogenesis" }
null
null
null
null
true
null
19675
null
Default
null
null
null
{ "abstract": " Consider a Gaussian vector $\\mathbf{z}=(\\mathbf{x}',\\mathbf{y}')'$,\nconsisting of two sub-vectors $\\mathbf{x}$ and $\\mathbf{y}$ with dimensions $p$\nand $q$ respectively, where both $p$ and $q$ are proportional to the sample\nsize $n$. Denote by $\\Sigma_{\\mathbf{u}\\mathbf{v}}$ the population\ncross-covariance matrix of random vectors $\\mathbf{u}$ and $\\mathbf{v}$, and\ndenote by $S_{\\mathbf{u}\\mathbf{v}}$ the sample counterpart. The canonical\ncorrelation coefficients between $\\mathbf{x}$ and $\\mathbf{y}$ are known as the\nsquare roots of the nonzero eigenvalues of the canonical correlation matrix\n$\\Sigma_{\\mathbf{x}\\mathbf{x}}^{-1}\\Sigma_{\\mathbf{x}\\mathbf{y}}\\Sigma_{\\mathbf{y}\\mathbf{y}}^{-1}\\Sigma_{\\mathbf{y}\\mathbf{x}}$.\nIn this paper, we focus on the case that $\\Sigma_{\\mathbf{x}\\mathbf{y}}$ is of\nfinite rank $k$, i.e. there are $k$ nonzero canonical correlation coefficients,\nwhose squares are denoted by $r_1\\geq\\cdots\\geq r_k>0$. We study the sample\ncounterparts of $r_i,i=1,\\ldots,k$, i.e. the largest $k$ eigenvalues of the\nsample canonical correlation matrix\n$§_{\\mathbf{x}\\mathbf{x}}^{-1}§_{\\mathbf{x}\\mathbf{y}}§_{\\mathbf{y}\\mathbf{y}}^{-1}§_{\\mathbf{y}\\mathbf{x}}$,\ndenoted by $\\lambda_1\\geq\\cdots\\geq \\lambda_k$. We show that there exists a\nthreshold $r_c\\in(0,1)$, such that for each $i\\in\\{1,\\ldots,k\\}$, when $r_i\\leq\nr_c$, $\\lambda_i$ converges almost surely to the right edge of the limiting\nspectral distribution of the sample canonical correlation matrix, denoted by\n$d_{+}$. When $r_i>r_c$, $\\lambda_i$ possesses an almost sure limit in\n$(d_{+},1]$. We also obtain the limiting distribution of $\\lambda_i$'s under\nappropriate normalization. Specifically, $\\lambda_i$ possesses Gaussian type\nfluctuation if $r_i>r_c$, and follows Tracy-Widom distribution if $r_i<r_c$.\nSome applications of our results are also discussed.\n", "title": "Canonical correlation coefficients of high-dimensional Gaussian vectors: finite rank case" }
null
null
null
null
true
null
19676
null
Default
null
null
null
{ "abstract": " We present a method for synthesizing a frontal, neutral-expression image of a\nperson's face given an input face photograph. This is achieved by learning to\ngenerate facial landmarks and textures from features extracted from a\nfacial-recognition network. Unlike previous approaches, our encoding feature\nvector is largely invariant to lighting, pose, and facial expression.\nExploiting this invariance, we train our decoder network using only frontal,\nneutral-expression photographs. Since these photographs are well aligned, we\ncan decompose them into a sparse set of landmark points and aligned texture\nmaps. The decoder then predicts landmarks and textures independently and\ncombines them using a differentiable image warping operation. The resulting\nimages can be used for a number of applications, such as analyzing facial\nattributes, exposure and white balance adjustment, or creating a 3-D avatar.\n", "title": "Synthesizing Normalized Faces from Facial Identity Features" }
null
null
null
null
true
null
19677
null
Default
null
null
null
{ "abstract": " Scaling clustering algorithms to massive data sets is a challenging task.\nRecently, several successful approaches based on data summarization methods,\nsuch as coresets and sketches, were proposed. While these techniques provide\nprovably good and small summaries, they are inherently problem dependent - the\npractitioner has to commit to a fixed clustering objective before even\nexploring the data. However, can one construct small data summaries for a wide\nrange of clustering problems simultaneously? In this work, we affirmatively\nanswer this question by proposing an efficient algorithm that constructs such\none-shot summaries for k-clustering problems while retaining strong theoretical\nguarantees.\n", "title": "One-Shot Coresets: The Case of k-Clustering" }
null
null
null
null
true
null
19678
null
Default
null
null
null
{ "abstract": " In this paper, we consider the existence of multiple nodal solutions of the\nnonlinear Choquard equation \\begin{equation*} \\ \\ \\ \\ (P)\\ \\ \\ \\ \\begin{cases}\n-\\Delta u+u=(|x|^{-1}\\ast|u|^p)|u|^{p-2}u \\ \\ \\ \\text{in}\\ \\mathbb{R}^3, \\ \\ \\\n\\ \\\\ u\\in H^1(\\mathbb{R}^3),\\\\ \\end{cases} \\end{equation*} where $p\\in\n(\\frac{5}{2},5)$. We show that for any positive integer $k$, problem $(P)$ has\nat least a radially symmetrical solution changing sign exactly $k$-times.\n", "title": "Multiple nodal solutions of nonlinear Choquard equations" }
null
null
null
null
true
null
19679
null
Default
null
null
null
{ "abstract": " We study estimators with generalized lasso penalties within the computational\nsufficiency framework introduced by Vu (2018, arXiv:1807.05985). By\nrepresenting these penalties as support functions of zonotopes and more\ngenerally Minkowski sums of line segments and rays, we show that there is a\nnatural reflection group associated with the underlying optimization problem. A\nconsequence of this point of view is that for large classes of estimators\nsharing the same penalty, the penalized least squares estimator is\ncomputationally minimal sufficient. This means that all such estimators can be\ncomputed by refining the output of any algorithm for the least squares case. An\ninteresting technical component is our analysis of coordinate descent on the\ndual problem. A key insight is that the iterates are obtained by reflecting and\naveraging, so they converge to an element of the dual feasible set that is\nminimal with respect to a ordering induced by the group associated with the\npenalty. Our main application is fused lasso/total variation denoising and\nisotonic regression on arbitrary graphs. In those cases the associated group is\na permutation group.\n", "title": "Computational Sufficiency, Reflection Groups, and Generalized Lasso Penalties" }
null
null
null
null
true
null
19680
null
Default
null
null
null
{ "abstract": " According to data from the United Nations, more than 3000 people have died\neach day in the world due to road traffic collision. Considering recent\nresearches, the human error may be considered as the main responsible for these\nfatalities. Because of this, researchers seek alternatives to transfer the\nvehicle control from people to autonomous systems. However, providing this\ntechnological innovation for the people may demand complex challenges in the\nlegal, economic and technological areas. Consequently, carmakers and\nresearchers have divided the driving automation in safety and emergency systems\nthat improve the driver perception on the road. This may reduce the human\nerror. Therefore, the main contribution of this study is to propose a driving\nsimulator platform to develop and evaluate safety and emergency systems, in the\nfirst design stage. This driving simulator platform has an advantage: a\nflexible software structure.This allows in the simulation one adaptation for\ndevelopment or evaluation of a system. The proposed driving simulator platform\nwas tested in two applications: cooperative vehicle system development and the\ninfluence evaluation of a Driving Assistance System (\\textit{DAS}) on a driver.\nIn the cooperative vehicle system development, the results obtained show that\nthe increment of the time delay in the communication among vehicles ($V2V$) is\ndeterminant for the system performance. On the other hand, in the influence\nevaluation of a \\textit{DAS} in a driver, it was possible to conclude that the\n\\textit{DAS'} model does not have the level of influence necessary in a driver\nto avoid an accident.\n", "title": "Driving Simulator Platform for Development and Evaluation of Safety and Emergency Systems" }
null
null
null
null
true
null
19681
null
Default
null
null
null
{ "abstract": " The current-voltage (I-V) conversion characterizes the physiology of cellular\nmicrodomains and reflects cellular communication, excitability, and electrical\ntransduction. Yet deriving such I-V laws remains a major challenge in most\ncellular microdomains due to their small sizes and the difficulty of accessing\nvoltage with a high nanometer precision. We present here novel analytical\nrelations derived for different numbers of ionic species inside a neuronal\nmicro/nano-domains, such as dendritic spines. When a steady-state current is\ninjected, we find a large deviation from the classical Ohm's law, showing that\nthe spine neck resistance is insuficent to characterize electrical properties.\nFor a constricted spine neck, modeled by a hyperboloid, we obtain a new I-V law\nthat illustrates the consequences of narrow passages on electrical conduction.\nFinally, during a fast current transient, the local voltage is modulated by the\ndistance between activated voltage-gated channels. To conclude,\nelectro-diffusion laws can now be used to interpret voltage distribution in\nneuronal microdomains.\n", "title": "Electrical transient laws in neuronal microdomains based on electro-diffusion" }
null
null
[ "Quantitative Biology" ]
null
true
null
19682
null
Validated
null
null
null
{ "abstract": " We characterize the neutron output of a deuterium-deuterium plasma fusion\nneutron generator, model 35-DD-W-S, manufactured by NSD/Gradel-Fusion. The\nmeasured energy spectrum is found to be dominated by neutron peaks at 2.2 MeV\nand 2.7 MeV. A detailed GEANT4 simulation accurately reproduces the measured\nenergy spectrum and confirms our understanding of the fusion process in this\ngenerator. Additionally, a contribution of 14.1 MeV neutrons from\ndeuterium-tritium fusion is found at a level of~$3.5\\%$, from tritium produced\nin previous deuterium-deuterium reactions. We have measured both the absolute\nneutron flux as well as its relative variation on the operational parameters of\nthe generator. We find the flux to be proportional to voltage $V^{3.32 \\pm\n0.14}$ and current $I^{0.97 \\pm 0.01}$. Further, we have measured the angular\ndependence of the neutron emission with respect to the polar angle. We conclude\nthat it is well described by isotropic production of neutrons within the\ncathode field cage.\n", "title": "Characterization of a Deuterium-Deuterium Plasma Fusion Neutron Generator" }
null
null
null
null
true
null
19683
null
Default
null
null
null
{ "abstract": " We describe how turbulence distributes tracers away from a localized source\nof injection, and analyse how the spatial inhomogeneities of the concentration\nfield depend on the amount of randomness in the injection mechanism. For that\npurpose, we contrast the mass correlations induced by purely random injections\nwith those induced by continuous injections in the environment. Using the\nKraichnan model of turbulent advection, whereby the underlying velocity field\nis assumed to be shortly correlated in time, we explicitly identify scaling\nregions for the statistics of the mass contained within a shell of radius $r$\nand located at a distance $\\rho$ away from the source. The two key parameters\nare found to be (i) the ratio $s^2$ between the absolute and the relative\ntimescales of dispersion and (ii) the ratio $\\Lambda$ between the size of the\ncloud and its distance away from the source. When the injection is random, only\nthe former is relevant, as previously shown by Celani, Martins-Afonso $\\&$\nMazzino, $J. Fluid. Mech$, 2007 in the case of an incompressible fluid. It is\nargued that the space partition in terms of $s^2$ and $\\Lambda$ is a robust\nfeature of the injection mechanism itself, which should remain relevant beyond\nthe Kraichnan model. This is for instance the case in a generalised version of\nthe model, where the absolute dispersion is prescribed to be ballistic rather\nthan diffusive.\n", "title": "Turbulent Mass Inhomogeneities induced by a point-source" }
null
null
null
null
true
null
19684
null
Default
null
null
null
{ "abstract": " In this paper, we give a correspondence between the Berezin-Toeplitz and the\ncomplex Weyl quantizations of the torus $ \\mathbb{T}^2$. To achieve this, we\nuse the correspondence between the Berezin-Toeplitz and the complex Weyl\nquantizations of the complex plane and a relation between the Berezin-Toeplitz\nquantization of a periodic symbol on the real phase space $\\mathbb{R}^2$ and\nthe Berezin-Toeplitz quantization of a symbol on the torus $ \\mathbb{T}^2 $.\n", "title": "Berezin-toeplitz quantization and complex weyl quantization of the torus t${}^2$" }
null
null
[ "Mathematics" ]
null
true
null
19685
null
Validated
null
null
null
{ "abstract": " This paper reviews the checkered history of predictive distributions in\nstatistics and discusses two developments, one from recent literature and the\nother new. The first development is bringing predictive distributions into\nmachine learning, whose early development was so deeply influenced by two\nremarkable groups at the Institute of Automation and Remote Control. The second\ndevelopment is combining predictive distributions with kernel methods, which\nwere originated by one of those groups, including Emmanuel Braverman.\n", "title": "Conformal predictive distributions with kernels" }
null
null
null
null
true
null
19686
null
Default
null
null
null
{ "abstract": " Algorithms working with linear algebraic groups often represent them via\ndefining polynomial equations. One can always choose defining equations for an\nalgebraic group to be of the degree at most the degree of the group as an\nalgebraic variety. However, the degree of a linear algebraic group $G \\subset\n\\mathrm{GL}_n(C)$ can be arbitrarily large even for $n = 1$. One of the key\ningredients of Hrushovski's algorithm for computing the Galois group of a\nlinear differential equation was an idea to `approximate' every algebraic\nsubgroup of $\\mathrm{GL}_n(C)$ by a `similar' group so that the degree of the\nlatter is bounded uniformly in $n$. Making this uniform bound computationally\nfeasible is crucial for making the algorithm practical.\nIn this paper, we derive a single-exponential degree bound for such an\napproximation (we call it toric envelope), which is qualitatively optimal. As\nan application, we improve the quintuply exponential bound for the first step\nof the Hrushovski's algorithm due to Feng to a single-exponential bound. For\nthe cases $n = 2, 3$ often arising in practice, we further refine our general\nbound.\n", "title": "Degree bound for toric envelope of a linear algebraic group" }
null
null
null
null
true
null
19687
null
Default
null
null
null
{ "abstract": " This article studies the monotonicity, log-convexity of the modified Lommel\nfunctions by using its power series and infinite product representation. Same\nproperties for the ratio of the modified Lommel functions with the Lommel\nfunction, $\\sinh$ and $\\cosh$ are also discussed. As a consequence, some\nTurán type and reverse Turán type inequalities are given. A Rayleigh type\nfunction for the Lommel functions are derived and as an application, we obtain\nthe Redheffer-type inequality.\n", "title": "The Modified Lommel functions: monotonic pattern and inequalities" }
null
null
null
null
true
null
19688
null
Default
null
null
null
{ "abstract": " Epilepsy is a neurological disorder arising from anomalies of the electrical\nactivity in the brain, affecting about 0.5--0.8\\% of the world population.\nSeveral studies investigated the relationship between seizures and brainwave\nsynchronization patterns, pursuing the possibility of identifying interictal,\npreictal, ictal and postictal states. In this work, we introduce a graph-based\nmodel of the brain interactions developed to study synchronization patterns in\nthe electroencephalogram (EEG) signals. The aim is to develop a\npatient-specific approach, also for a real-time use, for the prediction of\nepileptic seizures' occurrences. Different synchronization measures of the EEG\nsignals and easily computable functions able to capture in real-time the\nvariations of EEG synchronization have been considered. Both standard and\nad-hoc classification algorithms have been developed and used. Results on scalp\nEEG signals show that this simple and computationally viable processing is able\nto highlight the changes in the synchronization corresponding to the preictal\nstate.\n", "title": "Anticipating epileptic seizures through the analysis of EEG synchronization as a data classification problem" }
null
null
null
null
true
null
19689
null
Default
null
null
null
{ "abstract": " In this paper, we study a classical construction of lattices from number\nfields and obtain a series of new results about their minimum distance and\nother characteristics by introducing a new measure of algebraic numbers. In\nparticular, we show that when the number fields have few complex embeddings,\nthe minimum distances of these lattices can be computed exactly.\n", "title": "On distances in lattices from algebraic number fields" }
null
null
null
null
true
null
19690
null
Default
null
null
null
{ "abstract": " Parity and time-reversal violating electric dipole moment (EDM) of $^{171}$Yb\nis calculated accounting for the electron correlation effects over the\nDirac-Hartree-Fock (DHF) method in the relativistic Rayleigh-Schrödinger\nmany-body perturbation theory, with the second (MBPT(2) method) and third order\n(MBPT(3) method) approximations, and two variants of all-order relativistic\nmany-body approaches, in the random phase approximation (RPA) and\ncoupled-cluster (CC) method with singles and doubles (CCSD method) framework.\nWe consider electron-nucleus tensor-pseudotensor (T-PT) and nuclear Schiff\nmoment (NSM) interactions as the predominant sources that induce EDM in a\ndiamagnetic atomic system. Our results from the CCSD method to EDM ($d_a$) of\n$^{171}$Yb due to the T-PT and NSM interactions are found to be $d_a = 4.85(6)\n\\times 10^{-20} \\langle \\sigma \\rangle C_T \\ |e| \\ cm$ and $d_a=2.89(4) \\times\n10^{-17} {S/(|e|\\ fm^3)}$, respectively, where $C_T$ is the T-PT coupling\nconstant and $S$ is the NSM. These values differ significantly from the earlier\ncalculations. The reason for the same has been attributed to large correlation\neffects arising through non-RPA type of interactions among the electrons in\nthis atom that are observed by analyzing the differences in the RPA and CCSD\nresults. This has been further scrutinized from the MBPT(2) and MBPT(3) results\nand their roles have been demonstrated explicitly.\n", "title": "Significance of distinct electron correlation effects in determining the P,T-odd electric dipole moment of $^{171}$Yb" }
null
null
null
null
true
null
19691
null
Default
null
null
null
{ "abstract": " The increasing amount of information and the absence of an effective tool for\nassisting users with minimal technical knowledge lead us to use associative\nthinking paradigm for implementation of a software solution - Panorama. In this\nstudy, we present object recognition process, based on context + focus\ninformation visualization techniques, as a foundation for realization of\nPanorama. We show that user can easily define data vocabulary of selected\ndomain that is furthermore used as the application framework. The purpose of\nPanorama approach is to facilitate software development of certain problem\ndomains by shortening the Software Development Life Cycle with minimizing the\nimpact of implementation, review and maintenance phase. Our approach is focused\non using and updating data vocabulary by users without extensive programming\nskills. Panorama therefore facilitates traversing through data by following\nassociations where user does not need to be familiar with the query language,\nthe data structure and does not need to know the problem domain fully. Our\napproach has been verified by detailed comparison to existing approaches and in\nan experiment by implementing selected use cases. The results confirmed that\nPanorama fits problem domains with emphasis on data oriented rather than ones\nwith process oriented aspects. In such cases the development of selected\nproblem domains is shortened up to 25%, where emphasis is mainly on analysis,\nlogical design and testing, while omitting physical design and programming,\nwhich is performed automatically by Panorama tool.\n", "title": "Facilitating information system development with Panoramic view on data" }
null
null
null
null
true
null
19692
null
Default
null
null
null
{ "abstract": " Neutron diffraction and muon spin relaxation ($\\mu$SR) studies are presented\nfor the newly characterized polymorph of NiNb$_2$O$_6$ ($\\beta$-NiNb$_2$O$_6$)\nwith space group P4$_2$/n and $\\mu$SR data only for the previously known\ncolumbite structure polymorph with space group Pbcn. The magnetic structure of\nthe P4$_2$/n form was determined from neutron diffraction using both powder and\nsingle crystal data. Powder neutron diffraction determined an ordering wave\nvector $\\vec{k}$ = ($\\frac{1}{2},\\frac{1}{2},\\frac{1}{2}$). Single crystal data\nconfirmed the same $\\vec{k}$-vector and showed that the correct magnetic\nstructure consists of antiferromagnetically-coupled chains running along the a\nor b-axes in adjacent Ni$^{2+}$ layers perpendicular to the c-axis, which is\nconsistent with the expected exchange interaction hierarchy in this system. The\nrefined magnetic structure is compared with the known magnetic structures of\nthe closely related tri-rutile phases, NiSb$_2$O$_6$ and NiTa$_2$O$_6$. $\\mu$SR\ndata finds a transition temperature of $T_N \\sim$ 15 K for this system, while\nthe columbite polymorph exhibits a lower $T_N =$ 5.7(3) K. Our $\\mu$SR\nmeasurements also allowed us to estimate the critical exponent of the order\nparameter $\\beta$ for each polymorph. We found $\\beta =$ 0.25(3) and 0.16(2)\nfor the $\\beta$ and columbite polymorphs respectively. The single crystal\nneutron scattering data gives a value for the critical exponent $\\beta\n=$~0.28(3) for $\\beta$-NiNb$_2$O$_6$, in agreement with the $\\mu$SR value.\nWhile both systems have $\\beta$ values less than 0.3, which is indicative of\nreduced dimensionality, this effect appears to be much stronger for the\ncolumbite system. In other words, although both systems appear to\nwell-described by $S = 1$ spin chains, the interchain interactions in the\n$\\beta$-polymorph are likely much larger.\n", "title": "Neutron Diffraction and $μ$SR Studies of Two Polymorphs of Nickel Niobate (NiNb$_2$O$_6$)" }
null
null
null
null
true
null
19693
null
Default
null
null
null
{ "abstract": " A unified viewpoint on the van Vleck and Herman-Kluk propagators in Hilbert\nspace and their recently developed counterparts in Wigner representation is\npresented. It is shown that the numerical protocol for the Herman-Kluk\npropagator, which contains the van Vleck one as a particular case, coincides in\nboth representations. The flexibility of the Wigner version in choosing the\nGaussians' width for the underlying coherent states, being not bound to minimal\nuncertainty, is investigated numerically on prototypical potentials. Exploiting\nthis flexibility provides neither qualitative nor quantitative improvements.\nThus, the well-established Herman-Kluk propagator in Hilbert space remains the\nbest choice to date given the large number of semiclassical developments and\napplications based on it.\n", "title": "Semiclassical Propagation: Hilbert Space vs. Wigner Representation" }
null
null
null
null
true
null
19694
null
Default
null
null
null
{ "abstract": " This paper presents an alternate form for the dynamic modelling of a\nmechanical system that simulates in real life a gantry crane type, using\nEuler's classical mechanics and Lagrange formalism, which allows find the\nequations of motion that our model describe. Moreover, it has a basic model\ndesign system using the SolidWorks software, based on the material and\ndimensions of the model provides some physical variables necessary for\nmodelling. In order to verify the theoretical results obtained, a contrast was\nmade between solutions obtained by simulation in SimMechanics-Matlab and\nEuler-Lagrange equations system, has been solved through Matlab libraries for\nsolving equation's systems of the type and order obtained. The force is\ndetermined, but not as exerted by the spring, as this will be the control\nvariable. The objective to bring the mass of the pendulum from one point to\nanother with a specified distance without the oscillation from it, so that, the\nanswer is overdamped. This article includes an analysis of PID control in which\nthe equations of motion of Euler-Lagrange are rewritten in the state space,\nonce there, they were implemented in Simulink to get the natural response of\nthe system to a step input in F and then draw the desired trajectories.\n", "title": "Dynamic analysis and control PID path of a model type gantry crane" }
null
null
null
null
true
null
19695
null
Default
null
null
null
{ "abstract": " We present a Las Vegas algorithm for dynamically maintaining a minimum\nspanning forest of an $n$-node graph undergoing edge insertions and deletions.\nOur algorithm guarantees an $O(n^{o(1)})$ worst-case update time with high\nprobability. This significantly improves the two recent Las Vegas algorithms by\nWulff-Nilsen [STOC'17] with update time $O(n^{0.5-\\epsilon})$ for some constant\n$\\epsilon>0$ and, independently, by Nanongkai and Saranurak [STOC'17] with\nupdate time $O(n^{0.494})$ (the latter works only for maintaining a spanning\nforest).\nOur result is obtained by identifying the common framework that both two\nprevious algorithms rely on, and then improve and combine the ideas from both\nworks. There are two main algorithmic components of the framework that are\nnewly improved and critical for obtaining our result. First, we improve the\nupdate time from $O(n^{0.5-\\epsilon})$ in Wulff-Nilsen [STOC'17] to\n$O(n^{o(1)})$ for decrementally removing all low-conductance cuts in an\nexpander undergoing edge deletions. Second, by revisiting the \"contraction\ntechnique\" by Henzinger and King [1997] and Holm et al. [STOC'98], we show a\nnew approach for maintaining a minimum spanning forest in connected graphs with\nvery few (at most $(1+o(1))n$) edges. This significantly improves the previous\napproach in [Wulff-Nilsen STOC'17] and [Nanongkai and Saranurak STOC'17] which\nis based on Frederickson's 2-dimensional topology tree and illustrates a new\napplication to this old technique.\n", "title": "Dynamic Minimum Spanning Forest with Subpolynomial Worst-case Update Time" }
null
null
[ "Computer Science" ]
null
true
null
19696
null
Validated
null
null
null
{ "abstract": " The rising popularity of intelligent mobile devices and the daunting\ncomputational cost of deep learning-based models call for efficient and\naccurate on-device inference schemes. We propose a quantization scheme that\nallows inference to be carried out using integer-only arithmetic, which can be\nimplemented more efficiently than floating point inference on commonly\navailable integer-only hardware. We also co-design a training procedure to\npreserve end-to-end model accuracy post quantization. As a result, the proposed\nquantization scheme improves the tradeoff between accuracy and on-device\nlatency. The improvements are significant even on MobileNets, a model family\nknown for run-time efficiency, and are demonstrated in ImageNet classification\nand COCO detection on popular CPUs.\n", "title": "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
19697
null
Validated
null
null
null
{ "abstract": " Fine particulate matter (PM$_{2.5}$) is one of the criteria air pollutants\nregulated by the Environmental Protection Agency in the United States. There is\nstrong evidence that ambient exposure to (PM$_{2.5}$) increases risk of\nmortality and hospitalization. Large scale epidemiological studies on the\nhealth effects of PM$_{2.5}$ provide the necessary evidence base for lowering\nthe safety standards and inform regulatory policy. However, ambient monitors of\nPM$_{2.5}$ (as well as monitors for other pollutants) are sparsely located\nacross the U.S., and therefore studies based only on the levels of PM$_{2.5}$\nmeasured from the monitors would inevitably exclude large amounts of the\npopulation. One approach to resolving this issue has been developing models to\npredict local PM$_{2.5}$, NO$_2$, and ozone based on satellite, meteorological,\nand land use data. This process typically relies developing a prediction model\nthat relies on large amounts of input data and is highly computationally\nintensive to predict levels of air pollution in unmonitored areas. We have\ndeveloped a flexible R package that allows for environmental health researchers\nto design and train spatio-temporal models capable of predicting multiple\npollutants, including PM$_{2.5}$. We utilize H2O, an open source big data\nplatform, to achieve both performance and scalability when used in conjunction\nwith cloud or cluster computing systems.\n", "title": "airpred: A Flexible R Package Implementing Methods for Predicting Air Pollution" }
null
null
[ "Statistics" ]
null
true
null
19698
null
Validated
null
null
null
{ "abstract": " A profile describes a set of properties, e.g. a set of skills a person may\nhave, a set of skills required for a particular job, or a set of abilities a\nfootball player may have with respect to a particular team strategy. Profile\nmatching aims to determine how well a given profile fits to a requested\nprofile. The approach taken in this article is grounded in a matching theory\nthat uses filters in lattices to represent profiles, and matching values in the\ninterval [0,1]: the higher the matching value the better is the fit. Such\nlattices can be derived from knowledge bases exploiting description logics to\nrepresent the knowledge about profiles. An interesting first question is, how\nhuman expertise concerning the matching can be exploited to obtain most\naccurate matchings. It will be shown that if a set of filters together with\nmatching values by some human expert is given, then under some mild\nplausibility assumptions a matching measure can be determined such that the\ncomputed matching values preserve the rankings given by the expert. A second\nquestion concerns the efficient querying of databases of profile instances. For\nmatching queries that result in a ranked list of profile instances matching a\ngiven one it will be shown how corresponding top-k queries can be evaluated on\ngrounds of pre-computed matching values, which in turn allows the maintenance\nof the knowledge base to be decoupled from the maintenance of profile\ninstances. In addition, it will be shown how the matching queries can be\nexploited for gap queries that determine how profile instances need to be\nextended in order to improve in the rankings. Finally, the theory of matching\nwill be extended beyond the filters, which lead to a matching theory that\nexploits fuzzy sets or probabilistic logic with maximum entropy semantics. It\nwill be shown that added fuzzy links can be captured by extending the\nunderlying lattice.\n", "title": "Accurate and Efficient Profile Matching in Knowledge Bases" }
null
null
[ "Computer Science" ]
null
true
null
19699
null
Validated
null
null
null
{ "abstract": " We present a sample of $\\sim 1000$ emission line galaxies at $z=0.4-4.7$ from\nthe $\\sim0.7$deg$^2$ High-$z$ Emission Line Survey (HiZELS) in the Boötes\nfield identified with a suite of six narrow-band filters at $\\approx 0.4-2.1$\n$\\mu$m. These galaxies have been selected on their Ly$\\alpha$ (73), {\\sc [Oii]}\n(285), H$\\beta$/{\\sc [Oiii]} (387) or H$\\alpha$ (362) emission-line, and have\nbeen classified with optical to near-infrared colours. A subsample of 98\nsources have reliable redshifts from multiple narrow-band (e.g. [O{\\sc\nii}]-H$\\alpha$) detections and/or spectroscopy. In this survey paper, we\npresent the observations, selection and catalogs of emitters. We measure number\ndensities of Ly$\\alpha$, [O{\\sc ii}], H$\\beta$/{\\sc [Oiii]} and H$\\alpha$ and\nconfirm strong luminosity evolution in star-forming galaxies from $z\\sim0.4$ to\n$\\sim 5$, in agreement with previous results. To demonstrate the usefulness of\ndual-line emitters, we use the sample of dual [O{\\sc ii}]-H$\\alpha$ emitters to\nmeasure the observed [O{\\sc ii}]/H$\\alpha$ ratio at $z=1.47$. The observed\n[O{\\sc ii}]/H$\\alpha$ ratio increases significantly from 0.40$\\pm0.01$ at\n$z=0.1$ to 0.52$\\pm0.05$ at $z=1.47$, which we attribute to either decreasing\ndust attenuation with redshift, or due to a bias in the (typically)\nfiber-measurements in the local Universe which only measure the central kpc\nregions. At the bright end, we find that both the H$\\alpha$ and Ly$\\alpha$\nnumber densities at $z\\approx2.2$ deviate significantly from a Schechter form,\nfollowing a power-law. We show that this is driven entirely by an increasing\nX-ray/AGN fraction with line-luminosity, which reaches $\\approx 100$ \\% at\nline-luminosities $L\\gtrsim3\\times10^{44}$ erg s$^{-1}$.\n", "title": "Boötes-HiZELS: an optical to near-infrared survey of emission-line galaxies at $\\bf z=0.4-4.7$" }
null
null
[ "Physics" ]
null
true
null
19700
null
Validated
null
null