Dataset columns (one record per row):
  text              null
  inputs            dict: { "abstract": ..., "title": ... }
  prediction        null
  prediction_agent  null
  annotation        list of subject labels, or null
  annotation_agent  null
  multi_label       bool (1 class: always true)
  explanation       null
  id                string (lengths 1 to 5)
  metadata          null
  status            string (2 classes: "Default", "Validated")
  event_timestamp   null
  metrics           null

Each record below is given as its inputs JSON on one line, followed by a summary line with the
remaining non-null fields (id, multi_label, annotation, status); the columns typed null above
are null in every record shown.
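As a rough illustration of how records with this schema might be consumed, the sketch below loads them as one JSON object per line and keeps only the validated, annotated ones. The file name `records.jsonl`, the loader, and the helper functions are assumptions made for this example, not part of the dump itself.

```python
# Minimal sketch, assuming each record is stored as one JSON object per line
# with the 13 fields listed in the schema above. File name and helpers are
# hypothetical, not part of this dump.
import json
from typing import Any


def load_records(path: str) -> list[dict[str, Any]]:
    """Read one JSON record per line into a list of plain dicts."""
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]


def validated_records(records: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Keep records with status 'Validated' and a non-null annotation list."""
    return [r for r in records if r.get("status") == "Validated" and r.get("annotation")]


if __name__ == "__main__":
    records = load_records("records.jsonl")  # hypothetical path
    for r in validated_records(records):
        print(r["id"], r["annotation"], r["inputs"]["title"])
```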
{ "abstract": " We report on an empirical study of the main strategies for conditional\nquantile estimation in the context of stochastic computer experiments. To\nensure adequate diversity, six metamodels are presented, divided into three\ncategories based on order statistics, functional approaches, and those of\nBayesian inspiration. The metamodels are tested on several problems\ncharacterized by the size of the training set, the input dimension, the\nquantile order and the value of the probability density function in the\nneighborhood of the quantile. The metamodels studied reveal good contrasts in\nour set of 480 experiments, enabling several patterns to be extracted. Based on\nour results, guidelines are proposed to allow users to select the best method\nfor a given problem.\n", "title": "A Review on Quantile Regression for Stochastic Computer Experiments" }
id: 19501 | multi_label: true | annotation: null | status: Default

{ "abstract": " Much effort has been devoted to device and materials engineering to realize\nnanoscale resistance random access memory (RRAM) for practical applications,\nbut there still lacks a rational physical basis to be relied on to design\nscalable devices spanning many length scales. In particular, the critical\nswitching criterion is not clear for RRAM devices in which resistance changes\nare limited to localized nanoscale filaments that experience concentrated heat,\nelectric current and field. Here, we demonstrate voltage-controlled resistance\nswitching for macro and nano devices in both filamentary RRAM and nanometallic\nRRAM, the latter switches uniformly and does not require forming. As a result,\nusing a constant current density as the compliance, we have achieved\narea-scalability for the low resistance state of the filamentary RRAM, and for\nboth the low and high resistance states of the nanometallic RRAM. This finding\nwill help design area-scalable RRAM at the nanoscale.\n", "title": "Scalability of Voltage-Controlled Filamentary and Nanometallic Resistance Memories" }
id: 19502 | multi_label: true | annotation: [ "Physics" ] | status: Validated

{ "abstract": " Motivation: Protein-protein interactions (PPIs) are usually modelled as\nnetworks. These networks have extensively been studied using graphlets, small\ninduced subgraphs capturing the local wiring patterns around nodes in networks.\nThey revealed that proteins involved in similar functions tend to be similarly\nwired. However, such simple models can only represent pairwise relationships\nand cannot fully capture the higher-order organization of protein interactions,\nincluding protein complexes. Results: To model the multi-sale organization of\nthese complex biological systems, we utilize simplicial complexes from\ncomputational geometry. The question is how to mine these new representations\nof PPI networks to reveal additional biological information. To address this,\nwe define simplets, a generalization of graphlets to simplicial complexes. By\nusing simplets, we define a sensitive measure of similarity between simplicial\ncomplex network representations that allows for clustering them according to\ntheir data types better than clustering them by using other state-of-the-art\nmeasures, e.g., spectral distance, or facet distribution distance. We model\nhuman and baker's yeast PPI networks as simplicial complexes that capture PPIs\nand protein complexes as simplices. On these models, we show that our newly\nintroduced simplet-based methods cluster proteins by function better than the\nclustering methods that use the standard PPI networks, uncovering the new\nunderlying functional organization of the cell. We demonstrate the existence of\nthe functional geometry in the PPI data and the superiority of our\nsimplet-based methods to effectively mine for new biological information hidden\nin the complexity of the higher order organization of PPI networks.\n", "title": "Functional geometry of protein-protein interaction networks" }
id: 19503 | multi_label: true | annotation: null | status: Default

{ "abstract": " Component-based development is a software engineering paradigm that can\nfacilitate the construction of embedded systems and tackle its complexities.\nThe modern embedded systems have more and more demanding requirements. One way\nto cope with such versatile and growing set of requirements is to employ\nheterogeneous processing power, i.e., CPU-GPU architectures. The new CPU-GPU\nembedded boards deliver an increased performance but also introduce additional\ncomplexity and challenges. In this work, we address the component-to-hardware\nallocation for CPU-GPU embedded systems. The allocation for such systems is\nmuch complex due to the increased amount of GPU-related information. For\nexample, while in traditional embedded systems the allocation mechanism may\nconsider only the CPU memory usage of components to find an appropriate\nallocation scheme, in heterogeneous systems, the GPU memory usage needs also to\nbe taken into account in the allocation process. This paper aims at decreasing\nthe component-to-hardware allocation complexity by introducing a 2-layer\ncomponent-based architecture for heterogeneous embedded systems. The detailed\nCPU-GPU information of the system is abstracted at a high-layer by compacting\nconnected components into single units that behave as regular components. The\nallocator, based on the compacted information received from the high-level\nlayer, computes, with a decreased complexity, feasible allocation schemes. In\nthe last part of the paper, the 2-layer allocation method is evaluated using an\nexisting embedded system demonstrator; namely, an underwater robot.\n", "title": "A Two-Layer Component-Based Allocation for Embedded Systems with GPUs" }
id: 19504 | multi_label: true | annotation: null | status: Default

{ "abstract": " Sales forecast is an essential task in E-commerce and has a crucial impact on\nmaking informed business decisions. It can help us to manage the workforce,\ncash flow and resources such as optimizing the supply chain of manufacturers\netc. Sales forecast is a challenging problem in that sales is affected by many\nfactors including promotion activities, price changes, and user preferences\netc. Traditional sales forecast techniques mainly rely on historical sales data\nto predict future sales and their accuracies are limited. Some more recent\nlearning-based methods capture more information in the model to improve the\nforecast accuracy. However, these methods require case-by-case manual feature\nengineering for specific commercial scenarios, which is usually a difficult,\ntime-consuming task and requires expert knowledge. To overcome the limitations\nof existing methods, we propose a novel approach in this paper to learn\neffective features automatically from the structured data using the\nConvolutional Neural Network (CNN). When fed with raw log data, our approach\ncan automatically extract effective features from that and then forecast sales\nusing those extracted features. We test our method on a large real-world\ndataset from CaiNiao.com and the experimental results validate the\neffectiveness of our method.\n", "title": "Sales Forecast in E-commerce using Convolutional Neural Network" }
id: 19505 | multi_label: true | annotation: null | status: Default

{ "abstract": " In this study, we provide mathematical and practice-driven justification for\nusing $[0,1]$ normalization of inconsistency indicators in pairwise\ncomparisons. The need for normalization, as well as problems with the lack of\nnormalization, are presented. A new type of paradox of infinity is described.\n", "title": "On normalization of inconsistency indicators in pairwise comparisons" }
id: 19506 | multi_label: true | annotation: null | status: Default

{ "abstract": " Let $F$ be a $p$-adic fied, $E$ be a quadratic extension of $F$, and $D$ be\nan $F$-division algebra of odd index. Set $H=\\mathrm{GL}m,D)$ and\n$G=\\mathrm{GL}(m,D\\otimes_F E)$, we carry out a fine study of local\nintertwining open periods attached to $H$-distinguished induced representations\nof inner forms of $G$. These objects have been studied globally in \\cite{JLR}\nand \\cite{LR}, and locally in \\cite{BD08}. Here we give sufficient conditions\nfor the local intertwining periods to have singularities. By a local/global\nmethod, we also compute in terms of Asai gamma factors the proportionality\nconstants involved in their functional equations with respect to certain\nintertwining operators. As a consequence, we classify distinguished unitary and\nladder representations of $G$, extending respectively the results of \\cite{M14}\nand \\cite{G15} for $D=F$, which both relied at some crucial step on the theory\nof Bernstein-Zelevinsky derivatives. We make use of one of the main results of\n\\cite{BP17} in our setting, which in the case of the group $G$, asserts that\nthe Jacquet-Langlands correspondence preserves distinction. Such a result is\nfor discrete series representations, but our method in fact allows us to use it\nonly for cuspidal representations of $G$.\n", "title": "Gamma factors of intertwining periods and distinction for inner forms of $\\GL(n)$" }
id: 19507 | multi_label: true | annotation: null | status: Default

{ "abstract": " A simple double-decker molecule with magnetic anisotropy, nickelocene, is\nattached to the metallic tip of a low-temperature scanning tunneling\nmicroscope. In the presence of a Cu(100) surface, the conductance around the\nFermi energy is governed by spin-flip scattering, the nature of which is\ndetermined by the tunneling barrier thickness. The molecular tip exhibits\ninelastic spin-flip scattering in the tunneling regime, while in the contact\nregime a Kondo ground state is stabilized causing an order of magnitude change\nin the zero-bias conductance. First principle calculations show that\nnickelocene reversibly switches from a spin 1 to 1/2 between the two transport\nregimes.\n", "title": "Spin-flip scattering selection in a controlled molecular junction" }
id: 19508 | multi_label: true | annotation: null | status: Default

{ "abstract": " We consider nonlinear transport equations with non-local velocity, describing\nthe time-evolution of a measure, which in practice may represent the density of\na crowd. Such equations often appear by taking the mean-field limit of\nfinite-dimensional systems modelling collective dynamics. We first give a sense\nto dissipativity of these mean-field equations in terms of Lie derivatives of a\nLyapunov function depending on the measure. Then, we address the problem of\ncontrolling such equations by means of a time-varying bounded control action\nlocalized on a time-varying control subset with bounded Lebesgue measure\n(sparsity space constraint). Finite-dimensional versions are given by\ncontrol-affine systems, which can be stabilized by the well known\nJurdjevic--Quinn procedure. In this paper, assuming that the uncontrolled\ndynamics are dissipative, we develop an approach in the spirit of the classical\nJurdjevic--Quinn theorem, showing how to steer the system to an invariant\nsublevel of the Lyapunov function. The control function and the control domain\nare designed in terms of the Lie derivatives of the Lyapunov function, and\nenjoy sparsity properties in the sense that the control support is small.\nFinally, we show that our result applies to a large class of kinetic equations\nmodelling multi-agent dynamics.\n", "title": "Mean-Field Sparse Jurdjevic--Quinn Control" }
id: 19509 | multi_label: true | annotation: null | status: Default

{ "abstract": " Sampling from large networks represents a fundamental challenge for social\nnetwork research. In this paper, we explore the sensitivity of different\nsampling techniques (node sampling, edge sampling, random walk sampling, and\nsnowball sampling) on social networks with attributes. We consider the special\ncase of networks (i) where we have one attribute with two values (e.g., male\nand female in the case of gender), (ii) where the size of the two groups is\nunequal (e.g., a male majority and a female minority), and (iii) where nodes\nwith the same or different attribute value attract or repel each other (i.e.,\nhomophilic or heterophilic behavior). We evaluate the different sampling\ntechniques with respect to conserving the position of nodes and the visibility\nof groups in such networks. Experiments are conducted both on synthetic and\nempirical social networks. Our results provide evidence that different network\nsampling techniques are highly sensitive with regard to capturing the expected\ncentrality of nodes, and that their accuracy depends on relative group size\ndifferences and on the level of homophily that can be observed in the network.\nWe conclude that uninformed sampling from social networks with attributes thus\ncan significantly impair the ability of researchers to draw valid conclusions\nabout the centrality of nodes and the visibility or invisibility of groups in\nsocial networks.\n", "title": "Sampling from Social Networks with Attributes" }
id: 19510 | multi_label: true | annotation: null | status: Default

{ "abstract": " Recent works have shown that social media platforms are able to influence the\ntrends of stock price movements. However, existing works have majorly focused\non the U.S. stock market and lacked attention to certain emerging countries\nsuch as China, where retail investors dominate the market. In this regard, as\nretail investors are prone to be influenced by news or other social media,\npsychological and behavioral features extracted from social media platforms are\nthought to well predict stock price movements in the China's market. Recent\nadvances in the investor social network in China enables the extraction of such\nfeatures from web-scale data. In this paper, on the basis of tweets from\nXueqiu, a popular Chinese Twitter-like social platform specialized for\ninvestors, we analyze features with regard to collective sentiment and\nperception on stock relatedness and predict stock price movements by employing\nnonlinear models. The features of interest prove to be effective in our\nexperiments.\n", "title": "Exploiting Investors Social Network for Stock Prediction in China's Market" }
id: 19511 | multi_label: true | annotation: null | status: Default

{ "abstract": " Can we make a famous rap singer like Eminem sing whatever our favorite song?\nSinging style transfer attempts to make this possible, by replacing the vocal\nof a song from the source singer to the target singer. This paper presents a\nmethod that learns from unpaired data for singing style transfer using\ngenerative adversarial networks.\n", "title": "Singing Style Transfer Using Cycle-Consistent Boundary Equilibrium Generative Adversarial Networks" }
id: 19512 | multi_label: true | annotation: [ "Computer Science" ] | status: Validated

{ "abstract": " We present a robust deep learning based 6 degrees-of-freedom (DoF)\nlocalization system for endoscopic capsule robots. Our system mainly focuses on\nlocalization of endoscopic capsule robots inside the GI tract using only visual\ninformation captured by a mono camera integrated to the robot. The proposed\nsystem is a 23-layer deep convolutional neural network (CNN) that is capable to\nestimate the pose of the robot in real time using a standard CPU. The dataset\nfor the evaluation of the system was recorded inside a surgical human stomach\nmodel with realistic surface texture, softness, and surface liquid properties\nso that the pre-trained CNN architecture can be transferred confidently into a\nreal endoscopic scenario. An average error of 7:1% and 3:4% for translation and\nrotation has been obtained, respectively. The results accomplished from the\nexperiments demonstrate that a CNN pre-trained with raw 2D endoscopic images\nperforms accurately inside the GI tract and is robust to various challenges\nposed by reflection distortions, lens imperfections, vignetting, noise, motion\nblur, low resolution, and lack of unique landmarks to track.\n", "title": "A Deep Learning Based 6 Degree-of-Freedom Localization Method for Endoscopic Capsule Robots" }
id: 19513 | multi_label: true | annotation: null | status: Default

{ "abstract": " We study a generalization of a cross-diffusion problem deduced from a\nnonlinear complex-variable diffusion model for signal and image denoising.\nWe prove the existence of weak solutions of the time-independent problem with\nfidelity terms under mild conditions on the data problem. Then, we show that\nthis translates on the well-posedness of a quasi-steady state approximation of\nthe evolution problem, and also prove the existence of weak solutions of the\nlatter under more restrictive hypothesis.\nWe finally perform some numerical simulations for image denoising, comparing\nthe performance of the cross-diffusion model and its corresponding scalar\nPerona-Malik equation.\n", "title": "On a cross-diffusion system arising in image denosing" }
id: 19514 | multi_label: true | annotation: [ "Mathematics" ] | status: Validated

{ "abstract": " In distributed storage systems, locally repairable codes (LRCs) are\nintroduced to realize low disk I/O and repair cost. In order to tolerate\nmultiple node failures, the LRCs with \\emph{$(r, \\delta)$-locality} are further\nproposed. Since hot data is not uncommon in a distributed storage system, both\nZeh \\emph{et al.} and Kadhe \\emph{et al.} focus on the LRCs with \\emph{multiple\nlocalities or unequal localities} (ML-LRCs) recently, which said that the\nlocalities among the code symbols can be different. ML-LRCs are attractive and\nuseful in reducing repair cost for hot data. In this paper, we generalize the\nML-LRCs to the $(r,\\delta)$-locality case of multiple node failures, and define\nan LRC with multiple $(r_{i}, \\delta_{i})_{i\\in [s]}$ localities ($s\\ge 2$),\nwhere $r_{1}\\leq r_{2}\\leq\\dots\\leq r_{s}$ and\n$\\delta_{1}\\geq\\delta_{2}\\geq\\dots\\geq\\delta_{s}\\geq2$. Such codes ensure that\nsome hot data could be repaired more quickly and have better failure-tolerance\nin certain cases because of relatively smaller $r_{i}$ and larger $\\delta_{i}$.\nThen, we derive a Singleton-like upper bound on the minimum distance for the\nproposed LRCs by employing the regenerating-set technique. Finally, we obtain a\nclass of explicit and structured constructions of optimal ML-LRCs, and further\nextend them to the cases of multiple $(r_{i}, \\delta)_{i\\in [s]}$ localities.\n", "title": "Locally Repairable Codes with Multiple $(r_{i}, δ_{i})$-Localities" }
id: 19515 | multi_label: true | annotation: null | status: Default

{ "abstract": " Syntax-guided synthesis aims to find a program satisfying semantic\nspecification as well as user-provided structural hypothesis. For syntax-guided\nsynthesis there are two main search strategies: concrete search, which\nsystematically or stochastically enumerates all possible solutions, and\nsymbolic search, which interacts with a constraint solver to solve the\nsynthesis problem. In this paper, we propose a concolic synthesis framework\nwhich combines the best of the two worlds. Based on a decision tree\nrepresentation, our framework works by enumerating tree heights from the\nsmallest possible one to larger ones. For each fixed height, the framework\nsymbolically searches a solution through the counterexample-guided inductive\nsynthesis approach. To compensate the exponential blow-up problem with the\nconcolic synthesis framework, we identify two fragments of synthesis problems\nand develop purely symbolic and more efficient procedures. The two fragments\nare decidable as these procedures are terminating and complete. We implemented\nour synthesis procedures and compared with state-of-the-art synthesizers on a\nrange of benchmarks. Experiments show that our algorithms are promising.\n", "title": "Reconciling Enumerative and Symbolic Search in Syntax-Guided Synthesis" }
id: 19516 | multi_label: true | annotation: null | status: Default

{ "abstract": " In this paper we are concerned with the existence of periodic solutions for\nsemilinear Duffing equations with impulsive effects. Firstly for the autonomous\none, basing on Poincaré-Birkhoff twist theorem, we prove the existence of\ninfinitely many periodic solutions. Secondly, as for the nonautonomous case,\nthe impulse brings us great challenges for the study, and there are only\nfinitely many periodic solutions, which is quite different from the\ncorresponding equation without impulses. Here, taking the autonomous one as an\nauxiliary equation, we find the relation between these two equations and then\nobtain the result also by Poincaré-Birkhoff twist theorem.\n", "title": "Periodic solutions of semilinear Duffing equations with impulsive effects" }
id: 19517 | multi_label: true | annotation: null | status: Default

{ "abstract": " We analyse spectral properties of a class of compact perturbations of block\nToeplitz operators associated with analytic symbols. In particular, a limiting\nabsorption principle and the absence of singular continuous spectrum are shown.\nThe existence and the completeness of wave operators are also obtained. Our\nstudy is based on the construction of a conjugate operator in Mourre sense for\nthe corresponding Laurent operators.\n", "title": "Spectral and scattering theory for perturbed block Toeplitz operators" }
id: 19518 | multi_label: true | annotation: null | status: Default

{ "abstract": " There are tradeoffs between current sharing among distributed resources and\nDC bus voltage stability when conventional droop control is used in DC\nmicrogrids. As current sharing approaches the setpoint, bus voltage deviation\nincreases. Previous studies have suggested using secondary control utilizing\nlinear controllers to overcome drawbacks of droop control. However, linear\ncontrol design depends on an accurate model of the system. The derivation of\nsuch a model is challenging because the noise and disturbances caused by the\ncoupling between sources, loads, and switches in microgrids are\nunder-represented. This under-representation makes linear modeling and control\ninsufficient. Hence, in this paper, we propose a robust adaptive control to\nadjust droop characteristics to satisfy both current sharing and bus voltage\nstability. First, the time-varying models of DC microgrids are derived. Second,\nthe improvements for the adaptive control method are presented. Third, the\napplication of the enhanced adaptive method to DC microgrids is presented to\nsatisfy the system objective. Fourth, simulation and experimental results on a\nmicrogrid show that the adaptive method precisely shares current between two\ndistributed resources and maintains the nominal bus voltage. Last, the\ncomparative study validates the effectiveness of the proposed method over the\nconventional method.\n", "title": "Robust adaptive droop control for DC microgrids" }
id: 19519 | multi_label: true | annotation: [ "Mathematics" ] | status: Validated

{ "abstract": " We prove a new off-diagonal asymptotic of the Bergman kernels associated to\ntensor powers of a positive line bundle on a compact Kähler manifold. We show\nthat if the Kähler potential is real analytic, then the Bergman kernel\naccepts a complete asymptotic expansion in a neighborhood of the diagonal of\nshrinking size $k^{-\\frac14}$. These improve the earlier results in the subject\nfor smooth potentials, where an expansion exists in a $k^{-\\frac12}$\nneighborhood of the diagonal. We obtain our results by finding upper bounds of\nthe form $C^m m!^{2}$ for the Bergman coefficients $b_m(x, \\bar y)$, which is\nan interesting problem on its own. We find such upper bounds using the method\nof Berman-Berndtsson-Sjöstrand. We also show that sharpening these upper\nbounds would improve the rate of shrinking neighborhoods of the diagonal $x=y$\nin our results. In the special case of metrics with local constant holomorphic\nsectional curvatures, we obtain off-diagonal asymptotic in a fixed (as $k \\to\n\\infty$) neighborhood of the diagonal, which recovers a result of Berman [Ber]\n(see Remark 3.5 of [Ber] for higher dimensions). In this case, we also find an\nexplicit formula for the Bergman kernel mod $O(e^{-k \\delta} )$.\n", "title": "Off-diagonal asymptotic properties of Bergman kernels associated to analytic Kähler potentials" }
id: 19520 | multi_label: true | annotation: null | status: Default

{ "abstract": " Networks are powerful instruments to study complex phenomena, but they become\nhard to analyze in data that contain noise. Network backbones provide a tool to\nextract the latent structure from noisy networks by pruning non-salient edges.\nWe describe a new approach to extract such backbones. We assume that edge\nweights are drawn from a binomial distribution, and estimate the error-variance\nin edge weights using a Bayesian framework. Our approach uses a more realistic\nnull model for the edge weight creation process than prior work. In particular,\nit simultaneously considers the propensity of nodes to send and receive\nconnections, whereas previous approaches only considered nodes as emitters of\nedges. We test our model with real world networks of different types (flows,\nstocks, co-occurrences, directed, undirected) and show that our Noise-Corrected\napproach returns backbones that outperform other approaches on a number of\ncriteria. Our approach is scalable, able to deal with networks with millions of\nedges.\n", "title": "Network Backboning with Noisy Data" }
id: 19521 | multi_label: true | annotation: [ "Computer Science", "Physics" ] | status: Validated

{ "abstract": " We present the 2017 DAVIS Challenge on Video Object Segmentation, a public\ndataset, benchmark, and competition specifically designed for the task of video\nobject segmentation. Following the footsteps of other successful initiatives,\nsuch as ILSVRC and PASCAL VOC, which established the avenue of research in the\nfields of scene classification and semantic segmentation, the DAVIS Challenge\ncomprises a dataset, an evaluation methodology, and a public competition with a\ndedicated workshop co-located with CVPR 2017. The DAVIS Challenge follows up on\nthe recent publication of DAVIS (Densely-Annotated VIdeo Segmentation), which\nhas fostered the development of several novel state-of-the-art video object\nsegmentation techniques. In this paper we describe the scope of the benchmark,\nhighlight the main characteristics of the dataset, define the evaluation\nmetrics of the competition, and present a detailed analysis of the results of\nthe participants to the challenge.\n", "title": "The 2017 DAVIS Challenge on Video Object Segmentation" }
id: 19522 | multi_label: true | annotation: [ "Computer Science" ] | status: Validated

{ "abstract": " Context: The success of Stack Overflow and other community-based\nquestion-and-answer (Q&A) sites depends mainly on the will of their members to\nanswer others' questions. In fact, when formulating requests on Q&A sites, we\nare not simply seeking for information. Instead, we are also asking for other\npeople's help and feedback. Understanding the dynamics of the participation in\nQ&A communities is essential to improve the value of crowdsourced knowledge.\nObjective: In this paper, we investigate how information seekers can increase\nthe chance of eliciting a successful answer to their questions on Stack\nOverflow by focusing on the following actionable factors: affect, presentation\nquality, and time.\nMethod: We develop a conceptual framework of factors potentially influencing\nthe success of questions in Stack Overflow. We quantitatively analyze a set of\nover 87K questions from the official Stack Overflow dump to assess the impact\nof actionable factors on the success of technical requests. The information\nseeker reputation is included as a control factor. Furthermore, to understand\nthe role played by affective states in the success of questions, we\nqualitatively analyze questions containing positive and negative emotions.\nFinally, a survey is conducted to understand how Stack Overflow users perceive\nthe guideline suggestions for writing questions.\nResults: We found that regardless of user reputation, successful questions\nare short, contain code snippets, and do not abuse with uppercase characters.\nAs regards affect, successful questions adopt a neutral emotional style.\nConclusion: We provide evidence-based guidelines for writing effective\nquestions on Stack Overflow that software engineers can follow to increase the\nchance of getting technical help. As for the role of affect, we empirically\nconfirmed community guidelines that suggest avoiding rudeness in question\nwriting.\n", "title": "How to Ask for Technical Help? Evidence-based Guidelines for Writing Questions on Stack Overflow" }
id: 19523 | multi_label: true | annotation: null | status: Default

{ "abstract": " Massive multiple-input multiple-output (MIMO) systems, which utilize a large\nnumber of antennas at the base station, are expected to enhance network\nthroughput by enabling improved multiuser MIMO techniques. To deploy many\nantennas in reasonable form factors, base stations are expected to employ\nantenna arrays in both horizontal and vertical dimensions, which is known as\nfull-dimension (FD) MIMO. The most popular two-dimensional array is the uniform\nplanar array (UPA), where antennas are placed in a grid pattern. To exploit the\nfull benefit of massive MIMO in frequency division duplexing (FDD), the\ndownlink channel state information (CSI) should be estimated, quantized, and\nfed back from the receiver to the transmitter. However, it is difficult to\naccurately quantize the channel in a computationally efficient manner due to\nthe high dimensionality of the massive MIMO channel. In this paper, we develop\nboth narrowband and wideband CSI quantizers for FD-MIMO taking the properties\nof realistic channels and the UPA into consideration. To improve quantization\nquality, we focus on not only quantizing dominant radio paths in the channel,\nbut also combining the quantized beams. We also develop a hierarchical beam\nsearch approach, which scans both vertical and horizontal domains jointly with\nmoderate computational complexity. Numerical simulations verify that the\nperformance of the proposed quantizers is better than that of previous CSI\nquantization techniques.\n", "title": "Advanced Quantizer Designs for FDD-Based FD-MIMO Systems Using Uniform Planar Arrays" }
id: 19524 | multi_label: true | annotation: null | status: Default

{ "abstract": " Surface parameterizations have been widely applied to computer graphics and\ndigital geometry processing. In this paper, we propose a novel stretch energy\nminimization (SEM) algorithm for the computation of equiareal parameterizations\nof simply connected open surfaces with a very small area distortion and a\nhighly improved computational efficiency. In addition, the existence of\nnontrivial limit points of the SEM algorithm is guaranteed under some mild\nassumptions of the mesh quality. Numerical experiments indicate that the\nefficiency, accuracy, and robustness of the proposed SEM algorithm outperform\nother state-of-the-art algorithms. Applications of the SEM on surface remeshing\nand surface registration for simply connected open surfaces are demonstrated\nthereafter. Thanks to the SEM algorithm, the computations for these\napplications can be carried out efficiently and robustly.\n", "title": "A Novel Stretch Energy Minimization Algorithm for Equiareal Parameterizations" }
id: 19525 | multi_label: true | annotation: null | status: Default

{ "abstract": " In this paper, we consider a distributed stochastic optimization problem\nwhere the goal is to minimize the time average of a cost function subject to a\nset of constraints on the time averages of related stochastic processes called\npenalties. We assume that the state of the system is evolving in an independent\nand non-stationary fashion and the \"common information\" available at each node\nis distributed and delayed. Such stochastic optimization is an integral part of\nmany important problems in wireless networks such as scheduling, routing,\nresource allocation and crowd sensing. We propose an approximate distributed\nDrift- Plus-Penalty (DPP) algorithm, and show that it achieves a time average\ncost (and penalties) that is within epsilon > 0 of the optimal cost (and\nconstraints) with high probability. Also, we provide a condition on the\nconvergence time t for this result to hold. In particular, for any delay D >= 0\nin the common information, we use a coupling argument to prove that the\nproposed algorithm converges almost surely to the optimal solution. We use an\napplication from wireless sensor network to corroborate our theoretical\nfindings through simulation results.\n", "title": "Time Complexity Analysis of a Distributed Stochastic Optimization in a Non-Stationary Environment" }
id: 19526 | multi_label: true | annotation: null | status: Default

{ "abstract": " In a wide variety of applications, including personalization, we want to\nmeasure the difference in outcome due to an intervention and thus have to deal\nwith counterfactual inference. The feedback from a customer in any of these\nsituations is only 'bandit feedback' - that is, a partial feedback based on\nwhether we chose to intervene or not. Typically randomized experiments are\ncarried out to understand whether an intervention is overall better than no\nintervention. Here we present a feature learning algorithm to learn from a\nrandomized experiment where the intervention in consideration is most effective\nand where it is least effective rather than only focusing on the overall\nimpact, thus adding a context to our learning mechanism and extract more\ninformation. From the randomized experiment, we learn the feature\nrepresentations which divide the population into subpopulations where we\nobserve statistically significant difference in average customer feedback\nbetween those who were subjected to the intervention and those who were not,\nwith a level of significance l, where l is a configurable parameter in our\nmodel. We use this information to derive the value of the intervention in\nconsideration for each instance in the population. With experiments, we show\nthat using this additional learning, in future interventions, the context for\neach instance could be leveraged to decide whether to intervene or not.\n", "title": "Robust Counterfactual Inferences using Feature Learning and their Applications" }
id: 19527 | multi_label: true | annotation: null | status: Default

{ "abstract": " Let $\\Delta$ be a simplicial complex of a matroid $M$. In this paper, we\nexplicitly compute the regularity of all the symbolic powers of a\nStanley-Reisner ideal $I_\\Delta$ in terms of combinatorial data of the matroid\n$M$. In order to do that, we provide a sharp bound between the arboricity of\n$M$ and the circumference of its dual $M^*$.\n", "title": "Regularity of symbolic powers and Arboricity of matroids" }
id: 19528 | multi_label: true | annotation: null | status: Default

{ "abstract": " We propose a max-pooling based loss function for training Long Short-Term\nMemory (LSTM) networks for small-footprint keyword spotting (KWS), with low\nCPU, memory, and latency requirements. The max-pooling loss training can be\nfurther guided by initializing with a cross-entropy loss trained network. A\nposterior smoothing based evaluation approach is employed to measure keyword\nspotting performance. Our experimental results show that LSTM models trained\nusing cross-entropy loss or max-pooling loss outperform a cross-entropy loss\ntrained baseline feed-forward Deep Neural Network (DNN). In addition,\nmax-pooling loss trained LSTM with randomly initialized network performs better\ncompared to cross-entropy loss trained LSTM. Finally, the max-pooling loss\ntrained LSTM initialized with a cross-entropy pre-trained network shows the\nbest performance, which yields $67.6\\%$ relative reduction compared to baseline\nfeed-forward DNN in Area Under the Curve (AUC) measure.\n", "title": "Max-Pooling Loss Training of Long Short-Term Memory Networks for Small-Footprint Keyword Spotting" }
id: 19529 | multi_label: true | annotation: null | status: Default

{ "abstract": " In Packet Scheduling with Adversarial Jamming packets of arbitrary sizes\narrive over time to be transmitted over a channel in which instantaneous\njamming errors occur at times chosen by the adversary and not known to the\nalgorithm. The transmission taking place at the time of jamming is corrupt, and\nthe algorithm learns this fact immediately. An online algorithm maximizes the\ntotal size of packets it successfully transmits and the goal is to develop an\nalgorithm with the lowest possible asymptotic competitive ratio, where the\nadditive constant may depend on packet sizes.\nOur main contribution is a universal algorithm that works for any speedup and\npacket sizes and, unlike previous algorithms for the problem, it does not need\nto know these properties in advance. We show that this algorithm guarantees\n1-competitiveness with speedup 4, making it the first known algorithm to\nmaintain 1-competitiveness with a moderate speedup in the general setting of\narbitrary packet sizes. We also prove a lower bound of $\\phi+1\\approx 2.618$ on\nthe speedup of any 1-competitive deterministic algorithm, showing that our\nalgorithm is close to the optimum.\nAdditionally, we formulate a general framework for analyzing our algorithm\nlocally and use it to show upper bounds on its competitive ratio for speedups\nin $[1,4)$ and for several special cases, recovering some previously known\nresults, each of which had a dedicated proof. In particular, our algorithm is\n3-competitive without speedup, matching both the (worst-case) performance of\nthe algorithm by Jurdzinski et al. and the lower bound by Anta et al.\n", "title": "On Packet Scheduling with Adversarial Jamming and Speedup" }
id: 19530 | multi_label: true | annotation: null | status: Default

{ "abstract": " Despite the progress in high performance computing, Computational Fluid\nDynamics (CFD) simulations are still computationally expensive for many\npractical engineering applications such as simulating large computational\ndomains and highly turbulent flows. One of the major reasons of the high\nexpense of CFD is the need for a fine grid to resolve phenomena at the relevant\nscale, and obtain a grid-independent solution. The fine grid requirements often\ndrive the computational time step size down, which makes long transient\nproblems prohibitively expensive. In the research presented, the feasibility of\na Coarse Grid CFD (CG-CFD) approach is investigated by utilizing Machine\nLearning (ML) algorithms. Relying on coarse grids increases the discretization\nerror. Hence, a method is suggested to produce a surrogate model that predicts\nthe CG-CFD local errors to correct the variables of interest. Given\nhigh-fidelity data, a surrogate model is trained to predict the CG-CFD local\nerrors as a function of the coarse grid local features. ML regression\nalgorithms are utilized to construct a surrogate model that relates the local\nerror and the coarse grid features. This method is applied to a\nthree-dimensional flow in a lid driven cubic cavity domain. The performance of\nthe method was assessed by training the surrogate model on the flow full field\nspatial data and tested on new data (from flows of different Reynolds number\nand/or computed by different grid sizes). The proposed method maximizes the\nbenefit of the available data and shows potential for a good predictive\ncapability.\n", "title": "Coarse-Grid Computational Fluid Dynamic (CG-CFD) Error Prediction using Machine Learning" }
id: 19531 | multi_label: true | annotation: null | status: Default

{ "abstract": " In this paper, we propose a deep learning based approach for facial action\nunit detection by enhancing and cropping the regions of interest. The approach\nis implemented by adding two novel nets (layers): the enhancing layers and the\ncropping layers, to a pretrained CNN model. For the enhancing layers, we\ndesigned an attention map based on facial landmark features and applied it to a\npretrained neural network to conduct enhanced learning (The E-Net). For the\ncropping layers, we crop facial regions around the detected landmarks and\ndesign convolutional layers to learn deeper features for each facial region\n(C-Net). We then fuse the E-Net and the C-Net to obtain our Enhancing and\nCropping (EAC) Net, which can learn both feature enhancing and region cropping\nfunctions. Our approach shows significant improvement in performance compared\nto the state-of-the-art methods applied to BP4D and DISFA AU datasets.\n", "title": "EAC-Net: A Region-based Deep Enhancing and Cropping Approach for Facial Action Unit Detection" }
id: 19532 | multi_label: true | annotation: null | status: Default

{ "abstract": " On the surface of icy dust grains in the dense regions of the interstellar\nmedium a rich chemistry can take place. Due to the low temperature, reactions\nthat proceed via a barrier can only take place through tunneling. The reaction\nH + H$_2$O$_2$ $\\rightarrow$ H$_2$O + OH is such a case with a gas-phase\nbarrier of $\\sim$26.5 kJ/mol. Still the reaction is known to be involved in\nwater formation on interstellar grains. Here, we investigate the influence of a\nwater ice surface and of bulk ice on the reaction rate constant. Rate constants\nare calculated using instanton theory down to 74 K. The ice is taken into\naccount via multiscale modeling, describing the reactants and the direct\nsurrounding at the quantum mechanical level with density functional theory\n(DFT), while the rest of the ice is modeled on the molecular mechanical level\nwith a force field. We find that H$_2$O$_2$ binding energies cannot be captured\nby a single value, but rather depend on the number of hydrogen bonds with\nsurface molecules. In highly amorphous surroundings the binding site can block\nthe routes of attack and impede the reaction. Furthermore, the activation\nenergies do not correlate with the binding energies of the same sites. The\nunimolecular rate constants related to the Langmuir-Hinshelwood mechanism\nincrease as the activation energy decreases. Thus, we provide a lower limit for\nthe rate constant and argue that rate constants can have values up to two order\nof magnitude larger than this limit.\n", "title": "Influence of surface and bulk water ice on the reactivity of a water-forming reaction" }
id: 19533 | multi_label: true | annotation: [ "Physics" ] | status: Validated

{ "abstract": " We revisit the fundamental problem of liquid-liquid dewetting and perform a\ndetailed comparison of theoretical predictions based on thin-film models with\nexperimental measurements obtained by atomic force microscopy (AFM).\nSpecifically, we consider the dewetting of a liquid polystyrene (PS) layer from\na liquid polymethyl methacrylate (PMMA) layer, where the thicknesses and the\nviscosities of PS and PMMA layers are similar. The excellent agreement of\nexperiment and theory reveals that dewetting rates for such systems follow no\nuniversal power law, in contrast to dewetting scenarios on solid substrates.\nOur new energetic approach allows to assess the physical importance of\ndifferent contributions to the energy-dissipation mechanism, for which we\nanalyze the local flow fields and the local dissipation rates.\n", "title": "Impact of energy dissipation on interface shapes and on rates for dewetting from liquid substrates" }
id: 19534 | multi_label: true | annotation: null | status: Default

{ "abstract": " We demonstrate the control of resonance characteristics of a drum type\ngraphene mechanical resonator in nonlinear oscillation regime by the\nphotothermal effect, which is induced by a standing wave of light between a\ngraphene and a substrate. Unlike the conventional Duffing type nonlinearity,\nthe resonance characteristics in nonlinear oscillation regime is modulated by\nthe standing wave of light despite a small variation amplitude. From numerical\ncalculations with a combination of equations of heat and motion with Duffing\ntype nonlinearity, this can be explained that the photothermal effect causes\ndelayed modulation of stress or tension of the graphene.\n", "title": "Resonance control of graphene drum resonator in nonlinear regime by standing wave of light" }
id: 19535 | multi_label: true | annotation: null | status: Default

{ "abstract": " A surrogate model approximates a computationally expensive solver. Polynomial\nChaos is a method to construct surrogate models by summing combinations of\ncarefully chosen polynomials. The polynomials are chosen to respect the\nprobability distributions of the uncertain input variables (parameters); this\nallows for both uncertainty quantification and global sensitivity analysis.\nIn this paper we apply these techniques to a commercial solver for the\nestimation of peak gas rate and cumulative gas extraction from a coal seam gas\nwell. The polynomial expansion is shown to honour the underlying geophysics\nwith low error when compared to a much more complex and computationally slower\ncommercial solver. We make use of advanced numerical integration techniques to\nachieve this accuracy using relatively small amounts of training data.\n", "title": "Uncertainty quantification of coal seam gas production prediction using Polynomial Chaos" }
id: 19536 | multi_label: true | annotation: [ "Mathematics" ] | status: Validated

{ "abstract": " The most data-efficient algorithms for reinforcement learning in robotics are\nmodel-based policy search algorithms, which alternate between learning a\ndynamical model of the robot and optimizing a policy to maximize the expected\nreturn given the model and its uncertainties. However, the current algorithms\nlack an effective exploration strategy to deal with sparse or misleading reward\nscenarios: if they do not experience any state with a positive reward during\nthe initial random exploration, it is very unlikely to solve the problem. Here,\nwe propose a novel model-based policy search algorithm, Multi-DEX, that\nleverages a learned dynamical model to efficiently explore the task space and\nsolve tasks with sparse rewards in a few episodes. To achieve this, we frame\nthe policy search problem as a multi-objective, model-based policy optimization\nproblem with three objectives: (1) generate maximally novel state trajectories,\n(2) maximize the expected return and (3) keep the system in state-space regions\nfor which the model is as accurate as possible. We then optimize these\nobjectives using a Pareto-based multi-objective optimization algorithm. The\nexperiments show that Multi-DEX is able to solve sparse reward scenarios (with\na simulated robotic arm) in much lower interaction time than VIME, TRPO,\nGEP-PG, CMA-ES and Black-DROPS.\n", "title": "Multi-objective Model-based Policy Search for Data-efficient Learning with Sparse Rewards" }
id: 19537 | multi_label: true | annotation: null | status: Default

{ "abstract": " This work explores the trade-off between the number of samples required to\naccurately build models of dynamical systems and the degradation of performance\nin various control objectives due to a coarse approximation. In particular, we\nshow that simple models can be easily fit from input/output data and are\nsufficient for achieving various control objectives. We derive bounds on the\nnumber of noisy input/output samples from a stable linear time-invariant system\nthat are sufficient to guarantee that the corresponding finite impulse response\napproximation is close to the true system in the $\\mathcal{H}_\\infty$-norm. We\ndemonstrate that these demands are lower than those derived in prior art which\naimed to accurately identify dynamical models. We also explore how different\nphysical input constraints, such as power constraints, affect the sample\ncomplexity. Finally, we show how our analysis fits within the established\nframework of robust control, by demonstrating how a controller designed for an\napproximate system provably meets performance objectives on the true system.\n", "title": "Non-Asymptotic Analysis of Robust Control from Coarse-Grained Identification" }
id: 19538 | multi_label: true | annotation: null | status: Default

{ "abstract": " The stability (or instability) of synchronization is important in a number of\nreal world systems, including the power grid, the human brain and biological\ncells. For identical synchronization, the synchronizability of a network, which\ncan be measured by the range of coupling strength that admits stable\nsynchronization, can be optimized for a given number of nodes and links.\nDepending on the geometric degeneracy of the Laplacian eigenvectors, optimal\nnetworks can be classified into different sensitivity levels, which we define\nas a network's sensitivity index. We introduce an efficient and explicit way to\nconstruct optimal networks of arbitrary size over a wide range of sensitivity\nand link densities. Using coupled chaotic oscillators, we study synchronization\ndynamics on optimal networks, showing that cospectral optimal networks can have\ndrastically different speed of synchronization. Such difference in dynamical\nstability is found to be closely related to the different structural\nsensitivity of these networks: generally, networks with high sensitivity index\nare slower to synchronize, and, surprisingly, may not synchronize at all,\ndespite being theoretically stable under linear stability analysis.\n", "title": "Construction,sensitivity index, and synchronization speed of optimal networks" }
id: 19539 | multi_label: true | annotation: null | status: Default

{ "abstract": " Vortices, turbulence, and unsteady non-laminar flows are likely both\nprominent and dynamically important features of astrophysical disks. Such\nstrongly nonlinear phenomena are often difficult, however, to simulate\naccurately, and are generally amenable to analytic treatment only in idealized\nform. In this paper, we explore the evolution of compressible two-dimensional\nflows using an implicit dual-time hydrodynamical scheme that strictly conserves\nvorticity (if applied to simulate inviscid flows for which Kelvin's Circulation\nTheorem is applicable). The algorithm is based on the work of Lerat, Falissard\n& Side (2007), who proposed it in the context of terrestrial applications such\nas the blade-vortex interactions generated by helicopter rotors. We present\nseveral tests of Lerat et al.'s vorticity-preserving approach, which we have\nimplemented to second-order accuracy, providing side-by-side comparisons with\nother algorithms that are frequently used in protostellar disk simulations. The\ncomparison codes include one based on explicit, second-order van-Leer\nadvection, one based on spectral methods, and another that implements a\nhigher-order Godunov solver. Our results suggest that Lerat et al's algorithm\nwill be useful for simulations of astrophysical environments in which vortices\nplay a dynamical role, and where strong shocks are not expected.\n", "title": "A Vorticity-Preserving Hydrodynamical Scheme for Modeling Accretion Disk Flows" }
id: 19540 | multi_label: true | annotation: [ "Physics" ] | status: Validated

{ "abstract": " Direct numerical simulation of liquid-gas-solid flows is uncommon due to the\nconsiderable computational cost. As the grid spacing is determined by the\nsmallest involved length scale, large grid sizes become necessary -- in\nparticular if the bubble-particle aspect ratio is on the order of 10 or larger.\nHence, it arises the question of both feasibility and reasonability. In this\npaper, we present a fully parallel, scalable method for direct numerical\nsimulation of bubble-particle interaction at a size ratio of 1-2 orders of\nmagnitude that makes simulations feasible on currently available\nsuper-computing resources. With the presented approach, simulations of bubbles\nin suspension columns consisting of more than $100\\,000$ fully resolved\nparticles become possible. Furthermore, we demonstrate the significance of\nparticle-resolved simulations by comparison to previous unresolved solutions.\nThe results indicate that fully-resolved direct numerical simulation is indeed\nnecessary to predict the flow structure of bubble-particle interaction problems\ncorrectly.\n", "title": "Direct simulation of liquid-gas-solid flow with a free surface lattice Boltzmann method" }
id: 19541 | multi_label: true | annotation: null | status: Default

{ "abstract": " This paper focuses on temporal localization of actions in untrimmed videos.\nExisting methods typically train classifiers for a pre-defined list of actions\nand apply them in a sliding window fashion. However, activities in the wild\nconsist of a wide combination of actors, actions and objects; it is difficult\nto design a proper activity list that meets users' needs. We propose to\nlocalize activities by natural language queries. Temporal Activity Localization\nvia Language (TALL) is challenging as it requires: (1) suitable design of text\nand video representations to allow cross-modal matching of actions and language\nqueries; (2) ability to locate actions accurately given features from sliding\nwindows of limited granularity. We propose a novel Cross-modal Temporal\nRegression Localizer (CTRL) to jointly model text query and video clips, output\nalignment scores and action boundary regression results for candidate clips.\nFor evaluation, we adopt TaCoS dataset, and build a new dataset for this task\non top of Charades by adding sentence temporal annotations, called\nCharades-STA. We also build complex sentence queries in Charades-STA for test.\nExperimental results show that CTRL outperforms previous methods significantly\non both datasets.\n", "title": "TALL: Temporal Activity Localization via Language Query" }
id: 19542 | multi_label: true | annotation: null | status: Default

{ "abstract": " We present here numerical modelling of granular flows with the $\\mu(I)$\nrheology in confined channels. The contribution is twofold: (i) a model to\napproximate the Navier-Stokes equations with the $\\mu(I)$ rheology through an\nasymptotic analysis. Under the hypothesis of a one-dimensional flow, this model\ntakes into account side walls friction; (ii) a multilayer discretization\nfollowing Fernández-Nieto et al. (J. Fluid Mech., vol. 798, 2016, pp.\n643-681). In this new numerical scheme, we propose an appropriate treatment of\nthe rheological terms through a hydrostatic reconstruction which allows this\nscheme to be well-balanced and therefore to deal with dry areas. Based on\nacademic tests, we first evaluate the influence of the width of the channel on\nthe normal profiles of the downslope velocity thanks to the multilayer approach\nthat is intrinsically able to describe changes from Bagnold to S-shaped (and\nvice versa) velocity profiles. We also check the well balance property of the\nproposed numerical scheme. We show that approximating side walls friction using\nsingle-layer models may lead to strong errors. Secondly, we compare the\nnumerical results with experimental data on granular collapses. We show that\nthe proposed scheme allows us to qualitatively reproduce the deposit in the\ncase of a rigid bed (i. e. dry area) and that the error made by replacing the\ndry area by a small layer of material may be large if this layer is not thin\nenough. The proposed model is also able to reproduce the time evolution of the\nfree surface and of the flow/no-flow interface. In addition, it reproduces the\neffect of erosion for granular flows over initially static material lying on\nthe bed. This is possible when using a variable friction coefficient $\\mu(I)$\nbut not with a constant friction coefficient.\n", "title": "2D granular flows with the $μ(I)$ rheology and side walls friction: a well balanced multilayer discretization" }
id: 19543 | multi_label: true | annotation: [ "Physics" ] | status: Validated

{ "abstract": " Like it or not, attempts to evaluate and monitor the quality of academic\nresearch have become increasingly prevalent worldwide. Performance reviews\nrange from at the level of individuals, through research groups and\ndepartments, to entire universities. Many of these are informed by, or\nfunctions of, simple scientometric indicators and the results of such exercises\nimpact onto careers, funding and prestige. However, there is sometimes a\nfailure to appreciate that scientometrics are, at best, very blunt instruments\nand their incorrect usage can be misleading. Rather than accepting the rise and\nfall of individuals and institutions on the basis of such imprecise measures,\ncalls have been made for indicators be regularly scrutinised and for\nimprovements to the evidence base in this area. It is thus incumbent upon the\nscientific community, especially the physics, complexity-science and\nscientometrics communities, to scrutinise metric indicators. Here, we review\nrecent attempts to do this and show that some metrics in widespread use cannot\nbe used as reliable indicators research quality.\n", "title": "A scientists' view of scientometrics: Not everything that counts can be counted" }
null
null
null
null
true
null
19544
null
Default
null
null
null
{ "abstract": " Widespread usage of complex interconnected social networks such as Facebook,\nTwitter and LinkedIn in modern internet era has also unfortunately opened the\ndoor for privacy violation of users of such networks by malicious entities. In\nthis article we investigate, both theoretically and empirically, privacy\nviolation measures of large networks under active attacks that was recently\nintroduced in (Information Sciences, 328, 403-417, 2016). Our theoretical\nresult indicates that the network manager responsible for prevention of privacy\nviolation must be very careful in designing the network if its topology does\nnot contain a cycle. Our empirical results shed light on privacy violation\nproperties of eight real social networks as well as a large number of synthetic\nnetworks generated by both the classical Erdos-Renyi model and the scale-free\nrandom networks generated by the Barabasi-Albert preferential-attachment model.\n", "title": "On analyzing and evaluating privacy measures for social networks under active attack" }
null
null
null
null
true
null
19545
null
Default
null
null
null
{ "abstract": " Recent developments have established the vulnerability of deep reinforcement\nlearning to policy manipulation attacks via intentionally perturbed inputs,\nknown as adversarial examples. In this work, we propose a technique for\nmitigation of such attacks based on addition of noise to the parameter space of\ndeep reinforcement learners during training. We experimentally verify the\neffect of parameter-space noise in reducing the transferability of adversarial\nexamples, and demonstrate the promising performance of this technique in\nmitigating the impact of whitebox and blackbox attacks at both test and\ntraining times.\n", "title": "Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise" }
null
null
null
null
true
null
19546
null
Default
null
null
null
{ "abstract": " We study loss functions that measure the accuracy of a prediction based on\nmultiple data points simultaneously. To our knowledge, such loss functions have\nnot been studied before in the area of property elicitation or in machine\nlearning more broadly. As compared to traditional loss functions that take only\na single data point, these multi-observation loss functions can in some cases\ndrastically reduce the dimensionality of the hypothesis required. In\nelicitation, this corresponds to requiring many fewer reports; in empirical\nrisk minimization, it corresponds to algorithms on a hypothesis space of much\nsmaller dimension. We explore some examples of the tradeoff between\ndimensionality and number of observations, give some geometric\ncharacterizations and intuition for relating loss functions and the properties\nthat they elicit, and discuss some implications for both elicitation and\nmachine-learning contexts.\n", "title": "Multi-Observation Elicitation" }
null
null
[ "Computer Science" ]
null
true
null
19547
null
Validated
null
null
null
{ "abstract": " Performing diagnostics in IT systems is an increasingly complicated task, and\nit is not doable in satisfactory time by even the most skillful operators.\nSystems and their architecture change very rapidly in response to business and\nuser demand. Many organizations see value in the maintenance and management\nmodel of NoOps that stands for No Operations. One of the implementations of\nthis model is a system that is maintained automatically without any human\nintervention. The path to NoOps involves not only precise and fast diagnostics\nbut also reusing as much knowledge as possible after the system is reconfigured\nor changed. The biggest challenge is to leverage knowledge on one IT system and\nreuse this knowledge for diagnostics of another, different system. We propose a\nframework of weighted graphs which can transfer knowledge, and perform\nhigh-quality diagnostics of IT systems. We encode all possible data in a graph\nrepresentation of a system state and automatically calculate weights of these\ngraphs. Then, thanks to the evaluation of similarity between graphs, we\ntransfer knowledge about failures from one system to another and use it for\ndiagnostics. We successfully evaluate the proposed approach on Spark, Hadoop,\nKafka and Cassandra systems.\n", "title": "Next Stop \"NoOps\": Enabling Cross-System Diagnostics Through Graph-based Composition of Logs and Metrics" }
null
null
[ "Computer Science" ]
null
true
null
19548
null
Validated
null
null
null
{ "abstract": " We prove a generalization of the Davenport-Heilbronn theorem to quotients of\nideal class groups of quadratic fields by the primes lying above a fixed set of\nrational primes $S$. Additionally, we obtain average sizes for the relaxed\nSelmer group $\\mathrm{Sel}_3^S(K)$ and for\n$\\mathcal{O}_{K,S}^\\times/(\\mathcal{O}_{K,S}^\\times)^3$ as $K$ varies among\nquadratic fields with a fixed signature ordered by discriminant.\n", "title": "Davenport-Heilbronn Theorems for Quotients of Class Groups" }
null
null
null
null
true
null
19549
null
Default
null
null
null
{ "abstract": " Neural networks are known to be vulnerable to adversarial examples, inputs\nthat have been intentionally perturbed to remain visually similar to the source\ninput, but cause a misclassification. It was recently shown that given a\ndataset and classifier, there exists so called universal adversarial\nperturbations, a single perturbation that causes a misclassification when\napplied to any input. In this work, we introduce universal adversarial\nnetworks, a generative network that is capable of fooling a target classifier\nwhen it's generated output is added to a clean sample from a dataset. We show\nthat this technique improves on known universal adversarial attacks.\n", "title": "Learning Universal Adversarial Perturbations with Generative Models" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
19550
null
Validated
null
null
null
{ "abstract": " We present a novel method for learning the weights in multinomial logistic\nregression based on the alternating direction method of multipliers (ADMM). In\neach iteration, our algorithm decomposes the training into three steps; a\nlinear least-squares problem for the weights, a global variable update\ninvolving a separable cross-entropy loss function, and a trivial dual variable\nupdate The least-squares problem can be factorized in the off-line phase, and\nthe separability in the global variable update allows for efficient\nparallelization, leading to faster convergence. We compare our method with\nstochastic gradient descent for linear classification as well as for transfer\nlearning and show that the proposed ADMM-Softmax leads to improved\ngeneralization and convergence.\n", "title": "Large-Scale Classification using Multinomial Regression and ADMM" }
null
null
null
null
true
null
19551
null
Default
null
null
null
{ "abstract": " We found analytically a first order quantum phase transition in the Cooper\npair box array of $N$ low-capacitance Josephson junctions capacitively coupled\nto a resonant photon in a microwave cavity. The Hamiltonian of the system maps\non the extended Dicke Hamiltonian of $N$ spins one-half with infinitely\ncoordinated antiferromagnetic (frustrating) interaction. This interaction\narises from the gauge-invariant coupling of the Josephson junctions phases to\nthe vector potential of the resonant photon field. In $N \\gg 1$ semiclassical\nlimit, we found a critical coupling at which ground state of the system\nswitches to the one with a net collective electric dipole moment of the Cooper\npair boxes coupled to superradiant equilibrium photonic condensate. This phase\ntransition changes from the first to second order if the frustrating\ninteraction is switched off. A self-consistently `rotating' Holstein-Primakoff\nrepresentation for the Cartesian components of the total superspin is proposed,\nthat enables to trace both the first and the second order quantum phase\ntransitions in the extended and standard Dicke models respectively.\n", "title": "First order dipolar phase transition in the Dicke model with infinitely coordinated frustrating interaction" }
null
null
null
null
true
null
19552
null
Default
null
null
null
{ "abstract": " Combining Bayesian nonparametrics and a forward model selection strategy, we\nconstruct parsimonious Bayesian deep networks (PBDNs) that infer\ncapacity-regularized network architectures from the data and require neither\ncross-validation nor fine-tuning when training the model. One of the two\nessential components of a PBDN is the development of a special infinite-wide\nsingle-hidden-layer neural network, whose number of active hidden units can be\ninferred from the data. The other one is the construction of a greedy\nlayer-wise learning algorithm that uses a forward model selection criterion to\ndetermine when to stop adding another hidden layer. We develop both Gibbs\nsampling and stochastic gradient descent based maximum a posteriori inference\nfor PBDNs, providing state-of-the-art classification accuracy and interpretable\ndata subtypes near the decision boundaries, while maintaining low computational\ncomplexity for out-of-sample prediction.\n", "title": "Parsimonious Bayesian deep networks" }
null
null
[ "Statistics" ]
null
true
null
19553
null
Validated
null
null
null
{ "abstract": " Achieving superhuman playing level by AlphaGo corroborated the capabilities\nof convolutional neural architectures (CNNs) for capturing complex spatial\npatterns. This result was to a great extent due to several analogies between Go\nboard states and 2D images CNNs have been designed for, in particular\ntranslational invariance and a relatively large board. In this paper, we verify\nwhether CNN-based move predictors prove effective for Othello, a game with\nsignificantly different characteristics, including a much smaller board size\nand complete lack of translational invariance. We compare several CNN\narchitectures and board encodings, augment them with state-of-the-art\nextensions, train on an extensive database of experts' moves, and examine them\nwith respect to move prediction accuracy and playing strength. The empirical\nevaluation confirms high capabilities of neural move predictors and suggests a\nstrong correlation between prediction accuracy and playing strength. The best\nCNNs not only surpass all other 1-ply Othello players proposed to date but\ndefeat (2-ply) Edax, the best open-source Othello player.\n", "title": "Learning to Play Othello with Deep Neural Networks" }
null
null
null
null
true
null
19554
null
Default
null
null
null
{ "abstract": " We consider estimating the \"de facto\" or effectiveness estimand in a\nrandomised placebo-controlled or standard-of-care-controlled drug trial with\nquantitative outcome, where participants who discontinue an investigational\ntreatment are not followed up thereafter. Carpenter et al (2013) proposed\nreference-based imputation methods which use a reference arm to inform the\ndistribution of post-discontinuation outcomes and hence to inform an imputation\nmodel. However, the reference-based imputation methods were not formally\njustified. We present a causal model which makes an explicit assumption in a\npotential outcomes framework about the maintained causal effect of treatment\nafter discontinuation. We show that the \"jump to reference\", \"copy reference\"\nand \"copy increments in reference\" reference-based imputation methods, with the\ncontrol arm as the reference arm, are special cases of the causal model with\nspecific assumptions about the causal treatment effect. Results from simulation\nstudies are presented. We also show that the causal model provides a flexible\nand transparent framework for a tipping point sensitivity analysis in which we\nvary the assumptions made about the causal effect of discontinued treatment. We\nillustrate the approach with data from two longitudinal clinical trials.\n", "title": "A causal modelling framework for reference-based imputation and tipping point analysis" }
null
null
[ "Statistics" ]
null
true
null
19555
null
Validated
null
null
null
{ "abstract": " Recent studies in social media spam and automation provide anecdotal\nargumentation of the rise of a new generation of spambots, so-called social\nspambots. Here, for the first time, we extensively study this novel phenomenon\non Twitter and we provide quantitative evidence that a paradigm-shift exists in\nspambot design. First, we measure current Twitter's capabilities of detecting\nthe new social spambots. Later, we assess the human performance in\ndiscriminating between genuine accounts, social spambots, and traditional\nspambots. Then, we benchmark several state-of-the-art techniques proposed by\nthe academic literature. Results show that neither Twitter, nor humans, nor\ncutting-edge applications are currently capable of accurately detecting the new\nsocial spambots. Our results call for new approaches capable of turning the\ntide in the fight against this raising phenomenon. We conclude by reviewing the\nlatest literature on spambots detection and we highlight an emerging common\nresearch trend based on the analysis of collective behaviors. Insights derived\nfrom both our extensive experimental campaign and survey shed light on the most\npromising directions of research and lay the foundations for the arms race\nagainst the novel social spambots. Finally, to foster research on this novel\nphenomenon, we make publicly available to the scientific community all the\ndatasets used in this study.\n", "title": "The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race" }
null
null
null
null
true
null
19556
null
Default
null
null
null
{ "abstract": " We explore the power of spatial context as a self-supervisory signal for\nlearning visual representations. In particular, we propose spatial context\nnetworks that learn to predict a representation of one image patch from another\nimage patch, within the same image, conditioned on their real-valued relative\nspatial offset. Unlike auto-encoders, that aim to encode and reconstruct\noriginal image patches, our network aims to encode and reconstruct intermediate\nrepresentations of the spatially offset patches. As such, the network learns a\nspatially conditioned contextual representation. By testing performance with\nvarious patch selection mechanisms we show that focusing on object-centric\npatches is important, and that using object proposal as a patch selection\nmechanism leads to the highest improvement in performance. Further, unlike\nauto-encoders, context encoders [21], or other forms of unsupervised feature\nlearning, we illustrate that contextual supervision (with pre-trained model\ninitialization) can improve on existing pre-trained model performance. We build\nour spatial context networks on top of standard VGG_19 and CNN_M architectures\nand, among other things, show that we can achieve improvements (with no\nadditional explicit supervision) over the original ImageNet pre-trained VGG_19\nand CNN_M models in object categorization and detection on VOC2007.\n", "title": "Weakly-Supervised Spatial Context Networks" }
null
null
null
null
true
null
19557
null
Default
null
null
null
{ "abstract": " We present a statistical analysis of the variability of broad absorption\nlines (BALs) in quasars using the large multi-epoch spectroscopic dataset of\nthe Sloan Digital Sky Survey Data Release 12 (SDSS DR12). We divide the sample\ninto two groups according to the pattern of the variation of C iv BAL with\nrespect to that of continuum: the equivalent widths (EW) of the BAL decreases\n(increases) when the continuum brightens (dims) as group T1; and the variation\nof EW and continuum in the opposite relation as group T2. We find that T2 has\nsignificantly (P_T<10-6 , Students T Test) higher EW ratios (R) of Si iv to C\niv BAL than T1. Our result agrees with the prediction of photoionization models\nthat C +3 column density increases (decreases) if there is a (or no) C +3\nionization front while R decreases with the incident continuum. We show that\nBAL variabilities in at least 80% quasars are driven by the variation of\nionizing continuum while other models that predict uncorrelated BAL and\ncontinuum variability contribute less than 20%. Considering large uncertainty\nin the continuum flux calibration, the latter fraction may be much smaller.\nWhen the sample is binned into different time interval between the two\nobservations, we find significant difference in the distribution of R between\nT1 and T2 in all time-bins down to a deltaT < 6 days, suggesting that BAL\noutflow in a fraction of quasars has a recombination time scale of only a few\ndays.\n", "title": "Variation of ionizing continuum: the main driver of Broad Absorption Line Variability" }
null
null
null
null
true
null
19558
null
Default
null
null
null
{ "abstract": " The line spectral estimation problem consists in recovering the frequencies\nof a complex valued time signal that is assumed to be sparse in the spectral\ndomain from its discrete observations. Unlike the gridding required by the\nclassical compressed sensing framework, line spectral estimation reconstructs\nsignals whose spectral supports lie continuously in the Fourier domain. If\nrecent advances have shown that atomic norm relaxation produces highly robust\nestimates in this context, the computational cost of this approach remains,\nhowever, the major flaw for its application to practical systems.\nIn this work, we aim to bridge the complexity issue by studying the atomic\nnorm minimization problem from low dimensional projection of the signal\nsamples. We derive conditions on the sub-sampling matrix under which the\npartial atomic norm can be expressed by a low-dimensional semidefinite program.\nMoreover, we illustrate the tightness of this relaxation by showing that it is\npossible to recover the original signal in poly-logarithmic time for two\nspecific sub-sampling patterns.\n", "title": "Low Dimensional Atomic Norm Representations in Line Spectral Estimation" }
null
null
null
null
true
null
19559
null
Default
null
null
null
{ "abstract": " Monte Carlo methods approximate integrals by sample averages of integrand\nvalues. The error of Monte Carlo methods may be expressed as a trio identity:\nthe product of the variation of the integrand, the discrepancy of the sampling\nmeasure, and the confounding. The trio identity has different versions,\ndepending on whether the integrand is deterministic or Bayesian and whether the\nsampling measure is deterministic or random. Although the variation and the\ndiscrepancy are common in the literature, the confounding is relatively unknown\nand under-appreciated. Theory and examples are used to show how the cubature\nerror may be reduced by employing the low discrepancy sampling that defines\nquasi-Monte Carlo methods. The error may also be reduced by rewriting the\nintegral in terms of a different integrand. Finally, the confounding explains\nwhy the cubature error might decay at a rate different from that of the\ndiscrepancy.\n", "title": "The Trio Identity for Quasi-Monte Carlo Error" }
null
null
[ "Mathematics" ]
null
true
null
19560
null
Validated
null
null
null
{ "abstract": " Let $f: \\mathbb{R}^d \\to\\mathbb{R}$ be a Lipschitz function. If $B$ is a\nbounded self-adjoint operator and if $\\{A_k\\}_{k=1}^d$ are commuting bounded\nself-adjoint operators such that $[A_k,B]\\in L_1(H),$ then\n$$\\|[f(A_1,\\cdots,A_d),B]\\|_{1,\\infty}\\leq\nc(d)\\|\\nabla(f)\\|_{\\infty}\\max_{1\\leq k\\leq d}\\|[A_k,B]\\|_1,$$ where $c(d)$ is\na constant independent of $f$, $\\mathcal{M}$ and $A,B$ and\n$\\|\\cdot\\|_{1,\\infty}$ denotes the weak $L_1$-norm. If $\\{X_k\\}_{k=1}^d$\n(respectively, $\\{Y_k\\}_{k=1}^d$) are commuting bounded self-adjoint operators\nsuch that $X_k-Y_k\\in L_1(H),$ then\n$$\\|f(X_1,\\cdots,X_d)-f(Y_1,\\cdots,Y_d)\\|_{1,\\infty}\\leq\nc(d)\\|\\nabla(f)\\|_{\\infty}\\max_{1\\leq k\\leq d}\\|X_k-Y_k\\|_1.$$\n", "title": "Weak type operator Lipschitz and commutator estimates for commuting tuples" }
null
null
null
null
true
null
19561
null
Default
null
null
null
{ "abstract": " Despite the success of deep learning on representing images for particular\nobject retrieval, recent studies show that the learned representations still\nlie on manifolds in a high dimensional space. This makes the Euclidean nearest\nneighbor search biased for this task. Exploring the manifolds online remains\nexpensive even if a nearest neighbor graph has been computed offline. This work\nintroduces an explicit embedding reducing manifold search to Euclidean search\nfollowed by dot product similarity search. This is equivalent to linear graph\nfiltering of a sparse signal in the frequency domain. To speed up online\nsearch, we compute an approximate Fourier basis of the graph offline. We\nimprove the state of art on particular object retrieval datasets including the\nchallenging Instre dataset containing small objects. At a scale of 10^5 images,\nthe offline cost is only a few hours, while query time is comparable to\nstandard similarity search.\n", "title": "Fast Spectral Ranking for Similarity Search" }
null
null
[ "Computer Science" ]
null
true
null
19562
null
Validated
null
null
null
{ "abstract": " We study a frequency-dependent damping model of hyper-diffusion within the\ngeneralized Langevin equation. The model allows for the colored noise defined\nby its spectral density, assumed to be proportional to $\\omega^{\\delta-1}$ at\nlow frequencies with $0<\\delta<1$ (sub-Ohmic damping) or $1<\\delta<2$\n(super-Ohmic damping), where the frequency-dependent damping is deduced from\nthe noise by means of the fluctuation-dissipation theorem. It is shown that for\nsuper-Ohmic damping and certain parameters, the diffusive process of the\nparticle in a titled periodic potential undergos sequentially four\ntime-regimes: thermalization, hyper-diffusion, collapse and asymptotical\nrestoration. For analysing transition phenomenon of multi-diffusive states, we\ndemonstrate that the first exist time of the particle escaping from the locked\nstate into the running state abides by an exponential distribution. The concept\nof equivalent velocity trap is introduced in the present model, moreover,\nreformation of ballistic diffusive system is also considered as a marginal\nsituation, however there does not exhibit the collapsed state of diffusion.\n", "title": "Transition of multi-diffusive states in a biased periodic potential" }
null
null
[ "Physics" ]
null
true
null
19563
null
Validated
null
null
null
{ "abstract": " Geometric variations of objects, which do not modify the object class, pose a\nmajor challenge for object recognition. These variations could be rigid as well\nas non-rigid transformations. In this paper, we design a framework for training\ndeformable classifiers, where latent transformation variables are introduced,\nand a transformation of the object image to a reference instantiation is\ncomputed in terms of the classifier output, separately for each class. The\nclassifier outputs for each class, after transformation, are compared to yield\nthe final decision. As a by-product of the classification this yields a\ntransformation of the input object to a reference pose, which can be used for\ndownstream tasks such as the computation of object support. We apply a two-step\ntraining mechanism for our framework, which alternates between optimizing over\nthe latent transformation variables and the classifier parameters to minimize\nthe loss function. We show that multilayer perceptrons, also known as deep\nnetworks, are well suited for this approach and achieve state of the art\nresults on the rotated MNIST and the Google Earth dataset, and produce\ncompetitive results on MNIST and CIFAR-10 when training on smaller subsets of\ntraining data.\n", "title": "Deformable Classifiers" }
null
null
null
null
true
null
19564
null
Default
null
null
null
{ "abstract": " We discuss, in the context of energy flow in high-dimensional systems and\nKolmogorov-Arnol'd-Moser (KAM) theory, the behavior of a chain of rotators\n(rotors) which is purely Hamiltonian, apart from dissipation at just one end.\nWe derive bounds on the dissipation rate which become arbitrarily small in\ncertain physical regimes, and we present numerical evidence that these bounds\nare sharp. We relate this to the decoupling of non-resonant terms as is known\nin KAM problems.\n", "title": "Energy Dissipation in Hamiltonian Chains of Rotators" }
null
null
null
null
true
null
19565
null
Default
null
null
null
{ "abstract": " Floer theory was originally devised to estimate the number of 1-periodic\norbits of Hamiltonian systems. In earlier works, we constructed Floer homology\nfor homoclinic orbits on two dimensional manifolds using combinatorial\ntechniques. In the present paper, we study theoretic aspects of computational\ncomplexity of homoclinic Floer homology. More precisely, for finding the\nhomoclinic points and immersions that generate the homology and its boundary\noperator, we establish sharp upper bounds in terms of iterations of the\nunderlying symplectomorphism. This prepares the ground for future numerical\nworks.\nAlthough originally aimed at numerics, the above bounds provide also purely\nalgebraic applications, namely\n1) Torsion-freeness of primary homoclinic Floer homology.\n2) Morse type inequalities for primary homoclinic orbits.\n", "title": "Computational complexity, torsion-freeness of homoclinic Floer homology, and homoclinic Morse inequalities" }
null
null
null
null
true
null
19566
null
Default
null
null
null
{ "abstract": " Bayesian inference for stochastic volatility models using MCMC methods highly\ndepends on actual parameter values in terms of sampling efficiency. While draws\nfrom the posterior utilizing the standard centered parameterization break down\nwhen the volatility of volatility parameter in the latent state equation is\nsmall, non-centered versions of the model show deficiencies for highly\npersistent latent variable series. The novel approach of\nancillarity-sufficiency interweaving has recently been shown to aid in\novercoming these issues for a broad class of multilevel models. In this paper,\nwe demonstrate how such an interweaving strategy can be applied to stochastic\nvolatility models in order to greatly improve sampling efficiency for all\nparameters and throughout the entire parameter range. Moreover, this method of\n\"combining best of different worlds\" allows for inference for parameter\nconstellations that have previously been infeasible to estimate without the\nneed to select a particular parameterization beforehand.\n", "title": "Ancillarity-Sufficiency Interweaving Strategy (ASIS) for Boosting MCMC Estimation of Stochastic Volatility Models" }
null
null
null
null
true
null
19567
null
Default
null
null
null
{ "abstract": " We study simple root flows and Liouville currents for Hitchin\nrepresentations. We show that the Liouville current is associated to the\nmeasure of maximal entropy for a simple root flow, derive a Liouville volume\nrigidity result, and construct a Liouville pressure metric on the Hitchin\ncomponent.\n", "title": "Simple root flows for Hitchin representations" }
null
null
[ "Mathematics" ]
null
true
null
19568
null
Validated
null
null
null
{ "abstract": " This paper is a review of the evolutionary history of deep learning models.\nIt covers from the genesis of neural networks when associationism modeling of\nthe brain is studied, to the models that dominate the last decade of research\nin deep learning like convolutional neural networks, deep belief networks, and\nrecurrent neural networks. In addition to a review of these models, this paper\nprimarily focuses on the precedents of the models above, examining how the\ninitial ideas are assembled to construct the early models and how these\npreliminary models are developed into their current forms. Many of these\nevolutionary paths last more than half a century and have a diversity of\ndirections. For example, CNN is built on prior knowledge of biological vision\nsystem; DBN is evolved from a trade-off of modeling power and computation\ncomplexity of graphical models and many nowadays models are neural counterparts\nof ancient linear models. This paper reviews these evolutionary paths and\noffers a concise thought flow of how these models are developed, and aims to\nprovide a thorough background for deep learning. More importantly, along with\nthe path, this paper summarizes the gist behind these milestones and proposes\nmany directions to guide the future research of deep learning.\n", "title": "On the Origin of Deep Learning" }
null
null
null
null
true
null
19569
null
Default
null
null
null
{ "abstract": " Imitation is widely observed in populations of decision-making agents. Using\nour recent convergence results for asynchronous imitation dynamics on networks,\nwe consider how such networks can be efficiently driven to a desired\nequilibrium state by offering payoff incentives for using a certain strategy,\neither uniformly or targeted to individuals. In particular, if for each\navailable strategy, agents playing that strategy receive maximum payoff when\ntheir neighbors play that same strategy, we show that providing incentives to\nagents in a network that is at equilibrium will result in convergence to a\nunique new equilibrium. For the case when a uniform incentive can be offered to\nall agents, this result allows the computation of the optimal incentive using a\nbinary search algorithm. When incentives can be targeted to individual agents,\nwe propose an algorithm to select which agents should be chosen based on\niteratively maximizing a ratio of the number of agents who adopt the desired\nstrategy to the payoff incentive required to get those agents to do so.\nSimulations demonstrate that the proposed algorithm computes near-optimal\ntargeted payoff incentives for a range of networks and payoff distributions in\ncoordination games.\n", "title": "Control of Asynchronous Imitation Dynamics on Networks" }
null
null
null
null
true
null
19570
null
Default
null
null
null
{ "abstract": " Complex Finsler vector bundles have been studied mainly by T. Aikou, who\ndefined complex Finsler structures on holomorphic vector bundles. In this\npaper, we consider the more general case of a holomorphic Lie algebroid E and\nwe introduce Finsler structures, partial and Chern-Finsler connections on it.\nFirst, we recall some basic notions on holomorphic Lie algebroids. Then, using\nan idea from E. Martinez, we introduce the concept of complexified prolongation\nof such an algebroid. Also, we study nonlinear and linear connections on the\ntangent bundle of E and on the prolongation of E and we investigate the\nrelation between their coefficients. The analogue of the classical\nChern-Finsler connection is defined and studied in the paper for the case of\nthe holomorphic Lie algebroid.\n", "title": "Finsler structures on holomorphic Lie algebroids" }
null
null
null
null
true
null
19571
null
Default
null
null
null
{ "abstract": " Development and growth are complex and tumultuous processes. Modern economic\ngrowth theories identify some key determinants of economic growth. However, the\nrelative importance of the determinants remains unknown, and additional\nvariables may help clarify the directions and dimensions of the interactions.\nThe novel stream of literature on economic complexity goes beyond aggregate\nmeasures of productive inputs, and considers instead a more granular and\nstructural view of the productive possibilities of countries, i.e. their\ncapabilities. Different endowments of capabilities are crucial ingredients in\nexplaining differences in economic performances. In this paper we employ\neconomic fitness, a measure of productive capabilities obtained through complex\nnetwork techniques. Focusing on the combined roles of fitness and some more\ntraditional drivers of growth, we build a bridge between economic growth\ntheories and the economic complexity literature. Our findings, in agreement\nwith other recent empirical studies, show that fitness plays a crucial role in\nfostering economic growth and, when it is included in the analysis, can be\neither complementary to traditional drivers of growth or can completely\novershadow them.\n", "title": "The role of complex analysis in modeling economic growth" }
null
null
null
null
true
null
19572
null
Default
null
null
null
{ "abstract": " Generating structured input files to test programs can be performed by\ntechniques that produce them from a grammar that serves as the specification\nfor syntactically correct input files. Two interesting scenarios then arise for\neffective testing. In the first scenario, software engineers would like to\ngenerate inputs that are as similar as possible to the inputs in common usage\nof the program, to test the reliability of the program. More interesting is the\nsecond scenario where inputs should be as dissimilar as possible from normal\nusage. This is useful for robustness testing and exploring yet uncovered\nbehavior. To provide test cases for both scenarios, we leverage a context-free\ngrammar to parse a set of sample input files that represent the program's\ncommon usage, and determine probabilities for individual grammar production as\nthey occur during parsing the inputs. Replicating these probabilities during\ngrammar-based test input generation, we obtain inputs that are close to the\nsamples. Inverting these probabilities yields inputs that are strongly\ndissimilar to common inputs, yet still valid with respect to the grammar. Our\nevaluation on three common input formats (JSON, JavaScript, CSS) shows the\neffectiveness of these approaches in obtaining instances from both sets of\ninputs.\n", "title": "Inputs from Hell: Generating Uncommon Inputs from Common Samples" }
null
null
null
null
true
null
19573
null
Default
null
null
null
{ "abstract": " Using the set-up of deformation categories of Talpo and Vistoli, we\nre-interpret and generalize, in the context of cartesian morphisms in abstract\ncategories, some results of Rim concerning obstructions against extensions of\ngroup actions in infinitesimal deformations. Furthermore, we observe that\nfinite étale coverings can be infinitesimally extended and the resulting\nformal scheme is algebraizable. Finally, we show that pre-Tango structures\nsurvive under pullbacks with respect to finite, generically étale surjections\n$\\pi:X\\rightarrow Y$, and record some consequences regarding Kodaira vanishing\nin degree one.\n", "title": "On equivariant formal deformation theory" }
null
null
null
null
true
null
19574
null
Default
null
null
null
{ "abstract": " We developed a simple, physical and self-consistent cloud model for brown\ndwarfs and young giant exoplanets. We compared different parametrisations for\nthe cloud particle size, by either fixing particle radii, or fixing the mixing\nefficiency (parameter fsed) or estimating particle radii from simple\nmicrophysics. The cloud scheme with simple microphysics appears as the best\nparametrisation by successfully reproducing the observed photometry and spectra\nof brown dwarfs and young giant exoplanets. In particular, it reproduces the\nL-T transition, due to the condensation of silicate and iron clouds below the\nvisible/near-IR photosphere. It also reproduces the reddening observed for\nlow-gravity objects, due to an increase of cloud optical depth for low gravity.\nIn addition, we found that the cloud greenhouse effect shifts chemical\nequilibriums, increasing the abundances of species stable at high temperature.\nThis effect should significantly contribute to the strong variation of methane\nabundance at the L-T transition and to the methane depletion observed on young\nexoplanets. Finally, we predict the existence of a continuum of brown dwarfs\nand exoplanets for absolute J magnitude=15-18 and J-K color=0-3, due to the\nevolution of the L-T transition with gravity. This self-consistent model\ntherefore provides a general framework to understand the effects of clouds and\nappears well-suited for atmospheric retrievals.\n", "title": "A self-consistent cloud model for brown dwarfs and young giant exoplanets: comparison with photometric and spectroscopic observations" }
null
null
[ "Physics" ]
null
true
null
19575
null
Validated
null
null
null
{ "abstract": " Fermi Large Area Telescope data reveal an excess of GeV gamma rays from the\ndirection of the Galactic Center and bulge. Several explanations have been\nproposed for this excess including an unresolved population of millisecond\npulsars (MSPs) and self-annihilating dark matter. It has been claimed that a\nkey discriminant for or against the MSP explanation can be extracted from the\nproperties of the luminosity function describing this source population.\nSpecifically, is the luminosity function of the putative MSPs in the Galactic\nCenter consistent with that characterizing the resolved MSPs in the Galactic\ndisk? To investigate this we have used a Bayesian Markov Chain Monte Carlo to\nevaluate the posterior distribution of the parameters of the MSP luminosity\nfunction describing both resolved MSPs and the Galactic Center excess. At\nvariance with some other claims, our analysis reveals that, within current\nuncertainties, both data sets can be well fit with the same luminosity\nfunction.\n", "title": "Consistency Between the Luminosity Function of Resolved Millisecond Pulsars and the Galactic Center Excess" }
null
null
[ "Physics" ]
null
true
null
19576
null
Validated
null
null
null
{ "abstract": " The European X-ray Free Electron Laser (XFEL.EU) will provide every 0.1 s a\ntrain of 2700 spatially coherent ultrashort X-ray pulses at 4.5 MHz repetition\nrate. The Small Quantum Systems (SQS) instrument and the Spectroscopy and\nCoherent Scattering instrument (SCS) operate with soft X-rays between 0.5 keV -\n6keV. The DEPFET Sensor with Signal Compression (DSSC) detector is being\ndeveloped to meet the requirements set by these two XFEL.EU instruments. The\nDSSC imager is a 1 mega-pixel camera able to store up to 800 single-pulse\nimages per train. The so-called ladder is the basic unit of the DSSC detector.\nIt is the single unit out of sixteen identical-units composing the\nDSSC-megapixel camera, containing all representative electronic components of\nthe full-size system and allows testing the full electronic chain. Each DSSC\nladder has a focal plane sensor with 128 x 512 pixels. The read-out ASIC\nprovides full-parallel readout of the sensor pixels. Every read-out channel\ncontains an amplifier and an analog filter, an up-to 9 bit ADC and the digital\nmemory. The ASIC amplifier have a double front-end to allow one to use either\nDEPFET sensors or Mini-SDD sensors. In the first case, the signal compression\nis a characteristic intrinsic of the sensor; in the second case, the\ncompression is implemented at the first amplification stage. The goal of signal\ncompression is to meet the requirement of single-photon detection capability\nand wide dynamic range. We present the first results of measurements obtained\nusing a 64 x 64 pixel DEPFET sensor attached to the full final electronic and\ndata-acquisition chain.\n", "title": "First functionality tests of a 64 x 64 pixel DSSC sensor module connected to the complete ladder readout" }
null
null
null
null
true
null
19577
null
Default
null
null
null
{ "abstract": " In this paper, for the first time, we study label propagation in\nheterogeneous graphs under heterophily assumption. Homophily label propagation\n(i.e., two connected nodes share similar labels) in homogeneous graph (with\nsame types of vertices and relations) has been extensively studied before.\nUnfortunately, real-life networks are heterogeneous, they contain different\ntypes of vertices (e.g., users, images, texts) and relations (e.g.,\nfriendships, co-tagging) and allow for each node to propagate both the same and\nopposite copy of labels to its neighbors. We propose a $\\mathcal{K}$-partite\nlabel propagation model to handle the mystifying combination of heterogeneous\nnodes/relations and heterophily propagation. With this model, we develop a\nnovel label inference algorithm framework with update rules in near-linear time\ncomplexity. Since real networks change over time, we devise an incremental\napproach, which supports fast updates for both new data and evidence (e.g.,\nground truth labels) with guaranteed efficiency. We further provide a utility\nfunction to automatically determine whether an incremental or a re-modeling\napproach is favored. Extensive experiments on real datasets have verified the\neffectiveness and efficiency of our approach, and its superiority over the\nstate-of-the-art label propagation methods.\n", "title": "Label Propagation on K-partite Graphs with Heterophily" }
null
null
[ "Computer Science" ]
null
true
null
19578
null
Validated
null
null
null
{ "abstract": " Appearing on the stage quite recently, the Low Power Wide Area Networks\n(LPWANs) are currently getting much of attention. In the current paper we study\nthe susceptibility of one LPWAN technology, namely LoRaWAN, to the\ninter-network interferences. By means of excessive empirical measurements\nemploying the certified commercial transceivers, we characterize the effect of\nmodulation coding schemes (known for LoRaWAN as data rates (DRs)) of a\ntransmitter and an interferer on probability of successful packet delivery\nwhile operating in EU 868 MHz band. We show that in reality the transmissions\nwith different DRs in the same frequency channel can negatively affect each\nother and that the high DRs are influenced by interferences more severely than\nthe low ones. Also, we show that the LoRa-modulated DRs are affected by the\ninterferences much less than the FSK-modulated one. Importantly, the presented\nresults provide insight into the network-level operation of the LoRa LPWAN\ntechnology in general, and its scalability potential in particular. The results\ncan also be used as a reference for simulations and analyses or for defining\nthe communication parameters for real-life applications.\n", "title": "On LoRaWAN Scalability: Empirical Evaluation of Susceptibility to Inter-Network Interference" }
null
null
[ "Computer Science" ]
null
true
null
19579
null
Validated
null
null
null
{ "abstract": " The impacts of climate change are felt by most critical systems, such as\ninfrastructure, ecological systems, and power-plants. However, contemporary\nEarth System Models (ESM) are run at spatial resolutions too coarse for\nassessing effects this localized. Local scale projections can be obtained using\nstatistical downscaling, a technique which uses historical climate observations\nto learn a low-resolution to high-resolution mapping. Depending on statistical\nmodeling choices, downscaled projections have been shown to vary significantly\nterms of accuracy and reliability. The spatio-temporal nature of the climate\nsystem motivates the adaptation of super-resolution image processing techniques\nto statistical downscaling. In our work, we present DeepSD, a generalized\nstacked super resolution convolutional neural network (SRCNN) framework for\nstatistical downscaling of climate variables. DeepSD augments SRCNN with\nmulti-scale input channels to maximize predictability in statistical\ndownscaling. We provide a comparison with Bias Correction Spatial\nDisaggregation as well as three Automated-Statistical Downscaling approaches in\ndownscaling daily precipitation from 1 degree (~100km) to 1/8 degrees (~12.5km)\nover the Continental United States. Furthermore, a framework using the NASA\nEarth Exchange (NEX) platform is discussed for downscaling more than 20 ESM\nmodels with multiple emission scenarios.\n", "title": "DeepSD: Generating High Resolution Climate Change Projections through Single Image Super-Resolution" }
null
null
null
null
true
null
19580
null
Default
null
null
null
{ "abstract": " Among asteroids there exist ambiguities in their rotation period\ndeterminations. They are due to incomplete coverage of the rotation, noise\nand/or aliases resulting from gaps between separate lightcurves. To help to\nremove such uncertainties, basic characteristic of the lightcurves resulting\nfrom constraints imposed by the asteroid shapes and geometries of observations\nshould be identified. We simulated light variations of asteroids which shapes\nwere modelled as Gaussian random spheres, with random orientations of spin\nvectors and phase angles changed every $5^\\circ$ from $0^\\circ$ to $65^\\circ$.\nThis produced 1.4 mln lightcurves. For each simulated lightcurve Fourier\nanalysis has been made and the harmonic of the highest amplitude was recorded.\nFrom the statistical point of view, all lightcurves observed at phase angles\n$\\alpha < 30^\\circ$, with peak-to-peak amplitudes $A>0.2$ mag are bimodal.\nSecond most frequently dominating harmonic is the first one, with the 3rd\nharmonic following right after. For 1% of lightcurves with amplitudes $A < 0.1$\nmag and phase angles $\\alpha < 40^\\circ$ 4th harmonic dominates.\n", "title": "Statistical analysis of the ambiguities in the asteroid period determinations" }
null
null
null
null
true
null
19581
null
Default
null
null
null
{ "abstract": " Learning preferences implicit in the choices humans make is a well studied\nproblem in both economics and computer science. However, most work makes the\nassumption that humans are acting (noisily) optimally with respect to their\npreferences. Such approaches can fail when people are themselves learning about\nwhat they want. In this work, we introduce the assistive multi-armed bandit,\nwhere a robot assists a human playing a bandit task to maximize cumulative\nreward. In this problem, the human does not know the reward function but can\nlearn it through the rewards received from arm pulls; the robot only observes\nwhich arms the human pulls but not the reward associated with each pull. We\noffer sufficient and necessary conditions for successfully assisting the human\nin this framework. Surprisingly, better human performance in isolation does not\nnecessarily lead to better performance when assisted by the robot: a human\npolicy can do better by effectively communicating its observed rewards to the\nrobot. We conduct proof-of-concept experiments that support these results. We\nsee this work as contributing towards a theory behind algorithms for\nhuman-robot interaction.\n", "title": "The Assistive Multi-Armed Bandit" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
19582
null
Validated
null
null
null
{ "abstract": " The use of Laplacian eigenfunctions is ubiquitous in a wide range of computer\ngraphics and geometry processing applications. In particular, Laplacian\neigenbases allow generalizing the classical Fourier analysis to manifolds. A\nkey drawback of such bases is their inherently global nature, as the Laplacian\neigenfunctions carry geometric and topological structure of the entire\nmanifold. In this paper, we introduce a new framework for local spectral shape\nanalysis. We show how to efficiently construct localized orthogonal bases by\nsolving an optimization problem that in turn can be posed as the\neigendecomposition of a new operator obtained by a modification of the standard\nLaplacian. We study the theoretical and computational aspects of the proposed\nframework and showcase our new construction on the classical problems of shape\napproximation and correspondence. We obtain significant improvement compared to\nclassical Laplacian eigenbases as well as other alternatives for constructing\nlocalized bases.\n", "title": "Localized Manifold Harmonics for Spectral Shape Analysis" }
null
null
null
null
true
null
19583
null
Default
null
null
null
{ "abstract": " Significant parts of cultural heritage are produced on the web during the\nlast decades. While easy accessibility to the current web is a good baseline,\noptimal access to the past web faces several challenges. This includes dealing\nwith large-scale web archive collections and lacking of usage logs that contain\nimplicit human feedback most relevant for today's web search. In this paper, we\npropose an entity-oriented search system to support retrieval and analytics on\nthe Internet Archive. We use Bing to retrieve a ranked list of results from the\ncurrent web. In addition, we link retrieved results to the WayBack Machine;\nthus allowing keyword search on the Internet Archive without processing and\nindexing its raw archived content. Our search system complements existing web\narchive search tools through a user-friendly interface, which comes close to\nthe functionalities of modern web search engines (e.g., keyword search, query\nauto-completion and related query suggestion), and provides a great benefit of\ntaking user feedback on the current web into account also for web archive\nsearch. Through extensive experiments, we conduct quantitative and qualitative\nanalyses in order to provide insights that enable further research on and\npractical applications of web archives.\n", "title": "How to Search the Internet Archive Without Indexing It" }
null
null
null
null
true
null
19584
null
Default
null
null
null
{ "abstract": " We study a system consisting of a Luttinger liquid coupled to a quantum dot\non the boundary. The Luttinger liquid is expressed in terms of fermions\ninteracting via density-density coupling and the dot is modeled as an\ninteracting resonant level on to which the bulk fermions can tunnel. We solve\nthe Hamiltonian exactly and construct all eigenstates. We study both the zero\nand finite temperature properties of the system, in particular we compute the\nexact dot occupation as a function of the dot energy in all parameter regimes.\nThe system is seen to flow from weak to to strong coupling for all values of\nthe bulk interaction, with the flow characterized by a non-perturbative Kondo\nscale. We identify the critical exponents at the weak and strong coupling\nregimes.\n", "title": "Quantum Dot at a Luttinger liquid edge - Exact solution via Bethe Ansatz" }
null
null
[ "Physics" ]
null
true
null
19585
null
Validated
null
null
null
{ "abstract": " Jittered Sampling is a refinement of the classical Monte Carlo sampling\nmethod. Instead of picking $n$ points randomly from $[0,1]^2$, one partitions\nthe unit square into $n$ regions of equal measure and then chooses a point\nrandomly from each partition. Currently, no good rules for how to partition the\nspace are available. In this paper, we present a solution for the special case\nof subdividing the unit square by a decreasing function into two regions so as\nto minimize the expected squared $\\mathcal{L}_2-$discrepancy. The optimal\npartitions are given by a \\textit{highly} nonlinear integral equation for which\nwe determine an approximate solution. In particular, there is a break of\nsymmetry and the optimal partition is not into two sets of equal measure. We\nhope this stimulates further interest in the construction of good partitions.\n", "title": "Optimal Jittered Sampling for two Points in the Unit Square" }
null
null
null
null
true
null
19586
null
Default
null
null
null
{ "abstract": " Analysis and quantification of brain structural changes, using Magnetic\nresonance imaging (MRI), are increasingly used to define novel biomarkers of\nbrain pathologies, such as Alzheimer's disease (AD). Network-based models of\nthe brain have shown that both local and global topological properties can\nreveal patterns of disease propagation. On the other hand, intra-subject\ndescriptions cannot exploit the whole information context, accessible through\ninter-subject comparisons. To address this, we developed a novel approach,\nwhich models brain structural connectivity atrophy with a multiplex network and\nsummarizes it within a classification score. On an independent dataset\nmultiplex networks were able to correctly segregate, from normal controls (NC),\nAD patients and subjects with mild cognitive impairment that will convert to AD\n(cMCI) with an accuracy of, respectively, $0.86 \\pm 0.01$ and $0.84 \\pm 0.01$.\nThe model also shows that illness effects are maximally detected by parceling\nthe brain in equal volumes of $3000$ $mm^3$ (\"patches\"), without any $a$\n$priori$ segmentation based on anatomical features. A direct comparison to\nstandard voxel-based morphometry on the same dataset showed that the multiplex\nnetwork approach had higher sensitivity. This method is general and can have\ntwofold potential applications: providing a reliable tool for clinical trials\nand a disease signature of neurodegenerative pathologies.\n", "title": "Brain structural connectivity atrophy in Alzheimer's disease" }
null
null
null
null
true
null
19587
null
Default
null
null
null
{ "abstract": " This work proposes a visual odometry method that combines points and plane\nprimitives, extracted from a noisy depth camera. Depth measurement uncertainty\nis modelled and propagated through the extraction of geometric primitives to\nthe frame-to-frame motion estimation, where pose is optimized by weighting the\nresiduals of 3D point and planes matches, according to their uncertainties.\nResults on an RGB-D dataset show that the combination of points and planes,\nthrough the proposed method, is able to perform well in poorly textured\nenvironments, where point-based odometry is bound to fail.\n", "title": "Probabilistic Combination of Noisy Points and Planes for RGB-D Odometry" }
null
null
[ "Computer Science" ]
null
true
null
19588
null
Validated
null
null
null
{ "abstract": " Autoignition delay experiments for the isomers of butanol, including n-,\nsec-, tert-, and iso-butanol, have been performed using a heated rapid\ncompression machine. For a compressed pressure of 15 bar, the compressed\ntemperatures have been varied in the range of 725-855 K for all the\nstoichiometric fuel/oxidizer mixtures. Over the conditions investigated in this\nstudy, the ignition delay decreases monotonically as temperature increases and\nexhibits single-stage characteristics. Experimental ignition delays are also\ncompared to simulations computed using three kinetic mechanisms available in\nthe literature. Reasonable agreement is found for three isomers (tert-, iso-,\nand n-butanol).\n", "title": "Autoignition of Butanol Isomers at Low to Intermediate Temperature and Elevated Pressure" }
null
null
null
null
true
null
19589
null
Default
null
null
null
{ "abstract": " In this paper, we prove that for every integer $k \\geq 1$, the $k$-abelian\ncomplexity function of the Cantor sequence $\\mathbf{c} = 101000101\\cdots$ is a\n$3$-regular sequence.\n", "title": "On the $k$-abelian complexity of the Cantor sequence" }
null
null
null
null
true
null
19590
null
Default
null
null
null
{ "abstract": " We investigate an inverse problem in time-frequency localization: the\napproximation of the symbol of a time-frequency localization operator from\npartial spectral information by the method of accumulated spectrograms (the sum\nof the spectrograms corresponding to large eigenvalues). We derive a sharp\nbound for the rate of convergence of the accumulated spectrogram, improving on\nrecent results.\n", "title": "Sharp rates of convergence for accumulated spectrograms" }
null
null
null
null
true
null
19591
null
Default
null
null
null
{ "abstract": " In this paper, we present a system that associates faces with voices in a\nvideo by fusing information from the audio and visual signals. The thesis\nunderlying our work is that an extremely simple approach to generating (weak)\nspeech clusters can be combined with visual signals to effectively associate\nfaces and voices by aggregating statistics across a video. This approach does\nnot need any training data specific to this task and leverages the natural\ncoherence of information in the audio and visual streams. It is particularly\napplicable to tracking speakers in videos on the web where a priori information\nabout the environment (e.g., number of speakers, spatial signals for\nbeamforming) is not available. We performed experiments on a real-world dataset\nusing this analysis framework to determine the speaker in a video. Given a\nground truth labeling determined by human rater consensus, our approach had\n~71% accuracy.\n", "title": "Putting a Face to the Voice: Fusing Audio and Visual Signals Across a Video to Determine Speakers" }
null
null
null
null
true
null
19592
null
Default
null
null
null
{ "abstract": " In this paper, the Kelvin wave and knot dynamics are studied on three\ndimensional smoothly deformed entangled vortex-membranes in five dimensional\nspace. Owing to the existence of local Lorentz invariance and diffeomorphism\ninvariance, in continuum limit gravity becomes an emergent phenomenon on 3+1\ndimensional zero-lattice (a lattice of projected zeroes): On the one hand, the\ndeformed zero-lattice can be denoted by curved space-time for knots; on the\nother hand, the knots as topological defect of 3+1 dimensional zero-lattice\nindicates matter may curve space-time. This work would help researchers to\nunderstand the mystery in gravity.\n", "title": "Topological Interplay between Knots and Entangled Vortex-Membranes" }
null
null
null
null
true
null
19593
null
Default
null
null
null
{ "abstract": " We develop the non-commutative polynomial version of the invariant theory for\nthe quantum general linear supergroup ${\\rm{ U}}_q(\\mathfrak{gl}_{m|n})$. A\nnon-commutative ${\\rm{ U}}_q(\\mathfrak{gl}_{m|n})$-module superalgebra\n$\\mathcal{P}^{k|l}_{\\,r|s}$ is constructed, which is the quantum analogue of\nthe supersymmetric algebra over $\\mathbb{C}^{k|l}\\otimes \\mathbb{C}^{m|n}\\oplus\n\\mathbb{C}^{r|s}\\otimes (\\mathbb{C}^{m|n})^{\\ast}$. We analyse the structure of\nthe subalgebra of ${\\rm{ U}}_q(\\mathfrak{gl}_{m|n})$-invariants in\n$\\mathcal{P}^{k|l}_{\\,r|s}$ by using the quantum super analogue of Howe\nduality.\nThe subalgebra of ${\\rm{ U}}_q(\\mathfrak{gl}_{m|n})$-invariants in\n$\\mathcal{P}^{k|l}_{\\,r|s}$ is shown to be finitely generated. We determine its\ngenerators and establish a surjective superalgebra homomorphism from a braided\nsupersymmetric algebra onto it. This establishes the first fundamental theorem\nof invariant theory for ${\\rm{ U}}_q(\\mathfrak{gl}_{m|n})$.\nWe show that the above mentioned superalgebra homomorphism is an isomorphism\nif and only if $m\\geq \\min\\{k,r\\}$ and $n\\geq \\min\\{l,s\\}$, and obtain a\nmonomial basis for the subalgebra of invariants in this case. When the\nhomomorphism is not injective, we give a representation theoretical description\nof the generating elements of the kernel associated to the partition\n$((m+1)^{n+1})$, producing the second fundamental theorem of invariant theory\nfor ${\\rm{ U}}_q(\\mathfrak{gl}_{m|n})$.\nWe consider two applications of our results. A complete treatment of the\nnon-commutative polynomial version of invariant theory for ${\\rm{\nU}}_q(\\mathfrak{gl}_{m})$ is obtained as the special case with $n=0$, where an\nexplicit SFT is proved, which we believe to be new. The FFT and SFT of the\ninvariant theory for the general linear superalgebra are recovered from the\nclassical (i.e., $q\\to 1$) limit of our results.\n", "title": "The first and second fundamental theorems of invariant theory for the quantum general linear supergroup" }
null
null
null
null
true
null
19594
null
Default
null
null
null
{ "abstract": " We introduce an integrated meshing and finite element method pipeline\nenabling black-box solution of partial differential equations in the volume\nenclosed by a boundary representation. We construct a hybrid\nhexahedral-dominant mesh, which contains a small number of star-shaped\npolyhedra, and build a set of high-order basis on its elements, combining\ntriquadratic B-splines, triquadratic hexahedra (27 degrees of freedom), and\nharmonic elements. We demonstrate that our approach converges cubically under\nrefinement, while requiring around 50% of the degrees of freedom than a\nsimilarly dense hexahedral mesh composed of triquadratic hexahedra. We validate\nour approach solving Poisson's equation on a large collection of models, which\nare automatically processed by our algorithm, only requiring the user to\nprovide boundary conditions on their surface.\n", "title": "Poly-Spline Finite Element Method" }
null
null
null
null
true
null
19595
null
Default
null
null
null
{ "abstract": " In this note, we show the class of finite, epistemic programs to be Turing\ncomplete. Epistemic programs is a widely used update mechanism used in\nepistemic logic, where it such are a special type of action models: One which\ndoes not contain postconditions.\n", "title": "Turing Completeness of Finite, Epistemic Programs" }
null
null
null
null
true
null
19596
null
Default
null
null
null
{ "abstract": " Motivated by the problem of domain formation in chromosomes, we studied a\nco--polymer model where only a subset of the monomers feel attractive\ninteractions. These monomers are displaced randomly from a regularly-spaced\npattern, thus introducing some quenched disorder in the system. Previous work\nhas shown that in the case of regularly-spaced interacting monomers this chain\ncan fold into structures characterized by multiple distinct domains of\nconsecutive segments. In each domain, attractive interactions are balanced by\nthe entropy cost of forming loops. We show by advanced replica-exchange\nsimulations that adding disorder in the position of the interacting monomers\nfurther stabilizes these domains. The model suggests that the partitioning of\nthe chain into well-defined domains of consecutive monomers is a spontaneous\nproperty of heteropolymers. In the case of chromosomes, evolution could have\nacted on the spacing of interacting monomers to modulate in a simple way the\nunderlying domains for functional reasons.\n", "title": "Spontaneous domain formation in disordered copolymers as a mechanism for chromosome structuring" }
null
null
[ "Quantitative Biology" ]
null
true
null
19597
null
Validated
null
null
null
{ "abstract": " In this paper, we investigate the reconstruction of time-correlated sources\nin a point-to-point communications scenario comprising an energy-harvesting\nsensor and a Fusion Center (FC). Our goal is to minimize the average distortion\nin the reconstructed observations by using data from previously encoded sources\nas side information. First, we analyze a delay-constrained scenario, where the\nsources must be reconstructed before the next time slot. We formulate the\nproblem in a convex optimization framework and derive the optimal transmission\n(i.e., power and rate allocation) policy. To solve this problem, we propose an\niterative algorithm based on the subgradient method. Interestingly, the\nsolution to the problem consists of a coupling between a two-dimensional\ndirectional water-filling algorithm (for power allocation) and a reverse\nwater-filling algorithm (for rate allocation). Then we find a more general\nsolution to this problem in a delay-tolerant scenario where the time horizon\nfor source reconstruction is extended to multiple time slots. Finally, we\nprovide some numerical results that illustrate the impact of delay and\ncorrelation in the power and rate allocation policies, and in the resulting\nreconstruction distortion. We also discuss the performance gap exhibited by a\nheuristic online policy derived from the optimal (offline) one.\n", "title": "Reconstruction of Correlated Sources with Energy Harvesting Constraints in Delay-constrained and Delay-tolerant Communication Scenarios" }
null
null
[ "Computer Science" ]
null
true
null
19598
null
Validated
null
null
null
{ "abstract": " Robotics enables a variety of unconventional actuation strategies to be used\nfor endoscopes, resulting in reduced trauma to the GI tract. For transmission\nof force to distally mounted endoscopic instruments, robotically actuated\ntendon sheath mechanisms are the current state of the art. Robotics in surgical\nendoscopy enables an ergonomic mapping of the surgeon movements to remotely\ncontrolled slave arms, facilitating tissue manipulation. The learning curve for\ndifficult procedures such as endoscopic submucosal dissection and\nfull-thickness resection can be significantly reduced. Improved surgical\noutcomes are also observed from clinical and pre-clinical trials. The\ntechnology behind master-slave surgical robotics will continue to mature, with\nthe addition of position and force sensors enabling better control and tactile\nfeedback. More robotic assisted GI luminal and NOTES surgeries are expected to\nbe conducted in future, and gastroenterologists will have a key collaborative\nrole to play.\n", "title": "Future of Flexible Robotic Endoscopy Systems" }
null
null
null
null
true
null
19599
null
Default
null
null
null
{ "abstract": " The present numerical study aims at shedding light on the mechanism\nunderlying the precessional instability in a sphere. Precessional instabilities\nin the form of parametric resonance due to topographic coupling have been\nreported in a spheroidal geometry both analytically and numerically. We show\nthat such parametric resonances can also develop in spherical geometry due to\nthe conical shear layers driven by the Ekman pumping singularities at the\ncritical latitudes. Scaling considerations lead to a stability criterion of the\nform, $|P_o|>O(E^{4/5})$, where $P_o$ represents the Poincaré number and $E$\nthe Ekman number. The predicted threshold is consistent with our numerical\nsimulations as well as previous experimental results. When the precessional\nforcing is supercriticial, our simulations show evidence of an inverse cascade,\ni.e. small scale flows merging into large scale cyclones with a retrograde\ndrift. Finally, it is shown that this instability mechanism may be relevant to\nprecessing celestial bodies such as the Earth and Earth's moon.\n", "title": "Shear-driven parametric instability in a precessing sphere" }
null
null
[ "Physics" ]
null
true
null
19600
null
Validated
null
null