Dataset schema: each record consists of a title (string, 7 to 239 characters), an abstract (string, 7 to 2.76k characters), and six binary subject labels of type int64 taking values 0 or 1: cs, phy, math, stat, quantitative biology, quantitative finance.
Communication-Efficient Algorithms for Decentralized and Stochastic Optimization
We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. We first propose a decentralized primal-dual method which can find an $\epsilon$-solution both in terms of functional optimality gap and feasibility residual in $O(1/\epsilon)$ inter-node communication rounds when the objective functions are convex and the local primal subproblems are solved exactly. Our major contribution is to present a new class of decentralized primal-dual type algorithms, namely the decentralized communication sliding (DCS) methods, which can skip the inter-node communications while agents solve the primal subproblems iteratively through linearizations of their local objective functions. By employing DCS, agents can still find an $\epsilon$-solution in $O(1/\epsilon)$ (resp., $O(1/\sqrt{\epsilon})$) communication rounds for general convex functions (resp., strongly convex functions), while maintaining the $O(1/\epsilon^2)$ (resp., $O(1/\epsilon)$) bound on the total number of intra-node subgradient evaluations. We also present a stochastic counterpart for these algorithms, denoted by SDCS, for solving stochastic optimization problems whose objective function cannot be evaluated exactly. In comparison with existing results for decentralized nonsmooth and stochastic optimization, we can reduce the total number of inter-node communication rounds by orders of magnitude while still maintaining the optimal complexity bounds on intra-node stochastic subgradient evaluations. The bounds on the subgradient evaluations are actually comparable to those required for centralized nonsmooth and stochastic optimization.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
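For readers new to this area, the sketch below shows plain decentralized subgradient descent, the kind of baseline whose one-communication-per-iteration cost the DCS methods above are designed to reduce. It is not the paper's algorithm; the ring network, mixing weights, and $\ell_1$ objective are invented for the demo.

```python
# Not the paper's DCS method: a plain decentralized subgradient baseline,
# whose one-communication-per-step cost is what communication sliding skips.
import numpy as np

rng = np.random.default_rng(10)
n_agents, dim = 5, 3
targets = rng.normal(size=(n_agents, dim))   # agent i holds f_i(x) = |x - t_i|_1

W = np.zeros((n_agents, n_agents))           # doubly stochastic ring mixing
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))                # one row per agent
for k in range(1, 2001):
    x = W @ x                                # inter-node communication round
    g = np.sign(x - targets)                 # local subgradient of |x - t_i|_1
    x -= (0.5 / np.sqrt(k)) * g              # diminishing step size

print("consensus spread:", np.ptp(x, axis=0).max())
print("agent 0 estimate:", x[0])             # ~ coordinatewise median of t_i
```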
A Machine Learning Approach to Shipping Box Design
Having the right assortment of shipping boxes in the fulfillment warehouse to pack and ship customers' online orders is an indispensable and integral part of today's eCommerce business: it not only helps maintain a profitable business but also creates great experiences for customers. However, it is an extremely challenging operations task to strategically select the best combination of tens of box sizes from thousands of feasible ones to serve hundreds of thousands of orders placed daily on millions of inventory products. In this paper, we present a machine learning approach that tackles the task by formulating the box design problem prescriptively as a generalized version of the weighted $k$-medoids clustering problem, where the parameters are estimated through a variety of descriptive analytics. We test this machine learning approach on fulfillment data collected from Walmart U.S. eCommerce, and show that it is capable of improving the box utilization rate by more than $10\%$.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
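The clustering formulation in the abstract above can be illustrated with a minimal weighted k-medoids sketch in one dimension, where candidate box volumes are the points and order frequencies are the weights. The alternating heuristic, data, and parameter names are hypothetical stand-ins for the paper's generalized formulation.

```python
# Minimal weighted k-medoids sketch (illustrative, not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)
volumes = rng.uniform(1.0, 50.0, size=200)      # feasible box sizes (litres)
order_counts = rng.integers(1, 1000, size=200)  # demand weight per size

def weighted_kmedoids(points, weights, k, iters=50):
    medoids = points[np.argsort(-weights)[:k]].copy()   # greedy init on weight
    for _ in range(iters):
        d = np.abs(points[:, None] - medoids[None, :])  # point-medoid distances
        assign = d.argmin(axis=1)                       # nearest-medoid labels
        new = medoids.copy()
        for j in range(k):
            members = np.where(assign == j)[0]
            if members.size == 0:
                continue
            # new medoid = member minimizing weighted distance to the cluster
            cost = np.abs(points[members][:, None] - points[members][None, :])
            cost = (cost * weights[members][None, :]).sum(axis=1)
            new[j] = points[members[cost.argmin()]]
        if np.allclose(new, medoids):
            break
        medoids = new
    return np.sort(medoids)

print(weighted_kmedoids(volumes, order_counts.astype(float), k=8))
```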
Precise but Natural Specification for Robot Tasks
We present Flipper, a natural language interface for describing high-level task specifications for robots that are compiled into robot actions. Flipper starts with a formal core language for task planning that allows expressing rich temporal specifications and uses a semantic parser to provide a natural language interface. Flipper provides immediate visual feedback by executing an automatically constructed plan of the task in a graphical user interface. This allows the user to resolve potentially ambiguous interpretations. Flipper extends itself via naturalization: its users can add definitions for utterances, from which Flipper induces new rules and adds them to the core language, gradually growing a more and more natural task specification language. Flipper improves the naturalization by generalizing the definition provided by users. Unlike other task-specification systems, Flipper enables natural language interactions while maintaining the expressive power and formal precision of a programming language. We show through an initial user study that natural language interactions and generalization can considerably ease the description of tasks. Moreover, over time, users employ more and more concepts outside of the initial core language. Such extensions are available to the Flipper community, and users can use concepts that others have defined.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Rigorous Analysis for Efficient Statistically Accurate Algorithms for Solving Fokker-Planck Equations in Large Dimensions
This article presents a rigorous analysis for efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. Despite the conditional Gaussianity, these nonlinear systems contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy that requires only a small number of samples $L$ to capture both the transient and the equilibrium non-Gaussian PDFs with high accuracy. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. Rigorous analysis shows that the mean integrated squared error in the recovered PDFs in the high-dimensional subspace is bounded by the inverse square root of the determinant of the conditional covariance, where the conditional covariance is completely determined by the underlying dynamics and is independent of $L$. This is fundamentally different from a direct application of kernel methods to solve the full PDF, where $L$ needs to increase exponentially with the dimension of the system and the bandwidth shrinks. A detailed comparison between different methods justifies that the efficient statistically accurate algorithms are able to overcome the curse of dimensionality. It is also shown with mathematical rigour that these algorithms are robust at long times provided that the system is controllable and stochastically stable. In particular, dynamical systems with energy-conserving quadratic nonlinearity, as arise in many geophysical and engineering turbulent flows, are proved to have these properties.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Rapid processing of 85Kr/Kr ratios using Atom Trap Trace Analysis
We report a methodology for measuring 85Kr/Kr isotopic abundances using Atom Trap Trace Analysis (ATTA) that increases sample measurement throughput by over an order of magnitude to 6 samples per 24 hours. The noble gas isotope 85Kr (half-life = 10.7 yr) is a useful tracer for young groundwater in the age range of 5-50 years. ATTA, an efficient and selective laser-based atom counting method, has recently been applied to 85Kr/Kr isotopic abundance measurements, requiring 5-10 microliters of krypton gas at STP extracted from 50-100 L of water. Previously a single such measurement required 48 hours. Our new method demonstrates that we can measure 85Kr/Kr ratios with 3-5% relative uncertainty every 4 hours, on average, with the same sample requirements.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Adapting Everyday Manipulation Skills to Varied Scenarios
We address the problem of executing tool-using manipulation skills in scenarios where the objects to be used may vary. We assume that point clouds of the tool and target object can be obtained, but no interpretation or further knowledge about these objects is provided. The system must interpret the point clouds and decide how to use the tool to complete a manipulation task with a target object; this means it must adjust motion trajectories appropriately to complete the task. We tackle three everyday manipulations: scraping material from a tool into a container, cutting, and scooping from a container. Our solution encodes these manipulation skills in a generic way, with parameters that can be filled in at run-time via queries to a robot perception module; the perception module abstracts the functional parts for the tool and extracts key parameters that are needed for the task. The approach is evaluated in simulation and with selected examples on a PR2 robot.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Effect of Particle Number Conservation on the Berry Phase Resulting from Transport of a Bound Quasiparticle around a Superfluid Vortex
Motivated by the goal of understanding Majorana zero modes in topological superfluids within a particle-number-conserving framework that goes beyond the standard one, we study the effect of particle number conservation on the Berry phase resulting from transport of a bound quasiparticle around a superfluid vortex. We find that particle-number non-conserving calculations based on Bogoliubov-de Gennes (BdG) equations are unable to capture the correct physics when the quasiparticle is within the penetration depth of the vortex core, where the superfluid velocity is non-zero. Particle number conservation is crucial for deriving the correct Berry phase in this context, and the Berry phase takes non-universal values depending on the system parameters and the external trap imposed to bind the quasiparticle. Of particular relevance to Majorana physics are the findings that the superfluid condensate affects the part of the Berry phase not accounted for in the standard BdG framework, and that the superfluid many-body ground state of an odd number of fermions involves a deformation of the superfluid condensate due to the presence of the bound quasiparticle, an effect which is beyond the description of the BdG equations.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A new topological insulator - β-InTe strained in the layer plane
We have investigated the band structure of the bulk crystal and the (001) surface of the $\beta$-InTe layered crystal subjected to biaxial stretching in the layer plane. The calculation has been carried out using the full-potential linearized augmented plane wave method (FP-LAPW) implemented in WIEN2k. It has been shown that at the strain $\Delta a/a = 0.06$, where $a$ is the lattice parameter in the layer plane, the band gap in the electronic spectrum collapses. With further strain increase a band inversion occurs. The inclusion of the spin-orbit interaction reopens the gap in the electronic spectrum of the bulk crystal, and our calculations show that the spectrum of the surface states has the form of a Dirac cone, typical for topological insulators.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stock management (Gestão de estoques)
There is a great need to stock materials for production, but storing materials comes at a cost. Lack of organization in the inventory can result in a very high cost for the final product, in addition to generating other problems in the production chain. In this work we present mathematical and statistical methods applicable to stock management. Stock analysis using ABC curves identifies the priority items, namely the most expensive ones and those with the highest turnover (demand), and thus determines, through stock control models, the purchase lot size and the ordering period that minimize the total cost of storing these materials. Using the Economic Order Quantity (EOQ) model and the (Q,R) model, we minimize the inventory costs of a company and compare the results provided by the two models.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
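Since the abstract above leans on the Economic Order Quantity model, here is the textbook EOQ formula $Q^* = \sqrt{2DK/h}$ as a worked example; the demand and cost figures are made up.

```python
# Economic Order Quantity (EOQ) worked example with invented parameters.
from math import sqrt

D = 12_000   # annual demand (units/year)
K = 80.0     # fixed cost per order
h = 2.5      # holding cost per unit per year

Q_star = sqrt(2 * D * K / h)                        # optimal order quantity
orders_per_year = D / Q_star
total_cost = K * orders_per_year + h * Q_star / 2   # ordering + holding cost

print(f"EOQ = {Q_star:.1f} units, {orders_per_year:.1f} orders/yr, "
      f"min cost = {total_cost:.2f}")
```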
From Principal Subspaces to Principal Components with Linear Autoencoders
The autoencoder is an effective unsupervised learning model which is widely used in deep learning. It is well known that an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function trains weights that span the same subspace as the one spanned by the principal component loading vectors, but that they are not identical to the loading vectors. In this paper, we show how to recover the loading vectors from the autoencoder weights.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
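The recovery problem in the abstract above can be demonstrated with the generic "orthonormalize, project, re-diagonalize" route, which turns any basis of the principal subspace back into the loading vectors. This is a sketch of that generic route, not necessarily the paper's exact procedure; the scrambled decoder matrix is fabricated.

```python
# Recovering PCA loadings from weights that merely span the principal subspace.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10)) @ rng.normal(size=(10, 10))  # toy data
X -= X.mean(axis=0)

# Pretend W is a trained decoder weight matrix: it spans the top-3 principal
# subspace but mixes the components with an arbitrary invertible matrix.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:3].T @ rng.normal(size=(3, 3))        # subspace basis, scrambled

U, _ = np.linalg.qr(W)                        # orthonormal basis of the span
Z = X @ U                                     # coordinates in the subspace
_, evecs = np.linalg.eigh(np.cov(Z.T))        # diagonalize the 3x3 covariance
loadings = U @ evecs[:, ::-1]                 # rotate back; sort descending

print(np.abs(loadings.T @ Vt[:3].T))          # ~ identity up to sign
```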
A critical nonlinear elliptic equation with non local regional diffusion
In this article we are interested in the nonlocal regional Schrödinger equation with critical exponent \begin{eqnarray*} &\epsilon^{2\alpha} (-\Delta)_{\rho}^{\alpha}u + u = \lambda u^q + u^{2_{\alpha}^{*}-1} \mbox{ in } \mathbb{R}^{N}, \\ & u \in H^{\alpha}(\mathbb{R}^{N}), \end{eqnarray*} where $\epsilon$ is a small positive parameter, $\alpha \in (0,1)$, $q\in (1,2_{\alpha}^{*}-1)$, $2_{\alpha}^{*} = \frac{2N}{N-2\alpha}$ is the critical Sobolev exponent, $\lambda >0$ is a parameter and $(-\Delta)_{\rho}^{\alpha}$ is a variational version of the regional Laplacian, whose range of scope is a ball with radius $\rho(x)>0$. We study the existence of a ground state and we analyze the behavior of semi-classical solutions as $\epsilon \to 0$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Don't Decay the Learning Rate, Increase the Batch Size
It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate $\epsilon$ and scaling the batch size $B \propto \epsilon$. Finally, one can increase the momentum coefficient $m$ and scale $B \propto 1/(1-m)$, although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to $76.1\%$ validation accuracy in under 30 minutes.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
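The scaling rules in the abstract above translate directly into a schedule: wherever a conventional recipe would divide the learning rate $\epsilon$ by a factor, instead multiply the batch size $B$ by that factor. A toy schedule comparison, with all numbers illustrative:

```python
# Equivalent schedules: decay the learning rate, or grow the batch size.
eps0, B0 = 0.1, 256
decay_epochs = [0, 30, 60, 80]   # epochs at which a 5x decay would occur
factor = 5

for i, epoch in enumerate(decay_epochs):
    lr = eps0 / factor**i        # conventional: decay the learning rate
    bs = B0 * factor**i          # proposed: increase the batch size instead
    print(f"epoch {epoch:>2}: decay-LR -> (eps={lr:.4g}, B={B0}) | "
          f"grow-B -> (eps={eps0}, B={bs})")
```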
Collisions of Dark Matter Axion Stars with Astrophysical Sources
If QCD axions form a large fraction of the total mass of dark matter, then axion stars could be very abundant in galaxies. As a result, collisions with each other, and with other astrophysical bodies, can occur. We calculate the rate and analyze the consequences of three classes of collisions, those occurring between a dilute axion star and: another dilute axion star, an ordinary star, or a neutron star. In all cases we attempt to quantify the most important astrophysical uncertainties; we also pay particular attention to scenarios in which collisions lead to collapse of otherwise stable axion stars, and possible subsequent decay through number changing interactions. Collisions between two axion stars can occur with a high total rate, but the low relative velocity required for collapse to occur leads to a very low total rate of collapses. On the other hand, collisions between an axion star and an ordinary star have a large rate, $\Gamma_\odot \sim 3000$ collisions/year/galaxy, and for sufficiently heavy axion stars, it is plausible that most or all such collisions lead to collapse. We identify in this case a parameter space which has a stable region and a region in which collision triggers collapse, which depend on the axion number ($N$) in the axion star, and a ratio of mass to radius cubed characterizing the ordinary star ($M_s/R_s^3$). Finally, we revisit the calculation of collision rates between axion stars and neutron stars, improving on previous estimates by taking cylindrical symmetry of the neutron star distribution into account. Collapse and subsequent decay through collision processes, if occurring with a significant rate, can affect dark matter phenomenology and the axion star mass distribution.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Practical Algorithms for Best-K Identification in Multi-Armed Bandits
In the Best-$K$ identification problem (Best-$K$-Arm), we are given $N$ stochastic bandit arms with unknown reward distributions. Our goal is to identify the $K$ arms with the largest means with high confidence, by drawing samples from the arms adaptively. This problem is motivated by various practical applications and has attracted considerable attention in the past decade. In this paper, we propose new practical algorithms for the Best-$K$-Arm problem, which have nearly optimal sample complexity bounds (matching the lower bound up to logarithmic factors) and outperform the state-of-the-art algorithms for the Best-$K$-Arm problem (even for $K=1$) in practice.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
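As background for the abstract above, the sketch below implements a generic elimination-style Best-$K$ procedure with Hoeffding confidence bounds. It is illustrative only, not the paper's (near-optimal) algorithm, and the arm means, $K$, and $\delta$ are invented.

```python
# Elimination-style Best-K identification with Hoeffding bounds (illustrative).
import math, random

random.seed(0)
means = [0.9, 0.8, 0.7, 0.5, 0.4, 0.3]      # hidden Bernoulli arm means
K, delta, N = 2, 0.05, 6
pulls, wins = [0] * N, [0.0] * N
active, accepted = set(range(N)), set()

def radius(n):
    # Hoeffding confidence radius with a crude union bound over arms/rounds
    return math.sqrt(math.log(4 * N * n * n / delta) / (2 * n))

while len(accepted) < K and active:
    for i in active:                          # one uniform sampling round
        pulls[i] += 1
        wins[i] += random.random() < means[i]
    lo = {i: wins[i] / pulls[i] - radius(pulls[i]) for i in active}
    hi = {i: wins[i] / pulls[i] + radius(pulls[i]) for i in active}
    for i in sorted(active, key=lambda j: wins[j] / pulls[j], reverse=True):
        need = K - len(accepted)              # arms still to be identified
        if need == 0:
            break
        if sum(hi[j] > lo[i] for j in active if j != i) < need:
            accepted.add(i); active.discard(i)    # surely among the top-K
        elif sum(lo[j] > hi[i] for j in active if j != i) >= need:
            active.discard(i)                     # surely outside the top-K

print("identified top-K arms:", sorted(accepted))   # expect [0, 1]
```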
Hyperfield Grassmannians
In a recent paper Baker and Bowler introduced matroids over hyperfields, offering a common generalization of matroids, oriented matroids, and linear subspaces of based vector spaces. This paper introduces the notion of a topological hyperfield and explores the generalization of Grassmannians and realization spaces to this context, particularly in relating the (hyper)fields R and C to hyperfields arising in matroid theory and in tropical geometry.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Statistical estimation in a randomly structured branching population
We consider a binary branching process structured by a stochastic trait that evolves according to a diffusion process that triggers the branching events, in the spirit of Kimmel's model of cell division with parasite infection. Based on the observation of the trait at birth in the first n generations of the process, we construct a nonparametric estimator of the transition of the associated bifurcating chain and study the parametric estimation of the branching rate. In the limit, as n tends to infinity, we obtain asymptotic efficiency in the parametric case and minimax optimality in the nonparametric case.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Discovering Eastern European PCs by hacking them. Today
Computer science would not be the same without personal computers. In the West the so-called PC revolution started in the late '70s and has its roots in hobbyists and do-it-yourself clubs. In the following years the diffusion of home and personal computers brought the discipline closer to many people. A bit later, to a lesser extent, yet in a similar way, the revolution also took place in East European countries. Today, the scenario of personal computing has completely changed; however, the computers of the '80s are still objects of fascination for a number of retrocomputing fans who enjoy using, programming and hacking the old 8-bits. The paper highlights the continuity between yesterday's hobbyists and today's retrocomputing enthusiasts, particularly focusing on East European PCs. Besides the preservation of old hardware and software, the community is engaged in the development of emulators and cross compilers. Such tools can be used for historical investigation, for example to trace the origins of the BASIC interpreters loaded in the ROMs of East European PCs.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Neural Rating Regression with Abstractive Tips Generation for Recommendation
Recently, some e-commerce sites have launched a new interaction box called Tips on their mobile apps. Users can express their experience and feelings or provide suggestions using short texts, typically several words or one sentence. In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user's experience and feelings. Jointly modeling these two facets is helpful for designing a better recommendation system. While some existing models integrate text information such as item specifications or user reviews into user and item latent factors to improve rating prediction, no existing work considers tips for improving recommendation quality. We propose a deep learning based framework named NRT which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality, simulating the user's experience and feelings. For abstractive tips generation, gated recurrent neural networks are employed to "translate" user and item latent representations into a concise sentence. Extensive experiments on benchmark datasets from different domains show that NRT achieves significant improvements over the state-of-the-art methods. Moreover, the generated tips can vividly predict the user's experience and feelings.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Calibration Uncertainty for Advanced LIGO's First and Second Observing Runs
Calibration of the Advanced LIGO detectors is the quantification of the detectors' response to gravitational waves. Gravitational waves incident on the detectors cause phase shifts in the interferometer laser light which are read out as intensity fluctuations at the detector output. Understanding this detector response to gravitational waves is crucial to producing accurate and precise gravitational wave strain data. Estimates of binary black hole and neutron star parameters and tests of general relativity require well-calibrated data, as miscalibrations will lead to biased results. We describe the method of producing calibration uncertainty estimates for both LIGO detectors in the first and second observing runs.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Entanglement transitions induced by large deviations
The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as $A$ and $B$, is computed analytically using a Coulomb gas method. It is shown that this probability, for large $N$, goes as $\exp[-\beta N^2\Phi(\zeta)]$, where the parameter $\beta$ is the Dyson index of the ensemble, $\zeta$ is the large deviation parameter, and the rate function $\Phi(\zeta)$ is calculated exactly. The corresponding equilibrium Coulomb charge density is derived for these large deviations. The effects of large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. The effect of these deviations is also studied on the entanglement between subsystems $1$ and $2$, obtained by further partitioning the subsystem $A$, using the properties of the density matrix's partial transpose $\rho_{12}^\Gamma$. The density of states of $\rho_{12}^\Gamma$ is found to be close to Wigner's semicircle law under these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts an entanglement transition across a critical large deviation parameter $\zeta$. Log negativity is used to quantify the entanglement between subsystems $1$ and $2$. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Understanding System Characteristics of Online Erasure Coding on Scalable, Distributed and Large-Scale SSD Array Systems
Large-scale systems with arrays of solid state disks (SSDs) have become increasingly common in many computing segments. To make such systems resilient, we can adopt erasure coding such as Reed-Solomon (RS) code as an alternative to replication because erasure coding can offer a significantly lower storage cost than replication. To understand the impact of using erasure coding on system performance and other system aspects such as CPU utilization and network traffic, we build a storage cluster consisting of approximately one hundred processor cores with more than fifty high-performance SSDs, and evaluate the cluster with a popular open-source distributed parallel file system, Ceph. Then we analyze behaviors of systems adopting erasure coding from the following five viewpoints, compared with those of systems using replication: (1) storage system I/O performance; (2) computing and software overheads; (3) I/O amplification; (4) network traffic among storage nodes; (5) the impact of physical data layout on performance of RS-coded SSD arrays. For all these analyses, we examine two representative RS configurations, which are used by Google and Facebook file systems, and compare them with triple replication that a typical parallel file system employs as a default fault tolerance mechanism. Lastly, we collect 54 block-level traces from the cluster and make them available for other researchers.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
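The storage-cost argument in the abstract above is simple arithmetic: an RS($k$,$m$) code stores $(k+m)/k$ bytes per byte of user data, versus $3\times$ for triple replication. The parameter choices below are commonly cited as the Google (RS(6,3)) and Facebook (RS(10,4)) configurations; the paper's exact parameters may differ.

```python
# Storage overhead per byte of user data: replication vs Reed-Solomon.
def rs_overhead(k, m):          # k data chunks, m parity chunks
    return (k + m) / k

print("triple replication: 3.00x")
print(f"RS(6,3):  {rs_overhead(6, 3):.2f}x")    # tolerates any 3 lost chunks
print(f"RS(10,4): {rs_overhead(10, 4):.2f}x")   # tolerates any 4 lost chunks
```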
Three Questions on Special Homeomorphisms on Subgroups of $R$ and $R^\infty$
We provide justifications for two questions on special maps on subgroups of the reals. We will show that the questions can be treated from different points of view. We also discuss two versions of Anderson's Involution Conjecture.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Nearly Semiparametric Efficient Estimation of Quantile Regression
As a competitive alternative to least squares regression, quantile regression is popular in analyzing heterogeneous data. For a quantile regression model specified for one single quantile level $\tau$, the major difficulties of semiparametric efficient estimation are the unavailability of a parametric efficient score and the need for conditional density estimation. In this paper, with the help of the least favorable submodel technique, we first derive the semiparametric efficient scores for linear quantile regression models that are assumed for a single quantile level, multiple quantile levels and all the quantile levels in $(0,1)$, respectively. Our main discovery is a one-step (nearly) semiparametric efficient estimator for the regression coefficients of quantile regression models assumed for multiple quantile levels, which has several advantages: it can be regarded as an optimal way to pool information across multiple/other quantiles for efficiency gain; it is computationally feasible and easy to implement, as the initial estimator is easily available; and, due to the nature of the quantile regression models under investigation, the conditional density estimation is straightforward by plugging in an initial estimator. The resulting estimator is proved to achieve the corresponding semiparametric efficiency lower bound under regularity conditions. Numerical studies, including simulations and an example on the birth weight of children, confirm that the proposed estimator leads to higher efficiency compared with the Koenker-Bassett quantile regression estimator for all quantiles of interest.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
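All of the estimators discussed in the abstract above build on the $\tau$-quantile check (pinball) loss; the snippet below verifies numerically, on synthetic data, that minimizing its sample average recovers the $\tau$-quantile.

```python
# The check (pinball) loss underlying quantile regression.
import numpy as np

def pinball(u, tau):
    # tau * u for positive residuals, (tau - 1) * u for negative ones
    return np.where(u >= 0, tau * u, (tau - 1) * u)

rng = np.random.default_rng(9)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 601)
tau = 0.9
losses = [pinball(y - c, tau).mean() for c in grid]
print("argmin ~", grid[int(np.argmin(losses))],
      "| empirical 0.9-quantile:", np.quantile(y, tau))
```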
Relevance of backtracking paths in epidemic spreading on networks
The understanding of epidemics on networks has greatly benefited from the recent application of message-passing approaches, which allow one to derive exact results for irreversible spreading (i.e. diseases with permanent acquired immunity) in locally tree-like topologies. This success has suggested the application of the same approach to reversible epidemics, for which an individual can contract the epidemic and recover repeatedly. The underlying assumption is that backtracking paths (i.e. an individual being reinfected by a neighbor he/she previously infected) do not play a relevant role. In this paper we show that this is not the case for reversible epidemics, since the neglect of backtracking paths leads to a formula for the epidemic threshold that is qualitatively incorrect in the large size limit. Moreover, we define a modified reversible dynamics which explicitly forbids direct backtracking events and show that this modification completely upsets the phenomenology.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
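Two spectral quantities are at stake in the abstract above: the adjacency spectral radius $\rho(A)$ and the spectral radius $\rho(B)$ of the non-backtracking (Hashimoto) matrix that message passing effectively uses. The sketch below merely computes both on a small Erdős-Rényi graph to show they differ; it is a numerical illustration, not the paper's analysis.

```python
# Adjacency vs non-backtracking spectral radii on a small random graph.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 0.05
A = np.triu(rng.random((n, n)) < p, 1)
A = (A | A.T).astype(float)                 # symmetric, no self-loops

edges = [(u, v) for u in range(n) for v in range(n) if A[u, v]]
idx = {e: k for k, e in enumerate(edges)}
B = np.zeros((len(edges), len(edges)))      # Hashimoto matrix on directed edges
for (u, v), k in idx.items():
    for w in range(n):
        if A[v, w] and w != u:              # forbid immediate backtracking
            B[k, idx[(v, w)]] = 1.0

rho_A = np.max(np.abs(np.linalg.eigvals(A)))
rho_B = np.max(np.abs(np.linalg.eigvals(B)))
print(f"1/rho(A) = {1/rho_A:.4f}   1/rho(B) = {1/rho_B:.4f}")
```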
Hybrid graphene tunneling photoconductor with interface engineering towards fast photoresponse and high responsivity
Hybrid graphene photoconductors/phototransistors have achieved giant photoresponsivity, but their response speed degrades dramatically as a trade-off, owing to the long lifetime of trapped interfacial carriers. In this work, by intercalating a large-area atomically thin MoS2 film into a hybrid graphene photoconductor, we have developed a prototype tunneling photoconductor, which exhibits a record-fast response (rise time ~17 ns) and a high responsivity (~$3\times10^4$ A/W at 635 nm and 16.8 nW illumination) across a broad spectral range. We demonstrate that the photo-excited carriers generated in silicon are transferred into graphene through a tunneling process rather than carrier drift. The atomically thin MoS2 film not only serves as the tunneling layer but also passivates surface states, which in combination delivers a superior response speed (~3 orders of magnitude faster than a device without the MoS2 layer), while the responsivity remains high. This intriguing tunneling photoconductor integrates both fast response and high responsivity and thus has significant potential in practical optoelectronic applications.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Embedded Real-Time Fall Detection Using Deep Learning For Elderly Care
This paper proposes a real-time embedded fall detection system using a DVS (Dynamic Vision Sensor), which has not previously been used for fall detection, together with a fall-detection dataset recorded with it and a DVS-TN (DVS-Temporal Network). The first contribution is building the DVS Falls Dataset, which enables our network to recognize a much greater variety of falls than existing datasets and avoids privacy issues thanks to the DVS. Secondly, we introduce the DVS-TN: a deep learning network optimized to detect falls using the DVS. Finally, we implement a fall detection system which runs in real time on low-power hardware, and test it on the DVS Falls Dataset, which takes into account various fall situations. Our approach achieves an F1-score of 95.5% and operates at 31.25 FPS on an NVIDIA Jetson TX1 board.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Distributed Algorithm for Solving Linear Algebraic Equations Over Random Networks
In this paper, we consider the problem of solving linear algebraic equations of the form $Ax=b$ among multiple agents which seek a solution by using local information in the presence of random communication topologies. The equation is solved by $m$ agents where each agent only knows a subset of rows of the partitioned matrix $[A,b]$. We formulate the problem such that this formulation does not need the distribution of random interconnection graphs. Therefore, this framework includes asynchronous updates or unreliable communication protocols without the B-connectivity assumption. We apply the random Krasnoselskii-Mann iterative algorithm, which converges almost surely and in mean square to a solution of the problem for any matrix $A$, vector $b$, and any initial conditions of the agents' states. We demonstrate that the limit point to which the agents' states converge is determined by the unique solution of a convex optimization problem regardless of the distribution of random communication graphs. Finally, we show by two numerical examples that the rate of convergence of the algorithm cannot be guaranteed.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
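A minimal centralized surrogate of the Krasnoselskii-Mann iteration described above: each "agent" holds one row of $[A,b]$ and projects onto its hyperplane, and a plain average stands in for communication over a random graph. The fixed update order and problem size are invented for the demo.

```python
# Krasnoselskii-Mann iteration with averaged hyperplane projections.
import numpy as np

rng = np.random.default_rng(3)
m = 4
A = rng.normal(size=(m, m)); x_true = rng.normal(size=m); b = A @ x_true

def proj_row(x, a, beta):
    # projection of x onto the hyperplane {y : a.y = beta}
    return x + (beta - a @ x) / (a @ a) * a

x = np.zeros(m)
alpha = 0.5
for _ in range(20_000):
    Tx = np.mean([proj_row(x, A[i], b[i]) for i in range(m)], axis=0)
    x = (1 - alpha) * x + alpha * Tx      # KM step: x <- (1-a) x + a T(x)

print("error:", np.linalg.norm(x - x_true))
```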
Doping-induced quantum cross-over in Er$_2$Ti$_{2-x}$Sn$_x$O$_7$
We present the results of an investigation of the magnetic properties of the Er$_2$Ti$_{2-x}$Sn$_x$O$_7$ series. For small doping values the ordering temperature decreases linearly with $x$ while the moment configuration remains the same as in the $x = 0$ parent compound. Around the $x = 1.7$ doping level we observe a change in the behavior, where the ordering temperature starts to increase and new magnetic Bragg peaks appear. For the first time we present evidence of long-range order (LRO) in Er$_2$Sn$_2$O$_7$ ($x = 2.0$) below $T_N = 130$ mK. It is revealed that the moment configuration corresponds to a Palmer-Chalker type with a value of the magnetic moment significantly renormalized compared to $x = 0$. We discuss our results in the framework of a possible quantum phase transition occurring close to $x = 1.7$.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stochastic Block Models with Multiple Continuous Attributes
The stochastic block model (SBM) is a probabilistic model for community structure in networks. Typically, only the adjacency matrix is used to perform SBM parameter inference. In this paper, we consider circumstances in which nodes have an associated vector of continuous attributes that are also used to learn the node-to-community assignments and corresponding SBM parameters. Our model assumes that the attributes associated with the nodes in a network's community can be described by a common multivariate Gaussian model, an assumption that, while not realistic for every application, provides a useful working structure. In this augmented, attributed SBM, the objective is to simultaneously learn the SBM connectivity probabilities along with the multivariate Gaussian parameters describing each community. While there are recent examples in the literature that combine connectivity and attribute information to inform community detection, our model is the first augmented stochastic block model to handle multiple continuous attributes. This provides the flexibility in biological data to, for example, augment connectivity information with continuous measurements from multiple experimental modalities. Because the lack of labeled network data often makes community detection results difficult to validate, we highlight the usefulness of our model for two network prediction tasks: link prediction and collaborative filtering. As a result of fitting this attributed stochastic block model, one can predict the attribute vector or connectivity patterns for a new node given the complementary source of information (connectivity or attributes, respectively). We also highlight two biological examples where the attributed stochastic block model provides satisfactory performance in the link prediction and collaborative filtering tasks.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
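The generative side of the attributed SBM described above is easy to write down: block-dependent edge probabilities plus one multivariate Gaussian per community for the attributes. All parameter values below are made up.

```python
# Sampling from an attributed stochastic block model (invented parameters).
import numpy as np

rng = np.random.default_rng(4)
n, k, d = 90, 3, 2                      # nodes, communities, attribute dim
z = rng.integers(0, k, size=n)          # community assignments
P = np.full((k, k), 0.02) + np.eye(k) * 0.18     # block edge probabilities
mu = np.array([[0., 0.], [3., 0.], [0., 3.]])    # per-community means
cov = np.eye(d) * 0.5

A = (rng.random((n, n)) < P[z][:, z]).astype(int)
A = np.triu(A, 1); A = A + A.T                   # undirected, no self-loops
X = np.array([rng.multivariate_normal(mu[z[i]], cov) for i in range(n)])

print("edges:", A.sum() // 2, "| attribute matrix:", X.shape)
```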
Asymptotic Independence of Bivariate Order Statistics
It is well known that an extreme order statistic and a central order statistic (os) as well as an intermediate os and a central os from a sample of iid univariate random variables get asymptotically independent as the sample size increases. We extend this result to bivariate random variables, where the os are taken componentwise. An explicit representation of the conditional distribution of bivariate os turns out to be a powerful tool.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Semi-Supervised Haptic Material Recognition for Robots using Generative Adversarial Networks
Material recognition enables robots to incorporate knowledge of material properties into their interactions with everyday objects. For example, material recognition opens up opportunities for clearer communication with a robot, such as "bring me the metal coffee mug", and recognizing plastic versus metal is crucial when using a microwave or oven. However, collecting labeled training data with a robot is often more difficult than unlabeled data. We present a semi-supervised learning approach for material recognition that uses generative adversarial networks (GANs) with haptic features such as force, temperature, and vibration. Our approach achieves state-of-the-art results and enables a robot to estimate the material class of household objects with ~90% accuracy when 92% of the training data are unlabeled. We explore how well this approach can recognize the material of new objects and we discuss challenges facing generalization. To motivate learning from unlabeled training data, we also compare results against several common supervised learning classifiers. In addition, we have released the dataset used for this work which consists of time-series haptic measurements from a robot that conducted thousands of interactions with 72 household objects.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Level set shape and topology optimization of finite strain bilateral contact problems
This paper presents a method for the optimization of multi-component structures comprised of two and three materials considering large motion sliding contact and separation along interfaces. The structural geometry is defined by an explicit level set method, which allows for both shape and topology changes. The mechanical model assumes finite strains, a nonlinear elastic material behavior, and a quasi-static response. Identification of overlapping surface position is handled by a coupled parametric representation of contact surfaces. A stabilized Lagrange method and an active set strategy are used to model frictionless contact and separation. The mechanical model is discretized by the extended finite element method which maintains a clear definition of geometry. Face-oriented ghost penalization and dynamic relaxation are implemented to improve the stability of the physical response prediction. A nonlinear programming scheme is used to solve the optimization problem, which is regularized by introducing a perimeter penalty into the objective function. Sensitivities are determined by the adjoint method. The main characteristics of the proposed method are studied by numerical examples in two dimensions. The numerical results demonstrate improved design performance when compared to models optimized with a small strain assumption. Additionally, examples with load path dependent objectives display non-intuitive designs.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Thermal distortions of non-Gaussian beams in Fabry-Perot cavities
Thermal effects are already important in currently operating interferometric gravitational wave detectors. Planned upgrades of these detectors involve increasing optical power to combat quantum shot noise. We consider the ramifications of this increased power for one particular class of laser beams--wide, flat-topped, mesa beams. In particular we model a single mesa beam Fabry-Perot cavity having thermoelastically deformed mirrors. We calculate the intensity profile of the fundamental cavity eigenmode in the presence of thermal perturbations, and the associated changes in thermal noise. We also outline an idealized method of correcting for such effects. At each stage we contrast our results with those of a comparable Gaussian beam cavity. Although we focus on mesa beams the techniques described are applicable to any azimuthally symmetric system.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Rigidity and trace properties of divergence-measure vector fields
We show some rigidity properties of divergence-free vector fields defined on half-spaces. As an application, we prove the existence of the classical trace for a bounded, divergence-measure vector field $\xi$ defined on the Euclidean plane, at almost every point of a locally oriented rectifiable set $S$, under the assumption that its weak normal trace $[\xi\cdot \nu_S]$ attains a local maximum for the norm of $\xi$ at the point.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A note on the Almansi property
The first goal of this note is to study the Almansi property on an m-dimensional model in the sense of Greene and Wu and, more generally, in a Riemannian geometric setting. In particular, we shall prove that the only model on which the Almansi property is verified is the Euclidean space R^m. In the second part of the paper we shall study Almansi's property and biharmonicity for functions which depend on the distance from a given submanifold. Finally, in the last section we provide an extension to the semi-Euclidean case R^{p,q} which includes the proof of the classical Almansi property in R^m as a special instance.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Quantum Complexity of Computing Schatten $p$-norms
We consider the quantum complexity of computing Schatten $p$-norms and related quantities, and find that the problem of estimating these quantities is closely related to the one clean qubit model of computation. We show that the problem of approximating $\text{Tr}\, (|A|^p)$ for a log-local $n$-qubit Hamiltonian $A$ and $p=\text{poly}(n)$, up to a suitable level of accuracy, is contained in DQC1; and that approximating this quantity up to a somewhat higher level of accuracy is DQC1-hard. In some cases the level of accuracy achieved by the quantum algorithm is substantially better than a natural classical algorithm for the problem. The same problem can be solved for arbitrary sparse matrices in BQP. One application of the algorithm is the approximate computation of the energy of a graph.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
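For reference, the quantity the DQC1 algorithm above estimates, $\text{Tr}(|A|^p)$, has a direct classical computation via the eigenvalues of a Hermitian $A$ (exponential-cost in the number of qubits in general, which is the point):

```python
# Classical reference computation of Tr(|A|^p) for a small Hermitian matrix.
import numpy as np

rng = np.random.default_rng(8)
M = rng.normal(size=(8, 8))
A = (M + M.T) / 2                      # Hermitian "Hamiltonian"
p = 3
tr_abs_p = np.sum(np.abs(np.linalg.eigvalsh(A)) ** p)   # Tr(|A|^p)
print("Tr(|A|^p) =", tr_abs_p, "| Schatten p-norm =", tr_abs_p ** (1 / p))
```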
Bounds for fidelity of semiclassical Lagrangian states in Kähler quantization
We define mixed states associated with submanifolds with probability densities in quantizable closed Kähler manifolds. Then, we address the problem of comparing two such states via their fidelity. Firstly, we estimate the sub-fidelity and super-fidelity of two such states, giving lower and upper bounds for their fidelity, when the underlying submanifolds are two Lagrangian submanifolds intersecting transversally at a finite number of points, in the semiclassical limit. Secondly, we investigate a family of examples on the sphere, for which we manage to obtain a better upper bound for the fidelity. We conclude by stating a conjecture regarding the fidelity in the general case.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Precision measurement of antiproton to proton ratio with the Alpha Magnetic Spectrometer on the International Space Station
A precision measurement by AMS of the antiproton-to-proton flux ratio in primary cosmic rays in the absolute rigidity range from 1 to 450 GV is presented based on $3.49\times10^5$ antiproton events and $2.42\times10^9$ proton events. Above $\sim60$ GV the antiproton to proton flux ratio is consistent with being rigidity independent. A decreasing behaviour is expected for this ratio considering the traditional models for the secondary antiproton flux.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stability criteria for the 2D $α$-Euler equations
We derive analogues of the classical Rayleigh, Fjørtoft and Arnold stability and instability theorems in the context of the 2D $\alpha$-Euler equations.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Superconducting spin valves controlled by spiral re-orientation in B20-family magnets
We propose a superconducting spin-triplet valve, which consists of a superconductor and an itinerant magnetic material, with the magnet showing an intrinsic non-collinear order characterized by a wave vector that may be aligned in a few equivalent preferred directions under control of a weak external magnetic field. Re-orienting the spiral direction allows one to controllably modify long-range spin-triplet superconducting correlations, leading to spin-valve switching behavior. Our results indicate that the spin-valve effect may be noticeable. This bilayer may be used as a magnetic memory element for cryogenic nanoelectronics. It has the following advantages in comparison to superconducting spin valves proposed previously: (i) it contains only one magnetic layer, which may be more easily fabricated and controlled, (ii) its ground states are separated by a potential barrier, which solves the "half-select" problem of the addressed switch of memory elements.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Experience Recommendation for Long Term Safe Learning-based Model Predictive Control in Changing Operating Conditions
Learning has propelled the cutting edge of performance in robotic control to new heights, allowing robots to operate with high performance in conditions that were previously unimaginable. The majority of this work, however, assumes that the unknown parts of the dynamics are static or slowly changing, which limits the approaches to static or slowly changing environments. In the real world, a robot may experience a wide variety of unknown conditions. This paper presents a method to extend an existing single-mode GP-based safe learning controller to learn an increasing number of non-linear models for the robot dynamics. We show that this approach enables a robot to re-use past experience from a large number of previously visited operating conditions, and to safely adapt when a new and distinct operating condition is encountered. This allows the robot to achieve safety and high performance in a large number of operating conditions that do not have to be specified ahead of time. Our approach runs independently of the controller, imposing no additional computation time on the control loop regardless of the number of previous operating conditions considered. We demonstrate the effectiveness of our approach in experiments on a 900 kg ground robot with both physical and artificial changes to its dynamics. All of our experiments are conducted using vision for localization.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Calderón-type inequalities for affine frames
We prove sharp upper and lower bounds for generalized Calderón's sums associated to frames on LCA groups generated by affine actions of cocompact subgroup translations and general measurable families of automorphisms. The proof makes use of techniques of analysis on metric spaces, and relies on a counting estimate of lattice points inside metric balls. We will deduce as special cases Calderón-type inequalities for families of expanding automorphisms as well as for LCA-Gabor systems.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Magnetism in Semiconducting Molybdenum Dichalcogenides
Transition metal dichalcogenides (TMDs) are interesting for understanding the fundamental physics of two-dimensional (2D) materials as well as for many emerging technologies, including spin electronics. Here, we report the discovery of long-range magnetic order below TM = 40 K and 100 K in the bulk semiconducting TMDs 2H-MoTe2 and 2H-MoSe2, respectively, by means of muon spin rotation (muSR), scanning tunneling microscopy (STM), and density functional theory (DFT) calculations. The muon spin rotation measurements show the presence of a large and homogeneous internal magnetic field at low temperatures in both compounds, indicative of long-range magnetic order. DFT calculations show that this magnetism is promoted by the presence of defects in the crystal. The STM measurements show that the vast majority of defects in these materials are metal vacancies and chalcogen-metal antisites, which are randomly distributed in the lattice at the sub-percent level. DFT indicates that the antisite defects are magnetic, with a magnetic moment in the range of 0.9-2.8 mu_B. Further, we find that the magnetic order stabilized in 2H-MoTe2 and 2H-MoSe2 is highly sensitive to hydrostatic pressure. These observations establish 2H-MoTe2 and 2H-MoSe2 as a new class of magnetic semiconductors and open a path to studying the interplay of 2D physics and magnetism in these interesting semiconductors.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Characteristic Polynomial of Certain Hyperplane Arrangements through Graph Theory
We give a formula for computing the characteristic polynomial for certain hyperplane arrangements in terms of the number of bipartite graphs of given rank and cardinality.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Accelerated Evaluation of Automated Vehicles Using Piecewise Mixture Models
The process to certify highly Automated Vehicles has not yet been defined by any country in the world. Currently, companies test Automated Vehicles on public roads, which is time-consuming and inefficient. We previously proposed the Accelerated Evaluation concept, which uses modified statistics of the surrounding vehicles and Importance Sampling theory to reduce the evaluation time by several orders of magnitude, while ensuring that the evaluation results are statistically accurate. In this paper, we further improve the Accelerated Evaluation concept by using Piecewise Mixture Distribution models instead of Single Parametric Distribution models. We developed and applied this idea to a forward collision control system reacting to vehicles making cut-in lane changes. The behavior of the cut-in vehicles was modeled based on more than 403,581 lane changes collected by the University of Michigan Safety Pilot Model Deployment Program. Simulation results confirm that the Piecewise Mixture Distribution method outperforms single parametric distribution methods in both accuracy and efficiency, and accelerates the evaluation process by almost four orders of magnitude.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
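The Importance Sampling mechanism behind Accelerated Evaluation can be shown in one dimension: sample a "cut-in severity" statistic from a skewed proposal and re-weight by the likelihood ratio. The Gaussian models and crash threshold below are invented for the demo.

```python
# Importance Sampling for a rare "crash" event, in one dimension.
import numpy as np

rng = np.random.default_rng(5)

def npdf(x, mu):                     # unit-variance normal density
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

thr, mu_f, mu_g = -4.0, 0.0, -4.0    # crash threshold; natural vs skewed mean
n = 20_000

x = rng.normal(mu_g, 1.0, n)                        # sample the skewed model g
w = npdf(x, mu_f) / npdf(x, mu_g)                   # likelihood-ratio weights
p_is = np.mean((x < thr) * w)

p_naive = np.mean(rng.normal(mu_f, 1.0, n) < thr)   # usually 0 at this n
print(f"IS: {p_is:.2e}   naive: {p_naive:.2e}   true Phi(-4) ~ 3.17e-05")
```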
Phase shift's influence of two strong pulsed laser waves on effective interaction of electrons
We study the influence of the phase shift between two strong pulsed laser waves on the effective interaction of electrons. A considerable amplification of the electron-electron repulsion is shown in a certain range of phase shifts and wave intensities. This leads to electrons scattering to greater distances than they would without an external field; the distance can be greater by 2-3 orders of magnitude. A considerable influence of the phase shift between the wave pulses on the possibility of effective attraction of electrons is also shown.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Topological Sieving of Rings According to their Rigidity
We present a novel mechanism for resolving the mechanical rigidity of nanoscopic circular polymers that flow in a complex environment. The emergence of a regime of negative differential mobility induced by topological interactions between the rings and the substrate is the key mechanism for selective sieving of circular polymers with distinct flexibilities. A simple model accurately describes the sieving process observed in molecular dynamics simulations and yields experimentally verifiable analytical predictions, which can be used as a reference guide for improving filtration procedures of circular filaments. The topological sieving mechanism we propose ought to be relevant also in probing the microscopic details of complex substrates.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
A structural Markov property for decomposable graph laws that allows control of clique intersections
We present a new kind of structural Markov property for probabilistic laws on decomposable graphs, which allows the explicit control of interactions between cliques, so is capable of encoding some interesting structure. We prove the equivalence of this property to an exponential family assumption, and discuss identifiability, modelling, inferential and computational implications.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Fisher consistency for prior probability shift
We introduce Fisher consistency in the sense of unbiasedness as a desirable property for estimators of class prior probabilities. Lack of Fisher consistency could be used as a criterion to dismiss estimators that are unlikely to deliver precise estimates in test datasets under prior probability and more general dataset shift. The usefulness of this unbiasedness concept is demonstrated with three examples of classifiers used for quantification: Adjusted Classify & Count, EM-algorithm and CDE-Iterate. We find that Adjusted Classify & Count and EM-algorithm are Fisher consistent. A counter-example shows that CDE-Iterate is not Fisher consistent and, therefore, cannot be trusted to deliver reliable estimates of class probabilities.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
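Of the three quantification methods named above, Adjusted Classify & Count is the easiest to demonstrate: under prior probability shift the classifier's TPR and FPR are invariant, so $\hat p = (\hat q - \text{FPR})/(\text{TPR} - \text{FPR})$ corrects the raw positive rate $\hat q$. A synthetic check, with all distributions invented:

```python
# Adjusted Classify & Count under prior probability shift.
import numpy as np

rng = np.random.default_rng(6)

def sample(n, prior):
    y = rng.random(n) < prior
    scores = np.where(y, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))
    return y, scores > 0.0                   # fixed-threshold classifier

y_tr, yhat_tr = sample(100_000, 0.5)         # labelled set, training prior 0.5
tpr, fpr = yhat_tr[y_tr].mean(), yhat_tr[~y_tr].mean()

_, yhat_te = sample(100_000, 0.2)            # test set, unknown prior 0.2
raw = yhat_te.mean()                         # Classify & Count
adj = (raw - fpr) / (tpr - fpr)              # Adjusted Classify & Count
print(f"CC = {raw:.3f}, ACC = {adj:.3f}, true prior = 0.2")
```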
Quantifying the Effects of Enforcing Disentanglement on Variational Autoencoders
The notion of disentangled autoencoders was proposed as an extension of the variational autoencoder by introducing a disentanglement parameter $\beta$, controlling the learning pressure put on the possible underlying latent representations. For certain values of $\beta$ this kind of autoencoder is capable of encoding independent input generative factors in separate elements of the code, leading to more interpretable and predictable model behaviour. In this paper we quantify the effects of the parameter $\beta$ on the model performance and disentanglement. After training multiple models with the same value of $\beta$, we establish the existence of consistent variance in one of the disentanglement measures proposed in the literature. The negative consequences of disentanglement for the autoencoder's discriminative ability are also assessed while varying the number of examples available during training.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
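The disentanglement parameter $\beta$ discussed above enters the objective as a weight on the KL term: loss = reconstruction + $\beta \cdot \text{KL}(q(z|x)\,\|\,\mathcal{N}(0,I))$. A minimal NumPy rendering for a diagonal Gaussian encoder, with toy values:

```python
# Beta-weighted VAE objective for a diagonal Gaussian encoder.
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta):
    recon = np.sum((x - x_recon) ** 2)                    # squared error term
    # KL(N(mu, diag(exp(logvar))) || N(0, I)), in closed form
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon + beta * kl

x = np.zeros(4); x_recon = np.full(4, 0.1)
mu = np.array([0.2, -0.1]); logvar = np.array([-0.5, -0.5])
for beta in (1.0, 4.0):
    print(f"beta={beta}: loss={beta_vae_loss(x, x_recon, mu, logvar, beta):.4f}")
```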
An omnibus test for the global null hypothesis
Global hypothesis tests are a useful tool in the context of, e.g., clinical trials, genetic studies or meta-analyses, when researchers are not interested in testing individual hypotheses but in testing whether none of the hypotheses is false. There are several possibilities for testing the global null hypothesis when the individual null hypotheses are independent. If it is assumed that many of the individual null hypotheses are false, combination tests have been recommended to maximise power. If, however, it is assumed that only one or a few null hypotheses are false, global tests based on individual test statistics are more powerful (e.g., the Bonferroni or Simes test). However, there is usually no a priori knowledge of the number of false individual null hypotheses. We therefore propose an omnibus test based on the combination of p-values. We show that this test yields an impressive overall performance. The proposed method is implemented in the R package omnibus.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
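One simple way to realize the omnibus idea above, offered as a stand-in rather than the paper's exact construction, is to run a combination test (Fisher) and a minimum-p test side by side and Bonferroni-combine the two, so the result stays powerful whether the false hypotheses are many or few.

```python
# A simple omnibus global test: Fisher combination + min-p, Bonferroni-merged.
import numpy as np
from scipy import stats

def omnibus(pvals):
    k = len(pvals)
    fisher_stat = -2 * np.sum(np.log(pvals))
    p_fisher = stats.chi2.sf(fisher_stat, df=2 * k)   # strong for dense signals
    p_minp = min(1.0, k * np.min(pvals))              # strong for sparse signals
    return min(1.0, 2 * min(p_fisher, p_minp))        # Bonferroni over two tests

dense = np.array([0.03, 0.04, 0.06, 0.02, 0.05])      # many modest signals
sparse = np.array([1e-5, 0.6, 0.7, 0.8, 0.9])         # one strong signal
print("dense:", omnibus(dense), "| sparse:", omnibus(sparse))
```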
Learning Without Mixing: Towards A Sharp Analysis of Linear System Identification
We prove that the ordinary least-squares (OLS) estimator attains nearly minimax optimal performance for the identification of linear dynamical systems from a single observed trajectory. Our upper bound relies on a generalization of Mendelson's small-ball method to dependent data, eschewing the use of standard mixing-time arguments. Our lower bounds reveal that these upper bounds match up to logarithmic factors. In particular, we capture the correct signal-to-noise behavior of the problem, showing that more unstable linear systems are easier to estimate. This behavior is qualitatively different from arguments which rely on mixing-time calculations that suggest that unstable systems are more difficult to estimate. We generalize our technique to provide bounds for a more general class of linear response time-series.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Positive and Unlabeled Learning through Negative Selection and Imbalance-aware Classification
Motivated by applications in protein function prediction, we consider a challenging supervised classification setting in which positive labels are scarce and there are no explicit negative labels. The learning algorithm must thus select which unlabeled examples to use as negative training points, possibly ending up with an unbalanced learning problem. We address these issues by proposing an algorithm that combines active learning (for selecting negative examples) with imbalance-aware learning (for mitigating the label imbalance). In our experiments we observe that these two techniques operate synergistically, outperforming state-of-the-art methods on standard protein function prediction benchmarks.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Superconductivity in quantum wires: A symmetry analysis
We study properties of quantum wires with spin-orbit coupling and time reversal symmetry breaking, in normal and superconducting states. Electronic band structures are classified according to quasi-one-dimensional magnetic point groups, or magnetic classes. The latter belong to one of three distinct types, depending on the way the time reversal operation appears in the group elements. The superconducting gap functions are constructed using antiunitary operations and have different symmetry properties depending on the type of the magnetic point group. We obtain the spectrum of the Andreev boundary modes near the end of the wire in a model-independent way, using the semiclassical approach with the boundary conditions described by a phenomenological scattering matrix. Explicit expressions for the bulk topological invariants controlling the number of the boundary zero modes are presented in the general multiband case for two types of the magnetic point groups, corresponding to the DIII and BDI symmetry classes.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Exponentially convergent data assimilation algorithm for Navier-Stokes equations
The paper presents a new state estimation algorithm for a bilinear equation representing the Fourier-Galerkin (FG) approximation of the Navier-Stokes (NS) equations on a torus in $\mathbb{R}^2$. This state equation is subject to uncertain but bounded noise in the input (Kolmogorov forcing) and initial conditions, and its output is incomplete and contains bounded noise. The algorithm designs a time-dependent gain such that the estimation error converges to zero exponentially. The sufficient conditions for the existence of the gain are formulated in the form of algebraic Riccati equations. To demonstrate the results we apply the proposed algorithm to the reconstruction of a chaotic fluid flow from incomplete and noisy data.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Multi-Modal Approach to Infer Image Affect
The group affect or emotion in an image of people can be inferred by extracting features about both the people in the picture and the overall makeup of the scene. The state-of-the-art on this problem investigates a combination of facial features, scene extraction and even audio tonality. This paper combines three additional modalities, namely, human pose, text-based tagging and CNN extracted features / predictions. To the best of our knowledge, this is the first time all of the modalities were extracted using deep neural networks. We evaluate the performance of our approach against baselines and identify insights throughout this paper.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Joint Tilt Angle Adaptation and Beamforming in Multicell Multiuser Cellular Networks
3D beamforming is a promising approach for interference coordination in cellular networks which brings significant improvements in comparison with conventional 2D beamforming techniques. This paper investigates the problem of joint beamforming design and tilt angle adaptation of the base station (BS) antenna array for maximizing energy efficiency (EE) in the downlink of multi-cell multi-user coordinated cellular networks. An iterative algorithm based on a fractional programming approach is introduced to solve the resulting non-convex optimization problem. In each iteration, users are clustered based on their elevation angle. Then, optimization of the tilt angle is carried out through a reduced-complexity greedy search to find the best tilt angle for a given placement of the users. Numerical results show that the proposed algorithm achieves higher EE compared to 2D beamforming techniques.
1
0
1
0
0
0
Testing Degree Corrections in Stochastic Block Models
We study sharp detection thresholds for degree corrections in Stochastic Block Models in the context of a goodness of fit problem. When degree corrections are relatively dense, a simple test based on the total number of edges is asymptotically optimal. For sparse degree corrections in non-dense graphs, a simple degree-based Higher Criticism test (Mukherjee, Mukherjee, Sen 2016) is optimal with sharp constants. In contrast, for dense graphs, the optimal procedure runs in two stages. It involves running a suitable community recovery algorithm in stage 1, followed by a Higher Criticism test based on a linear combination of within- and across-community (estimated) degrees in stage 2. The necessity of the two-step procedure is demonstrated by the failure of the ordinary Maximum Degree Test in achieving sharp constants. As necessary tools, we also derive the asymptotic distribution of the maximum degree in Stochastic Block Models, along with moderate deviation and local central limit type asymptotics for positive linear combinations of independent Binomial random variables.
0
0
1
1
0
0
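A hedged sketch of the degree-based Higher Criticism statistic in the spirit of the test above, for an Erdos-Renyi-type null where each degree is Binomial(n-1, p0). The truncation fraction `alpha0` and the null model are illustrative assumptions; the paper's sharp constants and two-stage variant are not reproduced here.

```python
import numpy as np
from scipy.stats import binom

def higher_criticism(degrees, n, p0, alpha0=0.5):
    """degrees: observed degrees; null: d_i ~ Binomial(n-1, p0)."""
    pvals = binom.sf(np.asarray(degrees) - 1, n - 1, p0)  # P(D >= d_i)
    p_sorted = np.sort(pvals)
    m = len(p_sorted)
    i = np.arange(1, m + 1)
    hc = np.sqrt(m) * (i / m - p_sorted) / np.sqrt(p_sorted * (1 - p_sorted) + 1e-12)
    k = max(1, int(alpha0 * m))
    return hc[:k].max()            # maximize over the smallest p-values

# under the null the statistic stays at the sqrt(2 log log m) scale
rng = np.random.default_rng(0)
n, p0 = 500, 0.1
d = rng.binomial(n - 1, p0, size=n)
print(higher_criticism(d, n, p0))
```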
A Survey on Mobile Edge Computing: The Communication Perspective
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized Mobile Cloud Computing towards Mobile Edge Computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing the 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also present a research outlook consisting of a set of promising directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
1
0
1
0
0
0
Deep Learning Approximation: Zero-Shot Neural Network Speedup
Neural networks offer high-accuracy solutions to a range of problems, but are costly to run in production systems because of computational and memory requirements during a forward pass. Given a trained network, we propose a technique called Deep Learning Approximation to build a faster network in a tiny fraction of the time required for training, by only manipulating the network structure and coefficients without requiring re-training or access to the training data. Speedup is achieved by applying a sequential series of independent optimizations that reduce the floating-point operations (FLOPs) required to perform a forward pass. First, lossless optimizations are applied, followed by lossy approximations using singular value decomposition (SVD) and low-rank matrix decomposition. The optimal approximation is chosen by weighing the relative accuracy loss and FLOP reduction according to a single parameter specified by the user. On PASCAL VOC 2007 with the YOLO network, we show an end-to-end 2x speedup in a network forward pass with a 5% drop in mAP that can be regained by fine-tuning.
0
0
0
1
0
0
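A hedged sketch of the lossy SVD step: replace a dense layer's weight matrix W (m x n) by two rank-r factors, trading accuracy for FLOPs. The rank-selection rule driven by the paper's single user parameter is not reproduced; the shapes and rank below are illustrative.

```python
import numpy as np

def low_rank_factorize(W, r):
    """Factor W ~ (U * s) @ Vt with rank r; returns the two smaller matrices."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W1 = U[:, :r] * s[:r]        # shape (m, r)
    W2 = Vt[:r, :]               # shape (r, n)
    return W1, W2

m, n, r = 512, 1024, 64
W = np.random.randn(m, n)
W1, W2 = low_rank_factorize(W, r)

# x -> W x costs ~2*m*n FLOPs; the factored form costs ~2*r*(m+n),
# roughly a 5x reduction for these shapes.
x = np.random.randn(n)
err = np.linalg.norm(W @ x - W1 @ (W2 @ x)) / np.linalg.norm(W @ x)
print(f"relative error at rank {r}: {err:.3f}")
```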
Frequency truncated discrete-time system norm
Multirate digital signal processing and model reduction applications require computation of the frequency-truncated norm of a discrete-time system. This paper explains how to compute this norm. To this end, the more general problem of integrating a transfer function of a discrete-time system given in descriptor form over an interval of limited frequencies is also discussed, along with its computation.
1
0
0
0
0
0
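A hedged numerical cross-check (not the paper's descriptor-form method): the frequency-truncated H2-type norm of a discrete-time transfer function, $\frac{1}{2\pi}\int_{\omega_1}^{\omega_2} |H(e^{j\omega})|^2\, d\omega$, evaluated by simple quadrature. The example system and band are assumptions.

```python
import numpy as np
from scipy.signal import freqz

b = [1.0, 0.5]           # example numerator
a = [1.0, -0.9]          # example denominator (stable pole at 0.9)
w1, w2 = 0.1, 1.0        # frequency band in rad/sample

w = np.linspace(w1, w2, 4001)
_, H = freqz(b, a, worN=w)                     # evaluate H(e^{jw}) on the band
truncated_norm_sq = np.trapz(np.abs(H) ** 2, w) / (2 * np.pi)
print("truncated norm:", np.sqrt(truncated_norm_sq))
```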
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
Recent work has shown that state-of-the-art classifiers are quite brittle, in the sense that a small adversarial change to an input that was originally classified correctly with high confidence leads to a wrong classification, again with high confidence. This raises concerns that such classifiers are vulnerable to attacks and calls into question their usage in safety-critical systems. In this paper we show, for the first time, formal guarantees on the robustness of a classifier by giving instance-specific lower bounds on the norm of the input manipulation required to change the classifier decision. Based on this analysis we propose the Cross-Lipschitz regularization functional. We show that using this form of regularization in kernel methods and neural networks, respectively, improves the robustness of the classifier without any loss in prediction performance.
1
0
0
1
0
0
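A hedged sketch of an instance-specific robustness bound in the special case of a linear multi-class classifier $f_j(x) = w_j^\top x + b_j$, where the geometry is exact: no $\ell_2$ perturbation smaller than the returned radius can change the predicted class. The paper's Cross-Lipschitz bound generalizes this to nonlinear models; that general form is not reproduced here.

```python
import numpy as np

def linear_robustness_radius(W, b, x):
    """Exact l2 distance from x to the nearest decision boundary (linear case)."""
    scores = W @ x + b
    c = int(np.argmax(scores))
    radii = [(scores[c] - scores[j]) / np.linalg.norm(W[c] - W[j])
             for j in range(len(b)) if j != c]
    return min(radii)

W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])   # toy 3-class classifier
b = np.zeros(3)
x = np.array([2.0, 0.5])
print("certified l2 radius:", linear_robustness_radius(W, b, x))
```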
Complete DFM Model for High-Performance Computing SoCs with Guard Ring and Dummy Fill Effect
For nanotechnology, the semiconductor device is scaled down dramatically with additional strain engineering for device enhancement, the overall device characteristic is no longer dominated by the device size but also circuit layout. The higher order layout effects, such as well proximity effect (WPE), oxide spacing effect (OSE) and poly spacing effect (PSE), play an important role for the device performance, it is critical to understand Design for Manufacturability (DFM) impacts with various layout topology toward the overall circuit performance. Currently, the layout effects (WPE, OSE and PSE) are validated through digital standard cell and analog differential pair test structure. However, two analog layout structures: the guard ring and dummy fill impact are not well studied yet, then, this paper describes the current mirror test circuit to examine the guard ring and dummy fills DFM impacts using TSMC 28nm HPM process.
1
0
0
0
0
0
Investigating the potential of social network data for transport demand models
Location-based social network data offers the promise of collecting the data from a large base of users over a longer span of time at negligible cost. While several studies have applied social network data to activity and mobility analysis, a comparison with travel diaries and general statistics has been lacking. In this paper, we analysed geo-referenced Twitter activities from a large number of users in Singapore and neighbouring countries. By combining this data, population statistics and travel diaries and applying clustering techniques, we addressed the detection of activity locations, as well as spatial separation and transitions between these locations. Kernel density estimation performs best to detect activity locations due to the scattered nature of the Twitter data; more activity locations are detected per user than reported in the travel survey. The descriptive analysis shows that determining home locations is more difficult than detecting work locations for most planning zones. Spatial separations between detected activity locations from Twitter data - as reported in a travel survey and captured by public transport smart card data - are mostly similarly distributed, but also show relevant differences for very short and very long distances. This also holds for the transitions between zones. Whether the differences between Twitter data and other data sources stem from differences in the population sub-sample, clustering methodology, or whether social networks are being used significantly more at specific locations must be determined by further research. Despite these shortcomings, location-based social network data offers a promising data source for insights into activity locations and mobility patterns, especially for regions where travel survey data is not readily available.
1
1
0
0
0
0
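A hedged sketch of activity-location detection by mode-seeking on a kernel density estimate. MeanShift is used here as a convenient KDE-mode stand-in for the paper's clustering procedure, and the bandwidth (in degrees) and the synthetic points are assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift

# lon/lat of one user's geo-referenced posts (synthetic example)
rng = np.random.default_rng(1)
home = rng.normal([103.85, 1.29], 0.002, size=(60, 2))   # scatter near one place
work = rng.normal([103.75, 1.32], 0.002, size=(40, 2))   # scatter near another
points = np.vstack([home, work])

ms = MeanShift(bandwidth=0.01).fit(points)               # modes of the KDE
print("detected activity locations:\n", ms.cluster_centers_)
print("visits per location:", np.bincount(ms.labels_))
```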
Generalized Internal Boundaries (GIB)
Representing large-scale motions and topological changes in the finite volume (FV) framework, while at the same time preserving the accuracy of the numerical solution, is difficult. In this paper, we present a robust, highly efficient method designed to achieve this capability. The proposed approach conceptually shares many of the characteristics of the cut-cell interface tracking method, but without the need for complex cell splitting/merging operations. The heart of the new technique is to align existing mesh facets with the geometry to be represented. We then modify the matrix contributions from these facets such that they are represented in an identical fashion to traditional boundary conditions. The collection of such faces is named a Generalised Internal Boundary (GIB). In order to introduce motion into the system, we rely on the classical ALE (Arbitrary Lagrangian-Eulerian) approach, but with the caveat that the non-time-dependent motion of elements instantaneously crossing the interface is handled separately from the time-dependent component. The new methodology is validated through comparison with: a) a body-fitted grid simulation of an oscillating two-dimensional cylinder and b) experimental results of a butterfly valve.
1
1
0
0
0
0
Reducing Certification Granularity to Increase Adaptability of Avionics Software
A strong certification process is required to ensure the safety of airplanes, and more specifically the robustness of avionics applications. To implement this process, the development of avionics software must follow long and costly procedures. Most of these procedures have to be re-executed each time the software is modified. In this paper, we propose a framework to reduce the cost and time impact of a software modification. With this new approach, the piece of software likely to change is isolated from the rest of the application, so it can be certified independently. This helps the system integrator to adapt an avionics application to the specificities of the target airplane, without the need for a new certification of the application.
1
0
0
0
0
0
Evolutionary multiplayer games on graphs with edge diversity
Evolutionary game dynamics in structured populations has been extensively explored in past decades. However, most previous studies assume that payoffs of individuals are fully determined by the strategic behaviors of interacting parties and social ties between them only serve as the indicator of the existence of interactions. This assumption neglects important information carried by inter-personal social ties such as genetic similarity, geographic proximity, and social closeness, which may crucially affect the outcome of interactions. To model these situations, we present a framework of evolutionary multiplayer games on graphs with edge diversity, where different types of edges describe diverse social ties. Strategic behaviors together with social ties determine the resulting payoffs of interactants. Under weak selection, we provide a general formula to predict the success of one behavior over the other. We apply this formula to various examples which cannot be dealt with using previous models, including the division of labor and relationship- or edge-dependent games. We find that labor division facilitates collective cooperation by decomposing a many-player game into several games of smaller sizes. The evolutionary process based on relationship-dependent games can be approximated by interactions under a transformed and unified game. Our work stresses the importance of social ties and provides effective methods to reduce the computational complexity of analyzing the evolution of realistic systems.
0
0
0
0
1
0
Robust Consensus for Multi-Agent Systems Communicating over Stochastic Uncertain Networks
In this paper, we study the robust consensus problem for a set of discrete-time linear agents coordinating over an uncertain communication network; the goal is to achieve consensus against the transmission errors and noise resulting from the information exchange between the agents. We model the network by means of communication links subject to multiplicative stochastic uncertainties, which can describe packet dropout, random delay, and fading phenomena. Different communication topologies, such as undirected graphs and leader-follower graphs, are considered. We derive sufficient conditions for robust consensus in the mean square sense. These results unveil intrinsic constraints on consensus attainment imposed by the network synchronizability, the unstable agent dynamics, and the channel uncertainty variances. Consensus protocols are designed based on the state information transmitted over the uncertain channels, by solving a modified algebraic Riccati equation.
1
0
0
0
0
0
Fractional Topological Elasticity and Fracton Order
We analyze the "higher rank" gauge theories, that capture some of the phenomenology of the Fracton order. It is shown that these theories lose gauge invariance when arbitrarily weak and smooth curvature is introduced. We propose a resolution to this problem by introducing a theory invariant under area-preserving diffeomorphisms, which reduce to the "higher rank" gauge transformations upon linearization around a flat background. The proposed theory is \emph{geometric} in nature and is interpreted as a theory of \emph{fractional topological elasticity}. This theory exhibits the Fracton phenomenology. We explore the conservation laws, topological excitations, linear response, various kinematical constraints, and canonical structure of the theory. Finally, we emphasize that the very structure of Riemann-Cartan geometry, which we use to formulate the theory, encodes the Fracton phenomenology, suggesting that the Fracton order itself is \emph{geometric} in nature.
0
1
0
0
0
0
Perturbing Eisenstein polynomials over local fields
Let $K$ be a local field whose residue field has characteristic $p$ and let $L/K$ be a finite separable totally ramified extension. Let $\pi_L$ be a uniformizer for $L$ and let $f(X)$ be the minimum polynomial for $\pi_L$ over $K$. Suppose $\tilde{\pi}_L$ is another uniformizer for $L$ such that $\tilde{\pi}_L\equiv\pi_L+r\pi_L^{\ell+1} \pmod{\pi_L^{\ell+2}}$ for some $\ell\ge1$ and $r\in O_K$. Let $\tilde{f}(X)$ be the minimum polynomial for $\tilde{\pi}_L$ over $K$. In this paper we give congruences for the coefficients of $\tilde{f}(X)$ in terms of $r$ and the coefficients of $f(X)$. These congruences improve and extend work of Krasner.
0
0
1
0
0
0
Galactic Orbits of Globular Clusters in the Region of the Galactic Bulge
Galactic orbits have been constructed over long time intervals for ten globular clusters located near the Galactic center. A model with an axially symmetric gravitational potential for the Galaxy was initially applied, after which a non-axially symmetric potential corresponding to the central bar was added. Variations in the trajectories of all these globular clusters in the XY plane due to the influence of the bar were detected. These were greatest for the cluster Terzan 4 in the meridional (RZ) plane. The globular clusters Terzan 1, Terzan 2, Terzan 4, Terzan 9, NGC 6522, and NGC 6558 always remained within the Galactic bulge, no farther than 4 kpc from the Galactic center.
0
1
0
0
0
0
Streamlines for Motion Planning in Underwater Currents
Motion planning for underwater vehicles must consider the effect of ocean currents. We present an efficient method to compute reachability and cost between sample points in sampling-based motion planning that supports long-range planning over hundreds of kilometres in complicated flows. The idea is to search a reduced space of control inputs that consists of stream functions whose level sets, or streamlines, optimally connect two given points. Such stream functions are generated by superimposing a control input onto the underlying current flow. A streamline represents the resulting path that a vehicle would follow as it is carried along by the current given that control input. We provide rigorous analysis that shows how our method avoids exhaustive search of the control space, and demonstrate simulated examples in complicated flows, including a traversal along the east coast of Australia between Sydney and Brisbane using actual current predictions.
1
0
0
0
0
0
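A hedged sketch of the core mechanism: superimpose a uniform control flow onto an ambient current via stream functions and integrate the resulting trajectory. For a stream function $\psi$, the velocity is $u = \partial\psi/\partial y$, $v = -\partial\psi/\partial x$, so in steady flow the path follows a level set of $\psi$. The cellular current, control magnitude, and integrator settings are assumptions; the paper's optimal selection of the control stream function is not reproduced.

```python
import numpy as np

def velocity(p, U=(0.3, 0.0), A=1.0, k=0.5):
    x, y = p
    # ambient current: psi_c = (A/k) * cos(k x) * cos(k y)  (cellular flow)
    u_c = -A * np.cos(k * x) * np.sin(k * y)
    v_c = A * np.sin(k * x) * np.cos(k * y)
    # uniform control flow (U_x, U_y) has psi_u = U_x * y - U_y * x
    return np.array([u_c + U[0], v_c + U[1]])

def rk4_path(p0, dt=0.05, steps=2000):
    p, path = np.array(p0, float), [np.array(p0, float)]
    for _ in range(steps):
        k1 = velocity(p); k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2); k4 = velocity(p + dt * k3)
        p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)

print(rk4_path([0.0, 0.0])[-1])   # endpoint after drifting with the combined flow
```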
Eddington-Limited Accretion in z~2 WISE-selected Hot, Dust-Obscured Galaxies
Hot, Dust-Obscured Galaxies, or "Hot DOGs", are a rare, dusty, hyperluminous galaxy population discovered by the WISE mission. Predominantly at redshifts 2-3, they include the most luminous known galaxies in the universe. Their high luminosities likely come from accretion onto highly obscured supermassive black holes (SMBHs). We have conducted a pilot survey to measure the SMBH masses of five z~2 Hot DOGs via broad H_alpha emission lines, using Keck/MOSFIRE and Gemini/FLAMINGOS-2. We detect broad H_alpha emission in all five Hot DOGs. We find substantial corresponding SMBH masses for these Hot DOGs (~ 10^{9} M_sun), and their derived Eddington ratios are close to unity. These z~2 Hot DOGs are the most luminous AGNs at given BH masses, suggesting they are accreting at the maximum rates for their BHs. A similar property is found for known z~6 quasars. Our results are consistent with scenarios in which Hot DOGs represent a transitional, high-accretion phase between obscured and unobscured quasars. Hot DOGs may mark a special evolutionary stage before the red quasar and optical quasar phases, and they may be present at other cosmic epochs.
0
1
0
0
0
0
Boltzmann Transport in Nanostructures as a Friction Effect
Surface scattering is the key limiting factor to thermal transport in dielectric crystals as the length scales are reduced or when temperature is lowered. To explain this phenomenon, it is commonly assumed that the mean free paths of heat carriers are bound by the crystal size and that thermal conductivity is reduced in a manner proportional to such mean free paths. We show here that these conclusions rely on simplifying assumptions and approximated transport models. Instead, starting from the linearized Boltzmann transport equation in the relaxon basis, we show how the problem can be reduced to a set of decoupled linear differential equations. Then, the heat flow can be interpreted as a hydrodynamic phenomenon, with the relaxon gas being slowed down in proximity of a surface by friction effects, similar to the flux of a viscous fluid in a pipe. As an example, we study a ribbon and a trench of monolayer molybdenum disulphide, describing the procedure to reconstruct the temperature and thermal conductivity profile in the sample interior and showing how to estimate the effect of nanostructuring. The approach is general and could be extended to other transport carriers, such as electrons, or extended to materials of higher dimensionality and to different geometries, such as thin films.
0
1
0
0
0
0
Agile Software Engineering and Systems Engineering at SKA Scale
Systems Engineering (SE) is the set of processes and documentation required for successfully realising large-scale engineering projects, but the classical approach is not a good fit for software-intensive projects, especially when the needs of the different stakeholders are not fully known from the beginning, and requirement priorities might change. The SKA is the ultimate software-enabled telescope, with enormous amounts of computing hardware and software required to perform its data reduction. We give an overview of the system and software engineering processes in the SKA1 development, and the tension between classical and agile SE.
1
1
0
0
0
0
Prime geodesic theorem of Gallagher type
We reduce the exponent in the error term of the prime geodesic theorem for compact Riemann surfaces from $\frac{3}{4}$ to $\frac{7}{10}$ outside a set of finite logarithmic measure.
0
0
1
0
0
0
PriMaL: A Privacy-Preserving Machine Learning Method for Event Detection in Distributed Sensor Networks
This paper introduces PriMaL, a general PRIvacy-preserving MAchine-Learning method for reducing the privacy cost of information transmitted through a network. Distributed sensor networks are often used for automated classification and detection of abnormal events in high-stakes situations, e.g. fire in buildings, earthquakes, or crowd disasters. Such networks might transmit privacy-sensitive information, e.g. GPS location of smartphones, which might be disclosed if the network is compromised. Privacy concerns might slow down the adoption of the technology, in particular in the scenario of social sensing where participation is voluntary, thus solutions are needed which improve privacy without compromising on the event detection accuracy. PriMaL is implemented as a machine-learning layer that works on top of an existing event detection algorithm. Experiments are run in a general simulation framework, for several network topologies and parameter values. The privacy footprint of state-of-the-art event detection algorithms is compared within the proposed framework. Results show that PriMaL is able to reduce the privacy cost of a distributed event detection algorithm below that of the corresponding centralized algorithm, within the bounds of some assumptions about the protocol. Moreover the performance of the distributed algorithm is not statistically worse than that of the centralized algorithm.
1
0
0
0
0
0
Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU
The online problem of computing the top eigenvector is fundamental to machine learning. In both adversarial and stochastic settings, previous results (such as matrix multiplicative weight update, follow the regularized leader, follow the compressed leader, block power method) either achieve optimal regret but run slowly, or run fast at the expense of losing a $\sqrt{d}$ factor in total regret where $d$ is the matrix dimension. We propose a $\textit{follow-the-compressed-leader (FTCL)}$ framework which achieves optimal regret without sacrificing the running time. Our idea is to "compress" the matrix strategy to dimension 3 in the adversarial setting, or dimension 1 in the stochastic setting. These respectively resolve two open questions regarding the design of optimal and efficient algorithms for the online eigenvector problem.
1
0
1
1
0
0
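A hedged sketch of the baseline the FTCL framework speeds up: the matrix multiplicative weights update for the online eigenvector problem, which plays the density matrix $W_t \propto \exp(\eta \sum_{s<t} A_s)$ and receives gain $\langle W_t, A_t\rangle$. The learning rate and random gain matrices are assumptions; the compression to dimension 3 or 1 is not shown.

```python
import numpy as np

def mmwu(gain_matrices, eta=0.1):
    d = gain_matrices[0].shape[0]
    S = np.zeros((d, d))          # running sum of past gain matrices
    total = 0.0
    for A in gain_matrices:
        # matrix exponential of a symmetric matrix via eigendecomposition;
        # the spectral shift cancels after trace normalization
        vals, vecs = np.linalg.eigh(eta * S)
        expS = (vecs * np.exp(vals - vals.max())) @ vecs.T
        W = expS / np.trace(expS)
        total += np.trace(W @ A)  # gain of the mixed strategy
        S += A
    # comparator: best fixed unit vector in hindsight = top eigenvalue of S
    best = np.linalg.eigvalsh(S)[-1]
    return total, best

rng = np.random.default_rng(0)
As = [(lambda M: (M + M.T) / 2)(rng.normal(size=(5, 5))) for _ in range(200)]
total, best = mmwu(As)
print(f"MMWU gain {total:.1f} vs best fixed vector {best:.1f}")
```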
Solving satisfiability using inclusion-exclusion
Using Maple, we implement a SAT solver based on the principle of inclusion-exclusion and the Bonferroni inequalities. Using randomly generated input, we investigate the performance of our solver as a function of the number of variables and number of clauses. We also test it against Maple's built-in tautology procedure. Finally, we implement the Lovász local lemma with Maple and discuss its applicability to SAT.
1
0
0
0
0
0
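A hedged Python reconstruction of the counting idea (the paper's own implementation is in Maple and is not reproduced here). A clause is violated iff all its literals are false, so summing $(-1)^{|S|} \cdot 2^{\#\text{free vars}}$ over consistent clause subsets $S$ gives the exact model count; the cost is exponential in the number of clauses, which is what the Bonferroni truncation mitigates.

```python
from itertools import combinations

def count_models(n_vars, clauses):
    """clauses: lists of nonzero ints, e.g. [1, -2] means (x1 or not x2)."""
    total = 0
    for k in range(len(clauses) + 1):
        for subset in combinations(clauses, k):
            forced = {}                  # var -> value forced to violate clause
            ok = True
            for clause in subset:
                for lit in clause:
                    v, val = abs(lit), (lit < 0)    # literal must be false
                    if forced.setdefault(v, val) != val:
                        ok = False       # contradictory requirements
                        break
                if not ok:
                    break
            if ok:
                total += (-1) ** k * 2 ** (n_vars - len(forced))
    return total

# (x1 or x2) and (not x1 or x3): 4 of the 8 assignments satisfy both
print(count_models(3, [[1, 2], [-1, 3]]))
```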
Isometric immersions into manifolds with metallic structures
We consider submanifolds of Riemannian manifolds with metallic structures. We obtain some new results for hypersurfaces in these spaces and we express the fundamental theorem of submanifolds of product spaces in terms of metallic structures. Moreover, we define new structures called complex metallic structures. We show that these structures are linked with complex structures. Then, we consider submanifolds of Riemannian manifolds endowed with such structures, with a focus on invariant submanifolds and hypersurfaces. We also express, in particular, the fundamental theorem of submanifolds of complex space forms in terms of complex metallic structures.
0
0
1
0
0
0
Demonstration of an ac Josephson junction laser
Superconducting electronic devices have re-emerged as contenders for both classical and quantum computing due to their fast operation speeds, low dissipation and long coherence times. An ultimate demonstration of coherence is lasing. We use one of the fundamental aspects of superconductivity, the ac Josephson effect, to demonstrate a laser made from a Josephson junction strongly coupled to a multi-mode superconducting cavity. A dc voltage bias to the junction provides a source of microwave photons, while the circuit's nonlinearity allows for efficient down-conversion of higher order Josephson frequencies down to the cavity's fundamental mode. The simple fabrication and operation allows for easy integration with a range of quantum devices, allowing for efficient on-chip generation of coherent microwave photons at low temperatures.
0
1
0
0
0
0
Integrable systems, symmetries and quantization
These notes correspond to a mini-course given at the Poisson 2016 conference in Geneva. Starting from classical integrable systems in the sense of Liouville, we explore the notion of non-degenerate singularity and expose recent research in connection with semi-toric systems. The quantum and semiclassical counterpart will also be presented, in the viewpoint of the inverse question: from the quantum mechanical spectrum, can you recover the classical system?
0
1
1
0
0
0
Detecting Friedel oscillations in ultracold Fermi gases
Investigating Friedel oscillations in ultracold gases would complement the studies performed on solid state samples with scanning-tunneling microscopes. In atomic quantum gases interactions and external potentials can be tuned freely and the inherently slower dynamics allow access to non-equilibrium dynamics following a potential or interaction quench. Here, we examine how Friedel oscillations can be observed in current ultracold gas experiments under realistic conditions. To this aim we numerically calculate the amplitude of the Friedel oscillations which a potential barrier provokes in a 1D Fermi gas and compare it to the expected atomic and photonic shot noise in a density measurement. We find that to detect Friedel oscillations the signal from several thousand one-dimensional systems has to be averaged. However, as up to 100 parallel one-dimensional systems can be prepared in a single run with present experiments, averaging over about 100 images is sufficient.
0
1
0
0
0
0
Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems
While variational methods have been among the most powerful tools for solving linear inverse problems in imaging, deep (convolutional) neural networks have recently taken the lead in many challenging benchmarks. A remaining drawback of deep learning approaches is their requirement for an expensive retraining whenever the specific problem, the noise level, noise type, or desired measure of fidelity changes. On the contrary, variational methods have a plug-and-play nature as they usually consist of separate data fidelity and regularization terms. In this paper we study the possibility of replacing the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network. The latter therefore serves as an implicit natural image prior, while the data term can still be chosen independently. Using a fixed denoising neural network in exemplary problems of image deconvolution with different blur kernels and image demosaicking, we obtain state-of-the-art reconstruction results. These indicate the high generalizability of our approach and a reduction of the need for problem-specific training. Additionally, we discuss novel results on the analysis of possible optimization algorithms to incorporate the network into, as well as the choices of algorithm parameters and their relation to the noise level the neural network is trained on.
1
0
0
0
0
0
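A hedged sketch of the plug-and-play idea described above: proximal gradient descent on the data term $\tfrac12\|Ax-y\|^2$, with the regularizer's proximal operator replaced by an off-the-shelf denoiser. A Gaussian filter stands in for the trained denoising network, and the toy 1-D deconvolution setup is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_proximal_gradient(y, blur, tau=1.0, sigma=0.8, iters=50):
    """Deblur y, where blur(x) applies the (self-adjoint) forward operator A."""
    x = y.copy()
    for _ in range(iters):
        grad = blur(blur(x) - y)                    # A^T (A x - y)
        x = gaussian_filter(x - tau * grad, sigma)  # denoiser replaces the prox
    return x

# toy 1-D deconvolution: A is a small Gaussian blur
rng = np.random.default_rng(0)
truth = np.zeros(128); truth[40:60] = 1.0
A = lambda v: gaussian_filter(v, 2.0)
y = A(truth) + 0.01 * rng.normal(size=truth.size)
x_hat = pnp_proximal_gradient(y, A)
print("data residual:", np.linalg.norm(A(x_hat) - y))
```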
Heated-Up Softmax Embedding
Metric learning aims at learning a distance which is consistent with the semantic meaning of the samples. The problem is generally solved by learning an embedding for each sample such that the embeddings of samples of the same category are compact while the embeddings of samples of different categories are spread-out in the feature space. We study the features extracted from the second last layer of a deep neural network based classifier trained with the cross entropy loss on top of the softmax layer. We show that training classifiers with different temperature values of softmax function leads to features with different levels of compactness. Leveraging these insights, we propose a "heating-up" strategy to train a classifier with increasing temperatures, leading the corresponding embeddings to achieve state-of-the-art performance on a variety of metric learning benchmarks.
0
0
0
1
0
0
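A hedged sketch of the temperature-scaled softmax at the heart of the method: larger T flattens the distribution, which changes the gradient pressure on embedding compactness. The schedule shown is an illustrative placeholder, not the paper's exact heating-up protocol.

```python
import numpy as np

def softmax_T(logits, T):
    """Softmax with temperature T; T -> 0 approaches argmax, large T flattens."""
    z = logits / T
    z -= z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([4.0, 2.0, 1.0])
for T in (0.5, 1.0, 4.0):        # increasing temperature over training
    print(f"T={T}: {np.round(softmax_T(logits, T), 3)}")
```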
Fast Matrix Inversion and Determinant Computation for Polarimetric Synthetic Aperture Radar
This paper introduces a fast algorithm for simultaneous inversion and determinant computation of small-sized matrices in the context of fully Polarimetric Synthetic Aperture Radar (PolSAR) image processing and analysis. The proposed fast algorithm is based on the computation of the adjoint matrix and the symmetry of the input matrix. The algorithm is implemented in a general purpose graphical processing unit (GPGPU) and compared to the usual approach based on Cholesky factorization. The assessment with simulated observations and data from an actual PolSAR sensor shows a speedup factor of about two when compared to the usual Cholesky factorization. Moreover, the expressions provided here can be implemented in any platform.
1
0
0
1
0
0
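A hedged sketch of the adjoint (adjugate) route for a 3x3 matrix: determinant and inverse come from cofactors in closed form, with no factorization, which is what makes per-pixel batches GPU-friendly. The Hermitian symmetry the paper exploits would cut this work further; the example coherency-like matrix is an assumption.

```python
import numpy as np

def inv_det_3x3(M):
    """Closed-form inverse and determinant of a 3x3 matrix via cofactors."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    A = e * i - f * h
    B = -(d * i - f * g)
    C = d * h - e * g
    det = a * A + b * B + c * C               # cofactor expansion along row 0
    adj = np.array([[A, -(b * i - c * h), b * f - c * e],
                    [B, a * i - c * g, -(a * f - c * d)],
                    [C, -(a * h - b * g), a * e - b * d]])
    return adj / det, det

T = np.array([[2.0, 0.3 + 0.1j, 0.1],
              [0.3 - 0.1j, 1.5, 0.2j],
              [0.1, -0.2j, 1.0]])             # Hermitian, coherency-matrix-like
Tinv, det = inv_det_3x3(T)
print("max |T @ Tinv - I|:", np.abs(T @ Tinv - np.eye(3)).max())
```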
The imprint of neutrinos on clustering in redshift-space
(abridged) We investigate the signatures left by the cosmic neutrino background on the clustering of matter, CDM+baryons and halos in redshift-space using a set of more than 1000 N-body and hydrodynamical simulations with massless and massive neutrinos. We find that the effect neutrinos induce on the clustering of CDM+baryons in redshift-space on small scales is almost entirely due to the change in $\sigma_8$. Neutrinos imprint a characteristic signature in the quadrupole of the matter (CDM+baryons+neutrinos) field on small scales, that can be used to disentangle the effect of $\sigma_8$ and $M_\nu$. We show that the effect of neutrinos on the clustering of halos is very different, on all scales, to the one induced by $\sigma_8$. We find that the effect of neutrinos on the growth rate of CDM+baryons ranges from $\sim0.3\%$ to $2\%$ on scales $k\in[0.01, 0.5]~h{\rm Mpc}^{-1}$ for neutrinos with masses $M_\nu \leqslant 0.15$ eV. We compute the bias between the momentum of halos and the momentum of CDM+baryons and find it to be 1 on large scales for all models with massless and massive neutrinos considered. This points towards a velocity bias between halos and total matter on large scales that is important to account for in order to extract unbiased neutrino information from velocity/momentum surveys such as kSZ observations. We show that baryonic effects can affect the clustering of matter and CDM+baryons in redshift-space by up to a few percent down to $k=0.5~h{\rm Mpc}^{-1}$. We find that hydrodynamics and astrophysical processes, as implemented in our simulations, only distort the relative effect that neutrinos induce on the anisotropic clustering of matter, CDM+baryons and halos in redshift-space by less than $1\%$. Thus, the effect of neutrinos in the fully non-linear regime can be written as a transfer function with very weak dependence on astrophysics.
0
1
0
0
0
0
Quantum Privacy-Preserving Data Analytics
Data analytics (such as association rule mining and decision tree mining) can discover useful statistical knowledge from a big data set. But protecting the privacy of the data provider and the data user in the process of analytics is a serious issue. Usually, the privacy of both parties cannot be fully protected simultaneously by a classical algorithm. In this paper, we present a quantum protocol for data mining that can much better protect privacy than the known classical algorithms: (1) if both the data provider and the data user are honest, the data user can know nothing about the database except the statistical results, and the data provider can get nearly no information about the results mined by the data user; (2) if the data user is dishonest and tries to disclose private information of the other, she/he will be detected with a high probability; (3) if the data provider tries to disclose the privacy of the data user, she/he cannot get any useful information since the data user hides his privacy among noises.
1
0
0
0
0
0
Magnetic Properties of Transition-Metal Adsorbed ot-Phosphorus Monolayer: A First-principles and Monte Carlo Study
Using first-principles and Monte Carlo methods, we systematically study the magnetic properties of monolayer octagonal-tetragonal phosphorus with 3d transition-metal (TM) adatoms. Different from the puckered hexagonal black phosphorus monolayer (phosphorene or $\alpha$-P), the octagonal-tetragonal phase of 2D phosphorus (named ot-P or $\epsilon$-P in this article) is buckled with an octagon-tetragon structure. Our calculations show that all TMs, except the closed-shell Zn atom, are able to strongly bind onto monolayer ot-P with significant binding energies. Local magnetic moments (up to 6 $\mu$B) on adatoms of Sc, Ti, V, Cr, Mn, Fe and Co originate from the exchange and crystal-field splitting of TM 3d orbitals. The magnetic coupling between localized magnetic states of adatoms depends on adatomic distances and directions. Lastly, a uniform magnetic order is investigated to screen for two-dimensional dilute ferromagnets with high Curie temperature for spintronics applications. It is found that ot-P with V atoms homogeneously adsorbed at the centre of octagons with a concentration of 5% has the most stable ferromagnetic ground state. Its Curie temperature is estimated to be 173 K using the Monte Carlo method.
0
1
0
0
0
0
A bootstrap for the number of $\mathbb{F}_{q^r}$-rational points on a curve over $\mathbb{F}_q$
In this note we present a fast algorithm that finds, for any $r$, the number $N_r$ of $\mathbb{F}_{q^r}$-rational points on a smooth absolutely irreducible curve $C$ defined over $\mathbb{F}_{q}$, assuming that we know $N_1,\cdots,N_g$, where $g$ is the genus of $C$. The proof of its validity is given in detail and its workings are illustrated with several examples. In an Appendix we list the Python function in which we have implemented the algorithm together with other routines used in the examples.
0
0
1
0
0
0
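A hedged, independent reconstruction of the bootstrap (the paper's own Python code is in its Appendix and is not reproduced here). From $N_1,\dots,N_g$ one forms the power sums $S_r = q^r + 1 - N_r$ of the Frobenius eigenvalues, recovers the first half of the zeta numerator $P(T) = \sum b_j T^j$ via Newton's identities, completes $b_{g+1},\dots,b_{2g}$ with the functional equation $b_{2g-j} = q^{g-j} b_j$, and then extends $S_r$ (hence $N_r$) to any $r$.

```python
from fractions import Fraction

def point_counts(q, g, N_first, r_max):
    """N_first = [N_1, ..., N_g]; returns [N_1, ..., N_r_max]."""
    S = [None] + [q ** r + 1 - N_first[r - 1] for r in range(1, g + 1)]
    b = [Fraction(1)] + [Fraction(0)] * (2 * g)
    # Newton's identities: S_r + b_1 S_{r-1} + ... + b_{r-1} S_1 + r b_r = 0
    for r in range(1, g + 1):
        b[r] = -Fraction(1, r) * (S[r] + sum(b[k] * S[r - k] for k in range(1, r)))
    for j in range(0, g):                 # functional equation fills the top half
        b[2 * g - j] = q ** (g - j) * b[j]
    for r in range(g + 1, r_max + 1):     # extend power sums; b_k = 0 beyond 2g
        rb_r = r * b[r] if r <= 2 * g else 0
        S.append(-rb_r - sum(b[k] * S[r - k] for k in range(1, min(r, 2 * g + 1))))
    return [q ** r + 1 - int(S[r]) for r in range(1, r_max + 1)]

# elliptic curve (g = 1) over F_5 with N_1 = 9, i.e. trace a = 5 + 1 - 9 = -3
print(point_counts(5, 1, [9], 4))   # -> [9, 27, 108, 675]
```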
Doping anatase TiO2 with group V-b and VI-b transition metal atoms: a hybrid functional first-principles study
We investigate the role of transition metal atoms of group V-b (V, Nb, and Ta) and VI-b (Cr, Mo, and W) as n- or p-type dopants in anatase TiO2 using thermodynamic principles and density functional theory with the Heyd-Scuseria-Ernzerhof HSE06 hybrid functional. The HSE06 functional provides a realistic value for the band gap, which ensures a correct classification of dopants as shallow or deep donors or acceptors. Defect formation energies and thermodynamic transition levels are calculated taking into account the constraints imposed by the stability of TiO2 and the solubility limit of the impurities. Nb, Ta, W and Mo are identified as shallow donors. Although W provides two electrons, Nb and Ta show a considerably lower formation energy, in particular under O-poor conditions. Mo donates in principle one electron, but under specific conditions can turn into a double donor. V impurities are deep donors and Cr shows up as an amphoteric defect, thereby acting as an electron trapping center in n-type TiO2 especially under O-rich conditions. A comparison with the available experimental data yields excellent agreement.
0
1
0
0
0
0
Patch-planting spin-glass solution for benchmarking
We introduce an algorithm to generate (not solve) spin-glass instances with planted solutions of arbitrary size and structure. First, a set of small problem patches with open boundaries is solved either exactly or with a heuristic, and then the individual patches are stitched together to create a large problem with a known planted solution. Because in these problems frustration is typically smaller than in random problems, we first assess the typical computational complexity of the individual patches using population annealing Monte Carlo, and introduce an approach that allows one to fine-tune the typical computational complexity of the patch-planted system. The scaling of the typical computational complexity of these planted instances with various numbers of patches and patch sizes is investigated and compared to random instances.
0
1
0
0
0
0
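A hedged toy sketch of patch planting for an Ising chain of small patches: brute-force each patch's ground state, then choose every stitching bond so the planted configuration satisfies it. Since each patch energy and each stitch bond is then individually minimal, the concatenation is a global ground state by construction. The chain geometry and patch sizes are assumptions; the paper works with general structures and population annealing.

```python
import numpy as np
from itertools import product

def patch_ground_state(J):
    """Brute-force ground state of a chain patch with couplings J (length n-1)."""
    n = len(J) + 1
    best_E, best_s = np.inf, None
    for s in product((-1, 1), repeat=n):
        E = -sum(J[i] * s[i] * s[i + 1] for i in range(n - 1))
        if E < best_E:
            best_E, best_s = E, np.array(s)
    return best_s

rng = np.random.default_rng(0)
patches = [rng.choice([-1.0, 1.0], size=7) for _ in range(4)]  # 4 patches of 8 spins
planted = [patch_ground_state(J) for J in patches]

# stitch bond between the last spin of one patch and the first of the next:
# the sign is chosen so the planted spins satisfy it (energy -|J| is minimal)
stitches = [planted[p][-1] * planted[p + 1][0] for p in range(3)]
solution = np.concatenate(planted)
print("planted ground state:", solution)
print("stitch couplings:", stitches)
```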
Evidence for the formation of comet 67P/Churyumov-Gerasimenko through gravitational collapse of a bound clump of pebbles
The processes that led to the formation of the planetary bodies in the Solar System are still not fully understood. Using the results obtained with the comprehensive suite of instruments on-board ESA's Rosetta mission, we present evidence that comet 67P/Churyumov-Gerasimenko likely formed through the gentle gravitational collapse of a bound clump of mm-sized dust aggregates ("pebbles"), intermixed with microscopic ice particles. This formation scenario leads to a cometary make-up that is simultaneously compatible with the global porosity, homogeneity, tensile strength, thermal inertia, vertical temperature profiles, sizes and porosities of emitted dust, and the steep increase in water-vapour production rate with decreasing heliocentric distance, measured by the instruments on-board the Rosetta spacecraft and the Philae lander. Our findings suggest that the pebbles observed to be abundant in protoplanetary discs around young stars provide the building material for comets and other minor bodies.
0
1
0
0
0
0
Detection of virial shocks in stacked Fermi-LAT clusters
Galaxy clusters are thought to grow by accreting mass through large-scale, strong, yet elusive, virial shocks. Such a shock is expected to accelerate relativistic electrons, thus generating a spectrally-flat leptonic virial-ring. However, until now, only the nearby Coma cluster has shown evidence for a $\gamma$-ray virial ring. We stack Fermi-LAT data for the 112 most massive, high latitude, extended clusters, enhancing the ring sensitivity by rescaling clusters to their virial radii and utilizing the expected flat energy spectrum. In addition to a central unresolved, hard signal (detected at the $\sim 5.8\sigma$ confidence level), probably dominated by AGN, we identify (at the $5.8\sigma$ confidence level) a bright, spectrally-flat $\gamma$-ray ring at the expected virial shock position. The ring signal implies that the shock deposits $\sim 0.6\%$ (with an interpretation uncertainty factor $\sim2$) of the thermal energy in relativistic electrons over a Hubble time. This result, consistent with the Coma signal, validates and calibrates the virial shock model, and indicates that the cumulative emission from such shocks significantly contributes to the diffuse extragalactic $\gamma$-ray and low-frequency radio backgrounds.
0
1
0
0
0
0
Closed-form formulae of hyperbolic metamaterial made by stacked hole-array layers working at terahertz or microwave radiation
A metamaterial made of stacked hole-array layers, known as a fishnet metamaterial, behaves as a hyperbolic metamaterial at wavelengths much longer than the hole-array period. However, analytical formulae for the effective parameters of a fishnet metamaterial have not been reported, hindering the design of deep-subwavelength imaging devices using this structure. We report new closed-form formulae for the effective parameters, comprising the anisotropic dispersion relation of a fishnet metamaterial working at terahertz or microwave frequencies. These effective parameters of a fishnet metamaterial are consistent with those obtained by quasi-full solutions using known effective parameters of a hole-array layer working at frequencies below its spoof plasma frequency, with the superlattice period much smaller than the hole-array period. We also theoretically demonstrate deep-subwavelength focusing at {\lambda}/83 using the composite structure of a slit-array layer and a fishnet metamaterial. It is found that the focused intensity inside a fishnet metamaterial is several times larger than that without the fishnet metamaterial, but the transmitted intensity is still restricted by the large-wavevector difference between air and a fishnet metamaterial. Our effective parameters may aid the next-generation deep-subwavelength imaging devices working at terahertz or microwave radiation.
0
1
0
0
0
0
RMPflow: A Computational Graph for Automatic Motion Policy Generation
We develop a novel policy synthesis algorithm, RMPflow, based on geometrically consistent transformations of Riemannian Motion Policies (RMPs). RMPs are a class of reactive motion policies designed to parameterize non-Euclidean behaviors as dynamical systems in intrinsically nonlinear task spaces. Given a set of RMPs designed for individual tasks, RMPflow can consistently combine these local policies to generate an expressive global policy, while simultaneously exploiting sparse structure for computational efficiency. We study the geometric properties of RMPflow and provide sufficient conditions for stability. Finally, we experimentally demonstrate that accounting for the geometry of task policies can simplify classically difficult problems, such as planning through clutter on high-DOF manipulation systems.
1
0
0
0
0
0
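A hedged sketch of the RMP combination ("resolve") step at a single node: child policies $(a_i, M_i)$ are pulled back through their task-map Jacobians $J_i$ and combined into one configuration-space acceleration. Curvature terms ($\dot J \dot q$) are omitted here for brevity; the full RMPflow operator includes them, and the toy policies are assumptions.

```python
import numpy as np

def resolve(policies):
    """policies: list of (J, M, a) with J: task Jacobian, M: metric, a: accel."""
    dim = policies[0][0].shape[1]
    f = np.zeros(dim)
    M_total = np.zeros((dim, dim))
    for J, M, a in policies:
        f += J.T @ M @ a              # pullback of each desired force
        M_total += J.T @ M @ J        # pullback of each metric
    return np.linalg.pinv(M_total) @ f  # metric-weighted combination

# two 1-D task policies acting on a 2-D configuration space
J1, M1, a1 = np.array([[1.0, 0.0]]), np.eye(1), np.array([2.0])
J2, M2, a2 = np.array([[0.0, 1.0]]), 2 * np.eye(1), np.array([-1.0])
print(resolve([(J1, M1, a1), (J2, M2, a2)]))   # -> [ 2. -1.]
```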
Long-range correlations and fractal dynamics in C. elegans: changes with aging and stress
Reduced motor control is one of the most frequent features associated with aging and disease. Nonlinear and fractal analyses have proved to be useful in investigating human physiological alterations with age and disease. Similar findings have not been established for any of the model organisms typically studied by biologists, though. If the physiology of a simpler model organism displays the same characteristics, this fact would open a new research window on the control mechanisms that organisms use to regulate physiological processes during aging and stress. Here, we use a recently introduced animal tracking technology to simultaneously follow tens of Caenorhabditis elegans for several hours and use tools from fractal physiology to quantitatively evaluate the effects of aging and temperature stress on nematode motility. Similarly to human physiological signals, scaling analysis reveals long-range correlations in numerous motility variables, fractal properties in behavioral shifts, and fluctuation dynamics over a wide range of timescales. These properties change as a result of a superposition of age and stress-related adaptive mechanisms that regulate motility.
0
1
0
1
0
0
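A hedged sketch of detrended fluctuation analysis (DFA), one standard scaling analysis for quantifying long-range correlations like those discussed above: the exponent alpha is ~0.5 for uncorrelated noise and >0.5 for persistent long-range correlations. The window sizes and the white-noise test signal are assumptions.

```python
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n
        sq = []
        for w in range(n_win):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))            # fluctuation at scale n
    return np.polyfit(np.log(scales), np.log(F), 1)[0]     # log-log slope

rng = np.random.default_rng(0)
print("white noise alpha ~0.5:", round(dfa_alpha(rng.normal(size=4096)), 2))
```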
A Computational Approach to Extinction Events in Chemical Reaction Networks with Discrete State Spaces
Recent work of M.D. Johnston et al. has produced sufficient conditions on the structure of a chemical reaction network which guarantee that the corresponding discrete state space system exhibits an extinction event. The conditions consist of a series of systems of equalities and inequalities on the edges of a modified reaction network called a domination-expanded reaction network. In this paper, we present a computational implementation of these conditions written in Python and apply the program on examples drawn from the biochemical literature, including a model of polyamine metabolism in mammals and a model of the pentose phosphate pathway in Trypanosoma brucei. We also run the program on 458 models from the European Bioinformatics Institute's BioModels Database and report our results.
0
0
1
0
0
0
Face-to-BMI: Using Computer Vision to Infer Body Mass Index on Social Media
A person's weight status can have profound implications on their life, ranging from mental health, to longevity, to financial income. At the societal level, "fat shaming" and other forms of "sizeism" are a growing concern, while increasing obesity rates are linked to ever raising healthcare costs. For these reasons, researchers from a variety of backgrounds are interested in studying obesity from all angles. To obtain data, traditionally, a person would have to accurately self-report their body-mass index (BMI) or would have to see a doctor to have it measured. In this paper, we show how computer vision can be used to infer a person's BMI from social media images. We hope that our tool, which we release, helps to advance the study of social aspects related to body weight.
1
0
0
0
0
0
Recursive simplex stars
This paper proposes a new method which builds a simplex-based approximation of a $(d-1)$-dimensional manifold $M$ separating a $d$-dimensional compact set into two parts, and an efficient algorithm classifying points according to this approximation. In a first variant, the approximation is made of simplices that are defined in the cubes of a regular grid covering the compact set, from boundary points that approximate the intersection between $M$ and the edges of the cubes. All the simplices defined in a cube share the barycentre of the boundary points located in the cube and include simplices similarly defined in cube facets, and so on recursively. In a second variant, the Kuhn triangulation is used to break the cubes into simplices and the approximation is defined in these simplices from the boundary points computed on their edges, with the same principle. Both the approximation in cubes and in simplices define a separating surface on the whole grid, and classifying a point on one side or the other of this surface requires only a small number (at most $d$) of simple tests. Under some conditions on the definition of the boundary points and on the reach of $M$, for both variants the Hausdorff distance between $M$ and its approximation decreases like $\mathcal{O}(d n_G^{-2})$, where $n_G$ is the number of points on each axis of the grid. The approximation in cubes requires computing fewer boundary points than the approximation in simplices, but the latter is always a manifold and is more accurate for a given value of $n_G$. The paper reports tests of the method when varying $n_G$ and the dimensionality of the space (up to 9).
1
0
0
0
0
0
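A hedged 2-D sketch of the first variant: in each grid cell, boundary points are interpolated on the sign-change edges of a scalar field f (whose zero level set plays the role of M), and the cell's simplices (segments, in 2-D) join each boundary point to the barycentre of the cell's boundary points. The grid, field, and resolution are assumptions, and the classification tests are not shown.

```python
import numpy as np

def cell_segments(f, x0, y0, h):
    """Segments approximating {f = 0} inside the cell [x0, x0+h] x [y0, y0+h]."""
    corners = [(x0, y0), (x0 + h, y0), (x0 + h, y0 + h), (x0, y0 + h)]
    vals = [f(x, y) for x, y in corners]
    pts = []
    for i in range(4):                        # walk the 4 cell edges
        (xa, ya), (xb, yb) = corners[i], corners[(i + 1) % 4]
        fa, fb = vals[i], vals[(i + 1) % 4]
        if fa * fb < 0:                       # sign change => boundary point
            t = fa / (fa - fb)                # linear interpolation along edge
            pts.append(((1 - t) * xa + t * xb, (1 - t) * ya + t * yb))
    if len(pts) < 2:
        return []
    bary = np.mean(pts, axis=0)               # shared vertex of the simplex star
    return [(p, tuple(bary)) for p in pts]

# approximate the unit circle f = 0 on a coarse grid
f = lambda x, y: x ** 2 + y ** 2 - 1.0
h, segs = 0.25, []
for i in range(-5, 5):
    for j in range(-5, 5):
        segs += cell_segments(f, i * h, j * h, h)
print(f"{len(segs)} segments approximate the circle")
```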