Dataset schema:
title: string (length 7 to 239)
abstract: string (length 7 to 2.76k)
cs: int64 (0 or 1)
phy: int64 (0 or 1)
math: int64 (0 or 1)
stat: int64 (0 or 1)
quantitative biology: int64 (0 or 1)
quantitative finance: int64 (0 or 1)
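Read back into code, the schema above corresponds to a table with two text columns and six binary label columns. A minimal sketch using pandas (the sample row is abridged from the first record below; everything else is illustrative):

```python
import pandas as pd

# Column schema recovered from the header: two string fields, six 0/1 labels.
COLUMNS = ["title", "abstract", "cs", "phy", "math", "stat",
           "quantitative biology", "quantitative finance"]

# One illustrative record from the dump (labels: phy = 1, all others 0).
rows = [(
    "Diversified essential properties in halogenated graphenes",
    "The significant halogenation effects on the essential properties ...",
    0, 1, 0, 0, 0, 0,
)]

df = pd.DataFrame(rows, columns=COLUMNS)

# The six label columns form a multi-label target matrix (papers may carry
# several topic flags at once, as some records below do).
labels = df[COLUMNS[2:]].to_numpy()
```

Note that the labels are not mutually exclusive, so this is a multi-label rather than a multi-class target.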
Diversified essential properties in halogenated graphenes
The significant halogenation effects on the essential properties of graphene are investigated by the first-principles method. The geometric structures, electronic properties, and magnetic configurations are greatly diversified under the various halogen adsorptions. Fluorination, with its strong multi-orbital chemical bonding, can create the buckled graphene structure, while the other halogenations do not change the planar {\sigma} bonding in the presence of single-orbital hybridization. Electronic structures consist of the carbon-, adatom- and (carbon, adatom)-dominated energy bands. All halogenated graphenes are hole-doped metals, except that fluorinated systems are middle-gap semiconductors at sufficiently high concentrations. Moreover, metallic ferromagnetism is revealed in certain adatom distributions. The unusual hybridization-induced features are clearly evidenced in many van Hove singularities of the density of states. The structure- and adatom-enriched essential properties are compared with the measured results, and potential applications are also discussed.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantifying genuine multipartite correlations and their pattern complexity
We propose an information-theoretic framework to quantify multipartite correlations in classical and quantum systems, answering questions such as: what is the amount of seven-partite correlations in a given state of ten particles? We identify measures of genuine multipartite correlations, i.e. statistical dependencies which cannot be ascribed to bipartite correlations, satisfying a set of desirable properties. Inspired by ideas developed in complexity science, we then introduce the concept of weaving to classify states which display different correlation patterns, but cannot be distinguished by correlation measures. The weaving of a state is defined as the weighted sum of correlations of every order. Weaving measures are good descriptors of the complexity of correlation structures in multipartite systems.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Pulsating low-mass white dwarfs in the frame of new evolutionary sequences: IV. The secular rate of period change
We present a theoretical assessment of the expected temporal rates of change of periods ($\dot{\Pi}$) for low-mass ($M_{\star}/M_{\sun} \lesssim 0.45$) and extremely low-mass (ELM, $M_{\star}/M_{\sun} \lesssim 0.18-0.20$) white-dwarf stars, based on fully evolutionary low-mass He-core white dwarf and pre-white dwarf models. Our analysis is based on a large set of adiabatic periods of radial and nonradial pulsation modes computed on a suite of low-mass He-core white dwarf and pre-white dwarf models with masses ranging from $0.1554$ to $0.4352 M_{\sun}$. We compute the secular rates of period change of radial ($\ell= 0$) and nonradial ($\ell= 1, 2$) $g$ and $p$ modes for stellar models representative of ELMV and pre-ELMV stars, as well as for stellar objects that are evolving just before the occurrence of CNO flashes at the early cooling branches. We find that the theoretically expected magnitude of $\dot{\Pi}$ of $g$ modes for pre-ELMVs is far larger than that for ELMVs. In turn, $\dot{\Pi}$ of $g$ modes for models evolving before the occurrence of CNO flashes is larger than the maximum values of the rates of period change predicted for pre-ELMV stars. Regarding $p$ and radial modes, we find that the largest absolute values of $\dot{\Pi}$ correspond to pre-ELMV models. We conclude that any eventual measurement of a rate of period change for a given pulsating low-mass pre-white dwarf or white dwarf star could shed light on its evolutionary status. Also, in view of the systematic difficulties in the spectroscopic classification of stars of the ELM Survey, an eventual measurement of $\dot{\Pi}$ could help to confirm that a given pulsating star is an authentic low-mass white dwarf and not a star from another stellar population.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Normalized Maximum Likelihood with Luckiness for Multivariate Normal Distributions
The normalized maximum likelihood (NML) is one of the most important distributions in coding theory and statistics. NML is the unique solution (if it exists) to the pointwise minimax regret problem. However, NML is not defined even for simple families of distributions such as the normal distributions. Since no meaningful minimax-regret distribution exists in such cases, it has been pointed out that NML with luckiness (LNML) can be employed as an alternative to NML. In this paper, we derive closed forms of LNMLs for multivariate normal distributions.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
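NML itself is easiest to see in a family where it does exist. A minimal sketch for the Bernoulli model (an illustration of the definition only, not the multivariate-normal LNML derived in the paper):

```python
from math import comb

def bernoulli_ml(k: int, n: int) -> float:
    """Maximized likelihood (k/n)^k ((n-k)/n)^(n-k) of one length-n
    binary sequence containing k ones (Python gives 0**0 == 1)."""
    p = k / n
    return (p ** k) * ((1 - p) ** (n - k))

def nml_probability(k: int, n: int) -> float:
    """NML probability of one particular length-n sequence with k ones:
    maximized likelihood divided by its sum over all 2^n sequences."""
    normalizer = sum(comb(n, j) * bernoulli_ml(j, n) for j in range(n + 1))
    return bernoulli_ml(k, n) / normalizer
```

For instance, for n = 2 the normalizer is 1 + 2(1/4) + 1 = 5/2, so the all-zeros sequence gets NML probability 0.4.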
Tackling Diversity and Heterogeneity by Vertical Memory Management
Existing memory management mechanisms used in commodity computing machines typically adopt hardware-based address interleaving and OS-directed random memory allocation to service generic application requests. These conventional memory management mechanisms are challenged by contention at multiple memory levels, a daunting variety of workload behaviors, and an increasingly complicated memory hierarchy. Our ISCA-41 paper proposes vertical partitioning to eliminate shared resource contention at multiple levels in the memory hierarchy. Combined with horizontal memory management policies, our framework supports a flexible policy space for tackling diverse application needs in production environments and is suitable for future heterogeneous memory systems.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
From the Icosahedron to E8
The regular icosahedron is connected to many exceptional objects in mathematics. Here we describe two constructions of the $\mathrm{E}_8$ lattice from the icosahedron. One uses a subring of the quaternions called the "icosians", while the other uses du Val's work on the resolution of Kleinian singularities. Together they link the golden ratio, the quaternions, the quintic equation, the 600-cell, and the Poincare homology 3-sphere. We leave it as a challenge to the reader to find the connection between these two constructions.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Sequential Discrete Kalman Filter for Real-Time State Estimation in Power Distribution Systems: Theory and Implementation
This paper demonstrates the feasibility of implementing Real-Time State Estimators (RTSEs) for Active Distribution Networks (ADNs) in Field-Programmable Gate Arrays (FPGAs) by presenting an operational prototype. The prototype is based on a Linear State Estimator (LSE) that uses synchrophasor measurements from Phasor Measurement Units (PMUs). The underlying algorithm is the Sequential Discrete Kalman Filter (SDKF), an equivalent formulation of the Discrete Kalman Filter (DKF) for the case of uncorrelated measurement noise. In this regard, this work formally proves the equivalence of the SDKF and the DKF, and highlights the suitability of the SDKF for an FPGA implementation by means of a computational complexity analysis. The developed prototype is validated using a case study adapted from the IEEE 34-node distribution test feeder.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
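The SDKF/DKF equivalence claimed above rests on the fact that, when the measurement noise covariance R is diagonal, the vector measurement update of the DKF can be applied one scalar measurement at a time. A minimal sketch of that sequential update (variable names are my own, not the paper's):

```python
import numpy as np

def sequential_update(x, P, z, H, r_diag):
    """One SDKF measurement update: with uncorrelated noise (diagonal R),
    process each scalar measurement z[i] with its row H[i] in turn."""
    x, P = x.copy(), P.copy()
    for i in range(len(z)):
        h = H[i]                      # measurement row (length-n vector)
        s = h @ P @ h + r_diag[i]     # scalar innovation covariance
        k = P @ h / s                 # Kalman gain (length-n vector)
        x = x + k * (z[i] - h @ x)    # state correction
        P = P - np.outer(k, h @ P)    # covariance correction (I - k h) P
    return x, P
```

Running all measurements through this loop yields the same posterior state and covariance as the batch DKF update with the full gain K = P Hᵀ (H P Hᵀ + R)⁻¹, while avoiding the matrix inversion, which is what makes it attractive for an FPGA.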
Game-Theoretic Semantics for ATL+ with Applications to Model Checking
We develop game-theoretic semantics (GTS) for the fragment ATL+ of the full Alternating-time Temporal Logic ATL*, essentially extending a recently introduced GTS for ATL. We first show that the new game-theoretic semantics is equivalent to the standard semantics of ATL+ (based on perfect recall strategies). We then provide an analysis, based on the new semantics, of the memory and time resources needed for model checking ATL+. Based on that, we establish that strategies that use only a very limited amount of memory suffice for ATL+. Furthermore, using the GTS we provide a new algorithm for model checking of ATL+ and identify a natural hierarchy of tractable fragments of ATL+ that extend ATL.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Killing Three Birds with one Gaussian Process: Analyzing Attack Vectors on Classification
The wide usage of Machine Learning (ML) has led to research on the attack vectors and vulnerability of these systems. Defenses in this area are, however, still an open problem, and often lead to an arms race. We define a naive, secure classifier at test time and show that a Gaussian Process (GP) is an instance of this classifier given two assumptions: one concerns the distances in the training data, the other rejection at test time. Using these assumptions, we are able to show that a classifier is either secure, or generalizes and thus learns. Our analysis also points towards another factor influencing robustness, the curvature of the classifier. This connection is not unknown for linear models, but GPs offer an ideal framework to study this relationship for nonlinear classifiers. We evaluate on five security and two computer vision datasets, applying test and training time attacks and membership inference. We show that we only change which attacks are needed to succeed, instead of alleviating the threat. Only for membership inference is there a setting in which attacks are unsuccessful (<10% increase in accuracy over random guessing). Given these results, we define a classification scheme based on voting, ParGP. This allows us to decide how many points vote and how large the agreement on a class has to be. This ensures a classification output only in cases where there is evidence for a decision, where evidence is parametrized. We evaluate this scheme and obtain promising results.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis
We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida--Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the replica symmetric (RS) and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of the RS region for a nonconvex penalty is a significant advantage, as it indicates a region where the landscape of the optimization problem is smooth. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of L1-based methods, and the minimum representation error under the RS assumption is obtained at the edge of the RS/RSB phase. The correspondence between the convergence of the existing coordinate descent algorithm and the RS/RSB transition is also indicated.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
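The SCAD regularizer referred to above has a standard closed form (Fan and Li's penalty, with shape parameter a > 2): linear near zero like the lasso, quadratically clipped in a middle band, and constant for large arguments. A minimal sketch of the penalty function itself (not of the AMP algorithm):

```python
def scad_penalty(theta: float, lam: float, a: float = 3.7) -> float:
    """SCAD penalty: lasso-like for |theta| <= lam, quadratically
    clipped for lam < |theta| <= a*lam, constant beyond a*lam."""
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return (a + 1) * lam * lam / 2
```

Because the penalty flattens out for large |theta|, large coefficients are left nearly unbiased, which is the nonconvexity the abstract's RS/RSB analysis is concerned with.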
Concurrence Topology of Some Cancer Genomics Data
The topological data analysis method "concurrence topology" is applied to mutation frequencies in 69 genes in glioblastoma data. In dimension 1 some apparent "mutual exclusivity" is found. By simulation of data having approximately the same second order dependence structure as that found in the data, it appears that one triple of mutations, PTEN, RB1, TP53, exhibits mutual exclusivity that depends on special features of the third order dependence and may reflect global dependence among a larger group of genes. A bootstrap analysis suggests that this form of mutual exclusivity is not uncommon in the population from which the data were drawn.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Quasimomentum of an elementary excitation for a system of point bosons with zero boundary conditions
As is known, an elementary excitation of a many-particle system with boundaries is not characterized by a definite momentum. Here, we obtain the formula for the quasimomentum of an elementary excitation for a one-dimensional system of $N$ spinless point bosons with zero boundary conditions (BCs). We also find that the dispersion law $E(p)$ of the system with zero BCs coincides with that of a system with periodic BCs. The elementary excitations are defined within a new approach proposed earlier by the author. This approach is mathematically equivalent to the traditional approach by Lieb, but differs from it by a simpler enumeration of excited states and leads to a single dispersion law (instead of two in Lieb's approach).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Moving to VideoKifu: the last steps toward a fully automatic record-keeping of a Go game
In a previous paper [ arXiv:1508.03269 ] we described the techniques we successfully employed for automatically reconstructing the whole move sequence of a Go game by means of a set of pictures. Here we describe how it is possible to reconstruct the move sequence by means of a video stream (which may be provided by an unattended webcam), possibly in real time. Although the basic algorithms remain the same, we discuss the new problems that arise when dealing with videos, with special care for those that could block a real-time analysis and require an improvement of our previous techniques or even a completely new approach. Finally, we present a number of preliminary but positive experimental results supporting the effectiveness of the software we are developing, built on the ideas outlined here.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Malware Detection by Eating a Whole EXE
In this work we introduce malware detection from raw byte sequences as a fruitful research area to the larger machine learning community. Building a neural network for such a problem presents a number of interesting challenges that have not occurred in tasks such as image processing or NLP. In particular, we note that detection from raw bytes presents a sequence problem with over two million time steps and a problem where batch normalization appears to hinder the learning process. We present our initial work in building a solution to tackle this problem, which has linear complexity dependence on the sequence length, and allows for interpretable sub-regions of the binary to be identified. In doing so we will discuss the many challenges in building a neural network to process data at this scale, and the methods we used to work around them.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
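The two-million-step input mentioned above is simply the raw byte stream of an executable. A minimal sketch of turning a binary into such a sequence (the fixed length and the padding token 256 are my assumptions for illustration, not the paper's exact preprocessing):

```python
def bytes_to_sequence(data: bytes, max_len: int = 2_000_000) -> list:
    """Map raw bytes to integer tokens 0-255, truncated to max_len and
    padded with a distinguished token 256 so padding differs from any
    real byte value the network might see."""
    seq = list(data[:max_len])
    return seq + [256] * (max_len - len(seq))
```

Each token would then be embedded and fed to the network, which is why the sequence length, rather than a feature count, dominates the computational cost.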
On the Complexity of Model Checking for Syntactically Maximal Fragments of the Interval Temporal Logic HS with Regular Expressions
In this paper, we investigate the model checking (MC) problem for Halpern and Shoham's interval temporal logic HS. In recent years, interval temporal logic MC has received increasing attention as a viable alternative to the traditional (point-based) temporal logic MC, which can be recovered as a special case. Most results have been obtained under the homogeneity assumption, which constrains a proposition letter to hold over an interval if and only if it holds over each component state. Recently, Lomuscio and Michaliszyn proposed a way to relax such an assumption by exploiting regular expressions to define the behaviour of proposition letters over intervals in terms of their component states. When homogeneity is assumed, the exact complexity of MC is a difficult open question for full HS and for its two syntactically maximal fragments AA'BB'E' and AA'EB'E'. In this paper, we provide an asymptotically optimal bound on the complexity of these two fragments under the more expressive semantic variant based on regular expressions by showing that their MC problem is AEXP_pol-complete, where AEXP_pol denotes the complexity class of problems decided by exponential-time bounded alternating Turing Machines making a polynomially bounded number of alternations.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Deforming 3-manifolds of bounded geometry and uniformly positive scalar curvature
We prove that the moduli space of complete Riemannian metrics of bounded geometry and uniformly positive scalar curvature on an orientable 3-manifold is path-connected. This generalizes the main result of the fourth author [Mar12] in the compact case. The proof uses Ricci flow with surgery as well as arguments involving performing infinite connected sums with control on the geometry.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Optimal and Myopic Information Acquisition
We consider the problem of optimal dynamic information acquisition from many correlated information sources. Each period, the decision-maker jointly takes an action and allocates a fixed number of observations across the available sources. His payoff depends on the actions taken and on an unknown state. In the canonical setting of jointly normal information sources, we show that the optimal dynamic information acquisition rule proceeds myopically after finitely many periods. If signals are acquired in large blocks each period, then the optimal rule turns out to be myopic from period 1. These results demonstrate the possibility of robust and "simple" optimal information acquisition, and simplify the analysis of dynamic information acquisition in a widely used informational environment.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Infinitely many minimal classes of graphs of unbounded clique-width
The celebrated theorem of Robertson and Seymour states that in the family of minor-closed graph classes, there is a unique minimal class of graphs of unbounded tree-width, namely, the class of planar graphs. In the case of tree-width, the restriction to minor-closed classes is justified by the fact that the tree-width of a graph is never smaller than the tree-width of any of its minors. This, however, is not the case with respect to clique-width, as the clique-width of a graph can be (much) smaller than the clique-width of its minor. On the other hand, the clique-width of a graph is never smaller than the clique-width of any of its induced subgraphs, which allows us to restrict attention to hereditary classes (that is, classes closed under taking induced subgraphs) when we study clique-width. To date, only finitely many minimal hereditary classes of graphs of unbounded clique-width have been discovered in the literature. In the present paper, we prove that the family of such classes is infinite. Moreover, we show that the same is true with respect to linear clique-width.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Passivity-Based Approach to Nash Equilibrium Seeking over Networks
In this paper we consider the problem of distributed Nash equilibrium (NE) seeking over networks, a setting in which players have limited local information. We start from a continuous-time gradient-play dynamics that converges to an NE under strict monotonicity of the pseudo-gradient and assumes perfect information, i.e., instantaneous all-to-all player communication. We consider how to modify this gradient-play dynamics in the case of partial, or networked information between players. We propose an augmented gradient-play dynamics with correction in which players communicate locally only with their neighbours to compute an estimate of the other players' actions. We derive the new dynamics based on the reformulation as a multi-agent coordination problem over an undirected graph. We exploit incremental passivity properties and show that a synchronizing, distributed Laplacian feedback can be designed using relative estimates of the neighbours. Under a strict monotonicity property of the pseudo-gradient, we show that the augmented gradient-play dynamics converges to consensus on the NE of the game. We further discuss two cases that highlight the tradeoff between properties of the game and the communication graph.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Magnetic and dielectric investigations of $\gamma$-Fe$_2$WO$_6$
The magnetic, thermodynamic and dielectric properties of the $\gamma$-Fe$_2$WO$_6$ system are reported. Crystallizing in the centrosymmetric $Pbcn$ space group, this particular polymorph exhibits a number of different magnetic transitions, all of which are seen to exhibit a finite magneto-dielectric coupling. At the lowest measured temperatures, the magnetic ground state appears to be glass-like, as evidenced by the waiting time dependence of the magnetic relaxation. Also reflected in the frequency dependent dielectric measurements, these signatures possibly arise as a consequence of the oxygen non-stoichiometry, which promotes an inhomogeneous magnetic and electronic ground state.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Linear centralization classifier
A classification algorithm, called the Linear Centralization Classifier (LCC), is introduced. The algorithm seeks to find a transformation that best maps instances from the feature space to a space where they concentrate towards the center of their own classes, while maximizing the distance between class centers. We formulate the classifier as a quadratic program with quadratic constraints. We then simplify this formulation to a linear program that can be solved effectively using a linear programming solver (e.g., simplex-dual). We extend the formulation for LCC to enable the use of kernel functions for non-linear classification applications. We compare our method with two standard classification methods (support vector machine and linear discriminant analysis) and four state-of-the-art classification methods when they are applied to eight standard classification datasets. Our experimental results show that LCC is able to classify instances more accurately (based on the area under the receiver operating characteristic) in comparison to other tested methods on the chosen datasets. We also report the results for LCC with a particular kernel to solve for synthetic non-linear classification problems.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Spatio-temporal Person Retrieval via Natural Language Queries
In this paper, we address the problem of spatio-temporal person retrieval from multiple videos using a natural language query, in which we output a tube (i.e., a sequence of bounding boxes) which encloses the person described by the query. For this problem, we introduce a novel dataset consisting of videos containing people annotated with bounding boxes for each second and with five natural language descriptions. To retrieve the tube of the person described by a given natural language query, we design a model that combines methods for spatio-temporal human detection and multimodal retrieval. We conduct comprehensive experiments to compare a variety of tube and text representations and multimodal retrieval methods, and present a strong baseline in this task as well as demonstrate the efficacy of our tube representation and multimodal feature embedding technique. Finally, we demonstrate the versatility of our model by applying it to two other important tasks.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Thermo-elasto-plastic simulations of femtosecond laser-induced multiple-cavity in fused silica
The formation and the interaction of multiple cavities, induced by tightly focused femtosecond laser pulses, are studied using a newly developed numerical tool that includes the thermo-elasto-plastic material response. Simulations are performed in fused silica in cases of one, two, and four spots of laser energy deposition. The relaxation of the heated matter, launching shock waves in the surrounding cold material, leads to cavity formation and the emergence of areas where cracks may be induced. Results show that the laser-induced structure shape depends on the energy deposition configuration and demonstrate the potential of this numerical tool for obtaining a desired structure or technological process.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Efficient and Accurate Machine-Learning Interpolation of Atomic Energies in Compositions with Many Species
Machine-learning potentials (MLPs) for atomistic simulations are a promising alternative to conventional classical potentials. Current approaches rely on descriptors of the local atomic environment with dimensions that increase quadratically with the number of chemical species. In this article, we demonstrate that such a scaling can be avoided in practice. We show that a mathematically simple and computationally efficient descriptor with constant complexity is sufficient to represent transition-metal oxide compositions and biomolecules containing 11 chemical species with a precision of around 3 meV/atom. This insight removes a perceived bound on the utility of MLPs and paves the way to investigate the physics of previously inaccessible materials with more than ten chemical species.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Randomized Constraints Consensus for Distributed Robust Linear Programming
In this paper we consider a network of processors aiming at cooperatively solving linear programming problems subject to uncertainty. Each node only knows a common cost function and its local uncertain constraint set. We propose a randomized, distributed algorithm working under a time-varying, asynchronous and directed communication topology. The algorithm is based on a local computation and communication paradigm. At each communication round, nodes perform two updates: (i) a verification step in which they check, in a randomized setup, the robust feasibility (and hence optimality) of the candidate optimal point, and (ii) an optimization step in which they exchange their candidate bases (minimal sets of active constraints) with neighbors and locally solve an optimization problem whose constraint set includes: a sampled constraint violated by the candidate optimal point (if it exists), the agent's current basis, and the collection of the neighbors' bases. As the main result, we show that if a processor successfully performs the verification step for a sufficient number of communication rounds, it can stop the algorithm, since a consensus has been reached. The common solution is, with high confidence, feasible (and hence optimal) for the entire set of uncertainty except a subset having arbitrarily small probability measure. We show the effectiveness of the proposed distributed algorithm on a multi-core platform in which the nodes communicate asynchronously.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Continuum limit of the vibrational properties of amorphous solids
The low-frequency vibrational and low-temperature thermal properties of amorphous solids are markedly different from those of crystalline solids. This situation is counter-intuitive because any solid material is expected to behave as a homogeneous elastic body in the continuum limit, in which vibrational modes are phonons following the Debye law. A number of phenomenological explanations have been proposed, which assume elastic heterogeneities, soft localized vibrations, and so on. Recently, the microscopic mean-field theories have been developed to predict the universal non-Debye scaling law. Considering these theoretical arguments, it is absolutely necessary to directly observe the nature of the low-frequency vibrations of amorphous solids and determine the laws that such vibrations obey. Here, we perform an extremely large-scale vibrational mode analysis of a model amorphous solid. We find that the scaling law predicted by the mean-field theory is violated at low frequency, and in the continuum limit, the vibrational modes converge to a mixture of phonon modes following the Debye law and soft localized modes following another universal non-Debye scaling law.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Existence of smooth solutions of multi-term Caputo-type fractional differential equations
This paper deals with the initial value problem for the multi-term fractional differential equation. The fractional derivative is defined in the Caputo sense. First, the initial value problem is transformed into an equivalent Volterra-type integral equation under appropriate assumptions. Then, new existence results for smooth solutions are established by using the Schauder fixed point theorem.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Classes of elementary function solutions to the CEV model. I
The CEV model subsumes some of the previous option pricing models. An important parameter in the model is the parameter b, the elasticity of volatility. For b=0, b=-1/2, and b=-1 the CEV model reduces respectively to the BSM model, the square-root model of Cox and Ross, and the Bachelier model. Both in the case of the BSM model and in the case of the CEV model it has become traditional to begin a discussion of option pricing by starting with the vanilla European calls and puts. In the case of the BSM model, simpler solutions are the log and power solutions. These contracts, despite the simplicity of their mathematical description, are attracting increasing attention as a trading instrument. Similar simple solutions have not been studied so far in a systematic fashion for the CEV model. We use Kovacic's algorithm to derive, for all half-integer values of b, all solutions "in quadratures" of the CEV ordinary differential equation. These solutions give rise, by separation of variables, to simple solutions to the CEV partial differential equation. In particular, when b=...,-5/2,-2,-3/2,-1, 1, 3/2, 2, 5/2,..., we obtain four classes of denumerably infinite elementary function solutions, when b=-1/2 and b=1/2 we obtain two classes of denumerably infinite elementary function solutions, whereas, when b=0 we find two elementary function solutions. In the derived solutions we have also dispensed with the unnecessary assumption made in the BSM model asserting that the underlying asset pays no dividends during the life of the option.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Congruent families and invariant tensors
Classical results of Chentsov and Campbell state that -- up to constant multiples -- the only $2$-tensor field of a statistical model which is invariant under congruent Markov morphisms is the Fisher metric and the only invariant $3$-tensor field is the Amari-Chentsov tensor. We generalize this result for arbitrary degree $n$, showing that any family of $n$-tensors which is invariant under congruent Markov morphisms is algebraically generated by the canonical tensor fields defined in an earlier paper.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Quality of Information in Mobile Crowdsensing: Survey and Research Challenges
Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as the accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Unlike prior sensing paradigms, humans are now the primary actors of the sensing process, since they become fundamental in retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing the QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions of future work.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
SimBlock: A Blockchain Network Simulator
Blockchain, a technology for managing ledger information in a distributed fashion over multiple nodes without a centralized system, has elicited increasing attention. Performing experiments on actual blockchains is difficult because a large number of nodes spread over wide areas is necessary. In this study, we developed a blockchain network simulator, SimBlock, for such experiments. Unlike existing simulators, SimBlock can easily change the behavior of nodes, enabling investigation of the influence of node behavior on blockchains. We compared some simulation results with the measured values in actual blockchains to demonstrate the validity of this simulator. Furthermore, to show practical usage, we conducted two experiments which clarify the influence of neighbor node selection algorithms and relay networks on the block propagation time. The simulator could depict the effects of the two techniques on block propagation time. The simulator will be publicly available in a few months.
1
0
0
0
0
0
Bayesian Fused Lasso regression for dynamic binary networks
We propose a multinomial logistic regression model for link prediction in a time series of directed binary networks. To account for the dynamic nature of the data we employ a dynamic model for the model parameters that is strongly connected with the fused lasso penalty. In addition to promoting sparseness, this prior allows us to explore the presence of change points in the structure of the network. We introduce fast computational algorithms for estimation and prediction using both optimization and Bayesian approaches. The performance of the model is illustrated using simulated data and data from a financial trading network in the NYMEX natural gas futures market. Supplementary material containing the trading network data set and code to implement the algorithms is available online.
0
0
0
1
0
0
Indefinite Kernel Logistic Regression
Traditionally, kernel learning methods require positive definiteness of the kernel, which is too strict and excludes many sophisticated similarities in the multimedia area that are indefinite. To utilize such indefinite kernels, indefinite learning methods are of great interest. This paper aims to extend logistic regression from positive semi-definite kernels to indefinite kernels. The model, called indefinite kernel logistic regression (IKLR), is consistent with regular KLR in formulation but becomes essentially non-convex. Thanks to the positive decomposition of an indefinite matrix, IKLR can be transformed into a difference of two convex models, which allows the use of the concave-convex procedure. Moreover, we employ an inexact solving scheme to speed up the sub-problem and develop a concave-inexact-convex procedure (CCICP) algorithm with a theoretical convergence analysis. Systematic experiments on multi-modal datasets demonstrate the superiority of the proposed IKLR method over kernel logistic regression with positive definite kernels and other state-of-the-art indefinite-learning-based algorithms.
1
0
0
1
0
0
Microstructure and thickening of dense suspensions under extensional and shear flows
Dense suspensions are non-Newtonian fluids which exhibit strong shear thickening and normal stress differences. Using numerical simulation of extensional and shear flows, we investigate how rheological properties are determined by the microstructure which is built under flows and by the interactions between particles. By imposing extensional and shear flows, we can assess the degree of flow-type dependence in regimes below and above thickening. Even when the flow-type dependence is hindered, nondissipative responses, such as normal stress differences, are present and characterise the non-Newtonian behaviour of dense suspensions.
0
1
0
0
0
0
Learning at the Ends: From Hand to Tool Affordances in Humanoid Robots
One of the open challenges in designing robots that operate successfully in the unpredictable human environment is how to make them able to predict what actions they can perform on objects, and what their effects will be, i.e., the ability to perceive object affordances. Since modeling all possible world interactions is unfeasible, learning from experience is required, posing the challenge of collecting a large amount of experiences (i.e., training data). Typically, a manipulative robot operates on external objects by using its own hands (or similar end-effectors), but in some cases the use of tools may be desirable; nevertheless, it is reasonable to assume that while a robot can collect many sensorimotor experiences using its own hands, this cannot happen for all possible human-made tools. Therefore, in this paper we investigate the developmental transition from hand to tool affordances: which sensorimotor skills that a robot has acquired with its bare hands can be employed for tool use? By employing a visual and motor imagination mechanism to represent different hand postures compactly, we propose a probabilistic model to learn hand affordances, and we show how this model can generalize to estimate the affordances of previously unseen tools, ultimately supporting planning, decision-making, and tool selection tasks in humanoid robots. We present experimental results with the iCub humanoid robot, and we publicly release the collected sensorimotor data in the form of a hand posture affordances dataset.
1
0
0
1
0
0
Rewriting in Free Hypergraph Categories
We study rewriting for equational theories in the context of symmetric monoidal categories where there is a separable Frobenius monoid on each object. These categories, also called hypergraph categories, are increasingly relevant: Frobenius structures recently appeared in cross-disciplinary applications, including the study of quantum processes, dynamical systems and natural language processing. In this work we give a combinatorial characterisation of arrows of a free hypergraph category as cospans of labelled hypergraphs and establish a precise correspondence between rewriting modulo Frobenius structure on the one hand and double-pushout rewriting of hypergraphs on the other. This interpretation allows us to use results on hypergraphs to ensure decidability of confluence for rewriting in a free hypergraph category. Our results generalise previous approaches where only categories generated by a single object (props) were considered.
1
0
0
0
0
0
Weighting Scheme for a Pairwise Multi-label Classifier Based on the Fuzzy Confusion Matrix
In this work we address the issue of applying a stochastic classifier and a local, fuzzy confusion matrix under the framework of multi-label classification. We propose a novel solution to the problem of correcting label pairwise ensembles. The main step of the correction procedure is to compute classifier-specific competence and cross-competence measures, which estimate the error pattern of the underlying classifier. At the fusion phase we employ two weighting approaches based on information theory. The classifier weights promote base classifiers that are the most susceptible to correction based on the fuzzy confusion matrix. During the experimental study, the proposed approach was compared against two reference methods in terms of six different quality criteria. The conducted experiments reveal that the proposed approach eliminates one of the main drawbacks of the original FCM-based approach, namely its vulnerability to imbalanced class/label distributions. What is more, the obtained results show that the introduced method achieves satisfying classification quality under all considered quality criteria. Additionally, the impact of fluctuations in data set characteristics is reduced.
1
0
0
1
0
0
Wilcoxon Rank-Based Tests for Clustered Data with R Package clusrank
Wilcoxon rank-based tests are distribution-free alternatives to the popular two-sample and paired t-tests. For independent data, they are available in several R packages such as stats and coin. For clustered data, despite recent methodological developments, no R package made them available in one place. We present the package clusrank, in which the latest developments are implemented and wrapped under a unified, user-friendly interface. With different methods dispatched based on the inputs, this package offers great flexibility in rank-based tests for various clustered data. Exact tests based on permutations are also provided for some methods. Details of the major schools of different methods are briefly reviewed. Usage of the package clusrank is illustrated with simulated data as well as a real dataset from an ophthalmological study. The package also enables convenient comparison between selected methods under settings that have not been studied before, and the results are discussed.
0
0
0
1
0
0
Latent heterogeneous multilayer community detection
We propose a method for simultaneously detecting shared and unshared communities in heterogeneous multilayer weighted and undirected networks. The multilayer network is assumed to follow a generative probabilistic model that takes into account the similarities and dissimilarities between the communities. We make use of a variational Bayes approach for jointly inferring the shared and unshared hidden communities from multilayer network observations. We show the robustness of our approach compared to state-of-the-art algorithms in detecting disparate (shared and private) communities on synthetic data as well as on a real genome-wide fibroblast proliferation dataset.
1
0
0
1
0
0
Understanding a Version of Multivariate Symmetric Uncertainty to assist in Feature Selection
In this paper, we analyze the behavior of the multivariate symmetric uncertainty (MSU) measure through the use of statistical simulation techniques under various mixes of informative and non-informative randomly generated features. Experiments show how the number of attributes, their cardinalities, and the sample size affect the MSU. We discovered a condition that preserves good quality in the MSU under different combinations of these three factors, providing a new useful criterion to help drive the process of dimension reduction.
1
0
0
1
0
0
Approximation Dynamics
We describe the approximation of a continuous dynamical system on a p.l. manifold or Cantor set by a tractable system. A system is tractable when it has a finite number of chain components and, with respect to a given full background measure, almost every point is generic for one of a finite number of ergodic invariant measures with non-overlapping supports. The approximations use non-degenerate simplicial dynamical systems for p.l. manifolds and shift-like dynamical systems for Cantor sets.
0
0
1
0
0
0
Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance
The Wasserstein distance between two probability measures on a metric space is a measure of closeness with applications in statistics, probability, and machine learning. In this work, we consider the fundamental question of how quickly the empirical measure obtained from $n$ independent samples from $\mu$ approaches $\mu$ in the Wasserstein distance of any order. We prove sharp asymptotic and finite-sample results for this rate of convergence for general measures on general compact metric spaces. Our finite-sample results show the existence of multi-scale behavior, where measures can exhibit radically different rates of convergence as $n$ grows.
0
0
1
1
0
0
Ensuring patients privacy in a cryptographic-based-electronic health records using bio-cryptography
Several recent works have proposed and implemented cryptography as a means to preserve the privacy and security of patients' health data. Nevertheless, the weakest point of electronic health record (EHR) systems that rely on these cryptographic schemes is key management. Thus, this paper presents the development of a privacy and security system for cryptography-based EHR that takes advantage of the uniqueness of fingerprint and iris characteristic features to secure cryptographic keys in a bio-cryptography framework. The results of the system evaluation showed significant improvements in the time efficiency of this approach to cryptography-based EHR. Both the fuzzy vault and fuzzy commitment demonstrated a false acceptance rate (FAR) of 0%, which reduces the likelihood of impostors gaining successful access to the keys protecting patients' protected health information. This result also justifies the feasibility of implementing fuzzy key binding schemes in real applications, especially the fuzzy vault, which demonstrated better performance during key reconstruction.
1
0
0
0
0
0
A Dynamically Reconfigurable Terahertz Array Antenna for Near-field Imaging Applications
A proof of concept for high-speed near-field imaging with sub-wavelength resolution using an SLM is presented. An 8-channel THz detector array antenna with an electrode gap of 100 um and a length of 5 mm is fabricated using a commercially available GaAs semiconductor substrate. Each array antenna can be excited simultaneously by spatially reconfiguring the optical probe beam, and the THz electric field can be recorded using 8-channel lock-in amplifiers. By scanning the probe beam along the length of the array antenna, a 2D image can be obtained with amplitude, phase, and frequency information.
0
1
0
0
0
0
Stationary C*-dynamical systems
We introduce the notion of stationary actions in the context of C*-algebras. We develop the basics of the theory, and provide applications to several ergodic theoretical and operator algebraic rigidity problems.
0
0
1
0
0
0
The impact of imbalanced training data on machine learning for author name disambiguation
In supervised machine learning for author name disambiguation, negative training data are often dominantly larger than positive training data. This paper examines how the ratio of negative to positive training data can affect the performance of machine learning algorithms to disambiguate author names in bibliographic records. On multiple labeled datasets, three classifiers - Logistic Regression, Naïve Bayes, and Random Forest - are trained through representative features such as coauthor names and title words extracted from the same training data but with various positive-negative training data ratios. Results show that increasing negative training data can improve disambiguation performance, but only by a few percentage points, and can sometimes degrade it. Logistic Regression and Naïve Bayes learn optimal disambiguation models even with a base ratio (1:1) of positive and negative training data. Also, the performance improvement by Random Forest tends to saturate quickly, roughly after 1:10 ~ 1:15. These findings imply that, contrary to the common practice of using all training data, name disambiguation algorithms can be trained using part of the negative training data without much degradation in disambiguation performance while increasing computational efficiency. This study calls for more attention from author name disambiguation scholars to methods for machine learning from imbalanced data.
0
0
0
1
0
0
SIGNet: Scalable Embeddings for Signed Networks
Recent successes in word embedding and document embedding have motivated researchers to explore similar representations for networks and to use such representations for tasks such as edge prediction, node label prediction, and community detection. Such network embedding methods are largely focused on finding distributed representations for unsigned networks and are unable to discover embeddings that respect polarities inherent in edges. We propose SIGNet, a fast scalable embedding method suitable for signed networks. Our proposed objective function aims to carefully model the social structure implicit in signed networks by reinforcing the principles of social balance theory. Our method builds upon the traditional word2vec family of embedding approaches and adds a new targeted node sampling strategy to maintain structural balance in higher-order neighborhoods. We demonstrate the superiority of SIGNet over state-of-the-art methods proposed for both signed and unsigned networks on several real world datasets from different domains. In particular, SIGNet offers an approach to generate a richer vocabulary of features of signed networks to support representation and reasoning.
1
0
0
1
0
0
Spread of hate speech in online social media
The present online social media platform is afflicted with several issues, with hate speech at the predominant forefront. The prevalence of online hate speech has fueled horrific real-world hate crimes such as the mass genocide of Rohingya Muslims, communal violence in Colombo, and the recent massacre in the Pittsburgh synagogue. Consequently, it is imperative to understand the diffusion of such hateful content in an online setting. We conduct the first study that analyses the flow and dynamics of posts generated by hateful and non-hateful users on Gab (gab.com) over a massive dataset of 341K users and 21M posts. Our observations confirm that hateful content diffuses farther, wider, and faster, and has a greater outreach than content from non-hateful users. A deeper inspection into the profiles and networks of hateful and non-hateful users reveals that the former are more influential, popular, and cohesive. Thus, our research explores interesting facets of the diffusion dynamics of hateful users and broadens our understanding of hate speech in the online world.
1
0
0
0
0
0
GOOWE: Geometrically Optimum and Online-Weighted Ensemble Classifier for Evolving Data Streams
Designing adaptive classifiers for an evolving data stream is a challenging task due to the data size and its dynamically changing nature. Combining individual classifiers in an online setting, the ensemble approach, is a well-known solution. It is possible that a subset of classifiers in the ensemble outperforms others in a time-varying fashion. However, optimum weight assignment for component classifiers is a problem which is not yet fully addressed in online evolving environments. We propose a novel data stream ensemble classifier, called Geometrically Optimum and Online-Weighted Ensemble (GOOWE), which assigns optimum weights to the component classifiers using a sliding window containing the most recent data instances. We map vote scores of individual classifiers and true class labels into a spatial environment. Based on the Euclidean distance between vote scores and ideal-points, and using the linear least squares (LSQ) solution, we present a novel, dynamic, and online weighting approach. While LSQ is used for batch mode ensemble classifiers, it is the first time that we adapt and use it for online environments by providing a spatial modeling of online ensembles. In order to show the robustness of the proposed algorithm, we use real-world datasets and synthetic data generators using the MOA libraries. First, we analyze the impact of our weighting system on prediction accuracy through two scenarios. Second, we compare GOOWE with 8 state-of-the-art ensemble classifiers in a comprehensive experimental environment. Our experiments show that GOOWE provides improved reactions to different types of concept drift compared to our baselines. The statistical tests indicate a significant improvement in accuracy, with conservative time and memory requirements.
1
0
0
0
0
0
Graph Product Multilayer Networks: Spectral Properties and Applications
This paper aims to establish theoretical foundations of graph product multilayer networks (GPMNs), a family of multilayer networks that can be obtained as a graph product of two or more factor networks. Cartesian, direct (tensor), and strong product operators are considered, and then generalized. We first describe mathematical relationships between GPMNs and their factor networks regarding their degree/strength, adjacency, and Laplacian spectra, and then show that those relationships can still hold for nonsimple and generalized GPMNs. Applications of GPMNs are discussed in three areas: predicting epidemic thresholds, modeling propagation in nontrivial space and time, and analyzing higher-order properties of self-similar networks. Directions of future research are also discussed.
1
1
0
0
0
0
Multilayer flows in molecular networks identify biological modules in the human proteome
A variety of complex systems exhibit different types of relationships simultaneously that can be modeled by multiplex networks. A typical problem is to determine the community structure of such systems, which, in general, depends on one or more parameters to be tuned. In this study we propose a measure, grounded in information theory, to find the optimal value of the relax rate characterizing Multiplex Infomap, the generalization of the Infomap algorithm to the realm of multilayer networks. We evaluate our methodology on synthetic networks, to show that the most representative community structure can be reliably identified when the most appropriate relax rate is used. Capitalizing on these results, we use this measure to identify the most reliable meso-scale functional organization in the human protein-protein interaction multiplex network and compare the observed clusters against a collection of independently annotated gene sets from the Molecular Signatures Database (MSigDB). Our analysis reveals that modules obtained with the optimal value of the relax rate are biologically significant and, remarkably, have higher functional content than the ones obtained from the aggregate representation of the human proteome. Our framework allows us to characterize the meso-scale structure of those multilayer systems whose layers are not explicitly interconnected with each other -- as in the case of edge-colored models -- the ones describing most biological networks, from proteomes to connectomes.
0
0
0
0
1
0
Using Artificial Neural Networks (ANN) to Control Chaos
Controlling chaos could be a major factor in extracting large, stable amounts of energy from small amounts of not necessarily stable resources. By definition, chaos means huge changes in a system's output due to unpredictable small changes in initial conditions; this means we can take advantage of this fact and select a proper control system to manipulate the system's initial conditions and inputs in general, obtaining a desirable output from an otherwise chaotic system. This was accomplished by first building a known chaotic circuit (the Chua circuit), and NI MultiSim was used to simulate the ANN control system. It was shown that this technique can also be used to stabilize some hard-to-stabilize electronic systems.
1
1
0
0
0
0
Glassy quantum dynamics in translation invariant fracton models
We investigate relaxation in the recently discovered "fracton" models and discover that these models naturally host glassy quantum dynamics in the absence of quenched disorder. We begin with a discussion of "type I" fracton models, in the taxonomy of Vijay, Haah, and Fu. We demonstrate that in these systems, the mobility of charges is suppressed exponentially in the inverse temperature. We further demonstrate that when a zero temperature type I fracton model is placed in contact with a finite temperature heat bath, the approach to equilibrium is a logarithmic function of time over an exponentially wide window of time scales. Generalizing to the more complex "type II" fracton models, we find that the charges exhibit subdiffusion up to a relaxation time that diverges at low temperatures as a super-exponential function of inverse temperature. This behaviour is reminiscent of "nearly localized" disordered systems, but occurs with a translation invariant three-dimensional Hamiltonian. We also conjecture that fracton models with conserved charge may support a phase which is a thermal metal but a charge insulator.
0
1
0
0
0
0
Saliency Guided Hierarchical Robust Visual Tracking
A saliency-guided hierarchical visual tracking (SHT) algorithm containing global and local search phases is proposed in this paper. In the global search, a novel top-down saliency model is developed to handle abrupt motion and appearance variation problems. Nineteen feature maps are first extracted and combined with online-learnt weights to produce the final saliency map and estimated target locations. After the evaluation of the integration mechanism, the optimum candidate patch is passed to the local search. In the local search, a superpixel-based HSV histogram matching is performed jointly with an L2-RLS tracker to take both the color distribution and the holistic appearance feature of the object into consideration. Furthermore, a linear refinement search process with a fast iterative solver is implemented to attenuate the possible negative influence of dominant particles. Both qualitative and quantitative experiments are conducted on a series of challenging image sequences. The superior performance of the proposed method over other state-of-the-art algorithms is demonstrated by a comparative study.
1
0
0
0
0
0
Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images
The present study shows that the performance of CNNs is not significantly different from that of the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because a CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, the CNN does not make use of the important diagnostic features, which have been proven more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into the CNN is a promising direction for future research.
1
1
0
0
0
0
A Semantics for Probabilistic Control-Flow Graphs
This article develops a novel operational semantics for probabilistic control-flow graphs (pCFGs) of probabilistic imperative programs with random assignment and "observe" (or conditioning) statements. The semantics transforms probability distributions (on stores) as control moves from one node to another in pCFGs. We relate this semantics to a standard, expectation-transforming, denotational semantics of structured probabilistic imperative programs, by translating structured programs into (unstructured) pCFGs, and proving adequacy of the translation. This shows that the operational semantics can be used without loss of information, and is faithful to the "intended" semantics and hence can be used to reason about, for example, the correctness of transformations (as we do in a companion article).
1
0
0
0
0
0
Generalized Slow Roll in the Unified Effective Field Theory of Inflation
We provide a compact and unified treatment of power spectrum observables for the effective field theory (EFT) of inflation with the complete set of operators that lead to second-order equations of motion in metric perturbations in both space and time derivatives, including Horndeski and GLPV theories. We relate the EFT operators in ADM form to the four additional free functions of time in the scalar and tensor equations. Using the generalized slow roll formalism, we show that each power spectrum can be described by an integral over a single source that is a function of its respective sound horizon. With this correspondence, existing model independent constraints on the source function can be simply reinterpreted in the more general inflationary context. By expanding these sources around an optimized freeze-out epoch, we also provide characterizations of these spectra in terms of five slow-roll hierarchies whose leading order forms are compact and accurate as long as EFT coefficients vary only on timescales greater than an efold. We also clarify the relationship between the unitary gauge observables employed in the EFT and the comoving gauge observables of the post-inflationary universe.
0
1
0
0
0
0
MgO thickness-induced spin reorientation transition in Co0.9Fe0.1/MgO/Co0.9Fe0.1 structure
The magnetic anisotropy (MA) of Mo/Au/Co0.9Fe0.1/Au/MgO(0.7 - 3 nm)/Au/Co0.9Fe0.1/Au heterostructure has been investigated at room temperature as a function of MgO layer thickness (tMgO). Our studies show that while the MA of the top layer does not change its character upon variation of tMgO, the uniaxial out-of-plane MA of the bottom one undergoes a spin reorientation transition at tMgO of about 0.8 nm, switching to the regime where the coexistence of in- and out-of-plane magnetization alignments is observed. The magnitudes of the magnetic anisotropy constants have been determined from ferromagnetic resonance and dc-magnetometry measurements. The origin of MA evolution has been attributed to a presence of an interlayer exchange coupling (IEC) between Co0.9Fe0.1 layers through the thin MgO film.
0
1
0
0
0
0
Bjerrum Pairs in Ionic Solutions: a Poisson-Boltzmann Approach
Ionic solutions are often regarded as fully dissociated ions dispersed in a polar solvent. While this picture holds for dilute solutions, at higher ionic concentrations, oppositely charged ions can associate into dimers, referred to as Bjerrum pairs. We consider the formation of such pairs within the nonlinear Poisson-Boltzmann framework, and investigate their effects on bulk and interfacial properties of electrolytes. Our findings show that pairs can reduce the magnitude of the dielectric decrement of ionic solutions as the ionic concentration increases. We describe the effect of pairs on the Debye screening length, and relate our results to recent surface-force experiments. Furthermore, we show that Bjerrum pairs reduce the ionic concentration in bulk electrolyte and at the proximity of charged surfaces, while they enhance the attraction between oppositely charged surfaces.
0
1
0
0
0
0
Neural-Brane: Neural Bayesian Personalized Ranking for Attributed Network Embedding
Network embedding methodologies, which learn a distributed vector representation for each vertex in a network, have attracted considerable interest in recent years. Existing works have demonstrated that vertex representations learned through an embedding method provide superior performance in many real-world applications, such as node classification, link prediction, and community detection. However, most of the existing methods for network embedding only utilize topological information of a vertex, ignoring a rich set of nodal attributes (such as user profiles of an online social network, or textual contents of a citation network), which is abundant in all real-life networks. A joint network embedding that takes into account both attributional and relational information entails more complete network information and could further enrich the learned vector representations. In this work, we present Neural-Brane, a novel Neural Bayesian Personalized Ranking based Attributed Network Embedding. For a given network, Neural-Brane extracts latent feature representations of its vertices using a designed neural network model that unifies network topological information and nodal attributes. In addition, it utilizes a Bayesian personalized ranking objective, which exploits the proximity ordering between a similar node pair and a dissimilar node pair. We evaluate the quality of vertex embeddings produced by Neural-Brane by solving the node classification and clustering tasks on four real-world datasets. Experimental results demonstrate the superiority of our proposed method over the state-of-the-art existing methods.
0
0
0
1
0
0
Pairwise $k$-Semi-Stratifiable Bispaces and Topological Ordered Spaces
In this paper, we continue to study pairwise ($k$-semi-)stratifiable bitopological spaces. Some new characterizations of pairwise $k$-semi-stratifiable bitopological spaces are provided. Relationships between pairwise stratifiable and pairwise $k$-semi-stratifiable bitopological spaces are further investigated, and an open question recently posed by Li and Lin in \cite{LL} is completely solved. We also study the quasi-pseudo-metrizability of a topological ordered space $(X, \tau, \preccurlyeq)$. It is shown that if $(X, \tau, \preccurlyeq)$ is a ball transitive topological ordered $C$- and $I$-space such that $\tau$ is metrizable, then its associated bitopological space $(X,\tau^{\flat},\tau^{\natural})$ is quasi-pseudo-metrizable. This result provides a partial affirmative answer to a problem in \cite{KM}.
0
0
1
0
0
0
Ontology based system to guide internship assignment process
Internship assignment is a complicated process for universities, since it is necessary to take into account a multiplicity of variables to establish a compromise between companies' requirements and the competencies students acquire during their university training. These variables build up a complex relation map that requires the formulation of an exhaustive and rigorous conceptual scheme. In this research, a domain ontological model is presented to support students' decision making about internship opportunities at the university-studies level of the University Lumiere Lyon 2 (ULL) education system. The ontology is designed and created using a methodological approach offering the possibility of improving progressive knowledge creation, capture, and articulation. In this paper, we strike a balance between the demands of companies and the capabilities of students. This is done through the establishment of an ontological model of an educational learner profile and of internship postings, which are written in free text using uncontrolled vocabulary. Furthermore, we outline the process of semantic matching, which improves the quality of query results.
1
0
0
0
0
0
Vacancy-driven extended stability of cubic metastable Ta-Al-N and Nb-Al-N phases
Quantum mechanical calculations have previously been applied to predict phase stability in many ternary and multinary nitride systems. While the predictions were very accurate for the Ti-Al-N system, some discrepancies between theory and experiment were obtained for other systems. Namely, in the case of Ta-Al-N, the calculations tend to overestimate the minimum Al content necessary to obtain a metastable solid solution with a cubic structure. In this work, we present a comprehensive study of the impact of vacancies on the phase fields in the quasi-binary TaN-AlN and NbN-AlN systems. Our calculations clearly show that the presence of point defects strongly enlarges the cubic phase field in the TaN-AlN system, while the effect is less pronounced in the NbN-AlN case. The present phase stability predictions agree better with experimental observations of physical-vapour-deposited thin films reported in the literature than those based on perfect, non-defected structures. This study shows that a representative structural model is crucial for a meaningful comparison with experimental data.
0
1
0
0
0
0
Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A Comparison between Dialog and Translation
This paper addresses the question: Why do neural dialog systems generate short and meaningless replies? We conjecture that, in a dialog system, an utterance may have multiple equally plausible replies, causing the deficiency of neural networks in the dialog application. We propose a systematic way to mimic the dialog scenario in a machine translation system, and manage to reproduce the phenomenon of generating short and less meaningful sentences in the translation setting, showing evidence of our conjecture.
1
0
0
0
0
0
Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain
In this paper we tackle the problem of visually predicting surface friction for environments with diverse surfaces, and integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous robot locomotion since diverse surfaces with varying friction abound in the real world, from wood to ceramic tiles, grass or ice, which may cause difficulties or huge energy costs for robot locomotion if not considered. We propose to estimate friction and its uncertainty from visual estimation of material classes using convolutional neural networks, together with probability distribution functions of friction associated with each material. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planning method for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain, which considers not only friction and its uncertainty, but also collision, stability and trajectory cost. We show promising friction prediction results in real pictures of outdoor scenarios, and planning experiments on a real robot facing surfaces with different friction.
1
0
0
0
0
0
Non-Archimedean Replicator Dynamics and Eigen's Paradox
We present a new non-Archimedean model of evolutionary dynamics, in which genomes are represented by p-adic numbers. In this model the genomes have a variable, not necessarily bounded, length, in contrast with the classical models where the length is fixed. The time evolution of the concentration of a given genome is controlled by a p-adic evolution equation. This equation depends on a fitness function f and on a mutation measure Q. By choosing a mutation measure of Gibbs type, and by using a p-adic version of the Maynard Smith ansatz, we show the existence of a threshold function M_{c}(f,Q) such that the long-term survival of a genome requires that its length grow faster than M_{c}(f,Q). This implies that Eigen's paradox does not occur if the complexity of genomes grows at the right pace. About twenty years ago, Scheuring and Poole, Jeffares, Penny proposed a hypothesis to explain Eigen's paradox. Our mathematical model shows that this biological hypothesis is feasible, but it requires p-adic analysis instead of real analysis. More exactly, the Darwin-Eigen cycle proposed by Poole et al. takes place if the length of the genomes exceeds M_{c}(f,Q).
0
0
0
0
1
0
D-optimal design for multivariate polynomial regression via the Christoffel function and semidefinite relaxations
We present a new approach to the design of D-optimal experiments with multivariate polynomial regressions on compact semi-algebraic design spaces. We apply the moment-sum-of-squares hierarchy of semidefinite programming problems to solve numerically and approximately the optimal design problem. The geometry of the design is recovered with semidefinite programming duality theory and the Christoffel polynomial.
0
0
1
1
0
0
A dynamic network model to measure exposure diversification in the Austrian interbank market
We propose a statistical model for weighted temporal networks capable of measuring the level of heterogeneity in a financial system. Our model focuses on the level of diversification of financial institutions; that is, whether they are more inclined to distribute their assets equally among partners, or if they rather concentrate their commitment towards a limited number of institutions. Crucially, a Markov property is introduced to capture time dependencies and to make our measures comparable across time. We apply the model on an original dataset of Austrian interbank exposures. The temporal span encompasses the onset and development of the financial crisis in 2008 as well as the beginnings of European sovereign debt crisis in 2011. Our analysis highlights an overall increasing trend for network homogeneity, whereby core banks have a tendency to distribute their market exposures more equally across their partners.
0
0
0
1
0
1
Universal scaling in the Knight shift anomaly of doped periodic Anderson model
We report a Dynamical Cluster Approximation (DCA) investigation of the doped periodic Anderson model (PAM) to explain the universal scaling in the Knight shift anomaly predicted by the phenomenological two-fluid model and confirmed in many heavy-fermion compounds. We calculate the quantitative evolution of the orbital-dependent magnetic susceptibility and reproduce correctly the two-fluid prediction in a large range of doping and hybridization. Our results confirm the presence of a temperature/energy scale $T^{\ast}$ for the universal scaling and show distinctive behaviors of the Knight shift anomaly in response to other "orders" at low temperatures. However, comparison with the temperature evolution of the calculated resistivity and quasiparticle spectral peak indicates a different characteristic temperature from $T^*$, in contradiction with the experimental observation in CeCoIn$_5$ and other compounds. This reveals a missing piece in the current model calculations in explaining the two-fluid phenomenology.
0
1
0
0
0
0
Reinforcement Learning Algorithm Selection
This paper formalises the problem of online algorithm selection in the context of Reinforcement Learning. The setup is as follows: given an episodic task and a finite number of off-policy RL algorithms, a meta-algorithm has to decide which RL algorithm is in control during the next episode so as to maximize the expected return. The article presents a novel meta-algorithm, called Epochal Stochastic Bandit Algorithm Selection (ESBAS). Its principle is to freeze the policy updates at each epoch, and to leave a rebooted stochastic bandit in charge of the algorithm selection. Under some assumptions, a thorough theoretical analysis demonstrates its near-optimality considering the structural sampling budget limitations. ESBAS is first empirically evaluated on a dialogue task where it is shown to outperform each individual algorithm in most configurations. ESBAS is then adapted to a true online setting where algorithms update their policies after each transition, which we call SSBAS. SSBAS is evaluated on a fruit collection task where it is shown to adapt the stepsize parameter more efficiently than the classical hyperbolic decay, and on an Atari game, where it improves the performance by a wide margin.
1
0
1
1
0
0
On periodic solutions of nonlinear wave equations, including Einstein equations with a negative cosmological constant
We construct periodic solutions of nonlinear wave equations using analytic continuation. The construction applies in particular to Einstein equations, leading to infinite-dimensional families of time-periodic solutions of the vacuum, or of the Einstein-Maxwell-dilaton-scalar fields-Yang-Mills-Higgs-Chern-Simons-$f(R)$ equations, with a negative cosmological constant.
0
0
1
0
0
0
Automatic Discovery, Association Estimation and Learning of Semantic Attributes for a Thousand Categories
Attribute-based recognition models, due to their impressive performance and their ability to generalize well on novel categories, have been widely adopted for many computer vision applications. However, usually both the attribute vocabulary and the class-attribute associations have to be provided manually by domain experts or a large number of annotators. This is very costly and not necessarily optimal regarding recognition performance, and most importantly, it limits the applicability of attribute-based models to large scale data sets. To tackle this problem, we propose an end-to-end unsupervised attribute learning approach. We utilize online text corpora to automatically discover a salient and discriminative vocabulary that correlates well with the human concept of semantic attributes. Moreover, we propose a deep convolutional model to optimize class-attribute associations with a linguistic prior that accounts for noise and missing data in text. In a thorough evaluation on ImageNet, we demonstrate that our model is able to efficiently discover and learn semantic attributes at a large scale. Furthermore, we demonstrate that our model outperforms the state-of-the-art in zero-shot learning on three data sets: ImageNet, Animals with Attributes and aPascal/aYahoo. Finally, we enable attribute-based learning on ImageNet and will share the attributes and associations for future research.
1
0
0
0
0
0
Variational Bayesian Inference For A Scale Mixture Of Normal Distributions Handling Missing Data
In this paper, a scale mixture of Normal distributions model is developed for classification and clustering of data having outliers and missing values. The classification method, based on a mixture model, focuses on the introduction of latent variables that give us the possibility to handle the sensitivity of the model to outliers and to allow a less restrictive modelling of missing data. Inference is processed through a Variational Bayesian Approximation, and a Bayesian treatment is adopted for model learning, supervised classification and clustering.
0
0
0
1
0
0
Energy stable discretization of Allen-Cahn type problems modeling the motion of phase boundaries
We study the systematic numerical approximation of a class of Allen-Cahn type problems modeling the motion of phase interfaces. The common feature of these models is an underlying gradient flow structure which gives rise to a decay of an associated energy functional along solution trajectories. We first study the discretization in space by a conforming Galerkin approximation of a variational principle which characterizes smooth solutions of the problem. Well-posedness of the resulting semi-discretization is established and the energy decay along discrete solution trajectories is proven. A problem-adapted implicit time-stepping scheme is then proposed, and we establish its well-posedness and the decay of the free energy for the fully discrete scheme. Some details about the numerical realization by finite elements are discussed, in particular the iterative solution of the nonlinear problems arising in every time-step. The theoretical results are illustrated by numerical tests which also provide further evidence for asymptotic expansions of the interface velocities derived by Alber et al.
0
0
1
0
0
0
Hidden space reconstruction inspires link prediction in complex networks
As a fundamental challenge in vast disciplines, link prediction aims to identify potential links in a network based on incomplete observed information, which has broad applications ranging from uncovering missing protein-protein interactions to predicting the evolution of networks. One of the most influential classes of methods relies on similarity indices characterized by common neighbors or their variations. We construct a hidden space mapping a network into Euclidean space based solely on the connection structure of the network. Compared with real geographical locations of nodes, our reconstructed locations are in conformity with the real ones. The distances between nodes in our hidden space can serve as a novel similarity metric in link prediction. In addition, we hybridize our hidden space method with other state-of-the-art similarity methods, which substantially outperforms the existing methods in prediction accuracy. Hence, our hidden space reconstruction model provides a fresh perspective to understand the network structure, which in particular casts a new light on link prediction.
1
1
0
0
0
0
Maximum principles for the fractional p-Laplacian and symmetry of solutions
In this paper, we consider nonlinear equations involving the fractional p-Laplacian $$ (-\Delta)_p^s u(x) \equiv C_{n,s,p}\, PV \int_{\mathbb{R}^n} \frac{|u(x)-u(y)|^{p-2}[u(x)-u(y)]}{|x-y|^{n+ps}}\, dy = f(x,u).$$ We prove a {\em maximum principle for anti-symmetric functions} and obtain other key ingredients for carrying on the method of moving planes, such as {\em a key boundary estimate lemma}. Then we establish radial symmetry and monotonicity for positive solutions to semilinear equations involving the fractional p-Laplacian in a unit ball and in the whole space. We believe that the methods developed here can be applied to a variety of problems involving nonlinear nonlocal operators.
0
0
1
0
0
0
On slowly rotating axisymmetric solutions of the Einstein-Euler equations
In recent works we have constructed axisymmetric solutions to the Euler-Poisson equations which give mathematical models of slowly uniformly rotating gaseous stars. We try to extend this result to the study of solutions of the Einstein-Euler equations in the framework of the general theory of relativity. Although many interesting studies have been done on axisymmetric metrics in the general theory of relativity, they are restricted to the vacuum region. A mathematically rigorous existence theorem for axisymmetric interior solutions of the stationary metric corresponding to the energy-momentum tensor of a perfect fluid with non-zero pressure has, until now, been established only in the pioneering work of U. Heilig from 1993. In this article, following an approach different from Heilig's, axisymmetric stationary solutions of the Einstein-Euler equations are constructed near those of the Euler-Poisson equations when the speed of light is sufficiently large in the considered system of units, or, equivalently, when the gravitational field is sufficiently weak.
0
0
1
0
0
0
Gapped paramagnetic state in a frustrated spin-$\frac{1}{2}$ Heisenberg antiferromagnet on the cross-striped square lattice
We implement the coupled cluster method to very high orders of approximation to study the spin-$\frac{1}{2}$ $J_{1}$--$J_{2}$ Heisenberg model on a cross-striped square lattice. Every nearest-neighbour pair of sites on the square lattice has an isotropic antiferromagnetic exchange bond of strength $J_{1}>0$, while the basic square plaquettes in alternate columns have either both or neither next-nearest-neighbour (diagonal) pairs of sites connected by an equivalent frustrating bond of strength $J_{2} \equiv \alpha J_{1} > 0$. By studying the magnetic order parameter (i.e., the average local on-site magnetization) in the range $0 \leq \alpha \leq 1$ of the frustration parameter we find that the quasiclassical antiferromagnetic Néel and (so-called) double Néel states form the stable ground-state phases in the respective regions $\alpha < \alpha_{1a}^{c} = 0.46(1)$ and $\alpha > \alpha_{1b}^{c} = 0.615(5)$. The double Néel state has Néel ($\cdots\uparrow\downarrow\uparrow\downarrow\cdots$) ordering along the (column) direction parallel to the stripes of squares with both or no $J_{2}$ bonds, and spins alternating in a pairwise ($\cdots\uparrow\uparrow\downarrow\downarrow\uparrow\uparrow\downarrow\downarrow\cdots$) fashion along the perpendicular (row) direction, so that the parallel pairs occur on squares with both $J_{2}$ bonds present. Further explicit calculations of both the triplet spin gap and the zero-field uniform transverse magnetic susceptibility provide compelling evidence that the ground-state phase over all or most of the intermediate regime $\alpha_{1a}^{c} < \alpha < \alpha_{1b}^{c}$ is a gapped state with no discernible long-range magnetic order.
0
1
0
0
0
0
Clouds in the atmospheres of extrasolar planets. V. The impact of CO2 ice clouds on the outer boundary of the habitable zone
Clouds have a strong impact on the climate of planetary atmospheres. The potential scattering greenhouse effect of CO2 ice clouds in the atmospheres of terrestrial extrasolar planets is of particular interest because it might influence the position and thus the extension of the outer boundary of the classic habitable zone around main sequence stars. Here, the impact of CO2 ice clouds on the surface temperatures of terrestrial planets with CO2 dominated atmospheres, orbiting different types of stars, is studied. Additionally, their corresponding effect on the position of the outer habitable zone boundary is evaluated. For this study, a radiative-convective atmospheric model is used to calculate the surface temperatures influenced by CO2 ice particles. The clouds are included using a parametrised cloud model. The atmospheric model includes a general discrete-ordinate radiative transfer scheme that can accurately describe the anisotropic scattering by the cloud particles. A net scattering greenhouse effect caused by CO2 clouds is only obtained in a rather limited parameter range, which also strongly depends on the stellar effective temperature. For cool M-stars, CO2 clouds only provide about 6 K of additional greenhouse heating in the best case scenario. On the other hand, the surface temperature of a planet around an F-type star can be increased by 30 K if carbon dioxide clouds are present. Accordingly, the extension of the habitable zone due to clouds is quite small for late-type stars. Higher stellar effective temperatures, on the other hand, can lead to outer HZ boundaries about 0.5 au farther out than the corresponding clear-sky values.
0
1
0
0
0
0
Segmentation of the Proximal Femur from MR Images using Deep Convolutional Neural Networks
Magnetic resonance imaging (MRI) has been proposed as a complementary method to measure bone quality and assess fracture risk. However, manual segmentation of MR images of bone is time-consuming, limiting the use of MRI measurements in clinical practice. The purpose of this paper is to present an automatic proximal femur segmentation method that is based on deep convolutional neural networks (CNNs). This study had institutional review board approval and written informed consent was obtained from all subjects. A dataset of volumetric structural MR images of the proximal femur from 86 subjects was manually segmented by an expert. We performed experiments by training two different CNN architectures with varying numbers of initial feature maps and layers, and tested their segmentation performance against the gold standard of manual segmentations using four-fold cross-validation. Automatic segmentation of the proximal femur achieved a high Dice similarity score of 0.94$\pm$0.05 with precision = 0.95$\pm$0.02, and recall = 0.94$\pm$0.08 using a CNN architecture based on 3D convolution, exceeding the performance of 2D CNNs. The high segmentation accuracy provided by CNNs has the potential to help bring the use of structural MRI measurements of bone quality into clinical practice for management of osteoporosis.
1
0
0
1
0
0
Identifying On-time Reward Delivery Projects with Estimating Delivery Duration on Kickstarter
In crowdfunding platforms, people turn their prototype ideas into real products by raising money from the crowd, or invest in someone else's projects. In reward-based crowdfunding platforms such as Kickstarter and Indiegogo, selecting an accurate reward delivery duration becomes crucial for creators, backers, and platform providers to keep the trust between the creators and the backers, and the trust between the platform providers and users. According to Kickstarter, 35% of backers did not receive rewards on time. Unfortunately, little is known about on-time and late reward delivery projects, and there is no prior work to estimate reward delivery duration. To fill the gap, in this paper, we (i) extract novel features that reveal latent difficulty levels of project rewards; (ii) build predictive models to identify whether a creator will deliver all rewards in a project on time or not; and (iii) build a regression model to estimate accurate reward delivery duration (i.e., how long it will take to produce and deliver all the rewards). Experimental results show that our models achieve good performance -- 82.5% accuracy, 78.1 RMSE, and 0.108 NRMSE at the first 5% of the longest reward delivery duration.
1
0
0
0
0
0
Bayesian Model-Agnostic Meta-Learning
Learning to infer Bayesian posterior from a few-shot dataset is an important step towards robust meta-learning due to the model uncertainty inherent in the problem. In this paper, we propose a novel Bayesian model-agnostic meta-learning method. The proposed method combines scalable gradient-based meta-learning with nonparametric variational inference in a principled probabilistic framework. During fast adaptation, the method is capable of learning complex uncertainty structure beyond a point estimate or a simple Gaussian approximation. In addition, a robust Bayesian meta-update mechanism with a new meta-loss prevents overfitting during meta-update. Remaining an efficient gradient-based meta-learner, the method is also model-agnostic and simple to implement. Experiment results show the accuracy and robustness of the proposed method in various tasks: sinusoidal regression, image classification, active learning, and reinforcement learning.
0
0
0
1
0
0
Distributed matching scheme and a flexible deterministic matching algorithm for arbitrary systems
We discuss the distributed matching scheme in accelerators where control of transverse beam phase space, oscillation, and transport is accomplished by flexible distribution of focusing elements beyond dedicated matching sections. Besides freeing accelerator design from fixed matching sections, such a scheme has many operational advantages, and enables fluid optics manipulation not possible in conventional schemes. Combined with an interpolation scheme, this can bring about a new paradigm for efficient, flexible, and robust optics control. A rigorous and deterministic algorithm is developed for its realization. The algorithm is a matching tool in its own right with unique characteristics in robustness and determinism. The beam phase space dynamics is naturally integrated into the algorithm, instead of being treated as generic numerical parameters as in traditional schemes. It is applicable to a wider range of problems, such as trading off between competing options for desired machine states.
0
1
0
0
0
0
Group Metrics for Graph Products of Cyclic Groups
We complement the characterization of the graph products of cyclic groups $G(\Gamma, \mathfrak{p})$ admitting a Polish group topology of [9] with the following result. Let $G = G(\Gamma, \mathfrak{p})$, then the following are equivalent: (i) there is a metric on $\Gamma$ which induces a separable topology in which $E_{\Gamma}$ is closed; (ii) $G(\Gamma, \mathfrak{p})$ is embeddable into a Polish group; (iii) $G(\Gamma, \mathfrak{p})$ is embeddable into a non-Archimedean Polish group. We also construct left-invariant separable group ultrametrics for $G = G(\Gamma, \mathfrak{p})$ and $\Gamma$ a closed graph on the Baire space, which is of independent interest.
0
0
1
0
0
0
Robust Online Multi-Task Learning with Correlative and Personalized Structures
Multi-Task Learning (MTL) can enhance a classifier's generalization performance by learning multiple related tasks simultaneously. Conventional MTL works under the offline or batch setting, and suffers from expensive training cost and poor scalability. To address such inefficiency issues, online learning techniques have been applied to solve MTL problems. However, most existing algorithms of online MTL constrain task relatedness into a presumed structure via a single weight matrix, which is a strict restriction that does not always hold in practice. In this paper, we propose a robust online MTL framework that overcomes this restriction by decomposing the weight matrix into two components: the first one captures the low-rank common structure among tasks via a nuclear norm and the second one identifies the personalized patterns of outlier tasks via a group lasso. Theoretical analysis shows the proposed algorithm can achieve a sub-linear regret with respect to the best linear model in hindsight. Even though the above framework achieves good performance, the nuclear norm that simply adds all nonzero singular values together may not be a good low-rank approximation. To improve the results, we use a log-determinant function as a non-convex rank approximation. The gradient scheme is applied to optimize log-determinant function and can obtain a closed-form solution for this refined problem. Experimental results on a number of real-world applications verify the efficacy of our method.
1
0
0
1
0
0
Cobordism maps on PFH induced by Lefschetz fibration over higher genus base
In this note, we discuss the cobordism maps on periodic Floer homology (PFH) induced by Lefschetz fibrations. In the first part of the note, we define the cobordism maps on PFH induced by Lefschetz fibrations via Seiberg-Witten theory and the isomorphism between PFH and Seiberg-Witten cohomology. The second part defines the cobordism maps induced by Lefschetz fibrations provided that the cobordism satisfies certain conditions. Under certain monotone assumptions, we show that these two definitions are in fact equivalent.
0
0
1
0
0
0
A short note on the order of the Zhang-Liu matrices over arbitrary fields
We give necessary and sufficient conditions for the Zhang-Liu matrices to be diagonalizable over arbitrary fields and provide the eigen-decomposition when it is possible. We use this result to calculate the order of these matrices over any arbitrary field. This generalizes a result of the second author.
0
0
1
0
0
0
Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI 2017)
This is the Proceedings of the 2017 ICML Workshop on Human Interpretability in Machine Learning (WHI 2017), which was held in Sydney, Australia, August 10, 2017. Invited speakers were Tony Jebara, Pang Wei Koh, and David Sontag.
1
0
0
1
0
0
Ferromagnetic transition in a one-dimensional spin-orbit-coupled metal and its mapping to a critical point in smectic liquid crystals
We study the quantum phase transition between a paramagnetic and ferromagnetic metal in the presence of Rashba spin-orbit coupling in one dimension. Using bosonization, we analyze the transition by means of renormalization group, controlled by an $\varepsilon$-expansion around the upper critical dimension of two. We show that the presence of Rashba spin-orbit coupling allows for a new nonlinear term in the bosonized action, which generically leads to a fluctuation driven first-order transition. We further demonstrate that the Euclidean action of this system maps onto a classical smectic-A -- C phase transition in a magnetic field in two dimensions. We show that the smectic transition is second-order and is controlled by a new critical point.
0
1
0
0
0
0
Riemannian almost product manifolds generated by a circulant structure
A 4-dimensional Riemannian manifold equipped with a circulant structure, which is an isometry with respect to the metric and whose fourth power is the identity, is considered. The almost product manifold associated with the considered manifold is studied. The relation between the covariant derivatives of the almost product structure and the circulant structure is obtained. The conditions on the covariant derivative of the circulant structure which imply that the almost product manifold belongs to each of the basic classes of the Staikova-Gribachev classification are given.
0
0
1
0
0
0
Transfer Learning for Performance Modeling of Configurable Systems: An Exploratory Analysis
Modern software systems provide many configuration options which significantly influence their non-functional properties. To understand and predict the effect of configuration options, several sampling and learning strategies have been proposed, albeit often with significant cost to cover the highly dimensional configuration space. Recently, transfer learning has been applied to reduce the effort of constructing performance models by transferring knowledge about performance behavior across environments. While this line of research is promising to learn more accurate models at a lower cost, it is unclear why and when transfer learning works for performance modeling. To shed light on when it is beneficial to apply transfer learning, we conducted an empirical study on four popular software systems, varying software configurations and environmental conditions, such as hardware, workload, and software versions, to identify the key knowledge pieces that can be exploited for transfer learning. Our results show that in small environmental changes (e.g., homogeneous workload change), by applying a linear transformation to the performance model, we can understand the performance behavior of the target environment, while for severe environmental changes (e.g., drastic workload change) we can transfer only knowledge that makes sampling more efficient, e.g., by reducing the dimensionality of the configuration space.
1
0
0
1
0
0
Critique of Barbosa's "P != NP Proof"
We review André Luiz Barbosa's paper "P != NP Proof," in which the classes P and NP are generalized and claimed to be proven separate. We highlight inherent ambiguities in Barbosa's definitions, and show that attempts to resolve this ambiguity lead to flaws in the proof of his main result.
1
0
0
0
0
0
Two-Armed Bandit Problem, Data Processing, and Parallel Version of the Mirror Descent Algorithm
We consider the minimax setup for the two-armed bandit problem as applied to data processing when two alternative processing methods with different, a priori unknown, efficiencies are available. One should determine the most effective method and ensure its predominant application. To this end we use the mirror descent algorithm (MDA). It is well known that the corresponding minimax risk has order $N^{1/2}$, with $N$ being the number of processed data. We significantly improve the theoretical estimate of the factor using Monte-Carlo simulations. Then we propose a parallel version of the MDA which allows processing of data in packets over a number of stages. The usage of the parallel version of the MDA ensures that the total time of data processing depends mostly on the number of packets and not on the total number of data. Quite unexpectedly, the parallel version behaves unlike the ordinary one even if the number of packets is large. Moreover, the parallel version considerably improves control performance because it provides a significantly smaller value of the minimax risk. We explain this result by considering another parallel modification of the MDA whose behavior is close to that of the ordinary version. Our estimates are based on invariant descriptions of the algorithms. All estimates are obtained by Monte-Carlo simulations. It is worth noting that the parallel version performs well only for methods with close efficiencies. If the efficiencies differ significantly, then one should use the combined algorithm, which uses the ordinary version on an initial, sufficiently short control horizon and then switches to the parallel version of the MDA.
0
0
1
1
0
0
Bearing fault diagnosis under varying working condition based on domain adaptation
Traditional intelligent fault diagnosis of rolling bearings works well only under the common assumption that the labeled training data (source domain) and the unlabeled testing data (target domain) are drawn from the same distribution. When the distribution changes, most fault diagnosis models must be rebuilt from scratch using newly collected labeled training data. However, it is expensive or impossible to annotate a huge amount of training data to rebuild such a new model, and large amounts of existing labeled training data remain underutilized, which is an apparent waste of resources. As an important research direction of transfer learning, domain adaptation (DA) typically aims to minimize the differences between the distributions of different domains, and thereby minimize the cross-domain prediction error, by taking full advantage of the information from both the source and target domains. In this paper, we present one of the first studies of unsupervised DA for fault diagnosis of rolling bearings under varying working conditions, and we propose a novel diagnosis strategy based on unsupervised DA using subspace alignment (SA). After unsupervised DA with SA, the distributions of the training and testing data become close, so a classifier trained on the training data can be used to classify the testing data. Experimental results on 60 domain-adaptation diagnosis problems under varying working conditions in the Case Western Reserve benchmark data, and on 12 such problems in our new data, demonstrate the effectiveness of the proposed method. The proposed method can effectively distinguish not only bearing fault categories but also fault severities.
1
0
0
0
0
0
Dialectical Rough Sets, Parthood and Figures of Opposition-1
From one perspective, the main theme of this research is the inverse problem in the context of general rough sets, which concerns the existence of a rough basis for given approximations in a context. Granular operator spaces and their variants were recently introduced by the present author as an optimal framework for antichain-based algebraic semantics of general rough sets and for the inverse problem. In this framework, various subtypes of crisp and non-crisp objects are identifiable that may be missed in more restrictive formalisms. This is partly because, in the latter, concepts of complementation and negation are taken for granted, whereas in reality they have a complicated dialectical basis. This motivates a general approach to dialectical rough sets building on previous work of the present author and on figures of opposition. In this paper, dialectical rough logics are developed from a semantic perspective, a concept of dialectical predicates is formalized, connections with dialetheias and glutty negation are established, and parthood is analyzed and studied from the viewpoint of classical and dialectical figures of opposition. The methods are more geometrical and treat parthood as a primary relation (as opposed to roughly equivalent objects) in the algebraic semantics.
1
0
1
0
0
0
On quasi-hereditary algebras
In this paper we introduce an easily verifiable sufficient condition for an algebra to be quasi-hereditary. In the case of monomial algebras, we give conditions that are both necessary and sufficient for an algebra to be quasi-hereditary.
0
0
1
0
0
0
Clustering Analysis on Locally Asymptotically Self-similar Processes with Known Number of Clusters
We study the problem of clustering locally asymptotically self-similar stochastic processes when the true number of clusters is known a priori. A new covariance-based dissimilarity measure is introduced, from which so-called approximately asymptotically consistent clustering algorithms are obtained. In a simulation study, data sampled from multifractional Brownian motions are clustered to illustrate the approximate asymptotic consistency of the proposed algorithms.
0
0
0
1
0
0
The OSIRIS-REx Visible and InfraRed Spectrometer (OVIRS): Spectral Maps of the Asteroid Bennu
The OSIRIS-REx Visible and Infrared Spectrometer (OVIRS) is a point spectrometer covering the spectral range of 0.4 to 4.3 microns (25,000-2300 cm^-1). Its primary purpose is to map the surface composition of the asteroid Bennu, the target of the OSIRIS-REx asteroid sample return mission. The information it returns will help guide the selection of the sample site. It will also provide global context for the sample and high-spatial-resolution spectra that can be related to spatially unresolved terrestrial observations of asteroids. It is a compact, low-mass (17.8 kg), power-efficient (8.8 W average), and robust instrument with the sensitivity needed to detect a 5% spectral absorption feature on a very dark surface (3% reflectance) in the inner solar system (0.89-1.35 AU). In combination with the other instruments on the OSIRIS-REx mission, it will provide an unprecedented view of an asteroid's surface.
0
1
0
0
0
0
The effect of the smoothness of fractional type operators over their commutators with Lipschitz symbols on weighted spaces
We prove boundedness results for integral operators of fractional type and their higher-order commutators between weighted spaces, including $L^p$-$L^q$, $L^p$-$BMO$ and $L^p$-Lipschitz estimates. The kernels of such operators satisfy a certain size condition and a Lipschitz-type regularity, and the symbol of the commutator belongs to a Lipschitz class. We also deal with commutators of fractional-type operators with less regular kernels satisfying a Hörmander-type inequality. As far as we know, these last results are new even in the unweighted case. Moreover, we give a characterization result involving the symbols of the commutators and continuity results for extreme values of $p$.
0
0
1
0
0
0
Fast mean-reversion asymptotics for large portfolios of stochastic volatility models
We consider a large portfolio limit where the asset prices evolve according to certain stochastic volatility models with default upon hitting a lower barrier. When the asset prices and the volatilities are correlated via systemic Brownian motions, this limit exists and is described by an SPDE on the positive half-space with Dirichlet boundary conditions, which has been studied in \cite{HK17}. We study the convergence of the total mass of a solution to this stochastic initial-boundary value problem when the mean-reversion coefficients of the volatilities are multiples of a parameter that tends to infinity. When the volatilities of the volatilities are multiples of the square root of the same parameter, the convergence is extremely weak. On the other hand, when the volatilities of the volatilities are independent of this exploding parameter, the volatilities converge to their means and we obtain much better approximations. Our aim is to use such approximations to improve the accuracy of certain risk-management methods in markets where fast volatility mean-reversion is observed.
0
0
0
0
0
1