Using measurements from the PAMELA and ARINA spectrometers onboard the RESURS DK-1 satellite, we have examined the 27-day intensity variations in galactic cosmic ray (GCR) proton fluxes in 2007-2008. The PAMELA and ARINA data allow for the first time a study of the time profiles and the rigidity dependence of the 27-day variations observed directly in space over a wide rigidity range from ~300 MV to several GV. We find that the rigidity dependence of the amplitude of the 27-day GCR variations cannot be described by the same power law at both low and high energies. A flat interval occurs in the rigidity range R = 0.6-1.0 GV with a power-law index gamma = -0.13 +/- 0.44 for PAMELA, whereas for R >= 1 GV a power-law dependence is evident with index gamma = -0.51 +/- 0.11. We describe the rigidity dependence of the 27-day GCR variations for the PAMELA and ARINA data in the framework of the modulation potential concept using the force-field approximation for GCR transport. For a physical interpretation, we have considered the relationship between the 27-day GCR variations and solar wind plasma and other heliospheric parameters. Moreover, we have discussed possible implications of MHD modeling of the solar wind plasma together with a stochastic GCR transport model concerning the effects of corotating interaction regions.
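As an illustration of the quoted rigidity dependence, the power-law index gamma can be recovered from amplitude measurements by a straight-line fit in log-log space; a minimal sketch in Python, with placeholder amplitudes rather than actual PAMELA/ARINA data:

```python
import numpy as np

# Hypothetical 27-day variation amplitudes A at rigidities R (GV);
# placeholder values only, not actual PAMELA/ARINA measurements.
R = np.array([1.0, 1.5, 2.0, 3.0, 5.0])
A = np.array([1.00, 0.82, 0.71, 0.58, 0.45])

# Fit A(R) = A0 * R**gamma via linear regression in log-log space.
gamma, log_A0 = np.polyfit(np.log(R), np.log(A), 1)
print(f"power-law index gamma = {gamma:.2f}")  # ~ -0.5 for these values
```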
The G\"odel translation provides an embedding of the intuitionistic logic $\mathsf{IPC}$ into the modal logic $\mathsf{Grz}$, which then embeds into the modal logic $\mathsf{GL}$ via the splitting translation. Combined with Solovay's theorem that $\mathsf{GL}$ is the modal logic of the provability predicate of Peano Arithmetic $\mathsf{PA}$, both $\mathsf{IPC}$ and $\mathsf{Grz}$ admit arithmetical interpretations. When attempting to 'lift' these results to the monadic extensions $\mathsf{MIPC}$, $\mathsf{MGrz}$, and $\mathsf{MGL}$ of these logics, the same techniques no longer work. Following a conjecture made by Esakia, we add an appropriate version of Casari's formula to these monadic extensions (denoted by a '+'), obtaining that the G\"odel translation embeds $\mathsf{M^{+}IPC}$ into $\mathsf{M^{+}Grz}$ and the splitting translation embeds $\mathsf{M^{+}Grz}$ into $\mathsf{MGL}$. As proven by Japaridze, Solovay's result extends to the monadic system $\mathsf{MGL}$, which leads us to an arithmetical interpretation of both $\mathsf{M^{+}IPC}$ and $\mathsf{M^{+}Grz}$.
Curves of maximal slope are a reference gradient-evolution notion in metric spaces and arise as the variational formulation of a vast class of nonlinear diffusion equations. Existence theories for curves of maximal slope are often based on minimizing-movements schemes, most notably on the Euler scheme. We present here an alternative minimizing-movements approach, yielding more regular discretizations, serving as an a-posteriori convergence estimator, and allowing for a simple convergence proof.
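For reference, the Euler minimizing-movements scheme mentioned above is the implicit discretization that, for a time step $\tau > 0$ and an energy $\phi$ on a metric space $(X, d)$, iterates

$$ x_{k+1} \in \operatorname*{arg\,min}_{x \in X} \left\{ \frac{1}{2\tau}\, d(x, x_k)^2 + \phi(x) \right\}, $$

and whose piecewise interpolants converge, under suitable assumptions, to a curve of maximal slope; the paper's alternative scheme modifies this variational step to gain regularity and a-posteriori error control.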
We propose a formalism to model and reason about reconfigurable multi-agent systems. In our formalism, agents interact and communicate in different modes so that they can pursue joint tasks; agents may dynamically synchronize, exchange data, adapt their behaviour, and reconfigure their communication interfaces. Inspired by existing multi-robot systems, we represent a system as a set of agents (each with a local state) that execute independently and influence each other only by means of message exchange. Agents are able to sense their local states and, partially, their surroundings. We extend LTL to reason explicitly about the intentions of agents in the interaction and about their communication protocols. We also study the complexity of satisfiability and model checking for this extension.
AM CVn systems are a rare type of accreting binary consisting of a white dwarf and a helium-rich, degenerate donor star. Using the Zwicky Transient Facility (ZTF), we searched for new AM CVn systems by focusing on blue, outbursting stars. We first selected outbursting stars using the ZTF alerts and cross-matched the candidates with the $Gaia$ and Pan-STARRS catalogs. The initial selection of candidates based on the $Gaia$ $BP$-$RP$ color contains 1751 unknown objects. We used the Pan-STARRS $g$-$r$ and $r$-$i$ colors in combination with the $Gaia$ color to identify 59 high-priority candidates. We obtained identification spectra of 35 sources, of which 18 are high-priority candidates, and discovered 9 new AM CVn systems and one magnetic CV which shows only He-II lines. Using the outburst recurrence time, we estimate the orbital periods, which are in the range of 29 to 50 minutes. We conclude that targeted follow-up of blue, outbursting sources is an efficient method to find new AM CVn systems, and we plan to follow up all candidates we identified to systematically study the population of outbursting AM CVn systems.
For better user satisfaction and business effectiveness, increasing attention has been paid to sequence-based recommendation systems, which infer the evolution of users' dynamic preferences; recent studies have noticed that this evolution can be better understood from both the implicit and explicit feedback sequences. However, most existing recommendation techniques do not consider the noise contained in implicit feedback, which leads to biased representations of user interest and suboptimal recommendation performance. Meanwhile, existing methods utilize the item sequence to capture the evolution of user interest; their performance is limited by the length of the sequence, and they cannot effectively model long-term interest over a long period of time. Based on these observations, we propose a novel CTR model named denoising user-aware memory network (DUMN). Specifically, the framework: (i) proposes a feature purification module based on orthogonal mapping, which uses the representation of explicit feedback to purify the representation of implicit feedback and effectively denoise it; (ii) designs a user memory network to model long-term interests in a fine-grained way by improving the memory network, which is ignored by existing methods; and (iii) develops a preference-aware interactive representation component to fuse the long-term and short-term interests of users based on gating, to understand the evolution of unbiased user preferences. Extensive experiments on two real e-commerce user behavior datasets show that DUMN achieves a significant improvement over the state-of-the-art baselines. The code of the DUMN model has been uploaded as an additional material.
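One plausible reading of the orthogonal-mapping purification in (i) is to decompose the implicit-feedback representation against the explicit-feedback one and keep only the aligned component; a minimal PyTorch sketch (illustrative, not the authors' exact formulation):

```python
import torch

def purify(implicit: torch.Tensor, explicit: torch.Tensor) -> torch.Tensor:
    """Orthogonal-mapping purification (sketch): split the implicit-feedback
    representation into components parallel and orthogonal to the explicit
    one, keeping the parallel part as the denoised interest signal.
    Both inputs have shape (batch, dim)."""
    e = explicit / (explicit.norm(dim=-1, keepdim=True) + 1e-8)  # unit direction
    parallel = (implicit * e).sum(dim=-1, keepdim=True) * e      # projection onto explicit
    return parallel  # the residual `implicit - parallel` is treated as noise
```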
The spin Hall effect (SHE) and the magnetic spin Hall effect (MSHE) are responsible for electrical spin current generation, which is a key concept in modern spintronics. We theoretically investigated the spin conductivity induced by spin-dependent s-d scattering in a ferromagnetic 3d alloy model by employing microscopic transport theory based on the Kubo formula. We derived a novel extrinsic mechanism that contributes to both the SHE and MSHE. This mechanism can be understood as the contribution from anisotropic (spatially dependent) spin-flip scattering, arising from the combination of the orbital-dependent anisotropic shape of the s-d hybridization and spin flipping, together with the orbital shift caused by spin-orbit interaction within the d-orbitals. We also show that this mechanism remains valid under crystal-field splitting among the d-orbitals in either cubic or tetragonal symmetry.
The nonleptonic decays $\Lambda_{b}\rightarrow\Sigma_{c}^{*-}\pi^{+},\Xi_{c}^{*0}K^{0}$ and $\Lambda_{b}\rightarrow\Delta^{0}D^{0},\Sigma^{*-}D_{0}^{+}$ are studied. In addition, the decays $\Lambda_{b}\rightarrow\Xi_{c}^{0}K^{0},\Sigma^{-}D_{s}^{+}$ are analyzed. For all these decays the dominant contribution comes from $W$-exchange, and for the decay $\Lambda_{b}\rightarrow\Lambda_{c}^{+}\pi^{-}$, in addition to factorization, the baryon pole contribution to the $p$-wave (parity-conserving) decay amplitude $B$ is discussed.
The universal fractality of river networks is well known; however, an understanding of the underlying mechanisms in terms of stochastic processes is still lacking. By introducing a dynamically changing probability, we describe the fractal nature of river networks stochastically. The dynamical probability depends on the drainage area at a site, a key dynamical quantity of the system, while the river network develops according to this probability. This induces dynamical persistency in river flows, resulting in the self-affine properties observed in real river basins, even though the process is Markovian with short-term memory.
Recovering 3D human pose from 2D joints is still a challenging problem, especially without any 3D annotation, video information, or multi-view information. In this paper, we present an unsupervised GAN-based model consisting of multiple weight-sharing generators to estimate a 3D human pose from a single image without 3D annotations. In our model, we introduce single-view-multi-angle consistency (SVMAC) to significantly improve the estimation performance. With 2D joint locations as input, our model estimates a 3D pose and a camera simultaneously. During training, the estimated 3D pose is rotated by random angles and the estimated camera projects the rotated 3D poses back to 2D. The 2D reprojections will be fed into weight-sharing generators to estimate the corresponding 3D poses and cameras, which are then mixed to impose SVMAC constraints to self-supervise the training process. The experimental results show that our method outperforms the state-of-the-art unsupervised methods by 2.6% on Human 3.6M and 15.0% on MPI-INF-3DHP. Moreover, qualitative results on MPII and LSP show that our method can generalize well to unknown data.
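The rotate-reproject-re-estimate loop behind SVMAC can be sketched as follows; `gen` and `cam_project` are hypothetical stand-ins (a weight-sharing generator returning a 3D pose plus camera parameters, and a projection using those parameters), as a simplified reading of the abstract rather than the authors' code:

```python
import torch
import torch.nn.functional as F

def svmac_loss(gen, cam_project, joints_2d):
    """One SVMAC self-supervision cycle (simplified reading of the abstract).
    `gen` maps 2D joints (B, J, 2) to a 3D pose (B, J, 3) plus camera
    parameters; `cam_project` projects a 3D pose to 2D with those parameters."""
    pose3d, cam = gen(joints_2d)
    theta = torch.rand(pose3d.shape[0]) * 2 * torch.pi       # random azimuth per sample
    c, s = torch.cos(theta), torch.sin(theta)
    zero, one = torch.zeros_like(c), torch.ones_like(c)
    rot = torch.stack([
        torch.stack([c, zero, s], dim=-1),
        torch.stack([zero, one, zero], dim=-1),
        torch.stack([-s, zero, c], dim=-1),
    ], dim=1)                                                # (B, 3, 3), rotation about y
    rotated = pose3d @ rot.transpose(1, 2)                   # rotate each joint
    reproj_2d = cam_project(rotated, cam)                    # project back to 2D
    pose3d_again, _ = gen(reproj_2d)                         # weight-sharing re-estimate
    # consistency: the re-estimated pose should match the rotated pose
    return F.mse_loss(pose3d_again, rotated)
```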
Congestion caused in the electrical network by renewable generation can be effectively managed by integrating the electric and thermal infrastructures, the latter being represented by large-scale District Heating (DH) networks, often fed by large combined heat and power (CHP) plants. The CHP plants could further improve the profit margin of district heating multi-utilities by selling electricity in the power market through adjusting the ratio between generated heat and power. The latter is possible only for certain CHP plants, which allow decoupling the generation of the two commodities, namely those governed by two independent variables (degrees of freedom), or by integrating them with thermal energy storage and Power-to-Heat (P2H) units. CHP units can, therefore, help in the congestion management of the electricity network. A detailed mixed-integer linear programming (MILP) optimization model is introduced for solving the network-constrained unit commitment of integrated electric and thermal infrastructures. The developed model contains a detailed characterization of the useful effects of CHP units, i.e., heat and power, as a function of one or two independent variables. A lossless DC flow approximation models the electricity transmission network. The district heating model includes the use of gas boilers, electric boilers, and thermal energy storage. The conducted studies on the IEEE 24-bus system highlight the importance of a comprehensive analysis of multi-energy systems to harness the flexibility derived from the joint operation of the electric and heat sectors and to manage congestion in the electrical network.
We establish weighted inequalities for $BMO$ commutators of sublinear operators for all $0<p<\infty$. For weights $w$ satisfying the doubling condition of order $q$ with $0<q<p$ and the reverse H\"{o}lder condition, we prove that: $\bullet$ commutators $T_b$ that are bounded on $L^p$ with $1<p<\infty$ are bounded from some subspaces of $L^p_w$ to $L^p_w$ and to themselves for all $0<p<\infty$; these results are applied to the commutators of singular integral operators and of the Hardy-Littlewood maximal operator, etc., which are known to fail to be bounded from $H^1$ to $L^1$ and whose estimates have been open problems for sufficiently small $p$; $\bullet$ commutators $T_b$ whose associated operators $T$ are bounded on $L^p$ with $1<p<\infty$ are bounded from some subspaces of $L^p_w$ to $L^p_w$ and from some subspaces of $L^p_w$ to others for all $0<p<\infty$; these results are applied to the commutators of maximal operators such as singular integral maximal operators, the Carleson operator and the polynomial Carleson operator, etc., for which the estimates have been open problems for every $0<p<\infty$; $\bullet$ in particular, these results imply that the commutators above are bounded from $H^p_w$ to $L^p_w$ and to themselves for all $0<p\leq 1$.
Let $G$ be a connected reductive group over a non-archimedean local field $F$ and $I$ be an Iwahori subgroup of $G(F)$. Let $I_n$ be the $n$-th Moy-Prasad filtration subgroup of $I$. The purpose of this paper is two-fold: to give some nice presentations of the Hecke algebra of connected, reductive groups with $I_n$-level structure; and to introduce the Tits group of the Iwahori-Weyl group of groups $G$ that split over an unramified extension of $F$. The first main result of this paper is a presentation of the Hecke algebra $\mathcal H(G(F),I_n)$, generalizing the previous work of Iwahori-Matsumoto on the affine Hecke algebras. For split $GL_n$, Howe gave a refined presentation of the Hecke algebra $\mathcal H(G(F),I_n)$. To generalize such a refined presentation to other groups requires the existence of some nice lifting of the Iwahori-Weyl group $W$ to $G(F)$. The study of a certain nice lifting of $W$ is the second main motivation of this paper, which we discuss below. In 1966, Tits introduced a certain subgroup of $G(\mathbf k)$, which is an extension of $W$ by an elementary abelian $2$-group. This group is called the Tits group and provides a nice lifting of the elements in the finite Weyl group. The "Tits group" $\mathcal T$ for the Iwahori-Weyl group $W$ is a certain subgroup of $G(F)$, which is an extension of the Iwahori-Weyl group $W$ by an elementary abelian $2$-group. The second main result of this paper is a construction of the Tits group $\mathcal T$ for $W$ when $G$ splits over an unramified extension of $F$. As a consequence, we generalize Howe's presentation to such groups. We also show that when $G$ is ramified over $F$, such a group $\mathcal T$ of $W$ may not exist.
In a hard-switched MOSFET-based converter, the turn-on energy loss is predominant in the total switching loss. At higher junction temperatures the turn-on energy loss increases further due to the reverse recovery effect of the complementary MOSFET's body diode in a half-bridge configuration. Estimating the switching loss under different operating conditions at an early design stage is essential for optimising the thermal design. Analytical switching loss models available in the literature are generally used for estimating the switching losses, owing to their accuracy and simplicity. In this paper, the inaccuracy in the reported loss models due to the non-inclusion of the temperature-dependent reverse recovery characteristics of the body diode is investigated. A structured method to determine the temperature-dependent switching loss of a SiC MOSFET in a half-bridge is presented. A simple methodology is proposed to analyze the temperature dependence of the carrier lifetime of a SiC MOSFET's body diode. Device parameters from a 1.2 kV/36 A SiC MOSFET datasheet are used for developing the loss model and for experimental validation of the model.
This study aims to optimize the training of Deep Feedforward Neural Networks (DFNNs) using nature-inspired optimization algorithms, such as PSO, MTO, and the MTO variant MTOCL. We show how these algorithms efficiently update the weights of DFNNs when learning from data. We evaluate the performance of DFNNs fused with the optimization algorithms using three Wisconsin breast cancer datasets, Original, Diagnostic, and Prognostic, under different experimental scenarios. The empirical analysis demonstrates that MTOCL performs best in most scenarios across the three datasets. MTOCL is also comparable to past weight optimization algorithms for the Original dataset, and superior for the other datasets, especially for the challenging Prognostic dataset.
In this paper, an implicit nonsymplectic exact energy-preserving integrator is specifically designed for a ten-dimensional phase-space conservative Hamiltonian system with five degrees of freedom. It is based on a suitable discretization-averaging of the Hamiltonian gradient and provides second-order accuracy in the numerical solutions. A one-dimensional disordered discrete nonlinear Schr\"{o}dinger equation and a post-Newtonian Hamiltonian system of spinning compact binaries are taken as our two examples. We demonstrate numerically that the proposed algorithm exhibits good long-term performance in the preservation of energy, if roundoff errors are neglected. This result is independent of time steps, initial orbital eccentricities, and regular or chaotic orbital dynamical behavior. In particular, applying appropriately large time steps to the new algorithm helps to reduce the computational cost and roundoff errors. This new method, combined with fast Lyapunov indicators, is well suited to studying chaos in the two example problems. It is found that one of the parameters is mainly responsible for chaos in the former system. In the latter problem, a combination of small initial separations and high initial eccentricities can easily induce chaos.
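A standard integrator in this family, consistent with the "discretization-averaging of the Hamiltonian gradient" described above, is the average-vector-field (discrete-gradient) method, shown here for orientation only (the paper's exact scheme may differ): for $\dot z = J\nabla H(z)$ with skew-symmetric $J$,

$$ z_{n+1} = z_n + h\, J \int_0^1 \nabla H\big((1-\xi)\, z_n + \xi\, z_{n+1}\big)\, \mathrm{d}\xi, $$

which is second-order accurate and conserves $H$ exactly: $H(z_{n+1}) - H(z_n)$ equals the averaged gradient dotted with $z_{n+1} - z_n = hJ\!\int_0^1 \nabla H$, and $v^{\top} J v = 0$ for skew-symmetric $J$.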
Dirac spin liquids represent a class of highly-entangled quantum phases in two dimensional Mott insulators, featuring exotic properties such as critical correlation functions and absence of well-defined low energy quasi-particles. Existing numerical works suggest that the spin-orbital $SU(4)$ symmetric Kugel-Khomskii model of Mott insulators on the honeycomb lattice realizes a Dirac spin-orbital liquid, described at low energy by $(2+1)d$ quantum electrodynamics (QED$_3$) with $N_f=8$ Dirac fermions. We generalize methods previously developed for $SU(2)$ spin systems to analyze the symmetry properties and stability of the Dirac spin-orbital liquid. We conclude that the standard Dirac state in the $SU(4)$ honeycomb system, based on a simple parton mean-field ansatz, is intrinsically unstable at low energy due to the existence of a monopole perturbation that is allowed by physical symmetries and relevant under renormalization group flow. We propose two plausible alternative scenarios compatible with existing numerics. In the first scenario, we propose an alternative $U(1)$ Dirac spin-orbital liquid, which is similar to the standard one except for its monopole symmetry quantum numbers. This alternative $U(1)$ state represents a stable gapless phase. In the second scenario, we start from the standard $U(1)$ Dirac liquid and Higgs the $U(1)$ gauge symmetry down to $\mathbb{Z}_4$. The resulting $\mathbb{Z}_4$ Dirac spin-orbital liquid is stable. We also discuss the continuous quantum phase transitions from the $\mathbb{Z}_4$ Dirac liquids to conventional symmetry-breaking orders, described by the QED$_3$ theory with $N_f=8$ supplemented with a critical charge-$4$ Higgs field. We discuss possible ways to distinguish these scenarios in numerics. We also extend previous calculations of the quantum anomalies of QED$_3$ and match with generalized lattice Lieb-Schultz-Mattis constraints.
Recent work has shown that the performance of machine learning models can vary substantially when models are evaluated on data drawn from a distribution that is close to but different from the training distribution. As a result, predicting model performance on unseen distributions is an important challenge. Our work connects techniques from the domain adaptation and predictive uncertainty literatures, and allows us to predict model accuracy on challenging unseen distributions without access to labeled data. In the context of distribution shift, distributional distances are often used to adapt models and improve their performance on new domains; however, accuracy estimation, or other forms of predictive uncertainty, is often neglected in these investigations. Investigating a wide range of established distributional distances, such as the Fr\'echet distance or Maximum Mean Discrepancy, we determine that they fail to induce reliable estimates of performance under distribution shift. On the other hand, we find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts. We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference. DoC reduces predictive error by almost half ($46\%$) on several realistic and challenging distribution shifts, e.g., on the ImageNet-Vid-Robust and ImageNet-Rendition datasets.
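The DoC statistic itself is simple to compute from softmax outputs; a minimal sketch (the paper additionally explores learned regressions on top of DoC):

```python
import numpy as np

def difference_of_confidences(probs_source: np.ndarray, probs_target: np.ndarray) -> float:
    """DoC (sketch): drop in mean max-softmax confidence between the source
    (training-like) distribution and the shifted target distribution.
    probs_*: (n_samples, n_classes) softmax outputs of the classifier."""
    return probs_source.max(axis=1).mean() - probs_target.max(axis=1).mean()

# Using DoC as a direct estimate of the accuracy drop:
# acc_target_estimate = acc_source - difference_of_confidences(p_src, p_tgt)
```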
This paper presents the first adversarial-example-based method for attacking human instance segmentation networks (person segmentation networks for short), which are harder to fool than classification networks. We propose a novel Fashion-Guided Adversarial Attack (FashionAdv) framework to automatically identify attackable regions in the target image so as to minimize the effect on image quality. It generates adversarial textures learned from fashion style images and then overlays them on the clothing regions in the original image to make all persons in the image invisible to person segmentation networks. The synthesized adversarial textures are inconspicuous and appear natural to the human eye. The effectiveness of the proposed method is enhanced by robustness training and by jointly attacking multiple components of the target network. Extensive experiments demonstrate the effectiveness of FashionAdv in terms of robustness to image manipulations and storage in cyberspace, as well as appearing natural to the human eye. The code and data are publicly released on our project page: https://github.com/nii-yamagishilab/fashion_adv
This paper introduces Timers and Such, a new open source dataset of spoken English commands for common voice control use cases involving numbers. We describe the gap in existing spoken language understanding datasets that Timers and Such fills, the design and creation of the dataset, and experiments with a number of ASR-based and end-to-end baseline models, the code for which has been made available as part of the SpeechBrain toolkit.
An important physical phenomenon that manifests itself during the inspiral of two orbiting compact objects is the tidal deformation of each under the gravitational influence of its companion. In the case of binary neutron star mergers, this tidal deformation and the associated Love numbers have been used to probe properties of dense matter and the nuclear equation of state. Non-spinning black holes on the other hand have a vanishing (field) tidal Love number in General Relativity. This pertains to the deformation of the asymptotic gravitational field. In certain cases, especially in the late stages of the inspiral phase when the black holes get close to each other, the source multipole moments might be more relevant in probing their properties and the No-Hair theorem; contrastingly, these Love numbers do not vanish. In this paper, we track the source multipole moments in simulations of several binary black hole mergers and calculate these Love numbers. We present evidence that, at least for modest mass ratios, the behavior of the source multipole moments is universal.
We obtain a polynomial upper bound in the finite-field version of the multidimensional polynomial Szemer\'{e}di theorem for distinct-degree polynomials. That is, if $P_1, ..., P_t$ are nonconstant integer polynomials of distinct degrees and $v_1, ..., v_t$ are nonzero vectors in $\mathbb{F}_p^D$, we show that each subset of $\mathbb{F}_p^D$ lacking a nontrivial configuration of the form $$ x, x + v_1 P_1(y), ..., x + v_t P_t(y)$$ has at most $O(p^{D-c})$ elements. In doing so, we apply the notion of Gowers norms along a vector adapted from ergodic theory, which extends the classical concept of Gowers norms on finite abelian groups.
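For orientation, one natural way to formalize a Gowers norm "along a vector" $v \in \mathbb{F}_p^D$ is to restrict the differencing shifts of the classical $U^s$ norm to the line spanned by $v$ (a plausible rendering; the paper's precise definition may differ):

$$ \|f\|_{U^s(v)}^{2^s} = \mathbb{E}_{x \in \mathbb{F}_p^D}\ \mathbb{E}_{a_1,\dots,a_s \in \mathbb{F}_p} \prod_{\omega \in \{0,1\}^s} \mathcal{C}^{|\omega|} f\big(x + (\omega_1 a_1 + \cdots + \omega_s a_s)\, v\big), $$

where $\mathcal{C}$ denotes complex conjugation and $|\omega| = \omega_1 + \cdots + \omega_s$.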
Noncollinear chiral spin textures in ferromagnetic multilayers are at the forefront of recent research in nanomagnetism, with the promise of fast and energy-efficient devices. The recently demonstrated possibility to stabilize such chiral structures in synthetic antiferromagnets (SAFs) has raised interest because these systems are immune to dipolar fields, hence favoring the stabilization of ultra-small textures, improving mobility, and avoiding the transverse deflection of moving skyrmions that limits the efficiency in some foreseen applications. However, such systems with zero net magnetization are difficult to characterize by most of the standard techniques. Here, we report that the relevant parameters of a magnetic SAF texture, namely its period, its type (N\'eel or Bloch) and its chirality (clockwise or counterclockwise), can be directly determined using circular dichroism in x-ray resonant magnetic scattering (CD-XRMS) at half-integer multilayer Bragg peaks in reciprocal space. The analysis of the temperature dependence down to 40 K moreover allows us to address the question of the temperature stability of a spin spiral in a SAF sample and of the temperature scaling of the symmetric and antisymmetric exchange interactions.
This paper presents our system submitted to SemEval 2021 Task 4: Reading Comprehension of Abstract Meaning. Our system uses a large pre-trained language model as the encoder and an additional dual multi-head co-attention layer to strengthen the relationship between passages and question-answer pairs, following the current state-of-the-art model DUMA. The main difference is that we stack the passage-question and question-passage attention modules instead of computing them in parallel, to simulate a re-consideration process. We also add a layer normalization module to improve the performance of our model. Furthermore, to incorporate knowledge about abstract concepts, we retrieve the definitions of candidate answers from WordNet and feed them to the model as extra inputs. Our system, called WordNet-enhanced DUal Multi-head Co-Attention (WN-DUMA), achieves 86.67% and 89.99% accuracy on the official blind test sets of subtask 1 and subtask 2, respectively.
Information diffusion on networks is an important concept in network science, observed in many situations such as information spreading and rumor control in social networks, disease contagion between individuals, and cascading failures in power grids. The critical interactions in a network are the ones that play critical roles in information diffusion and primarily affect the network's structure and function. Moreover, interactions can occur not only between two nodes, as pairwise interactions (i.e., edges), but also among three or more nodes, described as higher-order interactions. This report presents a novel method to identify critical higher-order interactions. We propose two new Laplacians that allow classical graph centrality measures to be redefined for higher-order interactions. We then compare the redefined centrality measures using the Susceptible-Infected-Recovered (SIR) simulation model. Experimental results suggest that the proposed method is promising for identifying critical higher-order interactions.
Both conceptual modeling and machine learning have long been recognized as important areas of research. With the increasing emphasis on digitizing and processing large amounts of data for business and other applications, it would be helpful to consider how these areas of research can complement each other. To understand how they can be paired, we provide an overview of machine learning foundations and the machine learning development cycle. We then examine how conceptual modeling can be applied to machine learning and propose a framework for incorporating conceptual modeling into data science projects. The framework is illustrated by applying it to a healthcare application. For the inverse pairing, machine learning can impact conceptual modeling through text and rule mining, as well as knowledge graphs. Pairing conceptual modeling and machine learning in this way should help lay the foundations for future research.
We present a bijection between the set of standard Young tableaux of staircase minus rectangle shape, and the set of marked shifted standard Young tableaux of a certain shifted shape. Numerically, this result is due to DeWitt (2012). Combined with other known bijections this gives a bijective proof of the product formula for the number of standard Young tableaux of staircase minus rectangle shape. This resolves an open problem by Morales, Pak and Panova (2019), and allows for efficient random sampling. Other applications include a bijection for semistandard Young tableaux, and a bijective proof of Stembridge's symmetry of LR-coefficients of the staircase shape. We also extend these results to set-valued standard Young tableaux in the combinatorics of K-theory, leading to new proofs of results by Lewis and Marberg (2019) and Abney-McPeek, An and Ng (2020).
Smart speakers and voice-based virtual assistants are core components for the success of the IoT paradigm. Unfortunately, they are vulnerable to various privacy threats exploiting machine learning to analyze the generated encrypted traffic. To cope with that, deep adversarial learning approaches can be used to build black-box countermeasures altering the network traffic (e.g., via packet padding) and its statistical information. This letter showcases the inadequacy of such countermeasures against machine learning attacks with a dedicated experimental campaign on a real network dataset. Results indicate the need for a major re-engineering to guarantee the suitable protection of commercially available smart speakers.
We consider a system consisting of a sequential composition of Mealy machines, called head and tail. We study two problems related to these systems. In the first problem, models of both head and tail components are available, and the aim is to obtain a replacement for the tail with the minimum number of states. We introduce a minimization method for this context which yields an exponential improvement over the state of the art. In the second problem, only the head is known, and a desired model for the whole system is given. The objective is to construct a tail that causes the system to behave according to the given model. We show that, while it is possible to decide in polynomial time whether such a tail exists, there are instances where its size is exponential in the sizes of the head and the desired system. This shows that the complexity of the synthesis procedure is at least exponential, matching the upper bound in complexity provided by the existing methods for solving unknown component equations.
Partial differential equation-based numerical solution frameworks for initial and boundary value problems have attained a high degree of complexity. Applied to a wide range of physics with the ultimate goal of enabling engineering solutions, these approaches encompass a spectrum of spatiotemporal discretization techniques that leverage solver technology and high performance computing. While high-fidelity solutions can be achieved using these approaches, they come at a high computational expense and complexity. Systems with billions of solution unknowns are now routine. The expense and complexity do not lend themselves to typical engineering design and decision-making, which must instead rely on reduced-order models. Here we present an approach to reduced-order modelling that builds on recent graph-theoretic work for representation, exploration, and analysis of computed states of physical systems (Banerjee et al., Comp. Meth. App. Mech. Eng., 351, 501-530, 2019). We extend a non-local calculus on finite weighted graphs to build such models by exploiting first-order dynamics, polynomial expansions, and Taylor series. Some aspects of the non-local calculus related to the consistency of the models are explored. Details on the numerical implementations and the software library that has been developed for non-local calculus on graphs are described. Finally, we present examples of applications to various quantities of interest in mechano-chemical systems.
Federated learning (FL) enables multiple clients to jointly train a global model under the coordination of a central server. Although FL is a privacy-aware paradigm, where raw data sharing is not required, recent studies have shown that FL might leak the private data of a client through the model parameters shared with the server or the other clients. In this paper, we present the HyFed framework, which enhances the privacy of FL while preserving the utility of the global model. HyFed provides developers with a generic API to develop federated, privacy-preserving algorithms. HyFed supports both simulation and federated operation modes and its source code is publicly available at https://github.com/tum-aimed/hyfed.
Lattice-based cryptography relies on generating random bases which are difficult to fully reduce. Given a lattice basis (such as the private basis for a cryptosystem), all other bases are related by multiplication by matrices in $GL(n,\mathbb{Z})$. We compare the strengths of various methods to sample random elements of $GL(n,\mathbb{Z})$, finding some are stronger than others with respect to the problem of recognizing rotations of the $\mathbb{Z}^n$ lattice. In particular, the standard algorithm of multiplying unipotent generators together (as implemented in Magma's RandomSLnZ command) generates instances of this last problem which can be efficiently broken, even in dimensions nearing 1,500. Likewise, we find that the random basis generation method in one of the NIST Post-Quantum Cryptography competition submissions (DRS) generates instances which can be efficiently broken, even at its 256-bit security settings. Other random basis generation algorithms (some older, some newer) are described which appear to be much stronger.
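The "weak" sampling scheme referenced above, multiplying random unipotent (elementary) generators, is easy to reproduce; a minimal sketch (the `steps` parameter is illustrative):

```python
import numpy as np

def random_slnz(n: int, steps: int = 50, rng=None) -> np.ndarray:
    """Sample an element of SL(n, Z) by multiplying random unipotent
    (elementary) generators E_{ij}(+/-1) -- the scheme the abstract reports
    as breakable (cf. Magma's RandomSLnZ). Illustrative sketch only."""
    if rng is None:
        rng = np.random.default_rng()
    M = np.eye(n, dtype=np.int64).astype(object)  # Python ints avoid overflow
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        E = np.eye(n, dtype=np.int64).astype(object)
        E[i, j] = int(rng.choice([-1, 1]))
        M = M.dot(E)                              # multiply in a random generator
    return M
```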
New copulas, based on perturbation theory, are introduced to clarify a \emph{symmetrization} procedure for asymmetric copulas. We also give some properties of the \emph{symmetrized} copula. Finally, we examine families of copulas with a prescribed symmetrized one. Along the way, we study the topology of the set of all symmetric copulas and give some of its classical and new properties.
Recently, a new choice of variables was identified to understand how the quantum group structure appeared in three-dimensional gravity [1]. These variables are introduced via a canonical transformation generated by a boundary term. We show that this boundary term can actually be taken to be the volume of the boundary and that the new variables can be defined in any dimension greater than three. In addition, we study the associated metric and teleparallel formalisms. The former is a variant of the Henneaux--Teitelboim model for unimodular gravity. The latter provides a non-abelian generalization of the usual abelian teleparallel formulation.
Accurate hand joint detection from images is a fundamental task, essential for many applications in computer vision and human-computer interaction. This paper presents a two-stage network for hand joint detection from a single unmarked image using serial-parallel multi-scale feature fusion. In stage I, the hand regions are located by a pre-trained network, and the features of each detected hand region are extracted by a shallow spatial hand-feature representation module. The extracted hand features are then fed into stage II, which consists of serially connected feature extraction modules with similar structures, called "multi-scale feature fusion" (MSFF). An MSFF contains parallel multi-scale feature extraction branches, which generate initial hand joint heatmaps. The initial heatmaps are then mutually reinforced by the anatomical relationship between hand joints. Experimental results on five hand joint datasets show that the proposed network outperforms the state-of-the-art methods.
Mass and radius measurements of stars are important inputs for models of stellar structure. Binary stars are of particular interest in this regard, because astrometry and spectroscopy of a binary together provide the masses of both stars as well as the distance to the system, while interferometry can both improve the astrometry and measure the radii of the stars. In this work we simulate parameter recovery from intensity interferometry, especially the challenge of disentangling the radii of two stars from their combined interferometric signal. Two approaches are considered: separation of the visibility contributions of each star with the help of differing brightness ratios at different wavelengths, and direct fitting of the intensity correlation to a multi-parameter model. Full image reconstruction is not attempted. Measurement of angular radii, angular separation and first-order limb-darkening appears readily achievable for bright binary stars with current instrumentation.
The Gordian graph and H(2)-Gordian graphs of knots are abstract graphs whose vertex sets represent isotopy classes of unoriented knots, and whose edge sets record whether pairs of knots are related by crossing changes or H(2)-moves, respectively. We investigate quotients of these graphs under equivalence relations defined by several knot invariants including the determinant, the span of the Jones polynomial, and an invariant related to tricolorability. We show, in all cases considered, that the quotient graphs are Gromov hyperbolic. We then prove a collection of results about the graph isomorphism type of the quotient graphs. In particular, we find that the H(2)-Gordian graph of links modulo the relation induced by the span of the Jones polynomial is isomorphic with the complete graph on infinitely many vertices.
We consider the problem of sparse nonnegative matrix factorization (NMF) with archetypal regularization. The goal is to represent a collection of data points as nonnegative linear combinations of a few nonnegative sparse factors with appealing geometric properties, arising from the use of archetypal regularization. We generalize the notion of robustness studied in Javadi and Montanari (2019) (without sparsity) to the notions of (a) strong robustness, which implies that each estimated archetype is close to the underlying archetypes, and (b) weak robustness, which implies that there exists at least one recovered archetype that is close to the underlying archetypes. Our theoretical results on robustness guarantees hold under minimal assumptions on the underlying data and apply to settings where the underlying archetypes need not be sparse. We propose new algorithms for our optimization problem and present numerical experiments on synthetic and real datasets that shed further insight into our proposed framework and theoretical developments.
We recently showed that the DFT+U approach with a linear-response U yields adiabatic energy differences biased towards high spin [Mariano et al., J. Chem. Theory Comput. 2020, 16, 6755-6762]. Such bias is removed here by employing a density-corrected DFT approach in which the PBE functional is evaluated on the Hubbard U-corrected density. The adiabatic energy differences of six Fe(II) molecular complexes computed using this approach, named here PBE[U], are in excellent agreement with coupled-cluster-corrected CASPT2 values for both weak- and strong-field ligands, resulting in a mean absolute error (MAE) of 0.44 eV, smaller than the recently proposed Hartree-Fock density-corrected DFT (1.22 eV) and any other tested functional, including the best performer TPSSh (0.49 eV). We take advantage of the computational efficiency of this approach and compute the adiabatic energy differences of five molecular crystals using PBE[U] with periodic boundary conditions. The results again show excellent agreement (MAE = 0.07 eV) with experimentally extracted values and superior performance compared with the best performers TPSSh (MAE = 0.08 eV) and M06-L (MAE = 0.31 eV) computed on molecular fragments.
The scale of deep learning nowadays calls for efficient distributed training algorithms. Decentralized momentum SGD (DmSGD), in which each node averages only with its neighbors, is more communication-efficient than vanilla Parallel momentum SGD, which incurs a global average across all computing nodes. On the other hand, large-batch training has been demonstrated to be critical for achieving runtime speedup. This motivates us to investigate how DmSGD performs in the large-batch scenario. In this work, we find that the momentum term can amplify the inconsistency bias in DmSGD. Such bias becomes more evident as the batch size grows large and hence results in severe performance degradation. We next propose DecentLaM, a novel decentralized large-batch momentum SGD that removes the momentum-incurred bias. The convergence rate for both the non-convex and strongly-convex scenarios is established. Our theoretical results justify the superiority of DecentLaM over DmSGD especially in the large-batch scenario. Experimental results on a variety of computer vision tasks and models demonstrate that DecentLaM promises both efficient and high-quality training.
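For context, a plain DmSGD step, the baseline whose inconsistency bias DecentLaM removes, can be sketched as follows (one common formulation; DecentLaM's bias correction is intentionally not reproduced here):

```python
import numpy as np

def dmsgd_step(params, momenta, grads, W, lr=0.1, beta=0.9):
    """One decentralized momentum SGD (DmSGD) step: each node applies local
    heavy-ball momentum, then averages only with its neighbors via a
    doubly-stochastic mixing matrix W. Arrays are (n_nodes, dim)."""
    momenta = beta * momenta + grads          # local momentum accumulation
    params = W @ (params - lr * momenta)      # neighbor-weighted averaging
    return params, momenta
```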
Recent work on unsupervised question answering has shown that models can be trained with procedurally generated question-answer pairs and can achieve performance competitive with supervised methods. In this work, we consider the task of unsupervised reading comprehension and present a method that performs "test-time learning" (TTL) on a given context (text passage), without requiring training on large-scale human-authored datasets containing \textit{context-question-answer} triplets. This method operates directly on a single test context, uses self-supervision to train models on synthetically generated question-answer pairs, and then infers answers to unseen human-authored questions for this context. Our method achieves accuracies competitive with fully supervised methods and significantly outperforms current unsupervised methods. TTL methods with a smaller model are also competitive with the current state-of-the-art in unsupervised reading comprehension.
In this paper, a two-stage stochastic day-ahead (DA) scheduling model incorporating wind power units and compressed air energy storage (CAES) is proposed to clear a co-optimized energy and reserve market. The two-stage stochastic programming method is employed to deal with the uncertain nature of wind power generation. A linearized AC optimal power flow (LAC-OPF) approach that considers network losses, reactive power, and voltage magnitude constraints is utilized in the proposed two-stage stochastic DA scheduling model. Using an engineering insight, a two-level LAC-OPF (TL-LAC-OPF) approach is proposed to (i) reduce the number of binary variables of the LAC-OPF approach, which decreases the computational burden, and (ii) obtain the LAC-OPF pre-defined parameters adaptively, so that the accuracy of the LAC-OPF approach is increased as a result of reducing artificial losses. Furthermore, as the CAES efficiency depends on its thermodynamic and operational conditions, the proposed two-stage stochastic DA scheduling model incorporates these thermodynamic characteristics to obtain more realistic market decisions. The proposed model is applied to the IEEE 30-bus and 57-bus test systems using the GAMS software and is compared with three traditional approaches, i.e., AC-OPF, DC-OPF, and LAC-OPF. Simulation results demonstrate the effectiveness of the proposed methodology.
We present a novel approach to identify potential dispersed signals of new physics in the slew of published LHC results. It employs a random walk algorithm to introduce sets of new particles, dubbed "proto-models", which are tested against simplified-model results from ATLAS and CMS searches for new physics by exploiting the SModelS software framework. A combinatorial algorithm identifies the set of analyses and/or signal regions that maximally violates the Standard Model hypothesis, while remaining compatible with the entirety of LHC constraints in our database. Crucial to the method is the ability to construct a reliable likelihood in proto-model space; we explain the various approximations which are needed depending on the information available from the experiments, and how they impact the whole procedure.
We use gradient boosting machines and logistic regression to predict academic throughput at a South African university. The results highlight the significant influence of socio-economic factors and field of study as predictors of throughput. We further find that socio-economic factors become less of a predictor relative to the field of study as the time to completion increases. We provide recommendations on interventions to counteract the identified effects, which include academic, psychosocial and financial support.
Extraction of the multi-TeV proton and lead LHC beams with a bent crystal or by using an internal gas target allows one to perform the most energetic fixed-target experiment ever. pp, pd and pA collisions at $\sqrt{s}$ = 115 GeV and Pbp and PbA collisions at $\sqrt{s_{\rm{NN}}}$ = 72 GeV can be studied with high precision and modern detection techniques over a broad rapidity range. Using the LHCb or the ALICE detector in a fixed-target mode offers unprecedented possibilities to access heavy-flavour production in a new energy domain, half way between the SPS and the nominal RHIC energy. In this contribution, a review of projection studies for quarkonium and open charm and beauty production with both detector set-ups used with various nuclear targets and the LHC lead beams is presented.
Modern single image super-resolution (SISR) systems based on convolutional neural networks (CNNs) achieve impressive performance but require huge computational costs. The problem of feature redundancy is well studied in visual recognition tasks, but rarely discussed in SISR. Based on the observation that many features in SISR models are also similar to each other, we propose to use the shift operation to generate the redundant features (i.e., ghost features). Compared with depth-wise convolution, which is not friendly to GPUs or NPUs, the shift operation can bring practical inference acceleration for CNNs on common hardware. We analyze the benefits of the shift operation for SISR and make the shift orientation learnable based on the Gumbel-Softmax trick. For a given pre-trained model, we first cluster all filters in each convolutional layer to identify the intrinsic ones for generating intrinsic features. Ghost features are then derived by moving these intrinsic features along a specific orientation. The complete output features are constructed by concatenating the intrinsic and ghost features together. Extensive experiments on several benchmark models and datasets demonstrate that both the non-compact and lightweight SISR models embedded with our proposed module can achieve performance comparable to that of their baselines with a large reduction of parameters, FLOPs and GPU latency. For instance, we reduce the parameters by 47%, FLOPs by 46% and GPU latency by 41% for the EDSR x2 network without significant performance degradation.
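The shift-based ghost-feature construction can be sketched in a few lines; here the shift orientation is fixed for simplicity, whereas the paper learns it via Gumbel-Softmax:

```python
import torch

def ghost_concat(intrinsic: torch.Tensor, shift: int = 1) -> torch.Tensor:
    """Derive ghost features by shifting intrinsic feature maps along one
    spatial orientation and concatenating (fixed orientation for simplicity).
    intrinsic: (B, C, H, W)."""
    ghost = torch.roll(intrinsic, shifts=shift, dims=-1)  # horizontal shift
    ghost[..., :shift] = 0                                # zero the wrapped-around columns
    return torch.cat([intrinsic, ghost], dim=1)           # (B, 2C, H, W)
```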
We determine the exact value of the optimal symmetric rate point $(r, r)$ in the Dueck zero-error capacity region of the binary adder channel with complete feedback. We prove that the average zero-error capacity is $r = h(1/2-\delta) \approx 0.78974$, where $h(\cdot)$ is the binary entropy function and $\delta = 1/(2\log_2(2+\sqrt3))$. Our motivation is a problem in quantitative group testing. Given a set of $n$ elements, two of which are defective, the quantitative group testing problem asks for the identification of these two defectives through a series of tests. Each test gives the number of defectives contained in the tested subset, and the outcomes of previous tests are assumed known at the time of designing the current test. We establish that the minimum number of tests is asymptotic to $(\log_2 n) / r$ as $n \to \infty$.
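The stated rate is easy to verify numerically; a quick check of $r = h(1/2-\delta)$ with $\delta = 1/(2\log_2(2+\sqrt{3}))$:

```python
from math import log2, sqrt

delta = 1 / (2 * log2(2 + sqrt(3)))
p = 0.5 - delta
r = -p * log2(p) - (1 - p) * log2(1 - p)   # binary entropy h(1/2 - delta)
print(delta, r)                            # delta ~ 0.2632, r ~ 0.78974
```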
Psychological theories of habit posit that when a strong habit is formed through behavioral repetition, it can trigger behavior automatically in the same environment. Given the reciprocal relationship between habit and behavior, changing lifestyle behaviors (e.g., toothbrushing) is largely a task of breaking old habits and creating new and healthy ones. Thus, representing users' habit strengths can be very useful for behavior change support systems (BCSS), for example, to predict behavior or to decide when an intervention reaches its intended effect. However, habit strength is not directly observable, and existing self-report measures are taxing for users. In this paper, building on recent computational models of habit formation, we propose a method to enable intelligent systems to compute habit strength based on observable behavior. The hypothesized advantage of using computed habit strength for behavior prediction was tested using data from two intervention studies, where we trained participants to brush their teeth twice a day for three weeks and monitored their behaviors using accelerometers. Through hierarchical cross-validation, we found that for the task of predicting future brushing behavior, computed habit strength clearly outperformed self-reported habit strength (in both studies) and was also superior to models based on past behavior frequency (in the larger second study). Our findings provide initial support for our theory-based approach to modeling user habits and encourage the use of habit computation to deliver personalized and adaptive interventions.
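Computational habit models of this kind often reduce to an exponentially weighted accumulation of repetitions; a generic sketch of that idea (illustrative only, not necessarily the paper's exact formulation):

```python
def habit_strength(behaviors, alpha=0.1):
    """Exponentially weighted accumulation of behavioral repetition, a generic
    form used by computational habit models. `behaviors` is an iterable of
    0/1 flags, one per opportunity (e.g., each morning/evening brushing
    window); `alpha` is a hypothetical learning/decay rate."""
    h = 0.0
    for performed in behaviors:
        h += alpha * (performed - h)   # repetition strengthens, lapses decay
    return h

# habit_strength([1, 1, 1, 0, 1, 1]) -> a value in [0, 1] usable as a predictor
```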
Correlations between different regions of a quantum many-body system can be quantified through measures based on entropies of (reduced) subsystem states. For closed systems, several analytical and numerical tools, e.g., hydrodynamic theories or tensor networks, can accurately capture the time-evolution of subsystem entropies, thus allowing for a profound understanding of the unitary dynamics of quantum correlations. However, so far, these methods either cannot be applied to open quantum systems or do not permit an efficient computation of quantum entropies for mixed states. Here, we make progress in solving this issue by formulating a dissipative quasi-particle picture -- describing the dynamics of quantum entropies in the hydrodynamic limit -- for a general class of noninteracting open quantum systems. Our results show that also in dissipative many-body systems, correlations are generically established through the propagation of quasi-particles.
We report deep imaging observations with DOLoRes@TNG of an ultra-faint dwarf satellite candidate of the Triangulum galaxy (M33) found by visual inspection of the public imaging data release of the DESI Legacy Imaging Surveys. Pisces VII/Triangulum (Tri) III is found at a projected distance of 72 kpc from M33, and using the tip of the red giant branch method we estimate a distance of D = 1.0 +0.3/-0.2 Mpc, meaning the galaxy could either be an isolated ultra-faint dwarf or the second known satellite of M33. We estimate an absolute magnitude of M_V = -6.1 +/- 0.2 if Pisces VII/Tri III is at the distance of M33, or as bright as M_V = -6.8 +/- 0.2 if the galaxy is isolated. At the isolated distance, it has a physical half-light radius of r_h = 131 +/- 61 pc, consistent with similarly faint galaxies around the Milky Way. As the tip of the red giant branch is sparsely populated, constraining a precise distance is not possible, but if Pisces VII/Tri III can be confirmed as a true satellite of M33, it is a significant finding. With only one potential satellite detected around M33 previously (Andromeda XXII/Tri I), it lacks a significant satellite population, in stark contrast to the similarly massive Large Magellanic Cloud. The detection of more satellites in the outskirts of M33 could help to better illuminate whether this discrepancy between expectation and observation is due to a poor understanding of the galaxy formation process, or due to the low luminosity and surface brightness of the M33 satellite population, which has thus far fallen below the detection limits of previous surveys. If it is truly isolated, Pisces VII would be the faintest known field dwarf detected to date.
The classic classification scheme for Active Galactic Nuclei (AGNs) was recently challenged by the discovery of the so-called changing-state (changing-look) AGNs (CSAGNs). The physical mechanism behind this phenomenon is still a matter of open debate, and the known samples are too small and of serendipitous nature to provide robust answers. To tackle this problem, we need to design methods that are able to detect AGNs right in the act of changing state. Here we present an anomaly detection (AD) technique designed to identify AGN light curves with anomalous behaviors in massive datasets. The main aim of this technique is to identify CSAGNs at different stages of the transition, but it can also be used for more general purposes, such as cleaning massive datasets for AGN variability analyses. We used light curves from the Zwicky Transient Facility data release 5 (ZTF DR5), containing a sample of 230,451 AGNs of different classes. The ZTF DR5 light curves were modeled with a Variational Recurrent Autoencoder (VRAE) architecture, which allowed us to obtain a set of attributes from the VRAE latent space that describes the general behaviour of our sample. These attributes were then used as features for an Isolation Forest (IF) algorithm, an anomaly detector for one-class problems. We used the VRAE reconstruction errors and the IF anomaly score to select a sample of 8,809 anomalies. These anomalies are dominated by bogus candidates, but we were able to identify 75 promising CSAGN candidates.
With a highly coherent, optically addressable electron spin, the nitrogen vacancy (NV) centre in diamond is a promising candidate for a node in a quantum network. However, the NV centre is a poor source of coherent single photons owing to a long radiative lifetime, a small branching ratio into the zero-phonon line (ZPL) and a poor extraction efficiency out of the high-index host material. In principle, these three shortcomings can be addressed by resonant coupling to a single mode of an optical cavity. Utilising the weak-coupling regime of cavity electrodynamics, resonant coupling between the ZPL and a single cavity-mode enhances the transition rate and branching ratio into the ZPL. Furthermore, the cavity channels the light into a well-defined mode thereby facilitating detection with external optics. Here, we present an open Fabry-Perot microcavity geometry containing a single-crystal diamond membrane, which operates in a regime where the vacuum electric field is strongly confined to the diamond membrane. There is a field anti-node at the diamond-air interface. Despite the presence of surface losses, quality factors exceeding $120\,000$ and a finesse $\mathcal{F}=11\,500$ were observed. We investigate the interplay between different loss mechanisms, and the impact these loss channels have on the performance of the cavity. This analysis suggests that the "waviness" (roughness with a spatial frequency comparable to that of the microcavity mode) is the mechanism preventing the quality factors from reaching even higher values. Finally, we apply the extracted cavity parameters to the NV centre and calculate a predicted Purcell factor exceeding 150.
An asymptotic formula for the number of integer solutions to $a^2+b^2=c^2+d^2 \le x$ is well known, while if one restricts all the variables to primes, Erd\H{o}s showed that only the diagonal solutions, namely the ones with $\{a,b\}=\{c,d\}$, contribute to the main term; hence there is a paucity of off-diagonal solutions. Daniel considered the case of $a,c$ being prime and proved that the main term has both diagonal and non-diagonal contributions. Here we investigate the remaining cases, namely when only $c$ is a prime, when both $c,d$ are primes, and, finally, when $b,c,d$ are primes, by combining techniques of Daniel, Hooley and Plaksin.
We strengthen the maximal ergodic theorem for actions of groups of polynomial growth to a form involving jump quantity, which is the sharpest result among the family of variational or maximal ergodic theorems. As a consequence, we deduce in this setting the quantitative ergodic theorem, in particular, the upcrossing inequalities with exponential decay. The ideas or techniques involve probability theory, non-doubling Calder\'on-Zygmund theory, almost orthogonality argument and some delicate geometric argument involving the balls and the cubes on the group equipped with a not necessarily doubling measure.
Landmark localization plays an important role in medical image analysis. Learning-based methods, including CNNs and GCNs, have demonstrated state-of-the-art performance. However, most of these methods are fully supervised and rely heavily on manual labeling of a large training dataset. In this paper, starting from a fully supervised graph-based method, DAG, we propose a semi-supervised extension of it, termed few-shot DAG, i.e., five-shot DAG. It first trains a DAG model on the labeled data and then fine-tunes the pre-trained model on the unlabeled data with a teacher-student SSL mechanism. In addition to the semi-supervised loss, we propose another loss using the JS divergence to regularize the consistency of the intermediate feature maps. We extensively evaluated our method on pelvis, hand and chest landmark detection tasks. Our experimental results demonstrate consistent and significant improvements over previous methods.
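A minimal sketch of the JS-divergence consistency term described above, assuming PyTorch and that the teacher and student intermediate feature maps are first softmax-normalized so they can be compared as distributions; the normalization choice and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def js_consistency_loss(feat_student: torch.Tensor,
                        feat_teacher: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence between two intermediate feature maps.

    Feature maps are flattened per sample and softmax-normalized so that
    they can be treated as probability distributions (illustrative choice).
    """
    p = F.softmax(feat_student.flatten(1), dim=1)
    q = F.softmax(feat_teacher.flatten(1), dim=1)
    m = 0.5 * (p + q)
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m); kl_div expects log-probs
    # as its first argument and computes KL(target || input).
    kl_pm = F.kl_div(m.log(), p, reduction="batchmean")
    kl_qm = F.kl_div(m.log(), q, reduction="batchmean")
    return 0.5 * (kl_pm + kl_qm)
```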
We study the Jellium model of Wigner at finite, nonzero temperature through a computer simulation using the canonical path-integral worm algorithm, in which we successfully implemented the free-particle fixed-nodes restriction necessary to circumvent the fermion sign problem. Our results show good agreement with the recent simulation data of Brown et al. and with other similar computer experiments on the Jellium model at high density and low temperature. Our algorithm can be used to treat any quantum fluid model of fermions at finite, nonzero temperature and has not previously been used in the literature.
Instead of only considering technology, computer security research now strives to also take the human factor into account by studying regular users and, to a lesser extent, experts such as operators and developers of systems. We focus our analysis on research on the crucial population of experts, whose human errors can impact many systems at once, and compare it to research on regular users. To understand how far we have advanced in the area of human factors, how the field can further mature, and to provide a point of reference for researchers new to this field, we analyzed the past decade of human factors research in security and privacy, identifying 557 relevant publications. Of these, we found 48 publications focused on expert users and analyzed all of them in depth. For additional insights, we compare them to a stratified sample of 48 end-user studies. In this paper we investigate: (i) the perspective on human factors, and how we can learn from safety science; (ii) who the participants are and how they are recruited, and how this, as we find, creates a western-centric perspective; (iii) research objectives, and how to align them with the chosen research methods; (iv) how theories can be used to increase rigor in the community's scientific work, including limitations to the use of Grounded Theory, which is often incompletely applied; and (v) how researchers handle ethical implications, and what we can do to account for them more consistently. Although our literature review has limitations, new insights were revealed and avenues for further research identified.
Miniaturized instruments are in high demand for robot-assisted medical healthcare and treatment, especially for less invasive surgery, as they enable more flexible access for interventions in restricted anatomy. However, the robotic design is challenging due to the conflicting requirements of miniaturization and the capability of manipulation within a large dexterous workspace. Kinematic parameter optimization is therefore of great significance in this case. To this end, this paper proposes an approach based on dexterous-workspace determination for designing a miniaturized tendon-driven surgical instrument under the necessary constraints. The workspace determination is achieved by boundary determination and volume estimation with partition and least-squares polynomial fitting methods. The final robotic configuration with optimized kinematic parameters is shown to be eligible, with a sufficiently large dexterous workspace and the targeted miniature size.
Unsupervised domain adaptation (UDA) in semantic segmentation is a fundamental yet promising task that relieves the need for laborious annotation work. However, the domain shift/discrepancy problem in this task compromises the final segmentation performance. Based on our observation, the main causes of the domain shift are differences in imaging conditions, called image-level domain shifts, and differences in object category configurations, called category-level domain shifts. In this paper, we propose a novel UDA pipeline that unifies image-level alignment and category-level feature distribution regularization in a coarse-to-fine manner. Specifically, on the coarse side, we propose a photometric alignment module that aligns an image in the source domain with a reference image from the target domain using a set of image-level operators; on the fine side, we propose a category-oriented triplet loss that imposes a soft constraint to regularize category centers in the source domain, and a self-supervised consistency regularization method in the target domain. Experimental results show that our proposed pipeline improves the generalization capability of the final segmentation model and significantly outperforms all previous state-of-the-art methods.
We relate Gruet's formula for the heat kernel on real hyperbolic spaces to the commonly used one derived from Millson induction. The bridge between both formulas is provided by Yor's result on the joint distribution of a Brownian motion and its exponential functional at a fixed time. This result further allows us to relate Gruet's formula with real parameter to the heat kernel of the hyperbolic Jacobi operator and to derive a new integral representation for the heat kernel of the Maass Laplacian. When applied to harmonic AN groups, Yor's result also yields a new integral representation of their corresponding heat kernels which does not distinguish the parity of the dimension of the center of the Lie group N.
This work constructs Johnson-Lindenstrauss embeddings with optimal accuracy, as measured by the variance, the mean-squared error and the exponential concentration of the length distortion. Lower bounds for any data and embedding dimensions are determined and accompanied by matching, efficiently samplable constructions (built on orthogonal matrices). The novel techniques, namely a unit-sphere parametrization, the use of singular-value latent variables and Schur-convexity, are of independent interest.
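The abstract does not spell out the construction, so the sketch below only illustrates the general setting: projecting data through a matrix with orthonormal columns (obtained via QR) and measuring the length distortion whose variance and concentration the paper optimizes. The rescaling and all names are assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 1000, 512, 64           # points, ambient dim, embedding dim

# Matrix with orthonormal columns via QR of a Gaussian matrix:
# Q spans a uniformly random m-dimensional subspace of R^d.
A = rng.standard_normal((d, m))
Q, _ = np.linalg.qr(A)            # Q has shape (d, m), Q.T @ Q = I_m

X = rng.standard_normal((n, d))
Y = X @ Q * np.sqrt(d / m)        # rescale so squared lengths are unbiased

# Length distortion per point: ideally concentrated around 1.
distortion = np.linalg.norm(Y, axis=1) ** 2 / np.linalg.norm(X, axis=1) ** 2
print("mean:", distortion.mean(), "variance:", distortion.var())
```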
The melt productivity of a differentiated planet's mantle is primarily controlled by its iron content, which is itself approximated by the planet's core mass fraction (CMF). Here we show that estimates of an exo-planet's CMF allow robust predictions of the thickness, composition and mineralogy of the derivative crust. These predicted crustal compositions allow constraints to be placed on volatile cycling between the surface and the deep planetary interior, with implications for the evolution of habitable planetary surfaces. Planets with large, terrestrial-like CMFs ($\geq$0.32) will exhibit thin crusts that are inefficient at transporting surface water and other volatiles into the underlying mantle. By contrast, rocky planets with smaller CMFs ($\leq$0.24) and higher, Mars-like mantle iron contents will develop thick crusts capable of stabilizing hydrous minerals, which can effectively sequester volatiles into planetary interiors and act to remove surface water over timescales relevant to evolution. The extent of core formation has profound consequences for the subsequent planetary surface environment and may provide additional constraints in the hunt for habitable, Earth-like exo-planets.
Getting good performance out of numerical equation solvers requires that the user provide stable and efficient functions representing their model. However, users should not be trusted to write good code. In this manuscript we describe ModelingToolkit (MTK), a symbolic equation-based modeling system which allows for composable transformations to generate stable, efficient, and parallelized model implementations. MTK blurs the lines of traditional symbolic computing by acting directly on a user's numerical code. We show the ability to apply graph algorithms for automatically parallelizing and performing index reduction on code written for differential-algebraic equation (DAE) solvers, "fixing" the performance and stability of the model without requiring any changes on the user's part. We demonstrate how composable model transformations can be combined with automated data-driven surrogate generation techniques, allowing machine learning methods to generate accelerated approximate models within an acausal modeling framework. These reduced models are shown to outperform the Dymola Modelica compiler on an HVAC model by 590x at 3\% error. Together, this demonstrates MTK as a system for bringing the latest research in graph transformations directly to modeling applications.
Hybrid quantum-classical algorithms, such as variational quantum algorithms (VQAs), are suitable for implementation on NISQ computers. In this Letter we expand an implicit step of VQAs: the classical pre-computation subroutine, which can non-trivially use classical algorithms to simplify, transform, or specify problem-instance-specific variational quantum circuits. In VQAs there is a trade-off between the quality of the solution and the difficulty of circuit construction and optimization. At one extreme are VQAs for MAXCUT which are exact, but whose circuit design or variational optimization is NP-hard. At the other extreme are low-depth VQAs, such as QAOA, with tractable circuit construction and optimization but poor approximation ratios. Combining these two, we define the Spanning Tree QAOA (ST-QAOA) for MAXCUT, which uses an ansatz whose structure is derived from an approximate classical solution; it achieves the same performance guarantee as the classical algorithm and hence can outperform QAOA at low depth. In general, we propose integrating such classical pre-computation subroutines into VQAs to improve heuristic or guaranteed performance.
We develop and study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence (AI) systems including deep learning neural networks. In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself. Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team. It could also be made by those wishing to exploit a "democratization of AI" agenda, where network architectures and trained parameter sets are shared publicly. Building on work by [Tyukin et al., International Joint Conference on Neural Networks, 2020], we develop a range of new implementable attack strategies with accompanying analysis, showing that with high probability a stealth attack can be made transparent, in the sense that system performance is unchanged on a fixed validation set which is unknown to the attacker, while evoking any desired output on a trigger input of interest. The attacker only needs to have estimates of the size of the validation set and the spread of the AI's relevant latent space. In the case of deep learning neural networks, we show that a one neuron attack is possible - a modification to the weights and bias associated with a single neuron - revealing a vulnerability arising from over-parameterization. We illustrate these concepts in a realistic setting. Guided by the theory and computational results, we also propose strategies to guard against stealth attacks.
We study the prospects for discovering the $cg\to bH^+\to b A W^+$ process at the LHC. Induced by the top-flavor-changing neutral Higgs coupling $\rho_{tc}$, the process may emerge if $m_{H^+} > m_A + m_{W^+}$, where $H^+$ and $A$ are the charged and $CP$-odd Higgs bosons in the general two-Higgs-doublet model (g2HDM). We show that the $cg\to bH^+\to b A W^+$ process can be discovered at LHC Run 3, while the full Run 2 data at hand can already constrain the parameter space significantly through searches for the same-sign dilepton final state. The process has unique implications for the hint of a $gg\to A \to t \bar t$ excess at $m_A\approx 400$ GeV reported by CMS. When combined with other existing constraints, the $cg\to bH^+\to b A W^+$ process can essentially rule out the g2HDM explanation of such an excess.
Representations and O-operators of Hom-(pre-)Jacobi-Jordan algebras are introduced and studied. The anticommutator of a Hom-pre-Jacobi-Jordan algebra is a Hom-Jacobi-Jordan algebra, and the left multiplication operator gives a representation of a Hom-Jacobi-Jordan algebra. The notions of matched pairs and Nijenhuis operators of Hom-(pre-)Jacobi-Jordan algebras are given, and various relevant constructions are obtained.
We observe minima of the longitudinal resistance corresponding to the quantum Hall effect of composite fermions at quantum numbers $p=1$, 2, 3, 4, and 6 in an ultraclean strongly interacting bivalley SiGe/Si/SiGe two-dimensional electron system. The minima at $p=3$ disappear below a certain electron density, although the surrounding minima at $p=2$ and $p=4$ survive at significantly lower densities. Furthermore, the onset for the resistance minimum at a filling factor $\nu=3/5$ is found to be independent of the tilt angle of the magnetic field. These surprising results indicate the intersection or merging of the quantum levels of composite fermions with different valley indices, which reveals the valley effect on fractions.
This paper presents a joint source separation algorithm that simultaneously reduces acoustic echo, reverberation and interfering sources. Target speech signals are separated from the mixture by maximizing their independence with respect to the other sources. It is shown that the separation process can be decomposed into cascading sub-processes that separately relate to acoustic echo cancellation, speech dereverberation and source separation, all of which are solved using auxiliary-function-based independent component/vector analysis techniques, and whose solving orders are exchangeable. The cascaded solution not only leads to lower computational complexity but also to better separation performance than the vanilla joint algorithm.
Current event-centric knowledge graphs (EventKGs) rely heavily on explicit connectives to mine relations between events. Unfortunately, due to the sparsity of connectives, these methods severely undermine the coverage of EventKGs. The lack of high-quality labelled corpora further exacerbates this problem. In this paper, we propose a knowledge projection paradigm for event relation extraction: projecting discourse knowledge to narratives by exploiting the commonalities between them. Specifically, we propose the Multi-tier Knowledge Projection Network (MKPNet), which can leverage multi-tier discourse knowledge effectively for event relation extraction. In this way, the labelled-data requirement is significantly reduced, and implicit event relations can be effectively extracted. Intrinsic experimental results show that MKPNet achieves new state-of-the-art performance, and extrinsic experimental results verify the value of the extracted event relations.
We elucidate universal many-body properties of a one-dimensional, two-component ultracold Fermi gas near the $p$-wave Feshbach resonance. The low-energy scattering in this system can be characterized by two parameters, namely the $p$-wave scattering length and the effective range. At the unitarity limit, where the $p$-wave scattering length diverges and the effective range is reduced to zero without conflicting with the causality bound, the system obeys universal thermodynamics as observed in a unitary Fermi gas with contact $s$-wave interaction in three dimensions. This is in contrast to a Fermi gas with the $p$-wave resonance in three dimensions, in which the effective range is inevitably finite. We present the universal equation of state in this unitary $p$-wave Fermi gas within the many-body $T$-matrix approach as well as the virial expansion method. Moreover, we examine the single-particle spectral function in the high-density regime where the virial expansion is no longer valid. On the basis of the Hartree-like self-energy shift at the divergent scattering length, we conjecture that the equivalence of the Bertsch parameter across spatial dimensions holds even for a one-dimensional unitary $p$-wave Fermi gas.
We have developed two metrics based on AGN variability observables (time lags, periodicity, and the structure function (SF)) to evaluate the performance of LSST OpSim FBS 1.5, 1.6, and 1.7 in AGN time-domain analysis. For this purpose, we generate an ensemble of AGN light curves based on AGN empirical relations and LSST OpSim cadences. Although our metrics show that denser LSST cadences produce more reliable time-lag, periodicity, and SF measurements, the discrepancies in performance between different LSST OpSim cadences are not drastic based on the Kullback-Leibler divergence. This is complementary to the results of Yu and Richards on DCR and SF metrics, extending them to include the point of view of AGN variability.
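For concreteness, one common way to estimate the SF observable entering such a metric is the first-order structure function of an irregularly sampled light curve, sketched below; the log-spaced binning and all parameter choices are illustrative, not the paper's exact estimator.

```python
import numpy as np

def structure_function(t, mag, n_bins=20):
    """First-order SF: mean |magnitude difference| binned by time lag.

    t, mag: 1-D arrays of observation times (e.g. days) and magnitudes.
    Returns bin centres (lags) and SF values; binning is illustrative.
    """
    i, j = np.triu_indices(len(t), k=1)        # all unordered epoch pairs
    lags = np.abs(t[j] - t[i])
    dmag = np.abs(mag[j] - mag[i])
    edges = np.logspace(np.log10(lags.min() + 1e-3),
                        np.log10(lags.max()), n_bins + 1)
    which = np.digitize(lags, edges)
    sf = np.array([dmag[which == k].mean() if np.any(which == k) else np.nan
                   for k in range(1, n_bins + 1)])
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, sf
```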
Motivated by the search for methods to establish strong minimality of certain low-order algebraic differential equations, a measure of how far a finite-rank stationary type is from being minimal is introduced and studied: the {\em degree of nonminimality} is the minimum number of realisations of the type required to witness a nonalgebraic forking extension. Conditional on the truth of a conjecture of Borovik and Cherlin on the generic multiple-transitivity of homogeneous spaces definable in the stable theory being considered, it is shown that the nonminimality degree is bounded by the $U$-rank plus $2$. The Borovik-Cherlin conjecture itself is verified for algebraic and meromorphic group actions, and a bound of $U$-rank plus $1$ is then deduced unconditionally for differentially closed fields and compact complex manifolds. An application is given regarding the transcendence of solutions to algebraic differential equations.
Network games study the strategic interaction of agents connected through a network. Interventions in such a game -- actions a coordinator or planner may take that change the utility of the agents and thus shift the equilibrium action profile -- are introduced to improve the planner's objective. We study the problem of intervention in network games where the network has a group structure with local planners, each associated with a group. The agents play a non-cooperative game while the planners may or may not have the same optimization objective. We model this problem using a sequential move game where planners make interventions followed by agents playing the intervened game. We provide equilibrium analysis and algorithms that find the subgame perfect equilibrium. We also propose a two-level efficiency definition to study the efficiency loss of equilibrium actions in this type of game.
Hadamard's maximal determinant problem consists in finding the maximal value of the determinant of a square $n\times n$ matrix whose entries are plus or minus ones. This is a difficult mathematical problem that has not yet been solved. In the present paper a simplified version of the problem is considered and studied numerically.
The complexity and non-Euclidean structure of graph data hinder the development of data augmentation methods similar to those in computer vision. In this paper, we propose a feature augmentation method for graph nodes based on topological regularization, in which topological structure information is introduced into an end-to-end model. Specifically, we first obtain topology embeddings of the nodes through an unsupervised representation learning method based on random walks. The topology embeddings, as additional features, together with the original node features, are then fed into a dual graph neural network for propagation, yielding two different high-order neighborhood representations of the nodes. On this basis, we propose a regularization technique that bridges the differences between the two node representations, eliminating the adverse effects caused by using the topological features of graphs directly and greatly improving performance. We have carried out extensive experiments on a large number of datasets to demonstrate the effectiveness of our model.
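A schematic of the two ingredients described above, assuming a DeepWalk-style walk generator for the topology embeddings and an MSE form for the regularizer that bridges the two node representations produced by the dual GNN; the skip-gram training step and the GNN branches are stubbed out, and the loss form is an assumption rather than the paper's exact choice.

```python
import numpy as np
import torch
import torch.nn.functional as F

def random_walks(adj_list, walk_length=40, walks_per_node=10, seed=0):
    """DeepWalk-style walk corpus over an adjacency-list graph.

    The returned walks would be fed to a skip-gram model (not shown)
    to produce the unsupervised topology embeddings of the nodes.
    """
    rng = np.random.default_rng(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in range(len(adj_list)):
            walk, node = [start], start
            for _ in range(walk_length - 1):
                nbrs = adj_list[node]
                if not nbrs:
                    break
                node = int(rng.choice(nbrs))
                walk.append(node)
            walks.append(walk)
    return walks

def consistency_regularizer(h_feat: torch.Tensor,
                            h_topo: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between the two high-order node representations
    from the dual GNN branches (the MSE form is an illustrative choice)."""
    return F.mse_loss(F.normalize(h_feat, dim=1), F.normalize(h_topo, dim=1))
```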
Magnetic induction tomography (MIT) is an efficient solution for long-term brain disease monitoring, which focuses on reconstructing the bio-impedance distribution inside the human brain using non-intrusive electromagnetic fields. However, high-quality brain image reconstruction remains challenging since reconstructing images from the measured weak signals is a highly non-linear and ill-conditioned problem. In this work, we propose a generative adversarial network (GAN) enhanced MIT technique, named MITNet, based on a complex convolutional neural network (CNN). The experimental results on a real-world dataset validate the performance of our technique, which outperforms the state-of-the-art method by 25.27%.
Growth occurs in a wide range of systems, ranging from biological tissue to additive manufacturing. This work considers surface growth, in which mass is added to the boundary of a continuum body from the ambient medium or from within the body. In contrast to bulk growth in the interior, the description of surface growth requires the addition of new continuum particles to the body. This is challenging for standard continuum formulations for solids that are meant for situations with a fixed amount of material. Recent approaches to handle this have used time-evolving reference configurations. In this work, an Eulerian approach to this problem is formulated, enabling the issue of constructing the reference configuration to be side-stepped. However, this raises the complementary challenge of determining the stress response of the solid, which typically requires the deformation gradient, a quantity that is not immediately available in the Eulerian formulation. To resolve this, the approach introduces additional kinematic descriptors, namely the relaxed zero-stress deformation and the elastic deformation; in contrast to the deformation gradient, these have the important advantage that they are not required to satisfy kinematic compatibility. The resulting model has only the density, velocity, and elastic deformation as variables in the Eulerian setting. The introduction of the relaxed deformation and the elastic deformation in this formulation provides a description of surface growth whereby the added material can bring in its own kinematic information. Loosely, the added material "brings in its own reference configuration" through the specification of its relaxed deformation and elastic deformation. This kinematic description enables, e.g., the modeling of non-normal growth using a standard normal growth velocity, and a simple approach to prescribing boundary conditions.
We theoretically investigate the out-of-equilibrium dynamics of a binary Bose-Einstein condensate confined within two-dimensional box potentials. One species of the condensate interacts with a pair of oppositely wound, but otherwise identical, Laguerre-Gaussian laser pulses, while the other species is influenced only via the interspecies interaction. Starting from the Hamiltonian, we derive the equations of motion that accurately delineate the behavior of the condensates during and after the light-matter interaction. Depending on the number of helical windings (or the magnitude of the topological charge), the species directly participating in the interaction with the lasers is dynamically segmented into distinct parts, which collide as the pulses gradually diminish. This collision event generates nonlinear structures in the corresponding species, coupled with complementary structures produced in the other species due to the interspecies interaction. The long-time dynamics of the optically perturbed species is found to develop the Kolmogorov-Saffman scaling law in the incompressible kinetic-energy spectrum, a characteristic feature of the quantum turbulent state. However, the same scaling law is not definitively exhibited in the other species. This study motivates the use of Laguerre-Gaussian beams for future experiments on quantum turbulence in Bose-Einstein condensates.
Face representation learning using datasets with a massive number of identities requires appropriate training methods. The softmax-based approach, currently the state of the art in face recognition, is in its usual "full softmax" form not suitable for datasets with millions of persons. Several methods, based on the "sampled softmax" approach, were proposed to remove this limitation. These methods, however, have a set of disadvantages. One of them is the problem of "prototype obsolescence": classifier weights (prototypes) of rarely sampled classes receive too scarce gradients and become outdated and detached from the current encoder state, resulting in incorrect training signals. This problem is especially serious in ultra-large-scale datasets. In this paper, we propose a novel face representation learning model called Prototype Memory, which alleviates this problem and allows training on a dataset of any size. Prototype Memory consists of a limited-size memory module for storing recent class prototypes and employs a set of algorithms to update it in an appropriate way. New class prototypes are generated on the fly using exemplar embeddings in the current mini-batch. These prototypes are enqueued to the memory and used in the role of classifier weights for the usual softmax classification-based training. To prevent obsolescence and keep the memory in close connection with the encoder, prototypes are regularly refreshed, and the oldest ones are dequeued and disposed of. Prototype Memory is computationally efficient and independent of dataset size. It can be used with various loss functions, hard example mining algorithms and encoder architectures. We prove the effectiveness of the proposed model by extensive experiments on popular face recognition benchmarks.
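A minimal sketch of the memory mechanism as described: a fixed-capacity FIFO store of class prototypes, refreshed from exemplar embeddings in the current mini-batch and exposed as classifier weights. The class-mean prototype generation and the eviction details are assumptions, not the paper's exact algorithms.

```python
from collections import OrderedDict
import torch
import torch.nn.functional as F

class PrototypeMemory:
    """Fixed-size store of class prototypes used as softmax classifier weights."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.protos = OrderedDict()   # class_id -> prototype (unit vector)

    def update(self, embeddings: torch.Tensor, labels: torch.Tensor) -> None:
        """Generate/refresh prototypes from the exemplars in the mini-batch."""
        for cls in labels.unique():
            proto = F.normalize(embeddings[labels == cls].mean(0), dim=0)
            key = int(cls)
            if key in self.protos:            # refresh: move to the newest slot
                self.protos.pop(key)
            self.protos[key] = proto
        while len(self.protos) > self.capacity:
            self.protos.popitem(last=False)   # dequeue and dispose of the oldest

    def weights(self) -> torch.Tensor:
        """Prototype matrix to use as classifier weights for the current step."""
        return torch.stack(list(self.protos.values()))
```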
A problem that is frequently encountered in a variety of mathematical contexts is to find the common invariant subspaces of a single matrix or a set of matrices. A new method is proposed that gives a definitive answer to this problem. The key idea consists of finding common eigenvectors for exterior powers of the matrices concerned. A convenient formulation of the Pl\"ucker relations is then used to ensure that these eigenvectors actually correspond to subspaces, or to provide the initial constraints for eigenvectors involving parameters. A procedure for computing the divisors of a totally decomposable vector is also provided. Several examples are given for which the calculations are too tedious to do by hand and are performed by encoding the conditions found into Maple.
We present an analysis of the colour-magnitude diagram (CMD) morphology of the ~800 Myr old star cluster NGC 1831 in the Large Magellanic Cloud, exploiting deep, high-resolution photometry obtained using the Wide Field Camera 3 onboard the Hubble Space Telescope. We perform a simultaneous analysis of the wide upper main sequence and main-sequence turn-off observed in the cluster, to verify whether these features are due to an extended star-formation episode, a range of stellar rotation rates, or a combination of these two effects. Comparing the observed CMD with Monte Carlo simulations of synthetic stellar populations, we find that the morphology of NGC 1831 can be fully explained in the context of the rotation-velocity scenario, under the assumption of a bimodal distribution of rotating stars, with ~40% of stars being slow rotators ($\Omega$ / $\Omega_{crit}$ < 0.5) and the remaining ~60% being fast rotators ($\Omega$ / $\Omega_{crit}$ > 0.9). We derive the dynamical properties of the cluster, calculating the present cluster mass and escape velocity, and predicting their past evolution starting at an age of 10 Myr. We find that NGC 1831 has an escape velocity $v_{esc}$ = 18.4 km/s at an age of 10 Myr, above the previously suggested threshold of 15 km/s, below which a cluster cannot retain the material needed to create second-generation stars. These results, combined with those obtained from the CMD morphology analysis, indicate that for clusters whose morphology cannot easily be explained solely in the context of the rotation-velocity scenario, the threshold limit should be at least ~20 km/s.
The $q$-Onsager algebra $O_q$ is presented by two generators $W_0$, $W_1$ and two relations, called the $q$-Dolan/Grady relations. Recently Baseilhac and Koizumi introduced a current algebra $\mathcal A_q$ for $O_q$. Soon afterwards, Baseilhac and Shigechi gave a presentation of $\mathcal A_q$ by generators and relations. We show that these generators give a PBW basis for $\mathcal A_q$. Using this PBW basis, we show that the algebra $\mathcal A_q$ is isomorphic to $O_q \otimes \mathbb F \lbrack z_1, z_2, \ldots \rbrack$, where $\mathbb F$ is the ground field and $\lbrace z_n \rbrace_{n=1}^\infty $ are mutually commuting indeterminates. Recall the positive part $U^+_q$ of the quantized enveloping algebra $U_q(\widehat{\mathfrak{sl}}_2)$. Our results show that $O_q$ is related to $\mathcal A_q$ in the same way that $U^+_q$ is related to the alternating central extension of $U^+_q$. For this reason, we propose to call $\mathcal A_q$ the alternating central extension of $O_q$.
We study how violations of structural assumptions like expected utility and exponential discounting can be connected to reference-dependent preferences with set-dependent reference points, even if behavior conforms with these assumptions when the reference is fixed. An axiomatic framework jointly and systematically relaxes general rationality (WARP) and structural assumptions to capture reference dependence across domains. It gives rise to a linear order that determines reference points, which in turn determine the preference parameters for a choice problem. This allows us to study risk, time, and social preferences collectively, where seemingly independent anomalies are interconnected through the lens of reference-dependent choice.
Early diagnosis is essential for the successful treatment of bowel cancers, including colorectal cancer (CRC), and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep-learning-based detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolutional architectures, such as VGG and ResNets, by converting them into fully convolutional networks (FCNs), fine-tune them, and study their capabilities for polyp segmentation and detection. We additionally use Shape-from-Shading (SfS) to recover depth and provide a richer representation of the tissue's structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information, and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets and the most accurate segmentation model achieved a mean segmentation IU of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, the top-performing models we propose surpass the current state of the art with detection recalls superior to 90% for all datasets tested. To our knowledge, this is the first work to use FCNs for polyp segmentation, in addition to proposing a novel combination of SfS and RGB that boosts performance.
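The core restructuring step, converting a fully-connected classifier layer into an equivalent convolution so that the network becomes fully convolutional, can be sketched as follows in PyTorch; the helper name is hypothetical, and the papers' fine-tuning and any upsampling heads are omitted.

```python
import torch
import torch.nn as nn

def linear_to_conv(linear: nn.Linear, in_ch: int, spatial: int) -> nn.Conv2d:
    """Reshape a fully-connected layer into an equivalent convolution.

    For e.g. VGG16's first classifier layer, in_ch=512 and spatial=7. The
    converted network accepts arbitrary input sizes and emits a dense
    prediction map instead of a single vector.
    """
    out_f, in_f = linear.weight.shape
    assert in_f == in_ch * spatial * spatial
    conv = nn.Conv2d(in_ch, out_f, kernel_size=spatial)
    with torch.no_grad():
        conv.weight.copy_(linear.weight.view(out_f, in_ch, spatial, spatial))
        conv.bias.copy_(linear.bias)
    return conv
```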
Infants acquire words and phonemes from unsegmented speech signals using segmentation cues, such as distributional, prosodic, and co-occurrence cues. Many pre-existing computational models that represent this process tend to focus on distributional or prosodic cues. This paper proposes a nonparametric Bayesian probabilistic generative model called the prosodic hierarchical Dirichlet process-hidden language model (Prosodic HDP-HLM). Prosodic HDP-HLM, an extension of HDP-HLM, considers both prosodic and distributional cues within a single integrative generative model. We conducted three experiments on different types of datasets and demonstrate the validity of the proposed method. The results show that the Prosodic DAA (double articulation analyzer) successfully uses prosodic cues and outperforms a method that solely uses distributional cues. The main contributions of this study are as follows: 1) We develop a probabilistic generative model for time series data including prosody that potentially has a double articulation structure; 2) We propose the Prosodic DAA by deriving the inference procedure for Prosodic HDP-HLM and show that the Prosodic DAA can discover words directly from continuous human speech signals using statistical and prosodic information in an unsupervised manner; 3) We show that prosodic cues contribute more to word segmentation for naturally distributed words, i.e., words that follow Zipf's law.
The first-principles momentum-dependent local ansatz wavefunction method (MLA) has been extended to the ferromagnetic state by introducing spin-dependent variational parameters. The theory is applied to ferromagnetic Fe, Co, and Ni. It is shown that the MLA yields magnetizations comparable to the results obtained by the GGA (generalized gradient approximation) in density functional theory. The projected momentum distribution functions as well as the mass enhancement factors are also calculated on the same footing and compared with those in the paramagnetic state. It is shown that the calculated mass enhancement factor of Fe is strongly suppressed by the spin polarization due to the exchange splitting of the e${}_{\rm g}$ flat bands, while those of Co and Ni remain unchanged by the polarization. These results are shown to be consistent with the experimental results obtained from low-temperature specific heats.
In this paper we develop an asymptotic theory for steadily travelling gravity-capillary waves in the small-surface-tension limit. In an accompanying work [Shelton et al. (2021), J. Fluid Mech., accepted/in press] it was demonstrated that solutions associated with a perturbation about a leading-order gravity wave (a Stokes wave) contain surface-tension-driven parasitic ripples with an exponentially small amplitude. Thus a naive Poincar\'e expansion is insufficient for their description. Here, we develop specialised methodologies in exponential asymptotics for the derivation of the parasitic ripples on periodic domains. The ripples are shown to arise in conjunction with Stokes lines and the Stokes phenomenon. The analysis relies crucially upon the derivation and analysis of singularities in the analytic continuation of the classic Stokes wave. A solvability condition is derived, showing that solutions of this type do not exist at certain values of the Bond number. The asymptotic results are compared to full numerical solutions and show excellent agreement. The work provides corrections to, and insight into, a seminal theory on parasitic capillary waves first proposed by Longuet-Higgins [J. Fluid Mech., vol. 16 (1), 1963, pp. 138-159].
Quality-Diversity algorithms refer to a class of evolutionary algorithms designed to find a collection of diverse and high-performing solutions to a given problem. In robotics, such algorithms can be used to generate a collection of controllers covering most of the possible behaviours of a robot. To do so, these algorithms associate a behavioural descriptor with each of these behaviours. Each behavioural descriptor is used to estimate the novelty of one behaviour compared to the others. In most existing algorithms, the behavioural descriptor needs to be hand-coded, thus requiring prior knowledge about the task to solve. In this paper, we introduce Autonomous Robots Realising their Abilities, an algorithm that uses a dimensionality-reduction technique to automatically learn behavioural descriptors based on raw sensory data. The performance of this algorithm is assessed on three robotic tasks in simulation. The experimental results show that it performs similarly to traditional hand-coded approaches without requiring any hand-coded behavioural descriptor. In the collection of diverse and high-performing solutions, it also manages to find behaviours that are novel with respect to more features than its hand-coded baselines. Finally, we introduce a variant of the algorithm that is robust to the dimensionality of the behavioural-descriptor space.
We introduce a protection-based IP security scheme to protect soft and firm IP cores used on FPGA devices. The scheme is based on Finite State Machine (FSM) obfuscation and exploits a Physical Unclonable Function (PUF) to generate a unique FPGA identification (ID), which enables pay-per-device licensing. We introduce a communication protocol to protect the rights of parties in this market. On standard benchmark circuits, the experimental results show that our scheme is secure and attack-resilient, and can be implemented with low area, power, and delay overheads.
In this paper, we address the Cauchy problem for the relativistic BGK model proposed by Anderson and Witting for massless particles in the Friedmann-Lema\^itre-Robertson-Walker (FLRW) spacetime.
Acoustophoresis deals with the manipulation of sub-wavelength scatterers in an incident acoustic field. The geometric details of manipulated particles are often neglected by replacing them with equivalent symmetric geometries such as spheres, spheroids, cylinders or disks. It has been demonstrated that geometric asymmetry, represented by Willis coupling terms, can strongly affect the scattering of a small object, hence neglecting these terms may miss important force contributions. In this work, we present a generalized formalism of acoustic radiation force and radiation torque based on the polarizability tensor, where Willis coupling terms are included to account for geometric asymmetry. Following Gor'kov's approach, the effects of geometric asymmetry are explicitly formulated as additional terms in the radiation force and torque expressions. By breaking the symmetry of a sphere along one axis using an intrusion and a protrusion, we characterize the changes in the force and torque in terms of partial components associated with the direct and Willis coupling coefficients of the polarizability tensor. We investigate in detail the cases of standing and travelling plane waves, showing how the equilibrium positions and angles are shifted by these additional terms. We show that while the contributions of asymmetry to the force are often negligible for small particles, these terms greatly affect the radiation torque. Our theory, which provides a way of calculating the radiation force and torque directly from polarizability coefficients, shows that it is in general essential to account for the shape of objects undergoing acoustophoretic manipulation, which may have important implications for applications such as the manipulation of biological cells.
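For reference, the symmetric-particle baseline that the Willis-coupling terms extend is Gor'kov's potential for a small sphere of radius $R$ in an incident field with pressure $p_{\mathrm{in}}$ and velocity $v_{\mathrm{in}}$ (standard form; the notation is assumed rather than taken from the paper):
\[
U_{\mathrm{rad}} = \frac{4\pi}{3}R^{3}\left[\frac{f_{1}}{2\rho_{0}c_{0}^{2}}\,\big\langle p_{\mathrm{in}}^{2}\big\rangle - \frac{3\rho_{0}f_{2}}{4}\,\big\langle v_{\mathrm{in}}^{2}\big\rangle\right],
\qquad
\mathbf{F}_{\mathrm{rad}} = -\nabla U_{\mathrm{rad}},
\]
where $f_{1}=1-\kappa_{p}/\kappa_{0}$ and $f_{2}=2(\rho_{p}-\rho_{0})/(2\rho_{p}+\rho_{0})$ are the monopole and dipole scattering coefficients and angle brackets denote time averages; the Willis terms of the abstract enter as additional contributions beyond these two.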
Third-order approximate solutions for surface gravity waves in finite water depth are studied in the context of potential flow theory. The solution provides explicit expressions for the surface elevation, the free-surface velocity potential and the velocity potential. The amplitude dispersion relation is also provided. Two approaches are used to derive the third-order analytical solution, resulting in two types of approximate solutions: the perturbation solution and the Hamiltonian solution. The perturbation solution is obtained by the classical perturbation technique, in which the time variable is expanded in multiple scales to eliminate secular terms. The Hamiltonian solution is derived from the canonical transformation in the Hamiltonian theory of water waves. By comparing the two types of solutions, it is found that they are completely equivalent for the first- and second-order solutions and for the nonlinear dispersion, but for the third-order part only the sum-sum terms are the same. Because the canonical transformation can completely separate the dynamic and bound harmonics, the Hamiltonian solution overcomes the difficulty that perturbation theory breaks down due to singularities in the transfer functions when the quartet resonance criterion is satisfied. Furthermore, it is also found that some time-averaged quantities based on the Hamiltonian solution, such as the mean potential energy and the mean kinetic energy, are equal to those in the initial state, in which the sea surface is assumed to be a Gaussian random process; this is because the Hamiltonian form has associated conserved quantities. All of this shows that the Hamiltonian solution is more reasonable and accurate for describing the third-order steady-state wave field. Finally, based on the Hamiltonian solution, some statistics are given, such as the volume flux, skewness, and excess kurtosis.
Machine-learning-inspired techniques have emerged as a new paradigm for the analysis of phase transitions in quantum matter. In this work, we introduce a supervised learning algorithm for studying critical phenomena from measurement data, which is based on iteratively training convolutional networks of increasing complexity, and test it on the transverse-field Ising chain and the $q=6$ Potts model. At the continuous Ising transition, we identify scaling behavior in the classification accuracy, from which we infer a characteristic classification length scale. It displays a power-law divergence at the critical point, with a scaling exponent that matches the diverging correlation length. Our algorithm correctly identifies the thermodynamic phase of the system and extracts scaling behavior from projective measurements, independently of the basis in which the measurements are performed. Furthermore, we show that the classification length scale is absent for the $q=6$ Potts model, which has a first-order transition and thus lacks a divergent correlation length. The main intuition underlying our finding is that, for measurement patches of sizes smaller than the correlation length, the system appears to be at the critical point, and therefore the algorithm cannot identify the phase from which the data was drawn.
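In conventional notation (assumed here, not quoted from the paper), the reported scaling behavior amounts to
\[
\xi_{\mathrm{cls}}(g) \sim |g-g_{c}|^{-\nu} \propto \xi(g),
\]
where $g$ is the tuning parameter (the transverse field), $g_{c}$ its critical value, $\xi$ the correlation length and $\nu$ the correlation-length exponent; the absence of such a divergence at the first-order Potts transition is the complementary check described above.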
An electron is usually considered to have only one type of kinetic energy, but could it have more, for its spin and charge, or by exciting other electrons? In one dimension (1D), the physics of interacting electrons is captured well at low energies by the Tomonaga-Luttinger-Liquid (TLL) model, yet little has been observed experimentally beyond this linear regime. Here, we report on measurements of many-body modes in 1D gated wires using a tunnelling spectroscopy technique. We observe two separate Fermi seas at high energies, associated with spin and charge excitations, together with the emergence of three additional 1D 'replica' modes that strengthen with decreasing wire length. The effective interaction strength in the wires is varied by changing the amount of 1D inter-subband screening by over 45%. Our findings demonstrate the existence of spin-charge separation in the whole energy band outside the low-energy limit of validity of the TLL model, and also set a limit on the validity of the newer nonlinear TLL theory.
In a Hilbertian framework, for the minimization of a general convex differentiable function $f$, we introduce new inertial dynamics and algorithms that generate trajectories and iterates converging rapidly towards the minimizer of $f$ with minimum norm. Our study is based on the non-autonomous version of the Polyak heavy ball method, which, at time $t$, is associated with the strongly convex function obtained by adding to $f$ a Tikhonov regularization term with vanishing coefficient $\epsilon(t)$. In this dynamic, the damping coefficient is proportional to the square root of the Tikhonov regularization parameter $\epsilon(t)$. By adjusting the speed of convergence of $\epsilon(t)$ towards zero, we obtain both rapid convergence towards the infimal value of $f$ and strong convergence of the trajectories towards the element of minimum norm of the set of minimizers of $f$. In particular, we obtain an improved version of the dynamic of Su-Boyd-Cand\`es for the accelerated gradient method of Nesterov. This study naturally leads to corresponding first-order algorithms obtained by temporal discretization. In the case of a proper, lower semicontinuous and convex function $f$, we study the proximal algorithms in detail and show that they benefit from similar properties.
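A minimal sketch of the dynamic described above, written in the form familiar from the Tikhonov-regularized heavy ball literature; the precise coefficients are assumptions consistent with the abstract (damping proportional to $\sqrt{\epsilon(t)}$):
\[
\ddot{x}(t) + \delta\,\sqrt{\epsilon(t)}\,\dot{x}(t) + \nabla f\big(x(t)\big) + \epsilon(t)\,x(t) = 0,
\]
where $\delta>0$ and $\epsilon(t)\to 0$ as $t\to\infty$; the term $\epsilon(t)\,x(t)$ is the gradient of the vanishing Tikhonov term $\tfrac{\epsilon(t)}{2}\|x\|^{2}$ added to $f$, so the dynamic interpolates between the heavy ball method and a strongly convex regime.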
Terahertz (THz) frequency bands are promising for data transmission between the core network and access points (APs) in next-generation wireless systems. In this paper, we analyze the performance of a dual-hop THz-RF wireless system where an AP facilitates data transmission between a core network and user equipment (UE). We consider a generalized model for the end-to-end channel with an independent and not identically distributed (i.ni.d.) fading model for the THz and RF links using the $\alpha$-$\mu$ distribution, a THz link with pointing errors, and an asymmetrical relay position. We derive a closed-form expression for the cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) of the THz-RF link, which is valid for continuous values of $\mu$, enabling a generalized performance analysis over THz fading channels. Using the derived CDF, we analyze the performance of the THz-RF relayed system with the decode-and-forward (DF) protocol by deriving analytical expressions for the diversity order, the moments of the SNR, the ergodic capacity, and the average bit-error rate (BER) in terms of system parameters. We also analyze the considered system with an i.i.d. model and develop simplified performance expressions to provide analytical insight into the system behavior under various practically relevant scenarios. Simulation and numerical analysis show a significant effect of the fading parameters of the THz link and a nominal effect of the normalized beam-width on the performance of the relay-assisted THz-RF system.
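For reference, the fading model underlying the analysis is Yacoub's $\alpha$-$\mu$ distribution, whose standard envelope density is (notation assumed, not quoted from the paper):
\[
f_{X}(x) = \frac{\alpha\,\mu^{\mu}\,x^{\alpha\mu-1}}{\hat{x}^{\alpha\mu}\,\Gamma(\mu)}\,
\exp\!\left(-\mu\,\frac{x^{\alpha}}{\hat{x}^{\alpha}}\right), \qquad x\ge 0,
\]
where $\hat{x}=(\mathrm{E}[X^{\alpha}])^{1/\alpha}$ is the $\alpha$-root mean value, $\alpha$ captures the nonlinearity of the propagation medium and $\mu$ the number of multipath clusters; allowing continuous $\mu$ in the derived CDF is what generalizes the analysis beyond integer cluster counts.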
We consider the cosmology obtained using scalar fields with a negative potential energy, such as those employed to obtain an Ekpyrotic phase of contraction. Applying the covariant entropy bound to the tower of states dictated by the distance conjecture, we find that the relative slope of the potential $|V^{\prime}| / |V|$ is bounded from below by a constant of order one in Planck units. This is consistent with the requirement to obtain slow Ekpyrotic contraction. We also derive a refined condition on the potential which holds near local minima of a negative potential.
Recently, Graph Convolutional Networks (GCNs) have proven to be a powerful means for Computer Aided Diagnosis (CADx). This approach requires building a population graph to aggregate structural information, where the graph adjacency matrix represents the relationship between nodes. Until now, this adjacency matrix has usually been defined manually based on phenotypic information. In this paper, we propose an encoder that automatically selects the appropriate phenotypic measures according to their spatial distribution and uses a text-similarity awareness mechanism to calculate the edge weights between nodes. The encoder can automatically construct the population graph using phenotypic measures which have a positive impact on the final results, and further realizes the fusion of multimodal information. In addition, a novel graph convolutional network architecture using a multi-layer aggregation mechanism is proposed. This structure can obtain deep structural information while suppressing over-smoothing, and increases the similarity between nodes of the same type. Experimental results on two databases show that our method can significantly improve the diagnostic accuracy for autism spectrum disorder and breast cancer, indicating its universality in leveraging multimodal data for disease prediction.
We construct a $2$-generated pro-$2$ group with full normal Hausdorff spectrum $[0,1]$, with respect to each of the four standard filtration series: the $2$-power series, the lower $2$-series, the Frattini series, and the dimension subgroup series. This answers a question of Klopsch and the second author, for the even prime case; the odd prime case was settled by the first author and Klopsch. Also, our construction gives the first example of a finitely generated pro-$2$ group with full Hausdorff spectrum with respect to the lower $2$-series.