Dynamical Electroweak Breaking and Latticized Extra Dimensions

Using gauge invariant effective Lagrangians in 1+3 dimensions describing the Standard Model in 1+4 dimensions, we explore dynamical electroweak symmetry breaking. The Top Quark Seesaw model arises naturally, as well as the full CKM structure. We include a discussion of effects of warping, and indicate how other dynamical schemes may also be realized.

An interesting aspect of the Standard Model in the latticized bulk construction is that it provides the essential ingredients of a Topcolor model . Indeed, Topcolor is a dynamical gauge theory basis for top quark condensation and rather uniquely involves the imbedding SU(3) → SU(3)_1 × SU(3)_2 × .... Here the third generation feels the stronger SU(3)_1 interaction, while the first and second generations feel the weaker SU(3)_2. Such an imbedding, or enlargement, of the SU(3) gauge group is a natural consequence of extra dimensions with localized fermions . Indeed, Topcolor viewed as a remodeled extra-dimensional theory anticipates the fermionic generations arising in a localized way in extra dimensions . Extra-dimensional models with gauge fields in the bulk , or their remodeled counterparts, are inherently strongly coupled. We show that the inherent strong coupling expected in these models can naturally provide a dynamical condensation of t̄t. In the remodeled description this is on a firm footing, since the dynamics can be approximated reliably by a Nambu-Jona-Lasinio model. We should say that a priori nothing precludes the addition of more physics, e.g., supersymmetry or technicolor, etc. We pursue Topcolor and Top Seesaw models at present because the remodeling of the 1+4 Standard Model supplies all the ingredients for free! If we wanted to construct a pure Topcolor model, or a model such as Topcolor Assisted Technicolor , we would also require a "tilting" mechanism to block the formation of a b̄b condensate.
Again, the Standard Model in the latticized bulk provides the desired extra weak hypercharge imbedding U(1)_Y → U(1)_{Y1} × U(1)_{Y2} × ... needed to tilt in the direction of the top condensate. The fact that the top-anti-top channel is the most attractive channel in a Standard Model configuration then drives the formation of the top condensate alone. In the present paper, however, we will explore a further aspect of the dynamics of a remodeled 1+4 theory with the Standard Model gauge structure propagating in the bulk. We will show that the Top Seesaw model , which may indeed be the best and most natural model of dynamical electroweak symmetry breaking, arises completely and naturally from extra dimensions. In a Top Seesaw model a top condensate forms with the natural electroweak mass gap, µ ∼ 600 GeV, but there exist additional vectorlike partners to the t_R quark, usually designated by χ_R and χ_L. These objects form heavier Dirac mass combinations, M χ̄χ and m′ χ̄_L t_R, and taken together the physical top mass is given by m_top = m′µ/M. The Top Seesaw affords a way to make a heavy top quark, and to explain all of the electroweak breaking with a minimum amount of fine-tuning. It has a heavy Higgs boson, ∼ 1 TeV, yet is in full consistency with the S − T error ellipse constraints . Remarkably, the vectorlike χ quarks of the Top Seesaw are also available for free from extra dimensions. These are simply the "roaming" t_R quark in the bulk, away from the domain wall that localizes its chiral zero mode t_R. The possibility of generating top condensation (or other) schemes in the context of extra dimensions has been developed previously in explicit continuum extra dimensions ; indeed, Dobrescu first observed that dynamical electroweak symmetry breaking was a likely consequence of the strong coupling of QCD in extra dimensions. The geometric reasoning we inherit from extra dimensions leads us to a systematic way of extending the models.
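The seesaw relation m_top = m′µ/M quoted above can be checked numerically. The following is a minimal sketch (our own, with illustrative TeV-scale numbers, not values fitted in this paper) that diagonalizes the schematic 2 × 2 Top Seesaw Dirac mass matrix and compares the light singular value with the seesaw estimate:

```python
import numpy as np

# Hedged sketch of the Top Seesaw mass matrix in the (t, chi) basis.
# Illustrative inputs (GeV), chosen only to satisfy mu, m' << M:
#   mu ~ 600  : dynamical <t_L chi_R> mass gap from the condensate
#   mp ~ 1000 : the m' chi_L t_R Dirac term (m' in the text)
#   M  ~ 4000 : the M chi_L chi_R Dirac term
mu, mp, M = 600.0, 1000.0, 4000.0

mass = np.array([[0.0, mu],
                 [mp,  M ]])

# Physical fermion masses are the singular values of the Dirac matrix.
light, heavy = sorted(np.linalg.svd(mass, compute_uv=False))
print(f"light eigenstate (top): {light:.0f} GeV")
print(f"seesaw estimate m'*mu/M: {mp * mu / M:.0f} GeV")
```

For M well above µ and m′ the exact light singular value tracks the seesaw estimate m′µ/M to within a few percent, which is the sense in which the top mass is "suppressed" relative to the electroweak mass gap µ.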
Remodeled extra dimensions have led us in the present paper to the first theory of flavor physics from the Top Quark Seesaw, with CKM structure and light fermion masses. We also show that one can readily construct a viable 4th generation scheme along these lines. All fermions are condensed by the SU(3) × U(1)_Y structure on the 4th-generation brane, and one can postulate Majorana masses for the ν_{Ri} as well, allowing the Gell-Mann-Ramond-Slansky-Yanagida neutrino seesaw (see, e.g., and references therein). Our present discussion will be largely schematic. We will describe the structure of the theory, and in a later work we will present the full phenomenology . To make the present discussion as transparent as possible we will "thin the degrees of freedom." Normally, we would approximate the bulk with a very large number of branes and interlinking Higgs fields. Presently, however, we will describe reduced n-brane models, in which n is small, typically n = 2, 3, 4, 5, .... In our minimal Top Seesaw scheme we have n = 4, i.e., there is one brane per generation and one extra spectator brane (required for technical reasons). Hence, in this case all of the bulk is approximated by a transverse lattice with four branes. The gauge group we consider in 1+3 dimensions for the n-brane model is SU(3)^n × SU(2)_L^n × U(1)_Y^n, and we have n−1 link-Higgs fields per gauge group. Thus, we keep only the zero modes and n−1 Kaluza-Klein (KK) modes for each gauge field. We will also keep some of the vectorlike KK modes of the fermions, in particular for the third generation. The masses of the vectorlike KK fermions are controlled by the mechanism that produces the chiral fermions on the branes, and these can be lifted to arbitrarily large Dirac masses, independent of the compactification scale. The thinning of degrees of freedom is a mathematical approximation to the full theory. It is presumably derived from the fine-grained theory by a Kadanoff-style renormalization group.
As a result, we expect many renormalization effects; e.g., any translational invariance that may be softly broken by background fields of the short-distance theory can be lost in the thinned degrees of freedom of the effective theory. Our residual engineering freedom, leading to any given scheme, arises largely from the localization of the chiral fermions and the freedom to renormalize the link-Higgs VEVs and gauge couplings in a non-translationally-invariant way. How all of this ultimately interfaces with flavor physics constraints, e.g., flavor-changing neutral current constraints , etc., remains to be examined in detail . Thus our models can be viewed as transverse lattice descriptions of a Standard Model in 1+4 dimensions in which the gauge fields, fermions, and Higgs all live in the bulk with thinned degrees of freedom. Alternatively, they are a new class of 1+3 models with Topcolor and Top Seesaw dynamics. The two pictures are equivalent through remodeling.

Effective Lagrangians in Warped Latticized Backgrounds

We begin with some essential preliminaries on latticized extra dimensions. We wish to describe the low energy effective Lagrangian of, e.g., the Standard Model in 1+4 dimensions using the transverse lattice, but we presently include effects that break translational invariance in x_5. We begin with the QCD content and allow a general background geometry described by a metric with dependence upon the extra dimension x_5. Consider the pure gauge Lagrangian in 1+3 dimensions for N+1 copies of QCD, in which we have N+1 gauge groups SU(3)_j with gauge couplings g̃_j that depend upon j, and N link-Higgs fields Φ_j forming (3̄_{j−1}, 3_j) representations.
The covariant derivative is defined as usual: it annihilates a field that is a singlet under SU(3)_j; when the covariant derivative acts upon Φ_j we have a commutator of the gauge part with Φ_j, with T^B_j acting on the left and T^B_{j−1} acting on the right; the jth field strength G^B_{jµν} is determined as usual. We treat the Φ_j as explicit Higgs fields. Renormalizable potentials can be constructed for each of the link-Higgs fields, and we can always arrange the parameters in the potential such that the diagonal components of each Φ_j develop a common vacuum expectation value (VEV) v_j, while the Higgs and U(1) pseudo-Nambu-Goldstone boson (PNGB) are arbitrarily heavy (for the perturbative unitarity constraint on this limit see ref. ). Hence, each Φ_j becomes effectively a nonlinear σ-model field. In our previous discussion , we assumed that g̃_j and v_j were common for all N+1 gauge groups and N links, i.e., independent of j. This corresponds to a translationally invariant extra dimension with physical parameters independent of x_5. In general, we must consider non-uniform g̃_j and v_j in the remodeled theory. These correspond to a large variety of possible effects. For example, we may have an extra dimension with a non-trivial background metric and a space-dependent gauge coupling. These effects can arise from a bulk cosmological constant, a background space-dependent dilaton field, or from other fields and the finite renormalization effects due to localization of these fields. Alternatively, a background scalar field with nontrivial dependence upon x_5, ϕ(x_5), coupled to the gauge kinetic term (G^B_{µν})², will give a finite x_5-dependent renormalization of g̃. Let us consider presently the case of a warped geometry, where the metric will contain an overall warp factor or background dilaton field, e.g., a Randall-Sundrum model .
The effect of the dilaton field can be seen through the implicit identification of the link-Higgs fields Φ_n with the Wilson lines, where a is the lattice spacing. Let us compare the result with the 1+4 dimensional action for the gauge field in the background metric, where the indices µ, ν are raised and lowered by the Minkowskian metric η_µν. We thus see by comparison that the gauge coupling g̃_j is related to the 5-dimensional gauge coupling by g̃²_j = g²_5(ja)/a (assuming that g_5 is smoothly varying), and v_j is simply related to the warp factor. For smoothly varying g̃_j and v_j, we can make a smooth interpolation; an example with 3 lattice points is described in Appendix A. It is also straightforward to obtain the transverse lattice Lagrangian for scalar and fermion fields under the warped background metric. The action for a scalar field under the background (2.5) can be discretized, and after rescaling e^{−σ_j} H → H we obtain the 1+3 Lagrangian. As discussed in our previous paper , the coupling of H_j can depend on y, which can come from a y-dependent VEV of some field or from renormalization effects. The action of a fermion under the background (2.5) can likewise be rescaled and discretized; in the resulting fermion Lagrangian we have used the relation (2.7) and imposed the boundary conditions Ψ_{−1R} = Ψ_{NR} = 0 and Ψ_{(N+1)L} = Ψ_{NL}, corresponding to having Ψ_R (Ψ_L) odd (even) under Z_2. There is one more Ψ_L than Ψ_R at lattice site N, so a massless left-handed chiral fermion is left over. The gauge anomaly must be canceled by including additional chiral fermions. Reversing the Z_2 parity of Ψ_L and Ψ_R gives rise to a massless right-handed fermion. This can be obtained by imposing the boundary conditions Ψ_{−1R} = Ψ_{0R}, Ψ_{0L} = Ψ_{(N+1)L} = 0.
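The matching g̃²_j = g²_5/a can be checked numerically in the flat (unwarped) limit. The sketch below (our own construction; N, a, and g_5 are arbitrary illustrative choices) builds the aliphatic gauge-boson mass-squared matrix for uniform g̃ and v = 1/(g̃a) and compares the low-lying spectrum with the continuum KK masses kπ/R:

```python
import numpy as np

# Sketch of the deconstruction dictionary for a flat extra dimension:
# N+1 gauge groups, N links with common VEV v and common coupling gt.
N = 30                      # number of links (illustrative)
a = 1.0                     # lattice spacing (arbitrary units)
g5 = 1.0                    # 5d coupling (arbitrary)
gt = g5 / np.sqrt(a)        # matching: gt^2 = g5^2 / a
v = 1.0 / (gt * a)          # link VEV that reproduces the continuum limit

# Mass-squared matrix from sum_j gt^2 v^2 (A_j - A_{j+1})^2,
# with free ("aliphatic") ends: diag (1, 2, ..., 2, 1), off-diag -1.
M2 = gt**2 * v**2 * (2 * np.eye(N + 1)
                     - np.eye(N + 1, k=1) - np.eye(N + 1, k=-1))
M2[0, 0] = M2[N, N] = gt**2 * v**2

masses = np.sqrt(np.clip(np.linalg.eigvalsh(M2), 0, None))
R = (N + 1) * a             # size of the latticized interval

print("zero mode mass:", masses[0])
for k in (1, 2, 3):
    print(f"KK {k}: lattice {masses[k]:.4f} vs continuum {k*np.pi/R:.4f}")
```

The exact lattice eigenvalues are 2 g̃v sin(kπ/2(N+1)), so the low modes reproduce kπ/R while the high modes are distorted by the lattice, which is exactly the part of the spectrum the thinning of degrees of freedom discards.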
Alternatively, we can make the changes L ↔ R, g̃ → −g̃ (a → −a) in the Lagrangian (2.14), corresponding to an opposite sign for the Wilson term, which is included to avoid the fermion doubling problem, and impose the boundary conditions Ψ_{−1L} = Ψ_{NL} = 0, Ψ_{(N+1)R} = Ψ_{NR}.

Top Quark Seesaw from Remodeled Extra Dimensions

We consider a sequence of n-brane schemes. We put one generation of fermions and a copy of SU(2)_L × U(1)_Y × SU(3) on each brane. In addition, we have n−1 link-Higgs fields (chiral fields, one for each gauge group). In the end we will have a set of links from a spectator brane to the up brane (which is defined as the brane on which the chiral up quark is localized), one set from up to charm, and another from charm to top. We will thus have constructed an "aliphatic model," as in . There are the usual zero-mode gauge fields and the n−1 KK modes, which are determined exactly. No Nambu-Goldstone boson (NGB) zero modes occur, as is usually the case in Technicolor-like models (indeed, these models have nothing to do with Technicolor). We are assuming throughout that we have an underlying Jackiw-Rebbi mechanism to trap the fermionic chiral modes at specific locations in the bulk. This involves scalar fields in 1+4, ϕ_q(x_5), which couple to ψ̄_q ψ_q and have domain-wall configurations on which chiral zero-mode solutions exist. Away from the domain wall the fermions are vectorlike and have large Dirac masses. For the remodeled description of matter fields, we exploit the fact that the chiral fermions can always be engineered on any given brane, with arbitrarily massive vectorlike KK-mode partners on all branes, so we need keep only the chiral zero modes and the lower-mass vectorlike fermions.
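The counting behind the massless chiral mode described in the text can be made concrete. In a toy sketch (ours, with an arbitrary lattice size), the boundary conditions leave N+1 left-handed but only N right-handed lattice fermions, so the Dirac mass matrix is N × (N+1) and necessarily has one left-handed null vector:

```python
import numpy as np

# Toy mode-counting sketch: each Psi_R(j) pairs Psi_L(j) with
# Psi_L(j+1) through nearest-neighbor (Wilson-like) hopping, so the
# mass matrix is rectangular and one left-handed mode stays massless.
N = 8
a = 1.0

M = np.zeros((N, N + 1))   # rows: Psi_R(0..N-1), cols: Psi_L(0..N)
for j in range(N):
    M[j, j] = 1.0 / a
    M[j, j + 1] = -1.0 / a

sv = np.linalg.svd(M, compute_uv=False)
n_zero = (N + 1) - int(np.sum(sv > 1e-10))   # unpaired left-handed modes
print("massless left-handed modes:", n_zero)
```

Reversing the Z_2 parity swaps the roles of L and R, giving one massless right-handed mode instead, exactly as stated above.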
Indeed, it is an advantage of the remodeled 1+3 formalism that we can do this; in a sense the chiral generations are put in by hand in the remodeled theory, and we retain only the minimal relevant information that defines the low energy effective Lagrangian. We require a mechanism to make the bare g̃_3 coupling of SU(3)_{C,j} critically strong on the top brane j, such that the top quark will condense. Of course, with N branes (of equal couplings) the bare g̃_3 coupling is already √N times stronger than the physical QCD coupling. The freedom exists to choose an arbitrarily strong bare g̃_3 on brane j for a variety of reasons, as described in Section 2. For example, if the kink field that localizes the chiral fermions couples to (G^B_{µν})², it can give finite renormalization to the top-brane gauge coupling constants and trigger the formation of the condensate (see below). Any non-universal translational invariance breaking in x_5 may provide such a mechanism. The vectorlike fermions of the Top Seesaw arise in a simple way: they are the roaming t_R (and/or t_L) in the bulk. In a sense it is remarkable that all of the ingredients are present. In addition, we get Topflavor , with the copies of the SU(2)_L gauge groups. Here arises a novel problem, first noted in ref. : with large SU(2)_L couplings the instanton-mediated baryon number violation mechanism of 't Hooft becomes potentially problematic. Finally, we ask: how is the CKM matrix generated? We can put generational linking terms in by hand, which presumably arise from an underlying mechanism of overlapping wave-functions for split fermions . In our remodeled formulation we get no more and no less information out than is put in by localizing the fermions in the bulk in the first place.

The Schematic Top Seesaw

Let us first briefly review the Top Seesaw model. In a schematic form of the Top Seesaw model, QCD is embedded into the gauge groups SU(3)_1 × SU(3)_2, with gauge couplings g̃_{3,1} and g̃_{3,2} respectively.
The relevant fermions transforming under these gauge groups are as follows (anomalies are dealt with by extension to include the b-quark). We include a scalar field Φ, transforming as (3, 3̄), which develops a diagonal VEV, Φ^i_j = v δ^i_j, breaking Topcolor to QCD; the massive gauge bosons (colorons) acquire mass accordingly. Since χ_L and t_R have the same gauge quantum numbers, we can write down an explicit Dirac mass term, µ_{χt} χ̄_L t_R. A second Dirac mass term, between χ_L and χ_R, can be induced from the Yukawa coupling to Φ. These masses are assumed to be in the TeV range, with the ordering µ_{χt} < µ_{χχ} < M. Below the scale M, various four-fermion interactions are generated after integrating out the heavy gauge bosons. We assume that g̃_{3,1} is supercritical and ≫ g̃_{3,2}. A t̄_L χ_R condensate will form and break the electroweak symmetry. To obtain the correct electroweak breaking scale, t_L χ_R should have a dynamical mass m_{tχ} ∼ 600 GeV . The mass matrix for the t_{L,R}, χ_{L,R} then follows, and the light eigenstate is identified as the top quark, with a seesaw-suppressed mass.

Top Seesaw from Remodeled Extra Dimensions

All of the ingredients of a Topcolor scenario, in particular the Top Quark Seesaw, are present in an extra-dimensional scheme. We assume that we have only the fermions T and t in 1+4 dimensions. The χ fields will appear automatically as the vectorlike KK-mode components of these fields. The fermions are coupled in 1+4 to a background field as ϕ(x_5) T̄T and ϕ(x_5) t̄t, and we assume that ϕ(x_5) produces a domain-wall kink at x_5^0, which we identify in our latticized approximation as brane 1 in the Figures. Before the formation of the top condensate, the top quark configuration on the lattice branes is depicted in Fig. . The basic idea underlying the formation of a condensate is to allow a particular gauge coupling constant to become supercritical on a particular brane. In Fig. 2 we show the formation of the condensate T̄_L t_R on brane 1, where we assume that the SU(3)_1 coupling constant g̃_{3,1} is supercritical, i.e., in the NJL model approximation to QCD, 3g̃²_{3,1}/8π² > 1. This can be triggered from a ϕ(x_5)(G²_µν) coupling in the underlying 1+4 theory, but it is a free parameter choice in the 1+3 effective Lagrangian. A trigger mechanism in the 1+4 theory for supercritical coupling at the location of the trapping domain wall can arise from a coupling of ϕ to the squared field strength (G^a_{µν})², modifying the gauge Lagrangian in 1+4. Such a coupling will always be induced by the fermion fields which couple to the gauge fields. We assume ϕ → M (ϕ → −M) for x_5 → R (x_5 → 0). For λ > 0 the action is well behaved, and off the domain wall the effective coupling constant, ḡ²_3 = g²_3/(1 + λ), is suppressed. On the domain wall the effective coupling is then g²_3, which we assume is supercritical. Moreover, for fixed coupling g²_3 the condensate is generally suppressed in the NJL model approximation for fields with large Dirac masses, so we expect only the chiral fields to pair up. In fact, one need not appeal to the trigger mechanism alluded to above, but it is a useful way to suppress g²_3 elsewhere in the bulk, and such operators are expected on general grounds when we construct the renormalized effective Lagrangian with fewer degrees of freedom. In our latticized 1+3 description the varying coupling constants g̃_{3,j} and Dirac mass terms can be put in "by hand" as defining parameters. (Figure: Two-brane approximation. In the limit that T_2 decouples, this is just the original Top Seesaw model of .) The link-Higgs VEV has exactly the right structure for Topcolor breaking.
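The NJL criticality criterion quoted in the text, 3g²/8π² > 1, together with the bulk suppression g² → g²/(1 + λ), can be illustrated with a small numerical sketch (the values of g² and λ below are our own illustrative assumptions, not parameters of the model):

```python
import numpy as np

# Sketch of the criticality test: supercritical on the domain wall,
# subcritical off it due to the dilaton-like suppression g^2/(1+lam).
g2 = 30.0    # assumed bare coupling-squared on the top brane
lam = 3.0    # assumed coupling of phi to (G_munu)^2 in the bulk

def njl_strength(g_squared):
    """NJL criterion: condensation occurs when this exceeds 1."""
    return 3.0 * g_squared / (8.0 * np.pi**2)

on_wall = njl_strength(g2)              # at the trapping domain wall
off_wall = njl_strength(g2 / (1 + lam)) # elsewhere in the bulk
print(f"on wall : {on_wall:.2f} supercritical: {on_wall > 1}")
print(f"off wall: {off_wall:.2f} supercritical: {off_wall > 1}")
```

This is the whole content of the trigger mechanism: the same bare interaction sits above the critical line only on the brane where ϕ vanishes, so only the fields localized there condense.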
The SU(2)_L doublet and singlet quarks propagate in the 1+4 bulk, and in the latticized scheme are represented by the fields T_j, t_j, j = 1, 2, on the two branes. We have projected out the chiral partners T_{1R} and t_{1L} by coupling to the background localizing field with a domain-wall kink at brane 1, which produces chiral fermions. The kinetic terms in the extra dimension give rise to mass terms in the 1+3 effective Lagrangian that interconnect T_{1L} to T_{2R}, etc. The background localizing field ϕ also produces the Dirac masses that interlink, e.g., T_{2R} and T_{2L}, etc. The resulting two-brane configuration is shown in Fig. . We can see explicitly that this matches onto the schematic Top Seesaw model. To match, we first assume that the ϕ contribution to the T̄_{2L} T_{2R} mass term is so large that T_{2L}, T_{2R} decouple. Then the 2-brane model is identical to the schematic Top Seesaw model described above through the appropriate identification of fields and masses. For a supercritical gauge coupling g̃_{3,1}, the T̄_{1L} t_{1R} condensate will form, breaking the electroweak symmetry. The top quark mass is then obtained from the seesaw mechanism.

The Light Generations and Flavor Physics

We now consider all three fermionic generations of the Standard Model in the latticized bulk. We discuss the issue of how we can generate light quark masses and mixings in a generalized geometric Top Seesaw scenario. Clearly, in order to generate light fermion masses from the third generation condensates, some flavor mixing terms must be present. Small masses and mixings can be generated in 1+4 models by the overlap of the Higgs and fermion wavefunctions in the extra dimension and/or small flavor mixing effects arising from localization. We examine this mechanism in the latticized extra dimension with the simplest flavor mixing mass terms. We find that the light generation fermion masses are generated radiatively in this picture.
There is a copy of the SU(3) × SU(2)_L × U(1)_Y Standard Model gauge group on each brane, with gauge couplings g̃_{a,j} respectively, where a = 1, 2, 3 is the gauge group index and j = 0, 1, 2, 3 is the brane index. There are link fields Φ_{a,j}, a = 1, 2, 3; j = 1, 2, 3, which break the full product gauge group down to the diagonal Standard Model subgroup. We will denote the 3 generations of SU(2)_L doublet quarks with uppercase letters (T, C, U) and the SU(2)_L singlet fermions with lowercase letters (t, c, u), respectively. We assume that the third generation fermions propagate on all branes, with the localization removing the right-handed SU(2)_L doublets and the left-handed singlets on brane 0. Hence the third generation fields T_j and t_j, etc., carry the brane index j, while C, c (U, u) are localized on brane 1 (2). The localization of the top, charm, and up quarks is accomplished with additional ϕ_t(x_5), ϕ_c(x_5), and ϕ_u(x_5) fields that produce domain walls in the underlying 1+4 theory. If we assume that only the brane-0 QCD coupling is supercritical, it drives the formation of the condensate T̄_{L0} t_{R0}, breaking the electroweak symmetry. The top quark mass is then obtained from the generalized seesaw mechanism. In general, the left- and right-handed top quark zero modes are linear combinations of the T_{Lj} and t_{Rj} (and C_L, U_L, c_R, u_R after including flavor mixings), with coefficients α_{Tj}, α_{tj} determined by the direct and link mass terms among the T_{L,R}'s and t_{L,R}'s. The top quark mass is suppressed by the mixings α_{T0} and α_{t0}. Let us first thin the degrees of freedom of the extra dimension to a 3-brane model, and consider first the generation of the charm quark mass. This configuration is shown in Fig. . To generate the charm quark mass, we include flavor-mixing mass terms. In the underlying 1+4 theory we might suppose that these arise on a given brane from couplings of the form, e.g., ǫ ϕ_t(x_5) C̄_L ψ_R. In the 1+3 theory this is a common Dirac mass on brane 1 that mixes all fermions with equivalent quantum numbers.
However, the direct contact mass terms C̄_L T_{R1}, t̄_{L1} c_R can all be rotated away by redefinitions of the fields T_{R1} and t_{L1}. This can be seen by considering the overall mass term on brane 1, where the second term is just the mass of the Dirac (non-chiral) vectorlike t quark on brane 1. Thus, by a field redefinition we eliminate the direct charm quark mass term. The important point is that this redefinition involves only fields on the common brane 1, and there is no residual kinetic-term mixing, since all of the kinetic terms involve the same gauge fields A^B_{µ,1}. In order to generate a surviving mass term for the charm quark we thus need additional terms to frustrate the chiral redefinition. Such terms are seen to be present when we consider brane 2. Note that when we rotate away the direct charm mass terms on brane 1, we will in general obtain linking mass terms, in which a product of link-Higgs fields (Π_i Φ_{a,i}) appears, as shown in Fig. . These terms can be viewed as mass terms, but in reality they are all higher dimension operators; since we are in the broken phase, in which the Φ's all have VEVs, we can only approximately describe these terms as though they are mass terms. However, even with the flavor-mixing linking (higher dimension) mass terms and the direct mass terms, at tree level (neglecting the gauge interactions on branes 1 and 2) we can again perform a field redefinition as before, and we still fail to generate a charm quark electroweak mass; only the top quark retains a nonzero electroweak mass. This can be seen readily, as the EWSB condensate couples only to T_{L0}, t_{R0}. We can rewrite T_{L0}, t_{R0} in terms of the redefined eigenstates of the electroweak-preserving masses. After decoupling the heavy vectorlike states, the 3 × 3 up-type quark mass matrix M_U is of rank 1. However, the result of the field redefinition is that off-diagonal couplings to the gluons on branes 1 and 2 are now generated!
When we now take into account the gauge interactions on branes 1 and 2, the charm quark does indeed obtain a nonzero mass from radiative corrections, as shown in Fig. 5. For this to occur we require the interference with the linking mass terms, because otherwise the gauge radiative corrections only produce multiplicative corrections to the (zero) mass on a given brane. More explicitly, we now have an interbrane mass term of the form, e.g., m′ t̄_{L2} c_R for the charm quark. This implies that on brane 2 there is the overall mass term m′ t̄_{L2} c_R + M t̄_{L2} t_{R2}, where the second term is the mass of the Dirac vectorlike t quark. Thus, redefining t_{R2} → cos θ t_{R2} + sin θ c_R and c_R → −sin θ t_{R2} + cos θ c_R, we can eliminate the direct charm quark mass term. However, upon performing the redefinitions in the kinetic terms we generate off-diagonal transitions proportional to κ(A̸_1 − A̸_2), where Ã_{1(2)} = cos²θ A_{1(2)} + sin²θ A_{2(1)} are the diagonal combinations, κ = sin θ cos θ, and the remaining terms are diagonal. On the left-handed side of Fig. we also generate off-diagonal couplings of the form κ′ (C̄_L (A̸_1 − A̸_2) T_{L2} + h.c.). In evaluating the induced charm quark mass and mixing it is useful to remain in the current eigenbasis, in which the gluon interactions are diagonal. We emphasize that this effect is different from that described in , in which localization produces off-diagonal flavor transitions amongst fermions coupled to KK-mode vector bosons. These off-diagonal kinetic terms, we emphasize, are higher dimension operators involving the link-Higgs fields! They take the apparent d = 4 form only as a result of working in the broken phase of the Φ's. However, the result is that we have now generated an interaction that acts like extended technicolor . When we include the radiative effects of the gluons, we generate the charm quark mass. In Fig.
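The rotation described above can be verified explicitly. The sketch below (illustrative numbers of our own; only the angle θ = arctan(m′/M) and κ = sin θ cos θ are taken from the text) shows that the redefinition kills the direct charm mass term while making the brane gluon couplings off-diagonal with strength κ:

```python
import numpy as np

# Brane-2 rotation sketch: (t_R2, c_R) rotated by theta = arctan(m'/M).
mp, M = 0.3, 2.0                 # m' t_L2 c_R and M t_L2 t_R2 (illustrative)
theta = np.arctan2(mp, M)
c, s = np.cos(theta), np.sin(theta)

# Old fields in terms of the rotated ones: (t_R2, c_R) = V (t_R2', c_R')
V = np.array([[c, -s],
              [s,  c]])

# Mass row (M, m') acting on (t_R2, c_R); the charm entry vanishes
# in the rotated basis.
mass_row = np.array([M, mp]) @ V
print("rotated mass row:", mass_row)

# Gluon couplings: t_R2 sees A_2 (brane 2), c_R sees A_1 (brane 1).
K2 = V.T @ np.diag([1.0, 0.0]) @ V   # coupling to A_2, rotated basis
K1 = V.T @ np.diag([0.0, 1.0]) @ V   # coupling to A_1, rotated basis
kappa = s * c
print("off-diagonal A_1 coupling:", K1[0, 1], " kappa:", kappa)
```

The off-diagonal entries of K1 and K2 are ±κ, so the combination the rotated fields feel is κ(A_1 − A_2): the mass term is gone, but the flavor violation has been pushed into the gauge couplings, which is what frustrates the chiral redefinition at the radiative level.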
we illustrate the diagram in the basis in which the gluon couplings are diagonal (the current eigenbasis). We also generate radiative mixing between charm and top through diagrams as in Fig. , where now the mixing of the gluonic gauge groups on different branes must be included. The extension of the scheme to include the up quark mass generation and mixing is shown in Fig. . Again, we require nearest-neighbor mixing between branes, which produces a vanishing mass in tree approximation, but off-diagonal gluon vertices in the broken phase due to kinetic-term mixing. The full mass matrix is regenerated when radiative corrections are included. One can understand the origin of the mass matrix in the language of the "shining Higgs VEV profile" as discussed in our previous paper . The gauge interactions on branes 1 and 2 are subcritical, so the Higgs bound states formed on these branes have positive squared masses. However, due to the links with brane 0, the composite Higgs fields on branes 1 and 2 will receive tadpole terms, as shown in Fig. , and therefore obtain nonzero VEVs. From the shining and the flavor mixing effects, the final Higgs VEV will contain some small components of C̄_L c_R and Ū_L u_R after diagonalization, which are responsible for generating the charm and up quark masses. To generate the down-type quark mass matrix requires a mechanism to first generate the b-quark mass. One possibility is to condense the b-quark as in the case of the top quark, and exploit a larger seesaw. This generally encounters a large degree of fine-tuning; to have the large seesaw suppression of the physical b mass requires a larger vectorlike Dirac mass for the roaming b quarks, and this can turn off the condensate except for large supercritical coupling. An alternative and less fine-tuned approach is to exploit SU(3)_0 instantons on brane 0, which produce a 't Hooft determinant containing terms like ∼ b̄_L b_R t̄_L t_R + .... Then the nonzero ⟨t̄t⟩ induces the b-quark mass.
The magnitude of this condensate can be controlled by a seesaw with the vectorlike b-quarks. In any case, the dynamical and phenomenological details of the generation of the b quark mass are a Top Seesaw modeling issue, and will be described in detail in a forthcoming paper by He, Hill and Tait . For our present purposes we can simply assume that an induced b-quark mass or b̄b condensate can be arranged on brane 0. The full model then takes the form of Fig. 8 with (u, c, t) replaced by (d, s, b). This produces a second species of Higgs boson, the b-Higgs H_b, which then shines through the bulk. Topcolor does not specifically address the issue of leptons. We can in principle use the U(1)_{Y0} on brane 0 to condense the τ lepton, and a corresponding seesaw to produce the physical m_τ. Alternatively, any new physics that produces the higher dimension operator structure t̄t τ̄τ will suffice to give the τ lepton a mass. Having produced ⟨τ̄τ⟩ ≠ 0 on brane 0, we again repeat the construction to provide the masses for µ and e. In the lepton case the U(1)_Y radiative corrections replace the gluonic radiative corrections. The neutrinos do not condense, since U(1)_Y does not produce a nontrivial 't Hooft determinant (!), and we do not presently address the origin of the small neutrino Majorana masses. In Fig. we illustrate how the lepton sector can be dynamically generated. Here there is a U(1)_{Y0} condensate of ĒE on brane 0, which produces the leptonic Higgs boson. As before, the fermion masses are generated by linking flavor-changing terms.

Fourth Generation Condensates

We also include the right-handed neutrino N_R. We emphasize that this need not be chiral, i.e., it need not be a localized chiral zero mode associated with a brane-0 kink.
The N_R is a gauge singlet, so we can write down the Majorana mass term N̄^C_R N_R, which presumably comes from external physics; e.g., it may come from the effective Planck scale (we can certainly complicate the picture by including all possible allowed Majorana mass terms). A higher dimension operator exists which allows the bilinear L̄N_R to feel the electroweak condensates. Once the master 4th-generation Dirac mass is established we can invoke the seesaw. We depict this in Fig. . The Feynman diagram of Fig. then shows the formation of the radiatively induced Majorana mass (by, e.g., Z exchange) for the ν_τ. Similar mixings and mass terms arise for the first and second generation neutrinos, in analogy to the quark and charged lepton masses. We mention that one should be wary of the possibility of enhanced proton decay coming from the 't Hooft process with the strong SU(2)_L gauge groups located on the various branes. This is an issue for "topflavor" models, which we defer to another occasion. Moreover, as discussed in Ref. , the KK gauge bosons can induce flavor-changing effects in split-fermion generation models. This puts a strong constraint on the KK gauge boson masses. In our model, the first two generations are localized away from the EWSB brane. Flavor-changing effects involving the first two generations from heavy gauge boson exchanges can be suppressed if the link VEVs associated with the first two generation branes are much larger than the weak scale (since they are not directly related to EWSB). We have not yet touched upon the vacuum alignment of the VEVs of the composite Higgs, possibly through mechanisms such as radiative corrections and higher order couplings. Such questions have been discussed in earlier works on dynamical electroweak symmetry breaking, e.g., ; we defer discussion of our specific model to future work .
Discussion and Conclusion

In conclusion, we have given a description of a fairly complete extension of the Standard Model with dynamical electroweak symmetry breaking. This arises from the bulk 1 + 4 dimensions as a 1 + 3 dimensional effective theory after remodeling. We have kept only a small number of lattice slices (branes) as a minimal approximation with thinned degrees of freedom. A dynamical electroweak symmetry breaking scheme emerges naturally in this description, as first anticipated by Dobrescu . We see immediately the emergence of an imbedding of QCD as in SU(3) → SU(3)_1 × SU(3)_2 ..., and the appearance of vectorlike partners of the elementary fermions such as the top quark. With fermionic localization we can have flavor dependent couplings to these gauge groups, and trigger the formation of condensates using localization background fields or warped geometry. These elements are all part of the structure of Topcolor and the Top Seesaw , and we are thus led naturally to this class of extra-dimensional models in which the electroweak symmetry is broken dynamically. However, one can go beyond these schemes to, e.g., a fourth generation scheme which is somewhat more reminiscent of Technicolor and may have direct advantages for the lepton sector masses. One can always discard the notion of extra dimensions and view this as an extension of the Standard Model within 1 + 3 dimensions with extra discrete symmetries; however, the specific structures we have considered are almost compelled by extra dimensions. The connection to extra dimensions is made through remodeling , bulk inhabitation of gauge fields , the transverse lattice , hidden local symmetries , etc., and may be viewed as a manifestly gauge invariant low energy effective theory for an extension of the Standard Model in 1 + 4. Remodeling is a remarkable model building tool, and a system of new organizational principles.
Remodeling has guided our thinking in producing the present sketch of a full theory of flavor physics based upon the Top Seesaw, something which has not been previously done. Much work remains to sort out and to check that the systematics of experimental constraints can be accommodated , and to see if the model survives as a natural scheme without a great deal of fine-tuning. It is already encouraging that the Top Seesaw model is a strong dynamics that is consistent with the experimental S − T constraints.

Appendix A: Three Brane Example of Gauge Fields sans Translational Invariance

We now wish to address the impact of the breaking of translational invariance in x_5 on the physics of the effective 1 + 3 Lagrangian. It is generally advantageous to thin the degrees of freedom in the lattice description of the extra dimensions. We can construct a coarse grain n-brane model with n ≪ N as a crude approximation to a fine grained N-brane model. Such a description can be improved in principle by a block-spin renormalization group, which is beyond the scope of our present discussion. Consider, for example, a 3-brane model. The effective 1 + 3 Lagrangian now contains 3 copies of the Standard Model gauge group and link fields interpolating each of the SU(3)_C, SU(2)_W, and U(1)_Y groups in the aliphatic configuration. The pure gauge Lagrangian in 1 + 3 dimensions for 3 copies of QCD is given by the three field-strength terms together with the Φ_j kinetic terms, where G^B_jµν has been rescaled so that the gauge coupling g̃_3,j appears in the covariant derivative. The electroweak gauge Lagrangian can be written down analogously. After substituting the VEVs of the link fields, the Φ_j kinetic terms lead to a mass-squared matrix for the gauge fields. This mass-squared matrix can be written as a 3 × 3 matrix M sandwiched between the column vector A = (A^B_0µ, A^B_1µ, A^B_2µ) and its transpose, as A^T M A, where we keep the full set of effects of the j-dependence in v_j and g̃_3,j.
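As a concrete illustration of the 3-brane mass-squared matrix, the sketch below assumes the standard aliphatic (deconstruction) form, in which each link Φ_j with VEV v_j connects branes j − 1 and j; the couplings g̃_j, the VEVs, and the overall normalization convention are assumptions, with placeholder numbers. The lightest eigenstate comes out exactly massless, with wavefunction proportional to 1/g̃_j on each brane.

```python
import numpy as np

# Aliphatic 3-brane mass-squared matrix (assumed standard
# deconstruction form; couplings and VEVs are illustrative).
g = np.array([1.0, 0.8, 1.2])   # per-brane couplings gtilde_j (assumed)
v = np.array([2.0, 3.0])        # link VEVs v_1, v_2 (assumed)

M2 = np.array([
    [g[0]**2 * v[0]**2,  -g[0]*g[1]*v[0]**2,               0.0            ],
    [-g[0]*g[1]*v[0]**2,  g[1]**2*(v[0]**2 + v[1]**2), -g[1]*g[2]*v[1]**2 ],
    [0.0,                -g[1]*g[2]*v[1]**2,            g[2]**2 * v[1]**2 ],
])

eigvals, eigvecs = np.linalg.eigh(M2)   # eigenvalues in ascending order
zero_mode = eigvecs[:, 0]
print(eigvals[0])                # ~0: the unbroken (massless) gauge field
print(zero_mode / zero_mode[0])  # proportional to 1/gtilde_j on each brane
```

The massless mode is the diagonal-subgroup gauge field; its effective coupling obeys 1/g^2 = Σ_j 1/g̃_j^2, which is why the wavefunction scales as 1/g̃_j.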
We can diagonalize the mass matrix in the standard way; the exactly massless eigenstate is the gauge field of the unbroken diagonal subgroup, while the orthogonal combinations acquire KK-like masses.

In 1 + 4 dimensions free fermions are vectorlike. Chiral fermion zero modes can be obtained by using domain wall kinks in a background field which couples to the fermion like a mass term. This can trap a chiral zero mode at the kink . This mechanism can be generalized to the lattice action . We now discuss the chiral fermions in the discretized version of the Jackiw-Rebbi domain wall. We first consider an infinite fifth dimension (i.e., there are an infinite number of SU(3)'s for QCD), and for simplicity we assume that g̃ and v are constant. From eq. (2.14) we see that the kinetic term in the fifth dimension appears as a fermion mass term on the lattice. Note that the derivative hops fields between neighboring branes; we can equally well represent the derivative with the opposite hopping orientation. The mass matrix between Ψ_L and Ψ_R, Ψ̄_L M_f Ψ_R, in the first convention is the bidiagonal matrix

M_f = g̃v ×
[  1    0    0   · · · ]
[ −1    1    0   · · · ]
[  0   −1    1   · · · ]
[ · · ·          · · · ]

A left-handed chiral zero mode can be localized at y = y_k by a kink fermion mass term which has m_Ψ(y < y_k) > 0 and m_Ψ(y > y_k) < 0. In the discrete version, one can add positive and negative masses to the diagonal term, −m Ψ̄_iL Ψ_iR, for i < k and i > k respectively, as in Ref. . For example, with the kink at k = 3 the diagonal entries become g̃v + m for i < 3 and g̃v − m for i > 3. The approximate zero-mode solution is then localized about the kink, where in this case we require m ≪ g̃v, and hence localization of the zero mode requires a fine grain lattice such that |g̃v − m| < g̃v. Alternatively, and more efficiently for a coarse grain lattice, we can give a positive mass to the diagonal mass term, m Ψ̄_iL Ψ_iR with m > 0 for i < k, and a negative mass to the off-diagonal mass term, −m Ψ̄_iL Φ_i Ψ_(i−1)R /v, for i > k. This enhances the diagonal links for i < k and the off-diagonal links for i > k. A left-handed chiral zero mode then arises centered around Ψ_kL, which has the "weakest links". One can easily check that the state is a zero mode, while there is no normalizable right-handed zero mode.
The width of the zero mode becomes narrower for smaller ǫ. In the limit m ≫ g̃v, the zero mode is effectively localized only on the lattice point k. Similarly, a right-handed chiral mode can be localized by considering the opposite mass profile, and the method can easily be adapted to the opposite derivative (hopping) definition. If we compactify the extra dimension with periodic boundary conditions, there will be another zero mode with the opposite chirality localized at the anti-kink of the mass term. In general, the pair of zero modes will receive a small mass due to tunneling across the finite kink-anti-kink separation, unless some fine-tuning is made. With the S^1/Z_2 orbifold compactification, however, one of the zero modes will be projected out. One can see that in the discrete aliphatic model, the boundary conditions remove one chiral fermion at the end of the lattice, so there must be a massless chiral fermion left over due to the mismatch of the numbers of left-handed and right-handed fermions. The massless chiral fermion can be localized anywhere on the lattice using the discrete domain wall mass term.
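The coarse-grained "weakest link" localization above can be checked in a few lines. This is a toy sketch, not the paper's computation: the lattice size N, kink site k, and masses are arbitrary, and we assume the boundary condition removes one right-handed fermion, so that M_f is an N × (N − 1) matrix with an exact left-handed null vector.

```python
import numpy as np

# Toy discrete domain wall: diagonal masses enhanced for i < k,
# off-diagonal (hopping) masses enhanced for i > k, following the
# "weakest link" profile described in the text.  All parameters
# are illustrative assumptions.
N, k = 11, 5
gv, m = 1.0, 0.7   # gtilde*v and the kink mass m

M = np.zeros((N, N - 1))   # N left-handed, N-1 right-handed fields
for j in range(N - 1):
    M[j, j] = gv + m if j < k else gv                # Psi_jL -- Psi_jR
    M[j + 1, j] = -(gv + m) if j + 1 > k else -gv    # Psi_(j+1)L -- Psi_jR

# The rectangular matrix has an exact left null vector: the chiral zero mode.
U, s, Vt = np.linalg.svd(M)
zero_mode = U[:, -1]
print(np.argmax(np.abs(zero_mode)))   # peaks at the kink site k
print(np.abs(zero_mode @ M).max())    # null to machine precision
```

All singular values are nonzero, so there is no normalizable right-handed zero mode, in agreement with the counting argument above.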
<reponame>homer54/OpenSlides import { Injectable } from '@angular/core'; import { TranslateService } from '@ngx-translate/core'; import { CollectionStringMapperService } from 'app/core/core-services/collection-string-mapper.service'; import { DataSendService } from 'app/core/core-services/data-send.service'; import { DataStoreService } from 'app/core/core-services/data-store.service'; import { RelationManagerService } from 'app/core/core-services/relation-manager.service'; import { ViewModelStoreService } from 'app/core/core-services/view-model-store.service'; import { RelationDefinition } from 'app/core/definitions/relations'; import { Topic } from 'app/shared/models/topics/topic'; import { ViewMediafile } from 'app/site/mediafiles/models/view-mediafile'; import { TopicTitleInformation, ViewTopic } from 'app/site/topics/models/view-topic'; import { BaseIsAgendaItemAndListOfSpeakersContentObjectRepository } from '../base-is-agenda-item-and-list-of-speakers-content-object-repository'; const TopicRelations: RelationDefinition[] = [ { type: 'M2M', ownIdKey: 'attachments_id', ownKey: 'attachments', foreignViewModel: ViewMediafile } ]; /** * Repository for topics */ @Injectable({ providedIn: 'root' }) export class TopicRepositoryService extends BaseIsAgendaItemAndListOfSpeakersContentObjectRepository< ViewTopic, Topic, TopicTitleInformation > { /** * Constructor calls the parent constructor * * @param DS Access the DataStore * @param mapperService OpenSlides mapping service for collections * @param dataSend Access the DataSendService */ public constructor( DS: DataStoreService, dataSend: DataSendService, mapperService: CollectionStringMapperService, viewModelStoreService: ViewModelStoreService, translate: TranslateService, relationManager: RelationManagerService ) { super(DS, dataSend, mapperService, viewModelStoreService, translate, relationManager, Topic, TopicRelations); } public getTitle = (titleInformation: TopicTitleInformation) => { if 
(titleInformation.agenda_item_number && titleInformation.agenda_item_number()) { return `${titleInformation.agenda_item_number()} · ${titleInformation.title}`; } else { return titleInformation.title; } }; public getAgendaListTitle = (titleInformation: TopicTitleInformation) => { // Do not append ' (Topic)' to the title. return this.getTitle(titleInformation); }; public getAgendaSlideTitle = (titleInformation: TopicTitleInformation) => { // Do not append ' (Topic)' to the title. return this.getTitle(titleInformation); }; /** * @override The base function. * * @returns The plain title. */ public getAgendaListTitleWithoutItemNumber = (titleInformation: TopicTitleInformation) => { return titleInformation.title; }; public getVerboseName = (plural: boolean = false) => { return this.translate.instant(plural ? 'Topics' : 'Topic'); }; }
/* * Copyright (c) 2016, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License version 2 only, as * published by the Free Software Foundation. * * This code is distributed in the hope that it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * version 2 for more details (a copy is included in the LICENSE file that * accompanied this code). * * You should have received a copy of the GNU General Public License version * 2 along with this work; if not, write to the Free Software Foundation, * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. * * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA * or visit www.oracle.com if you need additional information or have any * questions. * */ #ifndef SHARE_OPTO_ARRAYCOPYNODE_HPP #define SHARE_OPTO_ARRAYCOPYNODE_HPP #include "gc/shared/c2/barrierSetC2.hpp" #include "opto/callnode.hpp" class GraphKit; class ArrayCopyNode : public CallNode { private: // What kind of arraycopy variant is this? enum { None, // not set yet ArrayCopy, // System.arraycopy() CloneInst, // A clone of instances CloneArray, // A clone of arrays that don't require a barrier // - depends on GC - some need to treat oop arrays separately CloneOopArray, // An oop array clone that requires GC barriers CopyOf, // Arrays.copyOf() CopyOfRange // Arrays.copyOfRange() } _kind; #ifndef PRODUCT static const char* _kind_names[CopyOfRange+1]; #endif // Is the alloc obtained with // AllocateArrayNode::Ideal_array_allocation() tightly coupled // (arraycopy follows immediately the allocation)? 
// We cache the result of LibraryCallKit::tightly_coupled_allocation // here because it's much easier to find whether there's a tightly // couple allocation at parse time than at macro expansion time. At // macro expansion time, for every use of the allocation node we // would need to figure out whether it happens after the arraycopy (and // can be ignored) or between the allocation and the arraycopy. At // parse time, it's straightforward because whatever happens after // the arraycopy is not parsed yet so doesn't exist when // LibraryCallKit::tightly_coupled_allocation() is called. bool _alloc_tightly_coupled; bool _has_negative_length_guard; bool _arguments_validated; static const TypeFunc* arraycopy_type() { const Type** fields = TypeTuple::fields(ParmLimit - TypeFunc::Parms); fields[Src] = TypeInstPtr::BOTTOM; fields[SrcPos] = TypeInt::INT; fields[Dest] = TypeInstPtr::BOTTOM; fields[DestPos] = TypeInt::INT; fields[Length] = TypeInt::INT; fields[SrcLen] = TypeInt::INT; fields[DestLen] = TypeInt::INT; fields[SrcKlass] = TypeKlassPtr::BOTTOM; fields[DestKlass] = TypeKlassPtr::BOTTOM; const TypeTuple *domain = TypeTuple::make(ParmLimit, fields); // create result type (range) fields = TypeTuple::fields(0); const TypeTuple *range = TypeTuple::make(TypeFunc::Parms+0, fields); return TypeFunc::make(domain, range); } ArrayCopyNode(Compile* C, bool alloc_tightly_coupled, bool has_negative_length_guard); intptr_t get_length_if_constant(PhaseGVN *phase) const; int get_count(PhaseGVN *phase) const; static const TypePtr* get_address_type(PhaseGVN* phase, const TypePtr* atp, Node* n); Node* try_clone_instance(PhaseGVN *phase, bool can_reshape, int count); bool prepare_array_copy(PhaseGVN *phase, bool can_reshape, Node*& adr_src, Node*& base_src, Node*& adr_dest, Node*& base_dest, BasicType& copy_type, const Type*& value_type, bool& disjoint_bases); void array_copy_test_overlap(PhaseGVN *phase, bool can_reshape, bool disjoint_bases, int count, Node*& forward_ctl, Node*& 
backward_ctl); Node* array_copy_forward(PhaseGVN *phase, bool can_reshape, Node*& ctl, Node* mem, const TypePtr* atp_src, const TypePtr* atp_dest, Node* adr_src, Node* base_src, Node* adr_dest, Node* base_dest, BasicType copy_type, const Type* value_type, int count); Node* array_copy_backward(PhaseGVN *phase, bool can_reshape, Node*& ctl, Node* mem, const TypePtr* atp_src, const TypePtr* atp_dest, Node* adr_src, Node* base_src, Node* adr_dest, Node* base_dest, BasicType copy_type, const Type* value_type, int count); bool finish_transform(PhaseGVN *phase, bool can_reshape, Node* ctl, Node *mem); static bool may_modify_helper(const TypeOopPtr *t_oop, Node* n, PhaseTransform *phase, CallNode*& call); public: static Node* load(BarrierSetC2* bs, PhaseGVN *phase, Node*& ctl, MergeMemNode* mem, Node* addr, const TypePtr* adr_type, const Type *type, BasicType bt); private: void store(BarrierSetC2* bs, PhaseGVN *phase, Node*& ctl, MergeMemNode* mem, Node* addr, const TypePtr* adr_type, Node* val, const Type *type, BasicType bt); public: enum { Src = TypeFunc::Parms, SrcPos, Dest, DestPos, Length, SrcLen, DestLen, SrcKlass, DestKlass, ParmLimit }; // Results from escape analysis for non escaping inputs const TypeOopPtr* _src_type; const TypeOopPtr* _dest_type; static ArrayCopyNode* make(GraphKit* kit, bool may_throw, Node* src, Node* src_offset, Node* dest, Node* dest_offset, Node* length, bool alloc_tightly_coupled, bool has_negative_length_guard, Node* src_klass = NULL, Node* dest_klass = NULL, Node* src_length = NULL, Node* dest_length = NULL); void connect_outputs(GraphKit* kit, bool deoptimize_on_exception = false); bool is_arraycopy() const { assert(_kind != None, "should bet set"); return _kind == ArrayCopy; } bool is_arraycopy_validated() const { assert(_kind != None, "should bet set"); return _kind == ArrayCopy && _arguments_validated; } bool is_clone_inst() const { assert(_kind != None, "should bet set"); return _kind == CloneInst; } // is_clone_array - true for 
all arrays when using GCs that has no barriers bool is_clone_array() const { assert(_kind != None, "should bet set"); return _kind == CloneArray; } // is_clone_oop_array is used when oop arrays need GC barriers bool is_clone_oop_array() const { assert(_kind != None, "should bet set"); return _kind == CloneOopArray; } // is_clonebasic - is true for any type of clone that doesn't need a writebarrier. bool is_clonebasic() const { assert(_kind != None, "should bet set"); return _kind == CloneInst || _kind == CloneArray; } bool is_copyof() const { assert(_kind != None, "should bet set"); return _kind == CopyOf; } bool is_copyof_validated() const { assert(_kind != None, "should bet set"); return _kind == CopyOf && _arguments_validated; } bool is_copyofrange() const { assert(_kind != None, "should bet set"); return _kind == CopyOfRange; } bool is_copyofrange_validated() const { assert(_kind != None, "should bet set"); return _kind == CopyOfRange && _arguments_validated; } void set_arraycopy(bool validated) { assert(_kind == None, "shouldn't bet set yet"); _kind = ArrayCopy; _arguments_validated = validated; } void set_clone_inst() { assert(_kind == None, "shouldn't bet set yet"); _kind = CloneInst; } void set_clone_array() { assert(_kind == None, "shouldn't bet set yet"); _kind = CloneArray; } void set_clone_oop_array() { assert(_kind == None, "shouldn't bet set yet"); _kind = CloneOopArray; } void set_copyof(bool validated) { assert(_kind == None, "shouldn't bet set yet"); _kind = CopyOf; _arguments_validated = validated; } void set_copyofrange(bool validated) { assert(_kind == None, "shouldn't bet set yet"); _kind = CopyOfRange; _arguments_validated = validated; } virtual int Opcode() const; virtual uint size_of() const; // Size is bigger virtual bool guaranteed_safepoint() { return false; } virtual Node *Ideal(PhaseGVN *phase, bool can_reshape); virtual bool may_modify(const TypeOopPtr *t_oop, PhaseTransform *phase); bool is_alloc_tightly_coupled() const { return 
_alloc_tightly_coupled; } bool has_negative_length_guard() const { return _has_negative_length_guard; } static bool may_modify(const TypeOopPtr *t_oop, MemBarNode* mb, PhaseTransform *phase, ArrayCopyNode*& ac); static int get_partial_inline_vector_lane_count(BasicType type, int const_len); bool modifies(intptr_t offset_lo, intptr_t offset_hi, PhaseTransform* phase, bool must_modify) const; #ifndef PRODUCT virtual void dump_spec(outputStream *st) const; virtual void dump_compact_spec(outputStream* st) const; #endif }; #endif // SHARE_OPTO_ARRAYCOPYNODE_HPP
<gh_stars>1-10 #include <iostream> #include <cstdio> #include <cstring> #define N 500010 using namespace std; int rest[N*3],lson[N*3],rson[N*3],lend[N*3],rend[N*3],mid[N*3],root,ntop=1; void build(int &x,int l,int r){ x=ntop++; lend[x]=l,rend[x]=r,mid[x]=l+r>>1; rest[x]=r-l+1; if(l!=r){ build(lson[x],l,mid[x]); build(rson[x],mid[x]+1,r); } } void cover(int &x,int l,int r){ if(lend[x]==l&&rend[x]==r){ rest[x]=0; return; } if(r<=mid[x]){ cover(lson[x],l,r); }else if(l>mid[x]){ cover(rson[x],l,r); }else{ cover(lson[x],l,mid[x]); cover(rson[x],mid[x]+1,r); } rest[x]=rest[lson[x]]+rest[rson[x]]; } int main(){ int n,m,l,r; scanf("%d%d",&n,&m); build(root,1,n); while(m--){ scanf("%d%d",&l,&r); cover(root,l,r); printf("%d\n",rest[root]); } }
// Maps the required operations to the functions defined below. // The map of strings to Schema pointers defines the properties of a resource. func playerVirtualMachine() *schema.Resource { return &schema.Resource{ Create: playerVirtualMachineCreate, Read: playerVirtualMachineRead, Update: playerVirtualMachineUpdate, Delete: playerVirtualMachineDelete, Schema: map[string]*schema.Schema{ "vm_id": { Type: schema.TypeString, Optional: true, ForceNew: true, DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { return new == "" }, }, "url": { Type: schema.TypeString, Optional: true, DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { return strings.HasPrefix(old, new) || (strings.HasSuffix(old, "/vm/"+d.Id()+"/console") && new == "") }, }, "name": { Type: schema.TypeString, Required: true, }, "team_ids": { Type: schema.TypeList, Required: true, Elem: &schema.Schema{ Type: schema.TypeString, }, }, "user_id": { Type: schema.TypeString, Optional: true, }, "console_connection_info": { Type: schema.TypeList, Optional: true, Default: nil, Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "hostname": { Type: schema.TypeString, Optional: true, }, "port": { Type: schema.TypeString, Optional: true, }, "protocol": { Type: schema.TypeString, Optional: true, }, "username": { Type: schema.TypeString, Optional: true, }, "password": { Type: schema.TypeString, Optional: true, }, }, }, }, }, } }
// NkSubimageHandle function as declared in nk/nuklear.h:2271 func NkSubimageHandle(arg0 Handle, w uint16, h uint16, subRegion Rect) Image { carg0, _ := *(*C.nk_handle)(unsafe.Pointer(&arg0)), cgoAllocsUnknown cw, _ := (C.ushort)(w), cgoAllocsUnknown ch, _ := (C.ushort)(h), cgoAllocsUnknown csubRegion, _ := *(*C.struct_nk_rect)(unsafe.Pointer(&subRegion)), cgoAllocsUnknown __ret := C.nk_subimage_handle(carg0, cw, ch, csubRegion) __v := *(*Image)(unsafe.Pointer(&__ret)) return __v }
<gh_stars>0 import pygame import copy import random from settings import FOOD_COL, CELL_SIZE class Food: def __init__(self, app): self.app = app self.app_window = self.app.app_window self.origin = copy.deepcopy(self.app_window.pos) self.pos = [ random.randint(0, self.app.app_window.cols - 1), random.randint(0, self.app.app_window.rows - 1) ] def update(self): pass def draw(self): pygame.draw.rect(self.app.window, FOOD_COL[0], (self.origin[0] + (self.pos[0] * CELL_SIZE), self.origin[1] + (self.pos[1] * CELL_SIZE), CELL_SIZE, CELL_SIZE))
/** * Created by StReaM on 9/3/2017. */ public class SelfDelegate extends BottomPageDelegate { public static final String ORDER_TYPE = "order_type"; private Bundle mBundle = new Bundle(); @BindView(R2.id.rv_profile_setting) RecyclerView mRecyclerView = null; private void startOrderListByType() { final OrderListDelegate delegate = new OrderListDelegate(); delegate.setArguments(mBundle); getParentDelegate().start(delegate); } @OnClick(R2.id.tv_all_order) void onAllOrderClick() { mBundle.putString(ORDER_TYPE, "all"); startOrderListByType(); } @OnClick(R2.id.img_user_avatar) void onAvatarClick() { getParentDelegate().start(new ProfileDelegate()); } @Override public Object setLayout() { return R.layout.delegate_self; } @Override public void onBindView(@Nullable Bundle savedInstanceState, View rootView) { ListBean addr = new ListBean.Builder() .setItemType(ListItemType.ITEM_NORMAL) .setId(1) .setText("收货地址") .setDelegate(new AddressDelegate()) .setValue("") .build(); ListBean sys = new ListBean.Builder() .setItemType(ListItemType.ITEM_NORMAL) .setId(2) .setText("系统设置") .setDelegate(new SettingDelegate()) .setValue("") .build(); final List<ListBean> data = new ArrayList<>(); data.add(addr); data.add(sys); final ListAdapter adapter = new ListAdapter(data); LinearLayoutManager manager = new LinearLayoutManager(getContext()); mRecyclerView.setLayoutManager(manager); mRecyclerView.setAdapter(adapter); mRecyclerView.addOnItemTouchListener(new SelfClickListener(this)); } }
/** * Configures a {@link UserSessionRegistry} that maps a {@link Principal}s name to a * WebSocket session id. Additionally a {@link UserEventMessenger} is configured as a bean * that allows sending {@link EventMessage}s to user names in addition to WebSocket * session ids. * * <p> * Example: * </p> * * <pre class="code"> * &#064;Configuration * &#064;EnableWamp * public class UserWampConfigurer extends AbstractUserWampConfigurer { * * &#064;Override * public void registerWampEndpoints(WampEndpointRegistry registry) { * registry.addEndpoint(&quot;/wamp&quot;).withSockJS(); * } * } * </pre> * * @author Rob Winch * @author Ralph Schaer */ public abstract class AbstractUserWampConfigurer extends AbstractWampConfigurer { @Bean public UserEventMessenger userEventMessenger(EventMessenger eventMessenger) { return new UserEventMessenger(eventMessenger, userSessionRegistry()); } @Bean public UserSessionRegistry userSessionRegistry() { return new DefaultUserSessionRegistry(); } @Override public void configureWebSocketTransport(WebSocketTransportRegistration registration) { registration.addDecoratorFactory(new UserSessionWebSocketHandlerDecoratorFactory( userSessionRegistry())); } }
package handler import ( "errors" "fmt" "net/http" "testing" "github.com/golang/mock/gomock" "github.com/gorilla/mux" "github.com/rknruben56/feedback-api/entity" "github.com/rknruben56/feedback-api/usecase/template/mock" "github.com/steinfletcher/apitest" ) func Test_ListTemplates(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) temp := &entity.Template{ ID: entity.NewID(), } service.EXPECT(). ListTemplates(). Return([]*entity.Template{temp}, nil). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Get("/v1/templates"). Expect(t). Status(http.StatusOK). End() } func Test_ListTemplates_NotFound(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) service.EXPECT(). ListTemplates(). Return(nil, entity.ErrNotFound). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Get("/v1/templates"). Expect(t). Status(http.StatusNotFound). End() } func Test_GetTemplate(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() temp := &entity.Template{ ID: entity.NewID(), } service := mock.NewMockUseCase(controller) service.EXPECT(). GetTemplate(temp.ID). Return(temp, nil). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Getf("/v1/templates/%s", temp.ID.String()). Expect(t). Status(http.StatusOK). End() } func Test_GetTemplate_ServerError(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() temp := &entity.Template{ ID: entity.NewID(), } service := mock.NewMockUseCase(controller) service.EXPECT(). GetTemplate(temp.ID). Return(temp, errors.New("server error")). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Getf("/v1/templates/%s", temp.ID.String()). Expect(t). Status(http.StatusInternalServerError). 
End() } func Test_GetTemplate_NotFound(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() temp := &entity.Template{ ID: entity.NewID(), } service := mock.NewMockUseCase(controller) service.EXPECT(). GetTemplate(temp.ID). Return(nil, entity.ErrNotFound). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Getf("/v1/templates/%s", temp.ID.String()). Expect(t). Status(http.StatusNotFound). End() } func Test_ListTemplates_ServerError(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) service.EXPECT(). ListTemplates(). Return(nil, errors.New("bad error")). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Get("/v1/templates"). Expect(t). Status(http.StatusInternalServerError). End() } func Test_CreateTemplate(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) service.EXPECT(). CreateTemplate(gomock.Any(), gomock.Any()). Return(entity.NewID(), nil). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Post("/v1/templates"). Body(`{"class": "Class123", "content": "[Student] is doing well"}`). Expect(t). Status(http.StatusCreated). End() } func Test_CreateTemplate_ServerError(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) service.EXPECT(). CreateTemplate(gomock.Any(), gomock.Any()). Return(entity.NewID(), errors.New("error creating template")). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Post("/v1/templates"). Body(`{"class": "Class123", "content": "[Student] is doing well"}`). Expect(t). Status(http.StatusInternalServerError). 
End() } func Test_UpdateTemplate(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) temp := &entity.Template{ ID: entity.NewID(), } service.EXPECT(). GetTemplate(temp.ID). Return(temp, nil). AnyTimes() service.EXPECT(). UpdateTemplate(temp). Return(nil). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) body := fmt.Sprintf(`{"id": "%s", "class": "Class123", "content": "[Student] is doing well"}`, temp.ID.String()) apitest.New(). Handler(r). Put("/v1/templates"). Body(body). Expect(t). Status(http.StatusOK). End() } func Test_UpdateTemplate_GetError(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) temp := &entity.Template{ ID: entity.NewID(), } service.EXPECT(). GetTemplate(temp.ID). Return(nil, errors.New("service error")). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) body := fmt.Sprintf(`{"id": "%s", "class": "Class123", "content": "[Student] is doing well"}`, temp.ID.String()) apitest.New(). Handler(r). Put("/v1/templates"). Body(body). Expect(t). Status(http.StatusInternalServerError). End() } func Test_UpdateTemplate_NotFound(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) temp := &entity.Template{ ID: entity.NewID(), } service.EXPECT(). GetTemplate(temp.ID). Return(nil, entity.ErrNotFound). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) body := fmt.Sprintf(`{"id": "%s", "class": "Class123", "content": "[Student] is doing well"}`, temp.ID.String()) apitest.New(). Handler(r). Put("/v1/templates"). Body(body). Expect(t). Status(http.StatusNotFound). End() } func Test_UpdateTemplate_UpdateError(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) temp := &entity.Template{ ID: entity.NewID(), } service.EXPECT(). 
GetTemplate(temp.ID). Return(temp, nil). AnyTimes() service.EXPECT(). UpdateTemplate(temp). Return(errors.New("update failed")). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) body := fmt.Sprintf(`{"id": "%s", "class": "Class123", "content": "[Student] is doing well"}`, temp.ID.String()) apitest.New(). Handler(r). Put("/v1/templates"). Body(body). Expect(t). Status(http.StatusInternalServerError). End() } func Test_DeleteTemplate(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) temp := &entity.Template{ ID: entity.NewID(), } service.EXPECT().DeleteTemplate(temp.ID).Return(nil).AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Deletef("/v1/templates/%s", temp.ID.String()). Expect(t). Status(http.StatusOK). End() } func Test_DeleteTemplate_ServerError(t *testing.T) { controller := gomock.NewController(t) defer controller.Finish() service := mock.NewMockUseCase(controller) temp := &entity.Template{ ID: entity.NewID(), } service.EXPECT(). DeleteTemplate(temp.ID). Return(errors.New("server error")). AnyTimes() r := mux.NewRouter() MakeTemplateHandlers(r, service) apitest.New(). Handler(r). Deletef("/v1/templates/%s", temp.ID.String()). Expect(t). Status(http.StatusInternalServerError). End() }
/** * Checks if the passed in values could be of a transformed method * @param returnValue The current return value * @param parameters The current parameters * @return -1 if they are incompatible, 0 if they are compatible, 1 if they should be transformed */ public int checkValidity(TransformTrackingValue returnValue, TransformTrackingValue... parameters){ if(returnValue != null) { if (!isApplicable(returnValue.getTransform(), target.getReturnType())) { return -1; } } for(int i = 0; i < parameters.length; i++){ if(!isApplicable(parameters[i].getTransform(), target.getParameterTypes()[i])){ return -1; } } if(minimums != null){ for(Minimum minimum : minimums){ if(minimum.isMet(returnValue, parameters)){ return 1; } } return 0; } return 1; }
Distributed collaborative space-time block codes for two-way relaying networks

Utilizing the recently developed unique factorization of signals and the distributed Alamouti coding scheme, a distributed collaborative Alamouti space-time block code design is presented for a two-way amplify-and-forward (AF) relaying network in which the two sources and the relay are each equipped with a single antenna. For such a system, two asymptotic pairwise error probability (PEP) formulae are first derived, for both fixed-gain AF and variable-gain AF, over Rayleigh channels with the maximum-likelihood detector. Then, subject to the constraints of a fixed transmission bit rate and unity average transmission energy, the optimal constellation combinations with the optimal energy scales as coefficients are obtained by minimizing the dominant term of the PEP. It is shown that the PEP and the average block error rate of the newly designed code are superior to those of the conventional distributed Alamouti code.
// Set the number of transfers (number of triggers until complete) void transferCount(unsigned int len) { uint32_t s, d, n = 0; uint32_t dcr = CFG->DCR; s = (dcr >> 20) & 3; d = (dcr >> 17) & 3; if (s == 0 || d == 0) n = 2; else if (s == 2 || d == 2) n = 1; CFG->DSR_BCR = len << n; }
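The shift `n` encodes the transfer width taken from the DCR size fields (bits 20–21 for source, 17–18 for destination): a value of 0 appears to mean 32-bit transfers (4 bytes, shift 2), 2 means 16-bit (shift 1), and anything else 8-bit (shift 0), so `DSR_BCR` receives a byte count rather than a transfer count. A Python sketch of that computation (the register layout is inferred from the snippet, not from a datasheet):

```python
def byte_count(len_transfers, dcr):
    """Compute the DSR_BCR byte count from a transfer count, mirroring the C above."""
    ssize = (dcr >> 20) & 3   # source transfer size field
    dsize = (dcr >> 17) & 3   # destination transfer size field
    if ssize == 0 or dsize == 0:
        shift = 2             # 32-bit transfers: 4 bytes each
    elif ssize == 2 or dsize == 2:
        shift = 1             # 16-bit transfers: 2 bytes each
    else:
        shift = 0             # 8-bit transfers: 1 byte each
    return len_transfers << shift
```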
<reponame>LivelyVideo/webrtc-stats export interface WebRTCStatsConstructorOptions { getStatsInterval: number rawStats: boolean statsObject: boolean filteredStats: boolean wrapGetUserMedia: boolean debug: boolean remote: boolean logLevel: LogLevel } /** * none: Show nothing at all. * error: Log all errors. * warn: Only log all warnings and errors. * info: Informative messages including warnings and errors. * debug: Show everything including debugging information */ export type LogLevel = 'none' | 'error' | 'warn' | 'info' | 'debug' export type TimelineTag = 'getUserMedia' | 'peer' | 'connection' | 'track' | 'datachannel' | 'stats' export interface TimelineEvent { event: string tag: TimelineTag timestamp?: Date data?: any peerId?: string error?: any rawStats?: RTCStatsReport statsObject?: any filteredStats?: any } export interface AddPeerOptions { pc: RTCPeerConnection peerId: string remote?: boolean } export interface GetUserMediaResponse { constraints?: MediaStreamConstraints stream?: MediaStream error?: DOMError } export interface MonitoredPeer { pc: RTCPeerConnection stream: MediaStream | null stats: any options: MonitorPeerOptions } export interface MonitoredPeersObject { [index: string]: MonitoredPeer } export interface TrackReport extends RTCStats { bitrate?: number packetRate?: number } interface StatsObjectDetails { inbound: TrackReport[] outbound: TrackReport[] } export interface StatsObject { audio: StatsObjectDetails video: StatsObjectDetails remote?: { audio: StatsObjectDetails video: StatsObjectDetails } connection: any } export interface CodecInfo { clockRate: number mimeType: number payloadType: number } export interface MonitorPeerOptions { remote: boolean } export interface ParseStatsOptions { remote?: boolean }
//Function to calculate the line numbers for each word in the list. public void defineWordIndexes(ArrayList<Word> wordList, ArrayList<Line> lineList) { for(Word w : wordList) { ArrayList<Integer> indexList = new ArrayList<Integer>(); for(int i=0; i<lineList.size(); i++) { if(lineList.get(i).getSentence().contains(w.getWord())) { indexList.add(i+1); } } w.setWordLineIndex(indexList); } }
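The same line-index computation can be sketched in Python. Like the Java `contains` check, plain substring matching will also hit words embedded in longer words, which may or may not be intended:

```python
def define_word_indexes(words, lines):
    """Map each word to the 1-based numbers of the lines containing it,
    mirroring the Java loop above (simple substring containment)."""
    return {w: [i + 1 for i, line in enumerate(lines) if w in line]
            for w in words}
```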
// send a push notification to a certain user public void sendPushNotification(String token, String uid, String title, String text) { if (server_key == null) return; Log.d(Globals.TAG, "Sending push notification to " + token); HandlerThread thread = new HandlerThread("pushNotification"); thread.start(); final Handler handler = new Handler(thread.getLooper()); handler.postDelayed(() -> { try { String request = Globals.FCM_API; URL url = new URL(request); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setDoOutput(true); conn.setInstanceFollowRedirects(false); conn.setRequestMethod("POST"); conn.setRequestProperty("Content-Type", "application/json"); conn.setRequestProperty("Authorization", "key=" + server_key); conn.setUseCaches(false); conn.connect(); JSONObject body = new JSONObject(); body.put("to", token); JSONObject notification = new JSONObject(); notification.put("title", title + " sent you a message:"); notification.put("body", text); JSONObject data = new JSONObject(); data.put("senderId", uid); body.put("notification", notification); body.put("data", data); DataOutputStream wr = new DataOutputStream(conn.getOutputStream()); wr.write(body.toString().getBytes()); BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8")); String line = null; StringBuilder sb = new StringBuilder(); while ((line = bufferedReader.readLine()) != null) { sb.append(line); } bufferedReader.close(); Log.d(Globals.TAG, "Push notification sent."); } catch (Exception e) { e.printStackTrace(); } thread.quit(); }, 1000); }
<filename>src/final/final/wood.h #ifndef WOOD_H #define WOOD_H #include "object.h" class Wood : public Object { public: Wood(glm::vec3 pos, glm::vec3 size, glm::vec3 color) : Object(pos, size, color) { this->InitRenderData(); }; ~Wood() {}; unsigned int VAO; void Draw(Shader *shader) { shader->use(); shader->setInt("texture1", 0); glm::mat4 model(1.0f); model = glm::translate(model, this->Position); model = glm::scale(model, this->Size); shader->setMat4("model", model); auto texture = ResourceManager::GetTexture("wood"); glActiveTexture(GL_TEXTURE0); texture.Bind(); glBindVertexArray(this->VAO); glDrawArrays(GL_TRIANGLES, 0, 36); glBindVertexArray(0); } void InitRenderData() { float vertices[] = { -1.0f, -1.0f, -1.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, // bottom-left 1.0f, 1.0f, -1.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f, // top-right 1.0f, -1.0f, -1.0f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f, // bottom-right 1.0f, 1.0f, -1.0f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f, // top-right -1.0f, -1.0f, -1.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, // bottom-left -1.0f, 1.0f, -1.0f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f, // top-left // front face -1.0f, -1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom-left 1.0f, -1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, // bottom-right 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // top-right 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, // top-right -1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, // top-left -1.0f, -1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, // bottom-left // left face -1.0f, 1.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-right -1.0f, 1.0f, -1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top-left -1.0f, -1.0f, -1.0f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-left -1.0f, -1.0f, -1.0f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-left -1.0f, -1.0f, 1.0f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // bottom-right -1.0f, 1.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-right // right face 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-left 1.0f, -1.0f, -1.0f, 1.0f, 0.0f, 
0.0f, 0.0f, 1.0f, // bottom-right
            1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, // top-right
            1.0f, -1.0f, -1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, // bottom-right
            1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, // top-left
            1.0f, -1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, // bottom-left
            // bottom face
            -1.0f, -1.0f, -1.0f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f, // top-right
            1.0f, -1.0f, -1.0f, 0.0f, -1.0f, 0.0f, 1.0f, 1.0f, // top-left
            1.0f, -1.0f, 1.0f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f, // bottom-left
            1.0f, -1.0f, 1.0f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f, // bottom-left
            -1.0f, -1.0f, 1.0f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, // bottom-right
            -1.0f, -1.0f, -1.0f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f, // top-right
            // top face
            -1.0f, 1.0f, -1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, // top-left
            1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom-right
            1.0f, 1.0f, -1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f, // top-right
            1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, // bottom-right
            -1.0f, 1.0f, -1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f, // top-left
            -1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f // bottom-left
        };
        unsigned int VBO;
        glGenVertexArrays(1, &this->VAO);
        glGenBuffers(1, &VBO);
        glBindVertexArray(this->VAO);
        glBindBuffer(GL_ARRAY_BUFFER, VBO);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
        // Enable attribute 2 (texture coordinates); the original enabled attribute 1 twice
        // and left attribute 2 disabled, so texcoords never reached the shader.
        glEnableVertexAttribArray(2);
    }
};
#endif
def MakeInt(array): for i in range(len(array)): array[i] = int(array[i]) return array def Qprocess(array1, variablenamehere): if array1[0] == '+': return int(array1[1]) else: if variablenamehere >= int(array1[1]): return -1*(int(array1[1])) else: return 0 k = MakeInt(input().split()) Current = k[1] Cry = 0 while(k[0] > 0): Temp = Qprocess(input().split(), Current) if(Temp != 0): Current += Temp else: Cry += 1 k[0] -= 1 print(f"{Current} {Cry}")
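Factored out of the stdin loop, the query rule reads: a `+ x` query always adds, a `- x` query subtracts only when the current value can afford it, and each unaffordable `- x` is counted separately. A sketch with illustrative names:

```python
def apply_queries(start, queries):
    """Replay the '+ x' / '- x' queries: add on '+', subtract on '-' only if
    affordable, otherwise count a 'cry'. Returns (final_value, cries)."""
    current, cries = start, 0
    for op, x in queries:
        if op == '+':
            current += x
        elif current >= x:
            current -= x
        else:
            cries += 1
    return current, cries
```

For example, starting from 10, the queries `+5`, `-12`, `-7` yield 15, then 3, then one cry (3 < 7), ending at `(3, 1)`.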
/** * Method to split input to SNR groups * * @param input the input * @return array of SNR groups */ public static SNRGroup[] splitInputToGroups(String input) { LangHelper.notNull(input); input = input.trim(); SNRHelper.validateInput(input); final List<SNRGroup> groups = new LinkedList<SNRGroup>(); final Matcher m = INPUT_SPLIT_PATTERN.matcher(input); while (m.find()) { groups.add(new SNRGroup(m.group())); } return groups.toArray(new SNRGroup[groups.size()]); }
#include<cstdio> #include<cstring> #include<iostream> #include<string> #include<cmath> using namespace std; int read() { int x=0,w=1; char c=getchar(); while(c<'0'||c>'9'){if(c=='-')w=-1;c=getchar();} while(c>='0'&&c<='9'){x=x*10+c-'0';c=getchar();} return w*x; } int h,t; int a[10010]; int n,i; int s,d; int main() { // freopen("in.txt","r",stdin); n=read(); for(i=1;i<=n;i++) { a[i]=read(); } t=n; h=1; for(i=1;i<=n;i++) { if(i%2!=0) { if(a[h]>a[t]) { s+=a[h]; h++; } else { s+=a[t]; t--; } } else { if(a[h]>a[t]) { d+=a[h]; h++; } else { d+=a[t]; t--; } } } cout<<s<<" "<<d; return 0; }
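The two-pointer loop above implements a greedy card game: two players alternate turns and each takes the larger of the two remaining end cards, with odd iterations accumulating into `s` and even ones into `d`. The same rule as a Python sketch:

```python
def play_ends(cards):
    """Both players greedily take the larger end card each turn,
    mirroring the two-pointer C++ loop above.
    Returns (first_player_score, second_player_score)."""
    h, t = 0, len(cards) - 1
    scores = [0, 0]
    for turn in range(len(cards)):
        if cards[h] > cards[t]:
            scores[turn % 2] += cards[h]
            h += 1
        else:
            scores[turn % 2] += cards[t]  # ties also go to the tail, as in the C++
            t -= 1
    return tuple(scores)

# With [4, 1, 2, 10] the first player ends with 12 and the second with 5.
assert play_ends([4, 1, 2, 10]) == (12, 5)
```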
// Purge the AutoreleasePool on the stack top synchronously func AutoreleasePoolPurge() { AutoreleasePoolLock() autoreleasePoolTop().Purge() AutoreleasePoolUnLock() }
// DiscoverClusters returns a list of DiscoveredClusters found in the
// accounts_mgmt api with the given filters
func DiscoverClusters(token string, baseURL string, baseAuthURL string, filters discovery.Filter) ([]discovery.DiscoveredCluster, error) {
	authRequest := auth.AuthRequest{
		Token:   token,
		BaseURL: baseAuthURL,
	}
	accessToken, err := auth.AuthClient.GetToken(authRequest)
	if err != nil {
		return nil, err
	}

	subscriptionRequestConfig := subscription.SubscriptionRequest{
		Token:   accessToken,
		BaseURL: baseURL,
		Filter:  filters,
	}
	subscriptionClient := subscription.SubscriptionClientGenerator.NewClient(subscriptionRequestConfig)
	subscriptions, err := subscriptionClient.GetSubscriptions()
	if err != nil {
		return nil, err
	}

	var discoveredClusters []discovery.DiscoveredCluster
	for _, sub := range subscriptions {
		if dc, valid := formatCluster(sub); valid {
			discoveredClusters = append(discoveredClusters, dc)
		}
	}
	return discoveredClusters, nil
}
//HandleGenerate handles admission-requests for policies with generate rules func (ws *WebhookServer) HandleGenerate(request *v1beta1.AdmissionRequest, policies []kyverno.ClusterPolicy, patchedResource []byte, roles, clusterRoles []string) (bool, string) { var engineResponses []response.EngineResponse resource, err := utils.ConvertToUnstructured(request.Object.Raw) if err != nil { glog.Errorf("unable to convert raw resource to unstructured: %v", err) return true, "" } glog.V(4).Infof("Handle Generate: Kind=%s, Namespace=%s Name=%s UID=%s patchOperation=%s", resource.GetKind(), resource.GetNamespace(), resource.GetName(), request.UID, request.Operation) userRequestInfo := kyverno.RequestInfo{ Roles: roles, ClusterRoles: clusterRoles, AdmissionUserInfo: request.UserInfo} ctx := context.NewContext() err = ctx.AddResource(request.Object.Raw) if err != nil { glog.Infof("Failed to load resource in context:%v", err) } err = ctx.AddUserInfo(userRequestInfo) if err != nil { glog.Infof("Failed to load userInfo in context:%v", err) } err = ctx.AddSA(userRequestInfo.AdmissionUserInfo.Username) if err != nil { glog.Infof("Failed to load service account in context:%v", err) } policyContext := engine.PolicyContext{ NewResource: *resource, AdmissionInfo: userRequestInfo, Context: ctx, } for _, policy := range policies { policyContext.Policy = policy engineResponse := engine.Generate(policyContext) if len(engineResponse.PolicyResponse.Rules) > 0 { engineResponses = append(engineResponses, engineResponse) } } if err := createGenerateRequest(ws.grGenerator, userRequestInfo, engineResponses...); err != nil { return false, "Kyverno blocked: failed to create Generate Requests" } return true, "" }
//Summary: // Print basic information to the terminal using various variable // creation techniques. The information may be printed using any // formatting you like. // //Requirements: //* Store your favorite color in a variable using the `var` keyword //* Store your birth year and age (in years) in two variables using // compound assignment //* Store your first & last initials in two variables using block assignment //* Declare (but don't assign!) a variable for your age (in days), // then assign it on the next line by multiplying 365 with the age // variable created earlier // //Notes: //* Use fmt.Println() to print out information //* Basic math operations are: // Subtraction - // Addition + // Multiplication * // Division / package main import "fmt" func main() { //Color: simple variable var color string = "red" fmt.Println("color =", color) //Compound Assignment: birth year/age birth_year, birth_age := 1997, 24 fmt.Println("birth year =", birth_year) fmt.Println("birth age =", birth_age) //Block Assignment: First name inital, Last name inital var ( first_initial = "L" last_initial = "U" ) fmt.Println("First initial =", first_initial) fmt.Println("Last inital =", last_initial) //Declare but no assignment var age_days int age_days = birth_age * 365 fmt.Println("Age =", age_days) /*Variables simple color := red compound assignment birth year, age := Block assignment first initial last name initial Declare but not assign var age int Assign variable on next line by multiplying 365 with age variable Print w/ fmt.Println() */ }
export * from './filters/size';
export * from './filters/number';
export * from './filters/html';
export * from './filters/date';

export function getFixedRatio(uploaded = 0, downloaded = 0): string | '∞' {
  const ratio = uploaded / downloaded;
  if (ratio === Infinity || ratio > 10000) {
    return '∞'; // When the ratio is too large, report the share ratio as infinite
  } else {
    return ratio.toFixed(2);
  }
}

export function sleep(ms: number) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

/**
 * Generate a random string
 * @param length
 * @param noSimilar when true, exclude easily confused characters [oO,Ll,9gq,Vv,Uu,I1]; defaults to false (they are included)
 */
export function getRandomString(length = 32, noSimilar = false): string {
  const chars = noSimilar
    ? 'abcdefhijkmnprstwxyz2345678ABCDEFGHJKMNPQRSTWXYZ'
    : 'abcdefghijkmnopqrstuvwxyz0123456789ABCDEFGHIJKMNOPQRSTUVWXYZ';
  const maxLength = chars.length;
  const result = [];
  for (let i = 0; i < length; i++) {
    result.push(chars.charAt(Math.floor(Math.random() * maxLength)));
  }
  return result.join('');
}
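For reference, the ratio logic ports to Python as below. One hedge: in JavaScript `0 / 0` is `NaN` rather than `Infinity`, so this sketch simply treats any zero `downloaded` as '∞' instead of reproducing that edge case exactly.

```python
import math

def fixed_ratio(uploaded=0, downloaded=0):
    """Python rendering of getFixedRatio above: share ratio to 2 decimals,
    with '∞' for division by zero or absurdly large ratios."""
    if downloaded == 0:
        return '∞'          # x / 0 is Infinity in JS; made explicit here
    ratio = uploaded / downloaded
    if math.isinf(ratio) or ratio > 10000:
        return '∞'
    return f'{ratio:.2f}'
```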
Association of dipeptidyl peptidase IV polymorphism with clinicopathological characters of oral cancer.

OBJECTIVE To evaluate the associations between dipeptidyl peptidase IV (DPP4) single nucleotide polymorphisms (SNPs) and the clinicopathological characteristics of oral cancer.

METHODS Four loci of DPP4 SNPs (rs7608798 A/G, rs3788979 C/T, rs2268889 T/C, and rs6741949 G/C) were genotyped using TaqMan allelic discrimination in 1238 oral cancer patients and 1197 non-cancer individuals.

RESULTS The percentage of the DPP4 SNP rs2268889 TC+CC genotype was significantly higher in the oral cancer participants than in the control group (odds ratio (OR): 1.178, 95% confidence interval (CI): 1.004-1.382, P = 0.045). Among 1676 smokers, carriers of DPP4 polymorphisms who also chewed betel quid were found to have an 8.785- to 10.903-fold risk of oral cancer compared to DPP4 wild-type carriers without betel quid chewing. A similar trend was found in individuals with alcohol consumption. Moreover, oral cancer individuals without a cigarette smoking history who carried at least one variant C allele of DPP4 rs2268889 had a significantly higher percentage of large tumor size than wild-type TT homozygotes (P = 0.011).

CONCLUSIONS The DPP4 SNP may correlate with the development of oral cancer in those with cigarette smoking and alcohol consumption. In addition, the DPP4 SNP rs2268889 may relate to a worse clinical course of oral cancer in non-smokers.
package lol_test import ( "go-riot/lol" "testing" "github.com/stretchr/testify/assert" ) func TestClashBySummonerID(t *testing.T) { assert := assert.New(t) client := lol.NewClient(lol.KR, "RGAPI-6df8ce4c-c548-44cc-b35f-f06c59f95627", nil) res, err := client.Clash.BySummonerID("aPWJgSeY9bV4Jq6DJ7lOBo3YVr9VvB_fcrdQb3NKllH8WQ") if err != nil { assert.Fail(err.Error()) return } if len(res) == 0 { return } assert.Equal(res[0].AccountID, int64(4261996769)) } func TestClashByTeamID(t *testing.T) { assert := assert.New(t) client := lol.NewClient(lol.KR, "RGAPI-6df8ce4c-c548-44cc-b35f-f06c59f95627", nil) res, err := client.Clash.ByTeamID("1761") if err != nil { assert.Fail(err.Error()) return } assert.Equal(res.ID, "1761") } func TestClashTournaments(t *testing.T) { assert := assert.New(t) client := lol.NewClient(lol.KR, "RGAPI-6df8ce4c-c548-44cc-b35f-f06c59f95627", nil) res, err := client.Clash.Tournaments() if err != nil { assert.Fail(err.Error()) return } assert.Equal(res[0].ID, 1761) } func TestClashTournamentsByTeamID(t *testing.T) { assert := assert.New(t) client := lol.NewClient(lol.KR, "RGAPI-6df8ce4c-c548-44cc-b35f-f06c59f95627", nil) res, err := client.Clash.TournamentsByTeamID("1761") if err != nil { assert.Fail(err.Error()) return } assert.Equal(res.ID, 144) } func TestClashTournamentsByTournamentID(t *testing.T) { assert := assert.New(t) client := lol.NewClient(lol.KR, "RGAPI-6df8ce4c-c548-44cc-b35f-f06c59f95627", nil) res, err := client.Clash.TournamentsByTournamentID("144") if err != nil { assert.Fail(err.Error()) return } assert.Equal(res.ID, 144) }
#if defined(XMLRPC_THREADS) #include "XmlRpcThread.h" #ifdef WIN32 # define WIN32_LEAN_AND_MEAN # include <windows.h> # include <process.h> #else # include <pthread.h> #endif using namespace XmlRpc; //! Destructor. Does not perform a join() (ie, the thread may continue to run). XmlRpcThread::~XmlRpcThread() { if (_pThread) { #ifdef WIN32 ::CloseHandle((HANDLE)_pThread); #else ::pthread_detach((pthread_t)_pThread); #endif _pThread = 0; } } //! Execute the run method of the runnable object in a separate thread. //! Returns immediately in the calling thread. void XmlRpcThread::start() { if ( ! _pThread) { #ifdef WIN32 unsigned threadID; _pThread = (HANDLE)_beginthreadex(NULL, 0, &runInThread, this, 0, &threadID); #else ::pthread_create((pthread_t*) &_pThread, NULL, &runInThread, this); #endif } } //! Waits until the thread exits. void XmlRpcThread::join() { if (_pThread) { #ifdef WIN32 ::WaitForSingleObject(_pThread, INFINITE); ::CloseHandle(_pThread); #else ::pthread_join((pthread_t)_pThread, 0); #endif _pThread = 0; } } //! Start the runnable going in a thread unsigned int XmlRpcThread::runInThread(void* pThread) { XmlRpcThread* t = (XmlRpcThread*)pThread; t->getRunnable()->run(); return 0; } #endif // XMLRPC_THREADS
<filename>frontend-angular/src/app/modules/admin/modules/users/modules/statistics/components/statistics-info-dialog/statistics-info-dialog.component.spec.ts import { NO_ERRORS_SCHEMA } from '@angular/core'; import { ComponentFixture, TestBed } from '@angular/core/testing'; import { TranslateModule } from '@ngx-translate/core'; import { StatisticsInfoDialogComponent } from './statistics-info-dialog.component'; describe('StatisticsInfoDialogComponent', () => { let fixture: ComponentFixture<StatisticsInfoDialogComponent>; let component: StatisticsInfoDialogComponent; let hostElement: HTMLElement; beforeEach(() => { TestBed.configureTestingModule({ declarations: [StatisticsInfoDialogComponent], imports: [TranslateModule.forRoot()], schemas: [NO_ERRORS_SCHEMA], }); fixture = TestBed.createComponent(StatisticsInfoDialogComponent); component = fixture.componentInstance; hostElement = fixture.nativeElement; }); it('should image source of dialog content section is valid', async () => { const imgElem = hostElement.querySelector<HTMLImageElement>('img.statistics-info-dialog'); const fakeImgElem = new Image(); const loadingImg = new Promise<{ type: string }>((resolve, reject) => { fakeImgElem.onload = resolve; fakeImgElem.onerror = reject; }); fakeImgElem.src = imgElem.src; try { const result = await loadingImg; expect(result.type).toBe('load'); } catch (result) { expect(result.type).not.toBe('error'); } }); });
/** * A subclass of {@link TrackIconAsyncTask} that loads the generated track icon bitmap into * the provided {@link ImageView}. This class also handles concurrency in the case the * ImageView is recycled (eg. in a ListView adapter) so that the incorrect image will not show * in a recycled view. */ public static class TrackIconViewAsyncTask extends TrackIconAsyncTask { private WeakReference<ImageView> mImageViewReference; public TrackIconViewAsyncTask(ImageView imageView, String trackName, int trackColor, BitmapCache bitmapCache) { super(trackName, trackColor, bitmapCache); // Store this AsyncTask in the tag of the ImageView so we can compare if the same task // is still running on this ImageView once processing is complete. This helps with // view recycling that takes place in a ListView type adapter. imageView.setTag(this); // If we have a BitmapCache, check if this track icon is available already. Bitmap bitmap = bitmapCache != null ? bitmapCache.getBitmapFromMemCache(trackName) : null; // If found in BitmapCache set the Bitmap directly and cancel the task. if (bitmap != null) { imageView.setImageBitmap(bitmap); cancel(true); } else { // Otherwise clear the ImageView and store a WeakReference for later use. Better // to use a WeakReference here in case the task runs long and the holding Activity // or Fragment goes away. imageView.setImageDrawable(null); mImageViewReference = new WeakReference<ImageView>(imageView); } } @TargetApi(Build.VERSION_CODES.HONEYCOMB_MR1) @Override protected void onPostExecute(Bitmap bitmap) { ImageView imageView = mImageViewReference != null ? mImageViewReference.get() : null; // If ImageView is still around, bitmap processed OK and this task is not canceled. if (imageView != null && bitmap != null && !isCancelled()) { // Ensure this task is still the same one assigned to this ImageView, if not the // view was likely recycled and a new task with a different icon is now running // on the view and we shouldn't proceed. 
if (this.equals(imageView.getTag())) { // On HC-MR1 run a quick fade-in animation. if (hasHoneycombMR1()) { imageView.setAlpha(0f); imageView.setImageBitmap(bitmap); imageView.animate() .alpha(1f) .setDuration(ANIMATION_FADE_IN_TIME) .setListener(null); } else { // Before HC-MR1 set the Bitmap directly. imageView.setImageBitmap(bitmap); } } } } }
The necessity of historical inquiry in educational research: the case of religious education This article explores the mixed fortunes of historical inquiry as a method in educational studies and exposes evidence for the neglect of this method in religious education research in particular. It argues that historical inquiry, as a counterpart to other research methods, can add depth and range to our understanding of education, including religious education, and can illuminate important longer‐term, broader and philosophical issues. The article also argues that many historical voices have remained silent in the existing historiography of religious education because such historiography is too generalised and too biased towards the development of national policy and curriculum and pedagogical theory. To address this limitation in educational research, this article promotes rigorous historical studies that are more substantially grounded in the appropriate historiographical literature and utilise a wide range of original primary sources. Finally, the article explores a specific example of the way in which a historical approach may be fruitfully applied to a particular contemporary debate concerning the nature and purpose of religious education.
//called after the default drawItem method @Override public void afterDrawItem(RectF rect, Canvas canvas, Paint paintByLevel, int level) { CanvasUtil.drawPolygon(rect,canvas,paintByLevel, level + 3 ); }
def configure_sequencer_triggering( self, channel_index, aux_trigger, play_pulse_delay=0 ): self._daq.setString( f"/{self._device_id}/qachannels/{channel_index}/generator/auxtriggers/0/channel", aux_trigger, ) self._daq.setDouble( f"/{self._device_id}/qachannels/{channel_index}/generator/delay", play_pulse_delay, )
class AzureSBToMLRun:
    """
    Listen in the background for messages on an Azure Service Bus Queue
    (like Nuclio). If a message is received, parse the message and use
    it to start a Kubeflow Pipeline.

    The current design expects to receive a message that was sent to the
    Queue from Azure Event Grid. This is leveraged by installing this
    package in the Nuclio image at build time, and importing the package
    into Nuclio. A new class is created inside the Nuclio function's
    init_context, with this object as the parent class. From here, the
    run_pipeline method should be overridden by a custom run_pipeline
    function that dictates how to start the execution of a Kubeflow
    Pipeline. This method will receive an event, which is the parsed
    message from Azure Service Bus, and should return a workflow_id

    Example
    -------
    import time
    from src.handler import AzureSBToMLRun
    from mlrun import load_project

    def init_context():
        pipeline = load_project(<LOAD MY PROJECT HERE>)

        class MyHandler(AzureSBToMLRun):
            def run_pipeline(self, event):
                arguments = {
                    "account_name": event.get("abfs_account_name"),
                    "abfs_file": event.get("abfs_path"),
                }
                # If an attempt raises an exception, retry up to three times
                for attempt in range(3):
                    try:
                        return pipeline.run(arguments=arguments)
                    except Exception as e:
                        if attempt >= 2:
                            raise RuntimeError(f"Failed to start pipeline for {e}")
                        time.sleep(5)

    This class also stores the run data in a V3IO key-value store.

    Parameters
    ----------
    queue_name
        The queue on which to listen
    tenant_id
        The Azure tenant where your Service Bus resides
    client_id
        The Azure client_id for a ServicePrincipal
    client_secret
        The secret for your Azure Service Principal
    credential
        Any credential that can be provided to Azure for authentication
    connection_string
        An Azure Service Bus Queue connection string
    mlrun_project
        This is the name of the
mlrun project that will be run. By default this is pulled from
        environment variables; otherwise input will be taken from here,
        or be designated as "default"

    Users can authenticate to the Service Bus using a connection string
    and queue_name, or using a ClientSecretCredential, which requires
    also providing the namespace of your Service Bus, and the tenant_id,
    client_id, and client_secret
    """

    def __init__(
        self,
        queue_name,
        namespace=None,
        tenant_id=None,
        client_id=None,
        client_secret=None,
        credential=None,
        connection_string=None,
        max_concurrent_pipelines=3,
        mlrun_project=None,
        transport_kind=None,
    ):
        self.credential = credential
        self.namespace = namespace
        self.sb_client = None
        self.tenant_id = (
            tenant_id
            or os.getenv("AZURE_TENANT_ID")
            or os.getenv("AZURE_STORAGE_TENANT_ID")
        )
        self.client_id = (
            client_id
            or os.getenv("AZURE_CLIENT_ID")
            or os.getenv("AZURE_STORAGE_CLIENT_ID")
        )
        self.client_secret = (
            client_secret
            or os.getenv("AZURE_CLIENT_SECRET")
            or os.getenv("AZURE_STORAGE_CLIENT_SECRET")
        )
        self.connection_string = connection_string
        self.max_concurrent_pipelines = max_concurrent_pipelines
        self.servicebus_queue_name = queue_name
        self.v3io_container = "projects"
        self.project = os.getenv("MLRUN_DEFAULT_PROJECT") or mlrun_project or "default"
        self.table = os.path.join(self.project, "servicebus_table")
        if (
            self.credential is None
            and self.tenant_id is not None
            and self.client_id is not None
            and self.client_secret is not None
        ):
            self.credential = self._get_credential_from_service_principal()
        self.v3io_client = v3io.dataplane.Client(
            max_connections=1, transport_kind=transport_kind or "httpclient"
        )
        self.tbl_init()
        self._listener_thread = Thread(target=self.do_connect, daemon=True)
        self._listener_thread.start()

    def _get_credential_from_service_principal(self):
        """
        Create a Credential for authentication.
This can include a TokenCredential, client_id, client_secret, and tenant_id """ credential = ClientSecretCredential( tenant_id=self.tenant_id, client_id=self.client_id, client_secret=self.client_secret, ) return credential def do_connect(self): """Create a connection to service bus""" logging.info("do_connect") while True: if self.connection_string is not None: self.sb_client = ServiceBusClient.from_connection_string( conn_str=self.connection_string ) elif self.namespace is not None and self.credential is not None: self.fqns = f"https://{self.namespace}.servicebus.windows.net" self.sb_client = ServiceBusClient(self.fqns, self.credential) else: raise ValueError("Unable to create connection to Service Bus!") self.get_servicebus_receiver() def tbl_init(self, overwrite=False): """ If it doesn't exist, create the v3io table and k,v schema :param overwrite: Manually overwrite the """ if (not Path("/v3io", self.v3io_container, self.table).exists()) or ( overwrite is True ): logging.info("Creating table.") self.v3io_client.create_schema( container=self.v3io_container, path=self.table, key="sb_message_id", fields=[ {"name": "abfs_account_name", "type": "string", "nullable": True}, {"name": "abfs_path", "type": "string", "nullable": True}, {"name": "blob_url", "type": "string", "nullable": True}, {"name": "run_attempts", "type": "long", "nullable": False}, { "name": "run_start_timestamp", "type": "timestamp", "nullable": True, }, {"name": "run_status", "type": "string", "nullable": True}, {"name": "sb_event_time", "type": "timestamp", "nullable": True}, {"name": "sb_event_type", "type": "string", "nullable": True}, {"name": "sb_message_id", "type": "string", "nullable": False}, {"name": "sb_message_topic", "type": "string", "nullable": True}, {"name": "workflow_id", "type": "string", "nullable": True}, ], ) else: logging.info("table already exists. 
Do not recreate") def get_servicebus_receiver(self): """Construct the service bus receiver""" with self.sb_client as client: receiver = client.get_queue_receiver(queue_name=self.servicebus_queue_name) self.receive_messages(receiver) def receive_messages(self, receiver): should_retry = True while should_retry: with receiver: try: for msg in receiver: try: logging.info("get message") message = json.loads(str(msg)) # Parse the message from Service Bus into a usable format parsed_message = self.parse_servicebus_message(message) # Add the message to the kv store and start a pipeline self.process_message(parsed_message) should_complete = True except Exception as e: logging.info( f"There an exception for {e}!" "Do not complete the message" ) should_complete = False for _ in range(3): # settlement retry try: if should_complete: logging.info("Complete the message") receiver.complete_message(msg) else: logging.info("Skipped should_complete") # break except MessageAlreadySettled: # Message was already settled. Continue logging.info("message already settled") break except MessageLockLostError: # Message lock lost before settlemenat. 
                                # Handle here
                                logging.info("message lock lost")
                                break
                            except MessageNotFoundError:
                                # Message does not exist
                                logging.info("Message not found")
                                break
                            except ServiceBusError:
                                # Undefined error
                                logging.info("SB Error")
                                continue
                    return
                except ServiceBusAuthorizationError:
                    # Permission error
                    raise
                except:  # NOQA E722
                    continue

    def check_kv_for_message(self, sb_message_id):
        """
        Check to see if an entry with the specified Azure Service Bus
        message id exists in the key-value store.
        If the message id is present, return True, and the attributes
        from the key-value store

        :param sb_message_id: The message id to be interrogated
        :returns A tuple of True/False and if True, the k,v run status
            from the table
        """
        query = f"sb_message_id == '{sb_message_id}'"
        logging.info(f"is message_id: {sb_message_id} in kv store?")
        response = self.v3io_client.kv.scan(
            container=self.v3io_container,
            table_path=self.table,
            filter_expression=query,
        )
        logging.info(f"response: {response}")
        items = response.output.items
        if not items:
            logging.info("sb_message_id not in the kv store")
            return False, items
        elif len(items) == 1:
            item = items[0]
            logging.info("sb_message_id is in kv store")
            return True, item
        else:
            raise ValueError("Found duplicate entries by message_id in k,v store!")

    def update_kv_data(self, message, action=None):
        """Add the Service Bus message to the kv table"""
        try:
            if action in ["create_entry", "update_entry", "delete_entry"]:
                if action == "create_entry":
                    logging.info("Adding record")
                    for item_key, item_attributes in message.items():
                        self.v3io_client.kv.put(
                            container=self.v3io_container,
                            table_path=self.table,
                            key=item_key,
                            attributes=item_attributes,
                        )
                elif action == "update_entry":
                    logging.info("Updating record")
                    for item_key, item_attributes in message.items():
                        self.v3io_client.kv.update(
                            container=self.v3io_container,
                            table_path=self.table,
                            key=item_key,
                            attributes=item_attributes,
                        )
                elif action == "delete_entry":
                    logging.info("Removing record")
                    for item_key, item_attributes in message.items():
                        self.v3io_client.kv.delete(
container=self.v3io_container, table_path=self.table, key=item_key, ) else: raise ValueError("Value passed to update_kv_data unknown!") except Exception as e: raise RuntimeError(f"Failed to add message to kv for {e}") def run_pipeline(self, event: dict): """ This is the method that starts the execution of the pipeline. It should be overridden in the Nuclio function :param event: The message that was sent by Service Bus, after being parsed. It can be used to provide arguments that are passed to the pipeline """ pass def _parse_blob_url_to_fsspec_path(self, blob_url): """ Convert the blob_url to fsspec compliant format and account information :param blob_url: For a createBlob event, this is the blobUrl sent in the message :returns A tuple of the Azure Storage Blob account name and a fsspec compliant filepath """ url_components = urlparse(blob_url) path = url_components.path.strip("/") account_name = url_components.netloc.partition(".")[0] abfs_path = f"az://{path}" return account_name, abfs_path def count_running_pipelines(self): """ Get a count of the pipelines in KV that are in a running or started state """ query = "run_status in ('Started', 'Running')" response = self.v3io_client.kv.scan( container=self.v3io_container, table_path=self.table, filter_expression=query, ) items = response.output.items return len(items) def get_run_status(self, workflow_id): """ Retrieves the status of a pipeline run from the mlrun database :param workflow_id: A workflow_id from the mlrun database :return A tuple of the run status of a pipeline, and the pipeline start timestamp """ try: db = get_run_db().connect() pipeline_info = db.get_pipeline(run_id=workflow_id) run_status = pipeline_info.get("run").get("status") or "none" data = json.loads( pipeline_info.get("pipeline_runtime").get("workflow_manifest") ) run_start_ts = data.get("status").get("startedAt") if run_start_ts is None: run_start_ts = "null" else: run_start_ts = datetime.strptime(run_start_ts, "%Y-%m-%dT%H:%M:%SZ") return 
run_status, run_start_ts
        except Exception as e:
            raise RuntimeError(f"Failed to get_run_status for {e}")

    def parse_servicebus_message(self, message):
        """
        Write the logic here to parse the incoming message.

        :param message: A Python dict of the message received from
            Azure Service Bus
        :returns A nested Python dict with information from the Service Bus
            message for handling a pipeline run, and for tracking that run
            as it flows through the pipeline.
        """
        logging.info("Process the incoming message.")
        # logging.info(message)
        sb_message_id = message.get("id", None)
        if sb_message_id is None:
            raise ValueError("Unable to identify sb_message_id!")
        sb_message_topic = message.get("messageTopic", "null")
        sb_event_type = message.get("eventType", "null")
        sb_event_time = message.get("eventTime", "null")
        data = message.get("data", "null")
        if data != "null":
            blob_url = data.get("blobUrl", "null")
            # Reformat the blob_url to a fsspec-compatible file location
            abfs_account, abfs_path = self._parse_blob_url_to_fsspec_path(blob_url)
        parsed_message = {
            sb_message_id: {  # The messageId from Service Bus
                "sb_message_topic": sb_message_topic,  # messageTopic from Service Bus
                "sb_event_type": sb_event_type,  # The eventType from Service Bus
                "sb_event_time": sb_event_time,  # The eventTime from Service Bus
                "blob_url": blob_url,  # The blobUrl -- The blob created
                "workflow_id": "null",  # This is the workflow_id set by mlrun
                "run_status": "null",  # This is the run_status in Iguazio
                "run_attempts": 0,  # zero-index count the number of run attempts
                "abfs_account_name": abfs_account or "null",
                "abfs_path": abfs_path or "null",
                "run_start_timestamp": "null",
            }
        }
        return parsed_message

    def process_message(self, message):
        """
        Write the logic here to process the parsed message.
Validate it is not present in the KV store, and start a Kubeflow Pipeline :param message: A Python dict of the parsed message from Azure Service Bus """ sb_message_id = list(message.keys())[0] # Check to see if the message_id is in the nosql run table has_message, existing_kv_entry = self.check_kv_for_message(sb_message_id) if not has_message: # If the message_id is not in the k,v store, Add it # to the store logging.info("message_id not found in kv store") self.update_kv_data(message, action="create_entry") # Check to see if the number of running pipelines exceeds the allowable # concurrent pipelines. num_running_pipelines = self.count_running_pipelines() logging.info(f"There are {num_running_pipelines} returned from kv") if num_running_pipelines < self.max_concurrent_pipelines: workflow_id = self.run_pipeline(event=message) run_status = "Started" logging.info(f"workflow_id is: {workflow_id}") if workflow_id != "none": # Here we're starting the pipeline and adding the workflow_id # to the kv store logging.info(f"workflow_id is: {workflow_id}") message[sb_message_id]["workflow_id"] = workflow_id message[sb_message_id]["run_status"] = run_status self.update_kv_data(message, action="update_entry") logging.info("Sleeping") time.sleep(20) # else: # logging.info( # "Found message_id in the kv store. Check to see if the " # "pipeline is running or has run" # ) # run_status = existing_kv_entry[message_id]["run_status"] # if run_status in ["none", "Failed"]: # workflow_id = self.run_pipeline(event=existing_kv_entry) # run_status = "Started" # existing_kv_entry[message_id]["workflow_id"] = workflow_id # existing_kv_entry[message_id]["run_status"] = run_status # self.update_kv_data(existing_kv_entry, action="update_entry") # elif run_status == "Running": # pass # else: # logging.info(f"run_status unknown: {run_status}") def check_and_update_run_status(self, ttl: int = 4): """ This will be run in the Nuclio handler on a CRON trigger. 
        We will retrieve a list of active runs from the KV store and check
        their status. We can update the kv store if a run is done.

        :param ttl: Amount of time, in hours, to allow a run to remain in a
            running state before sending a kill signal
        """
        # Find any entries in the kv store with no status or in a running state
        query = "run_status in ('Started', 'Running')"
        # And calculate the current time, so we can decide how to handle
        # long-running jobs and if we should ignore old runs
        now = datetime.now(timezone.utc)  # assumes `from datetime import datetime, timezone`
        ttl = ttl * 3600  # Convert hours to seconds
        # Get the count of running pipelines in the KV store
        # as well as the pipelines in KV that are logged as in progress
        num_running_pipelines = self.count_running_pipelines()
        logging.info(f"How many running pipelines were found: {num_running_pipelines}")
        try:
            response = self.v3io_client.kv.scan(
                container=self.v3io_container,
                table_path=self.table,
                filter_expression=query,
            )
            items = response.output.items
            if items:
                # If there are runs in the kv store that are in a started
                # or running state, check their run_status
                for item in items:
                    logging.info(item)
                    new_item = {}
                    workflow_id = item["workflow_id"]
                    sb_message_id = item.pop("__name")
                    new_item[sb_message_id] = item
                    logging.info(f"Checking run info for workflow_id: {workflow_id}")
                    # Get the latest run status for the workflow_id from the
                    # mlrun database and update it here.
                    run_status, run_start_ts = self.get_run_status(workflow_id)
                    new_item[sb_message_id]["run_status"] = run_status
                    new_item[sb_message_id]["run_start_timestamp"] = run_start_ts
                    if run_status == "Succeeded":
                        logging.info("run_status is 'Succeeded'")
                        pass
                    elif (run_status in ["Failed", "none"]) and (
                        num_running_pipelines <= self.max_concurrent_pipelines
                    ):  # and current_run_info["run_attempts"] < 3:
                        logging.info(
                            "Run status in a Failed or none state. "
                            "Retry the pipeline!"
                        )
                        if run_status == "Failed":
                            new_item[sb_message_id]["run_attempts"] += 1
                        workflow_id = self.run_pipeline(new_item)
                        run_status, run_start_ts = self.get_run_status(workflow_id)
                        new_item[sb_message_id]["workflow_id"] = workflow_id
                        new_item[sb_message_id]["run_status"] = run_status
                        new_item[sb_message_id]["run_start_timestamp"] = run_start_ts
                    elif run_status in ["Running", "Started"]:
                        logging.info("KV shows run in progress. Update status...")
                        run_status, run_start_ts = self.get_run_status(workflow_id)
                        # Check to see if the run has been going excessively long.
                        # If so, kill the run and retry
                        pipeline_runtime = (now - run_start_ts).total_seconds()
                        if pipeline_runtime > ttl:
                            try:
                                db = get_run_db().connect()
                                db.abort_run(workflow_id)
                            except Exception as e:
                                logging.info(f"Failed to abort run with {e}")
                            new_item[sb_message_id]["run_attempts"] += 1
                            if new_item[sb_message_id]["run_attempts"] <= 3:
                                db = get_run_db().connect()
                                pipe = db.get_pipeline(run_id=workflow_id)
                                pipe = json.loads(
                                    pipe["pipeline_runtime"]["workflow_manifest"]
                                )
                                uid = pipe["metadata"]["uid"]
                                db.abort_run(uid, project=self.project)
                    else:
                        logging.info(
                            f"run_status is not known. "
                            f"Got a run_status of {run_status}"
                        )
                    logging.info("Update record in kv")
                    self.update_kv_data(new_item, action="update_entry")
                    num_running_pipelines = self.count_running_pipelines()
                    logging.info("Sleeping")
                    time.sleep(10)
            else:
                logging.info(
                    "No runs found in kv store that match "
                    "'Started' or 'Running' state."
                )
        except Exception as e:
            logging.info(f"Caught exception {e}. " "Update counter retry!")
            time.sleep(10)
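The dedup flow above — check_kv_for_message followed by update_kv_data with action="create_entry" — can be sketched with a plain dict standing in for the v3io table. InMemoryRunTable and its methods are illustrative stand-ins, not the v3io or Service Bus APIs:

```python
class InMemoryRunTable:
    """Minimal stand-in for the v3io k,v table used above."""

    def __init__(self):
        self._items = {}  # key: sb_message_id, value: attribute dict

    def check_kv_for_message(self, sb_message_id):
        item = self._items.get(sb_message_id)
        if item is None:
            return False, []
        return True, item

    def update_kv_data(self, message, action=None):
        if action not in ("create_entry", "update_entry", "delete_entry"):
            raise ValueError("Value passed to update_kv_data unknown!")
        for item_key, item_attributes in message.items():
            if action == "delete_entry":
                self._items.pop(item_key, None)
            else:  # create_entry and update_entry both upsert here
                self._items.setdefault(item_key, {}).update(item_attributes)

    def process_message(self, message):
        """Create an entry only for message ids not seen before."""
        sb_message_id = next(iter(message))
        has_message, _ = self.check_kv_for_message(sb_message_id)
        if not has_message:
            self.update_kv_data(message, action="create_entry")
            return True   # a pipeline would be started here
        return False      # duplicate delivery: skip it


table = InMemoryRunTable()
msg = {"abc-123": {"run_status": "null", "run_attempts": 0}}
first = table.process_message(msg)   # new entry: pipeline would start
second = table.process_message(msg)  # duplicate delivery: skipped
```

This mirrors why Service Bus handlers key the table on sb_message_id: at-least-once delivery means the same message can arrive twice, and the KV lookup makes the handler idempotent.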
// NewScanner creates a new scanner
func NewScanner(exps Expressions) (*Scanner, []error) {
	fileMatchers, errs1 := buildFileMatchers(exps.FileMatchExps)
	contentMatchers, errs2 := buildContentMatchers(exps.ContentMatchExps)

	errs := make([]error, 0, len(errs1)+len(errs2))
	errs = append(errs, errs1...)
	errs = append(errs, errs2...)
	if len(errs) > 0 {
		return nil, errs
	}

	scanner := &Scanner{
		fileMatchers:    fileMatchers,
		contentMatchers: contentMatchers,
		logger:          &noopLogger{},
	}
	return scanner, nil
}
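NewScanner's error handling — build each matcher set, collect every error, and fail construction only if any occurred — is a general pattern for reporting all bad inputs at once. A Python sketch of the same idea (function names here are hypothetical, not part of the Go package):

```python
import re


def build_matchers(exprs, compile_one):
    """Compile every expression, collecting all errors instead of
    stopping at the first bad one."""
    matchers, errors = [], []
    for expr in exprs:
        try:
            matchers.append(compile_one(expr))
        except Exception as exc:
            errors.append(exc)
    return matchers, errors


def new_scanner(file_exprs, content_exprs, compile_one):
    file_matchers, errs1 = build_matchers(file_exprs, compile_one)
    content_matchers, errs2 = build_matchers(content_exprs, compile_one)
    errors = errs1 + errs2
    if errors:
        return None, errors  # report every bad expression at once
    return {"file": file_matchers, "content": content_matchers}, []


# "(" is an invalid regex, so construction fails with one collected error
scanner, errors = new_scanner([r"\.go$"], [r"("], re.compile)
```

Collecting errors up front means a user with several bad patterns fixes them in one pass instead of one failed run per pattern.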
# author : mani-barathi

def merge(arr,l,m,r):
    i = l       # starting index of left sub array
    j = m+1     # starting index of right sub array
    k = 0       # starting index of temp arr
    temp = ['' for i in range(l,r+1)]

    while i<=m and j<=r:
        if arr[i]<arr[j]:
            temp[k] = arr[i]
            i+=1
        else:
            temp[k] = arr[j]
            j+=1
        k+=1

    while i<=m:     # if left subarray has more element
        temp[k] = arr[i]
        i+=1
        k+=1

    while j<=r:     # if right subarray has more element
        temp[k] = arr[j]
        j+=1
        k+=1

    k = 0
    for p in range(l,r+1):
        arr[p] = temp[k]
        k+=1

def merge_sort(arr,l,r):
    if l<r:
        m = (l+r)//2
        merge_sort(arr,l,m)
        merge_sort(arr,m+1,r)
        merge(arr,l,m,r)

arr = [64, 34, 25, 12,12, 22, 11, 90]
print('Before: ',arr)
merge_sort(arr,0,len(arr)-1)
print('After : ',arr)
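The merge step above assumes both halves are already sorted and interleaves them with two pointers. The same two-pointer merge as a standalone function (a sketch independent of the in-place version above, which writes through a temp array):

```python
def merge_sorted(left, right):
    """Two-pointer merge of two already-sorted lists, as in merge() above."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # left half has elements remaining
    out.extend(right[j:])  # right half has elements remaining
    return out


merged = merge_sorted([11, 25, 64], [12, 22, 90])  # -> [11, 12, 22, 25, 64, 90]
```

Because ties go to the right half only when `left[i] < right[j]` is false, equal elements keep their relative order, which is what makes merge sort stable.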
import os
import subprocess
import sys
from os import system

import numpy as np
import pandas as pd
import fire
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from joblib import dump, load  # used for saving and loading sklearn objects
from scipy.sparse import save_npz, load_npz, csr_matrix  # used for saving and loading sparse matrices


def train_model(training_dataset_file, validation_dataset_file, gcs_model_path):
    imdb_train = pd.read_csv(training_dataset_file)
    imdb_test = pd.read_csv(validation_dataset_file)

    if not os.path.exists('data_preprocessors'):
        system("mkdir 'data_preprocessors'")
    if not os.path.exists('vectorized_data'):
        system("mkdir 'vectorized_data'")
    if not os.path.exists('model'):
        system("mkdir 'model'")

    # preprocessing
    # Bigram Counts
    bigram_vectorizer = CountVectorizer(ngram_range=(1, 2))
    bigram_vectorizer.fit(imdb_train['text'].values)
    dump(bigram_vectorizer, 'data_preprocessors/bigram_vectorizer.joblib')
    # bigram_vectorizer = load('data_preprocessors/bigram_vectorizer.joblib')
    X_train_bigram = bigram_vectorizer.transform(imdb_train['text'].values)
    save_npz('vectorized_data/X_train_bigram.npz', X_train_bigram)
    # X_train_bigram = load_npz('vectorized_data/X_train_bigram.npz')

    # Bigram Tf-Idf
    bigram_tf_idf_transformer = TfidfTransformer()
    bigram_tf_idf_transformer.fit(X_train_bigram)
    dump(bigram_tf_idf_transformer, 'data_preprocessors/bigram_tf_idf_transformer.joblib')
    # bigram_tf_idf_transformer = load('data_preprocessors/bigram_tf_idf_transformer.joblib')
    X_train_bigram_tf_idf = bigram_tf_idf_transformer.transform(X_train_bigram)
    save_npz('vectorized_data/X_train_bigram_tf_idf.npz', X_train_bigram_tf_idf)
    # X_train_bigram_tf_idf = load_npz('vectorized_data/X_train_bigram_tf_idf.npz')

    y_train = imdb_train['label'].values

    def train_and_show_scores(X: csr_matrix, y: np.array, title: str) -> None:
        # splitting data
        X_train,
X_valid, y_train, y_valid = train_test_split( X, y, train_size=0.75, stratify=y ) clf = SGDClassifier() clf.fit(X_train, y_train) train_score = clf.score(X_train, y_train) valid_score = clf.score(X_valid, y_valid) print(f'{title}\nTrain score: {round(train_score, 2)} ; Validation score: {round(valid_score, 2)}\n') dump(clf, 'model/model.joblib.pkl') subprocess.check_call(['gsutil', 'cp', 'model/model.joblib.pkl', gcs_model_path], stderr=sys.stdout) print('Saved model in: {}'.format(gcs_model_path)) train_and_show_scores(X_train_bigram_tf_idf, y_train, 'Bigram Tf-Idf') if __name__ == '__main__': fire.Fire(train_model)
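The CountVectorizer(ngram_range=(1, 2)) step above counts both unigrams and bigrams for each document. The core counting idea in plain Python (a simplified sketch with whitespace tokenization, not scikit-learn's actual implementation):

```python
from collections import Counter


def unigram_bigram_counts(text):
    """Count unigrams and bigrams in a whitespace-tokenized text,
    roughly what CountVectorizer(ngram_range=(1, 2)) produces per row."""
    tokens = text.lower().split()
    grams = list(tokens)  # unigrams
    grams += [" ".join(pair) for pair in zip(tokens, tokens[1:])]  # bigrams
    return Counter(grams)


counts = unigram_bigram_counts("good movie very good")
# counts["good"] == 2, counts["good movie"] == 1, counts["very good"] == 1
```

The TfidfTransformer step then rescales these raw counts so that grams common across the whole corpus contribute less than grams concentrated in a few documents.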
// CAdvisorCmd installs and configures cAdvisor on the docker host
func CAdvisorCmd(cmd *cobra.Command, args []string) {
	machine, _ := cmd.Flags().GetString("machine")
	file, _ := cmd.Flags().GetString("file")
	log.Debug("Machine: ", machine)
	log.Debug("File: ", file)

	client := libmachine.NewClient(machinePath, machinePath)
	defer client.Close()

	h, err := client.Load(machine)
	if err != nil {
		log.Error(err)
		return
	}

	if err := h.Start(); err != nil {
		log.Error(err)
	}

	_, err = h.RunSSHCommand("docker ps | grep cadvisor")
	if err != nil {
		_, err = h.RunSSHCommand(cAdvisorTemplateCmd)
		if err != nil {
			log.Error(err)
		}
	}

	tcpURL, _ := h.URL()
	url := strings.Split(tcpURL, ":")
	fmt.Printf("cAdvisor is successfully set up\nYou can access it at http:%s:8080 \n", url[1])
}
Samuel Pufendorf: majority rule (logic, justification and limits) and forms of government

The article analyses one of the first and most important doctrines of majority rule and its relationship with forms of government, the one presented by Samuel Pufendorf in his De jure naturae et gentium, the most popular work of political theory in the 17th and 18th centuries. It also considers some of the objections raised in the contemporary debate concerning its limits.
use crate::common::head_list_node;
use crate::common::ListNode;

struct Solution;

impl Solution {
    // Adapted from another solution
    pub fn swap_pairs(mut head: Option<Box<ListNode>>) -> Option<Box<ListNode>> {
        let mut dummy = ListNode::new(0);
        let mut tail = &mut dummy;

        while let Some(mut n1) = head {
            head = n1.next.take(); // detach n1
            if let Some(mut n2) = head {
                head = n2.next.take(); // detach n2
                n2.next.replace(n1); // n2.next -> n1
                tail.next.replace(n2); // tail.next -> n2
                tail = tail.next.as_mut().unwrap().next.as_mut().unwrap(); // tail = tail.next.next
            } else {
                // Only one node left; append it after tail.
                tail.next.replace(n1);
            }
        }

        dummy.next
    }

    pub fn swap_pairs1(mut head: Option<Box<ListNode>>) -> Option<Box<ListNode>> {
        let mut dummy = Some(Box::new(ListNode::new(0)));
        let mut ptr = &mut dummy;

        loop {
            let (first, remain) = head_list_node(head);
            let (second, remain) = head_list_node(remain);
            let should_break = second.is_none();

            if second.is_some() {
                if let Some(p) = ptr {
                    p.next = second;
                    ptr = &mut p.next;
                }
            }
            if first.is_some() {
                if let Some(p) = ptr {
                    p.next = first;
                    ptr = &mut p.next;
                }
            }

            head = remain;
            if should_break {
                break;
            }
        }

        dummy.unwrap().next
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_swap_pairs() {
        let head = ListNode::new_from_arr(&vec![1, 2, 3, 4]);
        let want = ListNode::new_from_arr(&vec![2, 1, 4, 3]);
        let ans = Solution::swap_pairs(head);
        assert_eq!(ans, want);
    }
}
    def fixChainIds(self):
        lastResNum = -10000
        chainsUsed = []
        replacingChains = False
        newChain = None
        for atomIndex, residueNumber in enumerate(self.resNums):
            if residueNumber < lastResNum:
                # Residue numbering restarted: assign a fresh chain id one
                # past the highest chain id seen so far
                replacingChains = True
                newChain = chr(max(ord(self.chains[atomIndex]), max(chainsUsed)) + 1)
            if replacingChains:
                self.updateOneChain(atomIndex, newChain)
            if ord(self.chains[atomIndex]) not in chainsUsed:
                chainsUsed.append(ord(self.chains[atomIndex]))
            lastResNum = residueNumber
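The renumbering rule in fixChainIds — start a new chain letter whenever the residue numbering restarts — can be expressed as a standalone function over parallel lists. This is a sketch of the same logic (the original mutates object state via updateOneChain, whose exact behavior is assumed here to set one atom's chain id):

```python
def fix_chain_ids(res_nums, chains):
    """Assign a fresh chain letter whenever residue numbers restart,
    mirroring the per-atom logic of fixChainIds above."""
    chains = list(chains)
    last_res_num = -10000
    chains_used = []
    replacing = False
    new_chain = None
    for i, res_num in enumerate(res_nums):
        if res_num < last_res_num:
            # Numbering restarted: pick a letter one past the highest seen
            replacing = True
            new_chain = chr(max(ord(chains[i]), max(chains_used)) + 1)
        if replacing:
            chains[i] = new_chain
        if ord(chains[i]) not in chains_used:
            chains_used.append(ord(chains[i]))
        last_res_num = res_num
    return chains


# Two chains both labelled 'A': the second (restarted numbering) becomes 'B'
fixed = fix_chain_ids([1, 2, 3, 1, 2], list("AAAAA"))  # ['A','A','A','B','B']
```

This handles the common PDB-style case where two physically distinct chains were written out with the same chain identifier.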
def normalize_homogeneous_torch(points): uv = points[..., :-1] w = torch.unsqueeze(points[..., -1], -1) return divide_safe_torch(uv, w)
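normalize_homogeneous_torch divides the leading coordinates by the final homogeneous coordinate, delegating the zero-safe division to the module's divide_safe_torch helper. The same operation in NumPy, with a simple epsilon guard standing in for divide_safe_torch (the guard is an assumption, not that helper's actual behavior):

```python
import numpy as np


def normalize_homogeneous_np(points, eps=1e-12):
    """Divide the leading coordinates by the last (homogeneous) one.
    The eps clamp is a stand-in for divide_safe_torch above."""
    uv = points[..., :-1]
    w = points[..., -1:]  # keep the trailing axis so broadcasting works
    w_safe = np.where(np.abs(w) < eps, eps, w)
    return uv / w_safe


pts = np.array([[2.0, 4.0, 2.0], [3.0, 6.0, 3.0]])
xy = normalize_homogeneous_np(pts)  # each row becomes [1.0, 2.0]
```

Keeping `w` as shape `(..., 1)` (via `[..., -1:]`) rather than squeezing it is what lets the division broadcast over the leading coordinates; the torch version achieves the same with `torch.unsqueeze`.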
package com.twinkle.framework.asm.designer;

import com.twinkle.framework.asm.builder.InstanceBuilder;
import com.twinkle.framework.asm.utils.TypeUtil;
import lombok.Getter;
import org.objectweb.asm.*;

import java.util.Arrays;
import java.util.List;

/**
 * Function: TODO ADD FUNCTION. <br/>
 * Reason: TODO ADD REASON. <br/>
 * Date: 2019-08-02 21:25<br/>
 *
 * @author chenxj
 * @see
 * @since JDK 1.8
 */
@Getter
public class InstanceBuilderDesigner extends AbstractClassDesigner {
    private final String builderClassName;
    private final Type builderType;
    private final Type instanceType;
    private final Type interfaceType;

    protected InstanceBuilderDesigner(String _className, Type _builderType, Type _interfaceType, Type _instanceType) {
        this.builderClassName = toInternalName(_className);
        this.builderType = _builderType;
        this.instanceType = _instanceType;
        this.interfaceType = _interfaceType;
    }

    public InstanceBuilderDesigner(String _className, Class<? extends InstanceBuilder> _builderClass, Class _interfaceClass, String _instanceClassName) {
        this.builderClassName = toInternalName(_className);
        if (!_builderClass.isInterface()) {
            throw new IllegalArgumentException(_builderClass.getName() + " is not an interface");
        } else {
            this.builderType = Type.getType(_builderClass);
            this.instanceType = Type.getObjectType(toInternalName(_instanceClassName));
            this.interfaceType = Type.getType(_interfaceClass);
        }
    }

    @Override
    public String getCanonicalClassName() {
        return this.toCanonicalName(this.builderClassName);
    }

    protected Type getInterfaceBaseType() {
        return TypeUtil.OBJECT_TYPE;
    }

    protected Type getSuperType() {
        return TypeUtil.OBJECT_TYPE;
    }

    /**
     * Add Class Declaration.
* * @param _visitor * @return */ @Override protected ClassVisitor addClassDeclaration(ClassVisitor _visitor) { _visitor.visit(TARGET_JVM, Opcodes.ACC_PUBLIC + Opcodes.ACC_SUPER, this.getBuilderClassName(), getClassSignature(this.getSuperType(), this.getBuilderType(), Arrays.asList(this.getInterfaceType())), this.getSuperType().getInternalName(), new String[]{this.getBuilderType().getInternalName()}); return _visitor; } @Override protected void addClassDefinition(ClassVisitor _visitor) { this.addDefaultConstructorDefinition(_visitor); this.addNewInstanceDefinition(_visitor); this.addNewArrayDefinition(_visitor); this.addSyntheticNewInstanceDefinition(_visitor, this.getInterfaceBaseType()); this.addSyntheticNewArrayDefinition(_visitor, this.getInterfaceBaseType()); } protected MethodVisitor addDefaultConstructorDefinition(ClassVisitor _visitor) { String tempDescriptor = "()V"; MethodVisitor tempVisitor = _visitor.visitMethod(Opcodes.ACC_PUBLIC, "<init>", tempDescriptor, (String)null, (String[])null); tempVisitor.visitCode(); tempVisitor.visitVarInsn(Opcodes.ALOAD, 0); tempVisitor.visitMethodInsn(Opcodes.INVOKESPECIAL, this.getSuperType().getInternalName(), "<init>", tempDescriptor); tempVisitor.visitInsn(Opcodes.RETURN); tempVisitor.visitMaxs(AUTO_STACK_SIZE, AUTO_LOCAL_VARS); tempVisitor.visitEnd(); return tempVisitor; } protected MethodVisitor addNewInstanceDefinition(ClassVisitor _visitor) { MethodVisitor tempVisitor = _visitor.visitMethod(Opcodes.ACC_PUBLIC, "newInstance", "()L" + this.getInterfaceType().getInternalName() + ";", (String)null, (String[])null); tempVisitor.visitCode(); tempVisitor.visitTypeInsn(Opcodes.NEW, this.getInstanceType().getInternalName()); tempVisitor.visitInsn(Opcodes.DUP); tempVisitor.visitMethodInsn(Opcodes.INVOKESPECIAL, this.getInstanceType().getInternalName(), "<init>", "()V"); tempVisitor.visitInsn(Opcodes.ARETURN); tempVisitor.visitMaxs(AUTO_STACK_SIZE, AUTO_LOCAL_VARS); tempVisitor.visitEnd(); return tempVisitor; } protected 
MethodVisitor addNewArrayDefinition(ClassVisitor _visitor) { MethodVisitor tempVisitor = _visitor.visitMethod(Opcodes.ACC_PUBLIC, "newArray", "(I)[L" + this.getInterfaceType().getInternalName() + ";", (String)null, (String[])null); tempVisitor.visitCode(); tempVisitor.visitVarInsn(Opcodes.ILOAD, 1); tempVisitor.visitTypeInsn(Opcodes.ANEWARRAY, this.getInterfaceType().getInternalName()); tempVisitor.visitInsn(Opcodes.ARETURN); tempVisitor.visitMaxs(AUTO_STACK_SIZE, AUTO_LOCAL_VARS); tempVisitor.visitEnd(); return tempVisitor; } protected MethodVisitor addSyntheticNewInstanceDefinition(ClassVisitor _visitor, Type _type) { MethodVisitor tempVisitor = _visitor.visitMethod(Opcodes.ACC_PUBLIC + Opcodes.ACC_VOLATILE + Opcodes.ACC_SYNTHETIC, "newInstance", "()" + _type.getDescriptor(), (String)null, (String[])null); tempVisitor.visitCode(); tempVisitor.visitVarInsn(Opcodes.ALOAD, 0); tempVisitor.visitMethodInsn(Opcodes.INVOKEVIRTUAL, this.getBuilderClassName(), "newInstance", "()L" + this.getInterfaceType().getInternalName() + ";"); tempVisitor.visitInsn(Opcodes.ARETURN); tempVisitor.visitMaxs(AUTO_STACK_SIZE, AUTO_LOCAL_VARS); tempVisitor.visitEnd(); return tempVisitor; } protected MethodVisitor addSyntheticNewArrayDefinition(ClassVisitor _visitor, Type _type) { MethodVisitor tempVisitor = _visitor.visitMethod(Opcodes.ACC_PUBLIC + Opcodes.ACC_VOLATILE + Opcodes.ACC_SYNTHETIC, "newArray", "(I)[" + _type.getDescriptor(), null, null); tempVisitor.visitCode(); tempVisitor.visitVarInsn(Opcodes.ALOAD, 0); tempVisitor.visitVarInsn(Opcodes.ILOAD, 1); tempVisitor.visitMethodInsn(Opcodes.INVOKEVIRTUAL, this.getBuilderClassName(), "newArray", "(I)[L" + this.getInterfaceType().getInternalName() + ";"); tempVisitor.visitInsn(Opcodes.ARETURN); tempVisitor.visitMaxs(AUTO_STACK_SIZE, AUTO_LOCAL_VARS); tempVisitor.visitEnd(); return tempVisitor; } protected static String getClassSignature(Type _mainType, Type _interfaceType, List<Type> _genericTypes) { StringBuilder tempBuilder = new 
StringBuilder(); tempBuilder.append(_mainType.getDescriptor()); tempBuilder.append("L").append(_interfaceType.getInternalName()); if (_genericTypes != null && _genericTypes.size() > 0) { tempBuilder.append("<"); _genericTypes.stream().forEach(item -> tempBuilder.append(item.getDescriptor())); tempBuilder.append(">"); } tempBuilder.append(";"); return tempBuilder.toString(); } protected static String toInternalName(String _className) { return _className.replace('.', '/'); } protected String toCanonicalName(String _className) { return _className.replace('/', '.'); } }
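getClassSignature above assembles a JVM generic class signature: the superclass descriptor, then `L<interface internal name>`, then optional `<...>` type arguments, then `;`. The string layout can be illustrated in Python (the descriptors below are examples, not values taken from the generated classes):

```python
def class_signature(main_desc, interface_internal, generic_descs):
    """Build a JVM generic class signature string, mirroring
    getClassSignature: <super descriptor>L<interface>[<args>];"""
    out = [main_desc, "L", interface_internal]
    if generic_descs:
        out.append("<" + "".join(generic_descs) + ">")
    out.append(";")
    return "".join(out)


sig = class_signature(
    "Ljava/lang/Object;",
    "com/twinkle/framework/asm/builder/InstanceBuilder",
    ["Lcom/example/Bean;"],  # hypothetical generic type argument
)
```

This is the signature string passed to `ClassVisitor.visit` in addClassDeclaration, which is how the generated builder class declares itself as `InstanceBuilder<Bean>` to the JVM's generic-type metadata.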
package picocli.codegen.aot.graalvm.processor; import picocli.codegen.aot.graalvm.DynamicProxyConfigGenerator; import picocli.codegen.aot.graalvm.ReflectionConfigGenerator; import picocli.codegen.aot.graalvm.ResourceConfigGenerator; import javax.annotation.processing.ProcessingEnvironment; import javax.annotation.processing.SupportedOptions; import static picocli.codegen.aot.graalvm.processor.ProxyConfigGen.OPTION_INTERFACE_CLASSES; import static picocli.codegen.aot.graalvm.processor.ResourceConfigGen.OPTION_BUNDLES; import static picocli.codegen.aot.graalvm.processor.ResourceConfigGen.OPTION_RESOURCE_REGEX; /** * @see ReflectionConfigGenerator * @see ResourceConfigGenerator * @see DynamicProxyConfigGenerator * @since 4.0 */ @SupportedOptions({NativeImageConfigGeneratorProcessor.OPTION_PROJECT, OPTION_BUNDLES, OPTION_RESOURCE_REGEX, OPTION_INTERFACE_CLASSES, ReflectConfigGen.OPTION_DISABLE, ResourceConfigGen.OPTION_DISABLE, ProxyConfigGen.OPTION_DISABLE, }) public class NativeImageConfigGeneratorProcessor extends AbstractCompositeGeneratorProcessor { /** * Base path where generated files will be written to: {@value}. */ public static final String BASE_PATH = "META-INF/native-image/picocli-generated/"; /** * Name of the annotation processor {@linkplain ProcessingEnvironment#getOptions() option} * that can be used to control the actual location where the generated file(s) * are to be written to, relative to the {@link #BASE_PATH}. * The value of this constant is {@value}. */ public static final String OPTION_PROJECT = "project"; @Override public synchronized void init(ProcessingEnvironment processingEnv) { super.init(processingEnv); generators.add(new ReflectConfigGen(processingEnv)); generators.add(new ResourceConfigGen(processingEnv)); generators.add(new ProxyConfigGen(processingEnv)); } }
// NewModelSlotBuilder returns an initialized modelSlotBuilder. func NewModelSlotBuilder(intent, name, typeName string) *modelSlotBuilder { return &modelSlotBuilder{ registry: l10n.NewRegistry(), intent: intent, name: name, typeName: typeName, samplesName: intent + "_" + name + l10n.KeyPostfixSamples, } }
def load_namespace_schemas(cls, schema_dir: Optional[str] = None) -> dict: metadata_testing_enabled = bool(os.getenv("METADATA_TESTING", 0)) namespace_schemas = {} if schema_dir is None: schema_dir = os.path.join(os.path.dirname(__file__), 'schemas') if not os.path.exists(schema_dir): raise RuntimeError("Metadata schema directory '{}' was not found!".format(schema_dir)) schema_files = [json_file for json_file in os.listdir(schema_dir) if json_file.endswith('.json')] for json_file in schema_files: schema_file = os.path.join(schema_dir, json_file) with io.open(schema_file, 'r', encoding='utf-8') as f: schema_json = json.load(f) namespace = schema_json.get('namespace') if namespace is None: warnings.warn("Schema file '{}' is missing its namespace attribute! Skipping...".format(schema_file)) continue if namespace == METADATA_TEST_NAMESPACE and not metadata_testing_enabled: continue if namespace not in namespace_schemas: namespace_schemas[namespace] = {} name = schema_json.get('name') if name is None: name = os.path.splitext(os.path.basename(schema_file))[0] schema_filter_class_name = schema_json.get('schema_filter_class_name', default_schema_filter_class_name) schema_filter_class = import_item(schema_filter_class_name) filtered_schema = schema_filter_class().post_load(name, schema_json) namespace_schemas[namespace][name] = filtered_schema return copy.deepcopy(namespace_schemas)
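load_namespace_schemas groups schema files by their `namespace` attribute and skips (with a warning) any schema that lacks one. The grouping logic on in-memory dicts, stripped of the file I/O and filter-class machinery (a sketch, not the method itself):

```python
import warnings


def group_schemas_by_namespace(schema_jsons):
    """Group schema dicts by their 'namespace' key, skipping any schema
    that lacks one — as load_namespace_schemas does for schema files."""
    namespace_schemas = {}
    for name, schema_json in schema_jsons.items():
        namespace = schema_json.get('namespace')
        if namespace is None:
            warnings.warn("Schema '{}' is missing its namespace attribute! Skipping...".format(name))
            continue
        namespace_schemas.setdefault(namespace, {})[name] = schema_json
    return namespace_schemas


schemas = group_schemas_by_namespace({
    "runtimes": {"namespace": "runtimes", "title": "Runtime"},
    "broken": {"title": "No namespace here"},         # skipped with a warning
    "code-snippet": {"namespace": "component-registries"},
})
```

The resulting two-level dict (`namespace -> name -> schema`) is the same shape the real method returns, which is why callers can look up a schema as `namespace_schemas[namespace][name]`.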
The man who came into the Cologne citizen registry office in early summer 2009 was in a hurry. He introduced himself as Michael Bodenheimer and requested a national identity card and a German passport. He was an Israeli citizen, he explained -- he had moved to Germany in mid-June. He offered proof in the form of an Israeli passport that had been issued in Tel Aviv in November 2008. As proof of his eligibility for German citizenship, he provided the marriage certificate of his parents, who had fled Germany to escape the Nazis. According to Article 116 of the German constitution, "former citizens of Germany, and their descendants, are entitled to German citizenship."

On June 18, 2009, the man picked up his passport and national identity card. Eight months later, police in Dubai identified Michael Bodenheimer as one of the alleged killers of Hamas weapons dealer Mahmoud al-Mabhouh. The man had used the German passport he procured in Cologne to enter the United Arab Emirates. The Cologne prosecutor's office initiated proceedings alleging document falsification. The Federal Prosecutor's Office is currently considering taking over the case because of the involvement of foreign agent activity.

Because all investigations thus far indicate that the Israeli foreign intelligence agency, the Mossad, is behind the hit, the Israeli ambassador, Emmanuel Nachshon, was invited to the Foreign Ministry in Berlin last Thursday. The German government's Middle East envoy, Andreas Michaelis, asked Nachshon for an explanation -- but so far the Israelis have yet to offer a convincing answer.

SPIEGEL reviewed the information Michael Bodenheimer submitted -- and made a surprising discovery. When he applied for his passport in Cologne last year, Bodenheimer declared an apartment in Cologne, on Eigelstein Strasse, as his new address. This is in one of the city's poorer districts, a place where tenants are constantly moving in and out -- perfect if one wants to go unnoticed.
His name isn't on any of the building's postboxes, and the pizza chef downstairs shrugs; he knows no one of that name. In all probability Michael Bodenheimer never lived here.

'I've Never Heard the Name Bodenheimer'

Bodenheimer declared in Cologne that he was born on July 15, 1967 in the Israeli village of Liman. That's in the north of Israel, just a few kilometers from the border with Lebanon. Liman is the English pronunciation of the German surname Lehman -- the village was founded by former Israeli soldiers in 1949 and named after US Senator Herbert H. Lehman (a relative of the brothers who founded the infamous US bank). About 60 families live here.

It's Thursday evening, and retiree Baruch van Tijn is on his way home from a meeting at the town hall. "I was born here," van Tijn says, "but I've never heard the name Bodenheimer." No, and he's never heard of anyone around 40 who left the village for Germany. Van Tijn uses his cellphone to call his brother and a neighbor, but they don't know anything either.

It's a new day in Herzliya, the Mediterranean city just a few kilometers north of Tel Aviv that is home to diplomats and the wealthy. The address Bodenheimer gave as his last place of residence before leaving for Germany is Yad Harutzim Street No. 7, in the business district of Herzliya. It's a modern four-story building. A high-end kitchenware shop is on the ground floor. The Sabbath has already started, but lights still shine from a few of the building's windows.

A blue board hangs by the building's reception desk. 19 firms are listed here, including one Michael Budenheimer (the letters u and o are identical in Hebrew). The security guard says he doesn't know anyone by that name; the firm that rented out the fourth floor moved out six months ago. "They change all the time," he says. On top of the board is the name "Top Office." On the Internet, "Top Office" offers all kinds of office services, including "virtual offices:" "Put your firm's name on the door!"
the Web site says. In under 24 hours, a company can be registered, the secretaries can start answering calls and the reception desk can receive visitors and packages.

When SPIEGEL called Top Office, the call was passed to a woman who called herself "Iris."

"Last name?"

"That's not relevant."

"Do you know a man or a firm named Michael Budenheimer?"

"He might have been a client of ours."

"You don't want to know for sure?"

"Why would I?"

"Because his name is connected with the incident in Dubai, and a man with the same name gave your address in Herzliya as his last place of residence."

"We moved out six months ago."

"Are you going to set up an investigation?"

"Thanks for your call. Good luck. Goodbye."

Two days later, the names Michael Budenheimer and Top Office had been removed from the board in the lobby of the building in Herzliya. The guard won't say who took them off, or why.

A Fake Address?

There's a lot of evidence to suggest that the Mossad established a fake address at No. 7 Yad Harutzim Street, just a few kilometers from their headquarters. The agency must have counted on the fact that the citizen registry office in Cologne would have someone from the German Embassy check Bodenheimer's background. A nameplate at a firm that rents office space would be a perfect cover. Whoever put Bodenheimer's name on the board must simply have forgotten to take it down. The fact that it was taken down in the middle of the night, as though by magic, doesn't make the Mossad any less suspicious.

Sometimes little things like this poke holes in an otherwise perfect operation. And sometimes these little things have far-reaching consequences. Whichever prosecutor takes up the case in Germany will make inquiries in Tel Aviv. One of their questions will be who this Michael Bodenheimer is -- this man whose trail leads from Cologne to the Middle East. In the diplomatic arena, things have become tense.
Israeli Foreign Minister Avigdor Lieberman headed to Brussels on Monday to meet with EU foreign ministers. Last week he made a curt statement: "We're confirming nothing and denying nothing." After the latest revelations, this may not be enough to satisfy his EU colleagues.
import { Body, ClassSerializerInterceptor, Controller, Delete, Get, HttpCode, Param, Patch, Post, Req, Session, SetMetadata, UseGuards, UseInterceptors, } from '@nestjs/common'; import { HttpInterceptor } from './interceptors/http.interceptor'; import { ApiBadRequestResponse, ApiBody, ApiConflictResponse, ApiCookieAuth, ApiCreatedResponse, ApiForbiddenResponse, ApiNoContentResponse, ApiOkResponse, ApiParam, ApiPreconditionFailedResponse, ApiTags, ApiUnauthorizedResponse, ApiUnprocessableEntityResponse, } from '@nestjs/swagger'; import { ApiService } from './api.service'; import { LoginUserDto } from '../user/dto/login-user.dto'; import { UserEntity } from '../user/entities/user.entity'; import { Observable, of } from 'rxjs'; import { CreateUserDto } from '../user/dto/create-user.dto'; import * as secureSession from 'fastify-secure-session'; import { tap } from 'rxjs/operators'; import { SecurityService } from '../security/security.service'; import { KeySessionDataDto } from '../security/dto/key-session-data.dto'; import { AuthGuard } from '../security/guards/auth.guard'; import { SessionDataDto } from '../security/dto/session-data.dto'; import { UserIdParams } from './validators/user-id.params'; import { PatchUserDto } from '../user/dto/patch-user.dto'; import { OwnerGuard } from '../security/guards/owner.guard'; import { CredentialEntity } from '../credential/entities/credential.entity'; import { FastifyRequest } from 'fastify'; import { CredentialsListEntity } from '../credential/entities/credentials-list.entity'; import { CredentialIdParams } from './validators/credential-id.params'; import { PatchCredentialDto } from '../credential/dto/patch-credential.dto'; import { StartAttestationDto } from '../webauthn/dto/start-attestation.dto'; import { WebAuthnAttestationSession } from '../webauthn/interfaces/webauthn-attestation-session.interface'; import { PublicKeyCredentialCreationOptionsEntity } from
'../webauthn/entities/public-key-credential-creation-options.entity'; import { WebAuthnSessionGuard } from '../security/guards/webauthn-session.guard'; import { VerifyAttestationDto } from '../webauthn/dto/verify-attestation.dto'; import { WebAuthnAssertionSession } from '../webauthn/interfaces/webauthn-assertion-session.interface'; import { PublicKeyCredentialRequestOptionsEntity } from '../webauthn/entities/public-key-credential-request-options.entity'; import { VerifyAssertionDto } from '../webauthn/dto/verify-assertion.dto'; @ApiTags('api') @Controller('api') @UseInterceptors(ClassSerializerInterceptor) @UseInterceptors(HttpInterceptor) export class ApiController { /** * Class constructor * * @param {ApiService} _apiService dependency injection of ApiService instance * @param {SecurityService} _securityService dependency injection of SecurityService instance */ constructor( private readonly _apiService: ApiService, private readonly _securityService: SecurityService, ) {} /** * Handler to answer to POST /api/login route * * @param {LoginUserDto} loginUser payload to log in the user * @param {secureSession.Session} session secure data for the current session * * @return Observable<UserEntity> */ @ApiOkResponse({ description: 'Returns the successful login data', type: UserEntity, }) @ApiBadRequestResponse({ description: "The payload provided to log in the user isn't good", }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiUnauthorizedResponse({ description: "Username and Password don't match" }) @ApiPreconditionFailedResponse({ description: 'An error occurred during login process', }) @ApiBody({ description: 'Payload to log in an user', type: LoginUserDto }) @HttpCode(200) @Post('login') login( @Body() loginUser: LoginUserDto, @Session() session: secureSession.Session, ): Observable<UserEntity> { return this._apiService.login(loginUser).pipe( tap((user: UserEntity) => 
this._securityService.setSessionData(session, 'user', user), ), tap(() => this._securityService.setSessionData(session, 'previous_step', 'login'), ), tap(() => this._securityService.setSessionData(session, 'auth_type', 'login'), ), ); } /** * Handler to answer to POST /webauthn/register/start route * * @param {StartAttestationDto} dto payload to generate attestation options * @param {secureSession.Session} session secure data for the current session * * @return {Observable<PublicKeyCredentialCreationOptionsEntity>} attestation options object */ @ApiOkResponse({ description: 'Returns the successful attestation options object', type: PublicKeyCredentialCreationOptionsEntity, }) @ApiBadRequestResponse({ description: "The payload provided to get attestation options isn't good", }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiBody({ description: 'Payload to start webauthn registration', type: StartAttestationDto, }) @ApiCookieAuth() @UseGuards(AuthGuard) @HttpCode(200) @Post('/webauthn/register/start') startAttestation( @Body() dto: StartAttestationDto, @Session() session: secureSession.Session, ): Observable<PublicKeyCredentialCreationOptionsEntity> { return this._apiService .startAttestation(dto.authenticator_attachment, session) .pipe( tap((_: PublicKeyCredentialCreationOptionsEntity) => this._securityService.setSessionData( session, 'webauthn_attestation', { challenge: _.challenge, user_handle: _.user.id, authenticator_attachment: _.authenticatorSelection.authenticatorAttachment, } as WebAuthnAttestationSession, ), ), ); } /** * Handler to answer to POST /webauthn/register/finish route * * @param {VerifyAttestationDto} attestation payload to verify attestation * @param {secureSession.Session} session secure data for the current session * @param {FastifyRequest} request current request object * * @return {Observable<CredentialEntity>} the 
credential created after attestation verification */ @ApiOkResponse({ description: 'The attestation has been successfully verified and the credential has been successfully created', type: CredentialEntity, }) @ApiBadRequestResponse({ description: "The payload provided to verify attestation isn't good", }) @ApiConflictResponse({ description: 'The credential already exists in the database', }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiForbiddenResponse({ description: 'Missing WebAuthn session data' }) @ApiPreconditionFailedResponse({ description: 'An error occurred during attestation verification process', }) @ApiBody({ description: 'Payload to verify webauthn attestation', type: VerifyAttestationDto, }) @ApiCookieAuth() @SetMetadata('webauthn_session', 'webauthn_attestation') @UseGuards(AuthGuard, WebAuthnSessionGuard) @HttpCode(200) @Post('/webauthn/register/finish') verifyAttestation( @Body() attestation: VerifyAttestationDto, @Session() session: secureSession.Session, @Req() request: FastifyRequest, ): Observable<CredentialEntity> { return this._apiService .verifyAttestation(attestation, session, request.headers['user-agent']) .pipe( tap(() => this._securityService.cleanSessionData( session, 'webauthn_attestation', ), ), ); } /** * Handler to answer to GET /webauthn/verify/start route * * @param {secureSession.Session} session secure data for the current session * * {Observable<PublicKeyCredentialRequestOptionsEntity>} assertion options object */ @ApiOkResponse({ description: 'Returns the successful assertion options object', type: PublicKeyCredentialRequestOptionsEntity, }) @Get('/webauthn/verify/start') startAssertion( @Session() session: secureSession.Session, ): Observable<PublicKeyCredentialRequestOptionsEntity> { return this._apiService.startAssertion().pipe( tap((_: PublicKeyCredentialRequestOptionsEntity) => 
this._securityService.setSessionData(session, 'webauthn_assertion', { challenge: _.challenge, } as WebAuthnAssertionSession), ), ); } /** * Handler to answer to POST /webauthn/verify/finish route * * @param {VerifyAssertionDto} assertion payload to verify assertion * @param {secureSession.Session} session secure data for the current session * * @return {Observable<UserEntity>} the user authenticated after assertion verification */ @ApiOkResponse({ description: 'The assertion has been successfully verified and the user has been successfully authenticated', type: UserEntity, }) @ApiBadRequestResponse({ description: "The payload provided to verify assertion isn't good", }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiUnauthorizedResponse({ description: 'Could not find authenticator for the user', }) @ApiForbiddenResponse({ description: 'Missing WebAuthn session data' }) @ApiPreconditionFailedResponse({ description: 'An error occurred during assertion verification process', }) @ApiBody({ description: 'Payload to verify webauthn assertion', type: VerifyAssertionDto, }) @SetMetadata('webauthn_session', 'webauthn_assertion') @UseGuards(WebAuthnSessionGuard) @HttpCode(200) @Post('/webauthn/verify/finish') verifyAssertion( @Body() assertion: VerifyAssertionDto, @Session() session: secureSession.Session, ): Observable<UserEntity> { return this._apiService.finishAssertion(assertion, session).pipe( tap((user: UserEntity) => this._securityService.setSessionData(session, 'user', user), ), tap(() => this._securityService.setSessionData( session, 'previous_step', 'webauthn', ), ), tap(() => this._securityService.setSessionData(session, 'auth_type', 'webauthn'), ), tap(() => this._securityService.cleanSessionData(session, 'webauthn_assertion'), ), ); } /** * Handler to answer to POST /api/users route * * @param {CreateUserDto} user payload to create the new user * * @return Observable<UserEntity> */
@ApiCreatedResponse({ description: 'The user has been successfully created', type: UserEntity, }) @ApiConflictResponse({ description: 'The username already exists in the database', }) @ApiBadRequestResponse({ description: "The payload provided to create the user isn't good", }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiBody({ description: 'Payload to create a new user', type: CreateUserDto }) @Post('users') createUser(@Body() user: CreateUserDto): Observable<UserEntity> { return this._apiService.createUser(user); } /** * Handler to answer to GET /api/logged-in route * * @param {secureSession.Session} session secure data for the current session * * @return Observable<UserEntity> */ @ApiOkResponse({ description: 'Returns the user stored in the secure session', type: UserEntity, }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiCookieAuth() @UseGuards(AuthGuard) @Get('logged-in') loggedIn(@Session() session: secureSession.Session): Observable<UserEntity> { return this._securityService.getLoggedInUser(session); } /** * Handler to answer to PATCH /api/users/:id route * * @param {UserIdParams} params list of route params to take user id * @param {PatchUserDto} user payload to patch the user * @param {secureSession.Session} session secure data for the current session * * @return Observable<UserEntity> */ @ApiOkResponse({ description: 'The user has been successfully patched', type: UserEntity, }) @ApiConflictResponse({ description: 'The username already exists in the database', }) @ApiBadRequestResponse({ description: "The payload or the parameter provided to patch the user isn't good", }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiPreconditionFailedResponse({ description: 'An error occurred during patch process', }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiForbiddenResponse({ description: 'User is
not the owner of the resource', }) @ApiParam({ name: 'id', description: 'Unique identifier of the user in the database', type: String, allowEmptyValue: false, }) @ApiBody({ description: 'Payload to patch a user', type: PatchUserDto }) @ApiCookieAuth() @UseGuards(AuthGuard, OwnerGuard) @Patch('users/:id') patchUser( @Param() params: UserIdParams, @Body() user: PatchUserDto, @Session() session: secureSession.Session, ): Observable<UserEntity> { return this._apiService .patchUser(params.id, user) .pipe( tap((user: UserEntity) => this._securityService.setSessionData(session, 'user', user), ), ); } /** * Handler to answer to GET /api/users/:id/credentials route * * @param {UserIdParams} params path parameters * @param {FastifyRequest} request current request object * * @return Observable<CredentialsListEntity> */ @ApiOkResponse({ description: 'Returns an array of credentials', type: CredentialsListEntity, }) @ApiBadRequestResponse({ description: "The parameter provided to list the credentials isn't good", }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiNoContentResponse({ description: 'No credential exists in the database for this user', }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiForbiddenResponse({ description: 'User is not the owner of the resource', }) @ApiParam({ name: 'id', description: 'Unique identifier of the user in the database', type: String, allowEmptyValue: false, }) @ApiCookieAuth() @UseGuards(AuthGuard, OwnerGuard) @Get('users/:id/credentials') findCredentialsListForUser( @Param() params: UserIdParams, @Req() request: FastifyRequest, ): Observable<CredentialsListEntity | void> { return this._apiService.findCredentialsListForUser( params.id, request.headers['user-agent'], ); } /** * Handler to answer to PATCH /api/users/:id/credentials/:credId route * * @param {CredentialIdParams} params path parameters * @param {PatchCredentialDto} credential
payload to patch the credential * * @return Observable<CredentialEntity> */ @ApiOkResponse({ description: 'The credential has been successfully patched', type: CredentialEntity, }) @ApiBadRequestResponse({ description: "The payload or the parameters provided to patch the credential isn't good", }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiPreconditionFailedResponse({ description: 'An error occurred during patch process', }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiForbiddenResponse({ description: 'User is not the owner of the resource', }) @ApiParam({ name: 'id', description: 'Unique identifier of the user in the database', type: String, allowEmptyValue: false, }) @ApiParam({ name: 'credId', description: 'Unique identifier of the credential in the database', type: String, allowEmptyValue: false, }) @ApiBody({ description: 'Payload to patch a credential', type: PatchCredentialDto, }) @ApiCookieAuth() @UseGuards(AuthGuard, OwnerGuard) @Patch('users/:id/credentials/:credId') patchCredential( @Param() params: CredentialIdParams, @Body() credential: PatchCredentialDto, ): Observable<CredentialEntity> { return this._apiService.patchCredential( params.credId, params.id, credential, ); } /** * Handler to answer to POST /api/users/:id/credentials/mock route * * @param {UserIdParams} params path parameters * @param {PatchCredentialDto} dto payload to create the credential mock * @param {FastifyRequest} request current request object * * @return Observable<CredentialEntity> */ /*@ApiCreatedResponse({ description: 'The credential mock has been successfully created', type: CredentialEntity }) @ApiConflictResponse({ description: 'The credential mock already exists in the database' }) @ApiBadRequestResponse({ description: 'The payload or the parameter provided to create the credential mock isn\'t good' }) @ApiUnprocessableEntityResponse({ description: 'The request can\'t be performed in 
the database' }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiForbiddenResponse({ description: 'User is not the owner of the resource' }) @ApiParam({ name: 'id', description: 'Unique identifier of the user in the database', type: String, allowEmptyValue: false, }) @ApiBody({ description: 'Payload to create a credential mock', type: StartAttestationDto }) @ApiCookieAuth() @UseGuards(AuthGuard, OwnerGuard) @Post('users/:id/credentials/mock') createCredentialMock(@Param() params: UserIdParams, @Body() dto: StartAttestationDto, @Req() request: FastifyRequest): Observable<CredentialEntity> { return this._apiService.createCredentialMock(params.id, dto, request.headers[ 'user-agent' ]); }*/ /** * Handler to answer to DELETE /api/users/:id/credentials/:credId route * * @param {CredentialIdParams} params path parameters * * @return {Observable<void>} */ @ApiNoContentResponse({ description: 'The credential has been successfully deleted', }) @ApiBadRequestResponse({ description: "The payload or the parameters provided to remove the credential isn't good", }) @ApiUnprocessableEntityResponse({ description: "The request can't be performed in the database", }) @ApiPreconditionFailedResponse({ description: 'An error occurred during remove process', }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiForbiddenResponse({ description: 'User is not the owner of the resource', }) @ApiParam({ name: 'id', description: 'Unique identifier of the user in the database', type: String, allowEmptyValue: false, }) @ApiParam({ name: 'credId', description: 'Unique identifier of the credential in the database', type: String, allowEmptyValue: false, }) @ApiCookieAuth() @UseGuards(AuthGuard, OwnerGuard) @Delete('users/:id/credentials/:credId') removeCredential(@Param() params: CredentialIdParams): Observable<void> { return this._apiService.removeCredential(params.credId, params.id); } /** * Handler to answer to GET /api/clean-session-data route * * 
@param {KeySessionDataDto} keySessionData payload to clear a session value * @param {secureSession.Session} session secure data for the current session * * @return Observable<void> */ @ApiNoContentResponse({ description: 'The value in session has been successfully deleted', }) @ApiBadRequestResponse({ description: 'Parameter provided is not good' }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiBody({ description: 'Payload to clear a session value', type: KeySessionDataDto, }) @ApiCookieAuth() @UseGuards(AuthGuard) @Patch('clean-session-data') cleanSessionData( @Body() keySessionData: KeySessionDataDto, @Session() session: secureSession.Session, ): Observable<void> { return of( this._securityService.cleanSessionData(session, keySessionData.key), ); } /** * Handler to answer to PATCH /api/set-session-data route * * @param {SessionDataDto} sessionData payload to set a session value * @param {secureSession.Session} session secure data for the current session * * @return Observable<void> */ @ApiNoContentResponse({ description: 'The value in session has been successfully set', }) @ApiBadRequestResponse({ description: 'Parameter provided is not good' }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiBody({ description: 'Payload to set a session value', type: SessionDataDto, }) @ApiCookieAuth() @UseGuards(AuthGuard) @Patch('set-session-data') setSessionData( @Body() sessionData: SessionDataDto, @Session() session: secureSession.Session, ): Observable<void> { return of( this._securityService.setSessionData( session, sessionData.key, sessionData.value, ), ); } /** * Handler to answer to DELETE /api/delete-session route * * @param {secureSession.Session} session secure data for the current session * * @return Observable<void> */ @ApiNoContentResponse({ description: 'The logout process has been successfully finished', }) @ApiUnauthorizedResponse({ description: 'User is not logged in' }) @ApiCookieAuth() @UseGuards(AuthGuard)
@Delete('delete-session') deleteSession(@Session() session: secureSession.Session): Observable<void> { return of(this._securityService.deleteSession(session)); } }
import { createAction, props } from '@ngrx/store'; import { IOwner, IPet } from 'src/app/shared/interfaces'; const ownerDetailNamespace = `[OWNER DETAIL]`; export const ownerDetailSetOwner = createAction(`${ownerDetailNamespace} Set Owner`, props<{ owner: IOwner }>()); export const ownerDetailSetIsLoading = createAction(`${ownerDetailNamespace} Set Is Loading`, props<{ isLoading: boolean }>()); export const ownerDetailClear = createAction(`${ownerDetailNamespace} Clear`); export const ownerDetailOwnerDetailLoadError = createAction(`${ownerDetailNamespace} Load Owner Error`, props<{ error: string }>()); export const ownerDetailLoadOwnerDetail = createAction(`${ownerDetailNamespace} Load Owner`); const ownerDeleteNamespace = `[OWNER DELETE]`; export const ownerDeleteSetOwner = createAction(`${ownerDeleteNamespace} Set Owner`, props<{ owner: IOwner }>()); export const ownerDeleteSetIsLoading = createAction(`${ownerDeleteNamespace} Set Is Loading`, props<{ isLoading: boolean }>()); export const ownerDeleteClear = createAction(`${ownerDeleteNamespace} Clear`); export const ownerDeleteOwnerDeleteLoadError = createAction(`${ownerDeleteNamespace} Load Owner Error`, props<{ error: string }>()); export const ownerDeleteLoadOwnerDelete = createAction(`${ownerDeleteNamespace} Load Owner`); const ownerEditNamespace = `[Owner EDIT]`; export const ownerEditSetOwner = createAction(`${ownerEditNamespace} Set Owner`, props<{ owner: IOwner }>()); export const ownerEditSetIsLoading = createAction(`${ownerEditNamespace} Set Is Loading`, props<{ isLoading: boolean }>()); export const ownerEditClear = createAction(`${ownerEditNamespace} Clear`); export const ownerEditOwnerEditLoadError = createAction(`${ownerEditNamespace} Load Owner Error`, props<{ error: string }>()); export const ownerEditLoadOwnerEdit = createAction(`${ownerEditNamespace} Load Owner`); const ownerListNamespace = `[OWNER LIST]`; export const ownerListSetIsLoading = createAction(`${ownerListNamespace} Set Is Loading`, 
props<{ isLoading: boolean }>()); export const ownerListLoadOwnerList = createAction(`${ownerListNamespace} Load Owner List`); export const ownerListOwnerListLoadError = createAction(`${ownerListNamespace} Load Owner List Error`, props<{ error: string }>()); export const ownerListSetOwnerList = createAction(`${ownerListNamespace} Set Owner List`, props<{ ownerList: IOwner[] }>()); export const ownerListClear = createAction(`${ownerListNamespace} Clear`); // const ownerNewNamespace = `[OWNER NEW]`; // export const ownerNewSetIsLoading = createAction(`${ownerNewNamespace} Set Is Loading`, props<{ isLoading: boolean }>()); // export const ownerNewLoadOwnerList = createAction(`${ownerNewNamespace} Load Owner List`); // export const petNewOwnerListLoadError = createAction(`${ownerNewNamespace} Load Owner List Error`, props<{ error: string }>()); // export const petNewSetOwnerList = createAction(`${ownerNewNamespace} Set Owner List`, props<{ ownerList: IOwner[] }>()); // export const petNewClear = createAction(`${ownerNewNamespace} Clear`);
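Each feature slice above follows the same four-action pattern (set, set-is-loading, error, clear). As a dependency-free sketch of how a reducer might consume the owner-detail group (this reducer is hypothetical and not part of the app; real code would build it with `createReducer`/`on` from `@ngrx/store`, and `IOwner` is simplified here):

```typescript
// Hypothetical reducer for the owner-detail slice. Actions are modeled as a
// plain discriminated union so the sketch runs without @ngrx/store.
interface IOwner {
  id: number;
  name: string;
}

interface OwnerDetailState {
  owner: IOwner | null;
  isLoading: boolean;
  error: string | null;
}

const initialState: OwnerDetailState = { owner: null, isLoading: false, error: null };

type OwnerDetailAction =
  | { type: '[OWNER DETAIL] Set Owner'; owner: IOwner }
  | { type: '[OWNER DETAIL] Set Is Loading'; isLoading: boolean }
  | { type: '[OWNER DETAIL] Load Owner Error'; error: string }
  | { type: '[OWNER DETAIL] Clear' };

function ownerDetailReducer(
  state: OwnerDetailState = initialState,
  action: OwnerDetailAction,
): OwnerDetailState {
  switch (action.type) {
    case '[OWNER DETAIL] Set Owner':
      // A successful load stores the owner and clears any previous error.
      return { ...state, owner: action.owner, error: null };
    case '[OWNER DETAIL] Set Is Loading':
      return { ...state, isLoading: action.isLoading };
    case '[OWNER DETAIL] Load Owner Error':
      return { ...state, error: action.error, isLoading: false };
    case '[OWNER DETAIL] Clear':
      // Resetting to the initial state mirrors the Clear action's intent.
      return initialState;
    default:
      return state;
  }
}
```

The delete, edit and list slices would follow the same switch shape under their own namespaces.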
// Copyright 2018 The Outline Authors // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. export type AccessKeyId = string; export type AccessKeyMetricsId = string; // Parameters needed to access a Shadowsocks proxy. export interface ProxyParams { // Hostname of the proxy readonly hostname: string; // Number of the port where the Shadowsocks service is running. readonly portNumber: number; // The Shadowsocks encryption method being used. readonly encryptionMethod: string; // The password for the encryption. readonly password: string; } // Parameters needed to limit access key data usage over a sliding window. export interface AccessKeyQuota { // The allowed metered data transfer measured in bytes. readonly data: {bytes: number}; // The sliding window size in hours. readonly window: {hours: number}; } // Parameters needed to enforce an access key data transfer quota. export interface AccessKeyQuotaUsage { // Data transfer quota on this access key. readonly quota: AccessKeyQuota; // Data transferred by this access key over the quota window. readonly usage: {bytes: number}; } // AccessKey is what admins work with. It gives ProxyParams a name and identity. export interface AccessKey { // The unique identifier for this access key. readonly id: AccessKeyId; // Admin-controlled, editable name for this access key. readonly name: string; // Used in metrics reporting to decouple from the real id. Can change.
readonly metricsId: AccessKeyMetricsId; // Parameters to access the proxy readonly proxyParams: ProxyParams; // Admin-controlled, data transfer quota for this access key. Unlimited if unset. readonly quotaUsage?: AccessKeyQuotaUsage; // Returns whether the access key has exceeded its data transfer quota. isOverQuota(): boolean; } export interface AccessKeyRepository { // Creates a new access key. Parameters are chosen automatically. createNewAccessKey(): Promise<AccessKey>; // Removes the access key given its id. Returns true if successful. removeAccessKey(id: AccessKeyId): boolean; // Lists all existing access keys listAccessKeys(): AccessKey[]; // Apply the specified update to the specified access key. // Returns true if successful. renameAccessKey(id: AccessKeyId, name: string): boolean; // Gets the metrics id for a given Access Key. getMetricsId(id: AccessKeyId): AccessKeyMetricsId|undefined; // Sets the transfer quota for the specified access key. Returns true if successful. setAccessKeyQuota(id: AccessKeyId, quota: AccessKeyQuota): Promise<boolean>; // Clears the transfer quota for the specified access key. Returns true if successful. removeAccessKeyQuota(id: AccessKeyId): Promise<boolean>; }
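The `AccessKey` contract above pairs a key's `quota` with its measured windowed `usage`. A minimal standalone sketch of the `isOverQuota()` check (a hypothetical helper, not the Outline implementation; it assumes "over quota" means the windowed usage has reached the quota's byte allowance, and that a missing quota means unlimited):

```typescript
// Local copies of the quota shapes from access_key.ts, so the sketch is
// self-contained.
interface AccessKeyQuota {
  readonly data: { bytes: number };
  readonly window: { hours: number };
}

interface AccessKeyQuotaUsage {
  readonly quota: AccessKeyQuota;
  readonly usage: { bytes: number };
}

// Hypothetical check: no quota set means unlimited, i.e. never over quota.
function isOverQuota(quotaUsage?: AccessKeyQuotaUsage): boolean {
  if (!quotaUsage) {
    return false;
  }
  return quotaUsage.usage.bytes >= quotaUsage.quota.data.bytes;
}
```

A repository implementation would recompute `usage.bytes` over the sliding window before consulting this predicate.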
import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import javax.imageio.ImageIO; public enum ImageProcessing { ; public static void main(String[] args) throws IOException { BufferedImage img = ImageIO.read(new File("example.png")); BufferedImage bwimg = toBlackAndWhite(img); ImageIO.write(bwimg, "png", new File("example-bw.png")); } private static int luminance(int rgb) { int r = (rgb >> 16) & 0xFF; int g = (rgb >> 8) & 0xFF; int b = rgb & 0xFF; return (r + b + g) / 3; } private static BufferedImage toBlackAndWhite(BufferedImage img) { int width = img.getWidth(); int height = img.getHeight(); int[] histo = computeHistogram(img); int median = getMedian(width * height, histo); BufferedImage bwimg = new BufferedImage(width, height, img.getType()); for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { bwimg.setRGB(x, y, luminance(img.getRGB(x, y)) >= median ? 0xFFFFFFFF : 0xFF000000); } } return bwimg; } private static int[] computeHistogram(BufferedImage img) { int width = img.getWidth(); int height = img.getHeight(); int[] histo = new int[256]; for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { histo[luminance(img.getRGB(x, y))]++; } } return histo; } private static int getMedian(int total, int[] histo) { int median = 0; int sum = 0; for (int i = 0; i < histo.length && sum + histo[i] < total / 2; i++) { sum += histo[i]; median++; } return median; } }
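The Java program above binarizes an image by thresholding each pixel's luminance against the median, found by walking a 256-bin cumulative histogram. The median walk in isolation, as a hypothetical TypeScript port of `getMedian` (not part of the Java file):

```typescript
// Hypothetical port of the Java getMedian above: advance through the
// histogram until adding the next bin would cover half of all pixels.
function getMedian(total: number, histo: number[]): number {
  let median = 0;
  let sum = 0;
  for (let i = 0; i < histo.length && sum + histo[i] < total / 2; i++) {
    sum += histo[i];
    median++;
  }
  return median;
}
```

The returned bin index then serves as the luminance threshold separating black from white pixels.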
A survey for the future

The Cytopathology Readership Survey, conducted in the spring of 1998, had a response rate that was 25% above the anticipated level. The results of the questionnaire can therefore be taken as a good indication of what matters most to readers and contributors and will help to shape the content and format of the journal as it approaches its second decade. It is worth looking at some of the highlights.

•The majority of respondents were BSCC members, working as consultants in District General Hospitals, with an almost equal interest in gynaecological and non-gynaecological cytology. Replies were, however, received from 23 countries, somewhat in proportion to the journal circulation in countries outside the UK.

•The most valued features of the journal were review articles, editorials, original papers and commentaries. Supplements were also felt to be important. It was gratifying to find that the overall level of the journal was deemed ‘about right’ by 86% of respondents.

•Potential authors choose where to publish mainly according to impact factor, perceived prestige, circulation and speed of publication. Colour reproduction was, surprisingly, to the editor at any rate, of relatively low importance. Speed of publication was specifically mentioned by some as a problem for Cytopathology. Direct comparison with Acta Cytologica and Diagnostic Cytopathology, launched in 1957 and 1985, respectively, indicated a near equivalent ranking for the three options, a promising result for the newcomer.

•The future developments proposed reflect the needs of District General Hospital consultants: journal watches, educational case reports with case commentaries, laboratory protocols and guidelines. Such initiatives will make extra demands on contributors, the Editorial Board and journal space, but these are not reasons for not considering the suggestions.
•Finally it was dismaying to find that very few respondents had visited the Cytopathology Website or accessed Cytopathology on-line. From 1999 the journal will be available through the Blackwell Science on-line journal publishing service, called Synergy. It is one of the most advanced electronic systems offered by publishers at present, and should open up the contents of the journal to a much wider audience around the world. The Readership Survey has proved to be an extremely worthwhile exercise. Readers will note that the current issue for 1999 does include two review articles and a laboratory protocol as well as a range of interesting original articles. The correspondence columns have changed to a more condensed format to allow more space for publication of manuscripts, and other measures to expedite the publication time are in hand. Thank you, all respondents, for the help you have given. Cytopathology will certainly benefit from your views.
// Copyright (c) Microsoft Corporation. // Licensed under the MIT License. #![allow(clippy::manual_flatten)] use std::collections::{HashMap, HashSet}; use std::ffi::OsStr; use std::io; use std::path::Path; use std::process::Command; use lazy_static::lazy_static; use regex::Regex; pub fn find_missing(mut cmd: Command) -> Result<HashSet<MissingDynamicLibrary>, io::Error> { // Check for missing _linked_ dynamic libraries. // // We must do this first to avoid false positives or negatives when parsing `LD_DEBUG` // output. The debug output gets truncated when a linked shared library is not found, // since any in-progress searches are aborted. let linked = LinkedDynamicLibraries::search(cmd.get_program())?; let missing_linked = linked.not_found(); if !missing_linked.is_empty() { return Ok(missing_linked); } // Check for missing _loaded_ dynamic libraries. // // Invoke the command with `LD_DEBUG` set, and parse the debug output. cmd.env("LD_DEBUG", "libs"); let output = cmd.output()?; let logs = LdDebugLogs::parse(&*output.stderr); Ok(logs.missing()) } #[derive(Clone, Debug, Eq, Hash, PartialEq)] pub struct MissingDynamicLibrary { pub name: String, } /// Dynamic library searches, as extracted from the dynamic linker debug log output /// obtained by setting `LD_DEBUG=libs`. /// /// For more info about `LD_DEBUG`, see the docs for ld.so(8). pub struct LdDebugLogs { pub searches: HashMap<LdDebugSearchQuery, LdDebugSearchResult>, } impl LdDebugLogs { /// Extract attempted library searches from the debug logs. /// /// A search query is detected on a thread if we find a message like `find /// library=libmycode.so`. /// /// We mark a library as found if and only if we find a matching log message like /// `calling init: /path/to/libmycode.so`, on the same thread. /// /// This is only really useful for detecting `dlopen()` failures, when dynamic linking /// succeeds.
If process startup fails due to a missing linked library dependency, /// then the dynamic linker's search will stop early, and debug logs will be logically /// truncated. When that happens, we can get both false negatives (we'll only see the /// _first_ missing library) and false positives (we won't see evidence of module /// initialization for libraries that _were_ found). pub fn parse<R: io::Read>(readable: R) -> Self { use std::io::prelude::*; let mut searches = HashMap::default(); let reader = io::BufReader::new(readable); for line in reader.lines() { // If ok, line is valid UTF-8. if let Ok(line) = line { if let Some(query) = LdDebugSearchQuery::parse(&line) { searches.insert(query, LdDebugSearchResult::NotFound); continue; } if let Some(found) = FoundLibrary::parse(&line) { let query = found.query(); let result = LdDebugSearchResult::Found(found); searches.insert(query, result); continue; } } } Self { searches } } pub fn missing(&self) -> HashSet<MissingDynamicLibrary> { let mut missing = HashSet::default(); for (query, result) in &self.searches { if *result == LdDebugSearchResult::NotFound { let lib = MissingDynamicLibrary { name: query.name.clone(), }; missing.insert(lib); } } missing } } #[derive(Clone, Debug, Eq, Hash, PartialEq)] pub struct LdDebugSearchQuery { /// PID of the thread where the search query occurred. pub pid: u32, /// Name of the shared library that was searched for. pub name: String, } impl LdDebugSearchQuery { pub fn parse(text: &str) -> Option<Self> { let captures = FIND_LIBRARY_RE.captures(text)?; let pid = captures.get(1)?.as_str().parse().ok()?; let name = captures.get(2)?.as_str().to_owned(); Some(Self { pid, name }) } } #[derive(Clone, Debug, Eq, PartialEq)] pub enum LdDebugSearchResult { NotFound, Found(FoundLibrary), } #[derive(Clone, Debug, Eq, PartialEq)] pub struct FoundLibrary { /// PID of the thread where the satisfied search query occurred. pub pid: u32, /// Absolute path of the found shared library. 
/// /// Guaranteed to have a file name (as a `Path`). pub path: String, } impl FoundLibrary { pub fn name(&self) -> String { // Can't panic due to the check in `FoundLibrary::parse()`. let name = Path::new(&self.path).file_name().unwrap(); // Can't panic because `self.path` is a `String`. let name = name.to_str().unwrap(); name.to_owned() } pub fn query(&self) -> LdDebugSearchQuery { LdDebugSearchQuery { pid: self.pid, name: self.name(), } } } impl FoundLibrary { pub fn parse(text: &str) -> Option<Self> { let captures = INIT_LIBRARY_RE.captures(text)?; let pid = captures.get(1)?.as_str().parse().ok()?; let path = captures.get(2)?.as_str().to_owned(); // Ensure `path` is a file path, and has a file name. Path::new(&path).file_name()?; Some(Self { pid, path }) } } lazy_static! { // Captures thread PID, file name of requested library. static ref FIND_LIBRARY_RE: Regex = Regex::new(r"(\d+):\s+find library=(.+) \[\d+\]; searching").unwrap(); // Captures thread PID, absolute path of found library. static ref INIT_LIBRARY_RE: Regex = Regex::new(r"(\d+):\s+calling init: (.+)").unwrap(); // Captures shared library name, absolute path of found library. static ref LDD_FOUND: Regex = Regex::new(r"([^\s]+) => (.+) \(0x[0-9a-f]+\)").unwrap(); // Captures shared library name.
static ref LDD_NOT_FOUND: Regex = Regex::new(r"([^\s]+) => not found").unwrap(); } struct LddFound { name: String, path: String, } impl LddFound { pub fn parse(text: &str) -> Option<Self> { let captures = LDD_FOUND.captures(text)?; let name = captures.get(1)?.as_str().to_owned(); let path = captures.get(2)?.as_str().to_owned(); Some(Self { name, path }) } } struct LddNotFound { name: String, } impl LddNotFound { pub fn parse(text: &str) -> Option<Self> { let captures = LDD_NOT_FOUND.captures(text)?; let name = captures.get(1)?.as_str().to_owned(); Some(Self { name }) } } #[derive(Debug)] pub struct LinkedDynamicLibraries { pub libraries: HashMap<String, Option<String>>, } impl LinkedDynamicLibraries { pub fn search(module: impl AsRef<OsStr>) -> Result<Self, io::Error> { let mut cmd = Command::new("ldd"); cmd.arg(module); let output = cmd.output()?; let linked = Self::parse(&*output.stdout); Ok(linked) } pub fn parse<R: io::Read>(readable: R) -> Self { use std::io::prelude::*; let mut libraries = HashMap::default(); let reader = io::BufReader::new(readable); for line in reader.lines() { if let Ok(line) = line { if let Some(not_found) = LddNotFound::parse(&line) { libraries.insert(not_found.name, None); } if let Some(found) = LddFound::parse(&line) { libraries.insert(found.name, Some(found.path)); } } } Self { libraries } } pub fn not_found(&self) -> HashSet<MissingDynamicLibrary> { let mut missing = HashSet::default(); for linked in &self.libraries { if let (name, None) = linked { let name = name.clone(); let lib = MissingDynamicLibrary { name }; missing.insert(lib); } } missing } } #[cfg(test)] mod tests;
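The two-regex scheme above — a `find library=` line opens a search on a thread, and a `calling init:` line with the same PID closes it — can be sketched in a few lines of Python. The log lines and library names below are made-up examples; the parsing logic follows the Rust module, keying searches by (PID, soname):

```python
import re

# Same patterns as FIND_LIBRARY_RE / INIT_LIBRARY_RE above.
FIND_RE = re.compile(r"(\d+):\s+find library=(.+) \[\d+\]; searching")
INIT_RE = re.compile(r"(\d+):\s+calling init: (.+)")

def missing_libraries(ld_debug_output: str) -> set:
    """Names searched for (per PID) that never reached module init."""
    searches = {}  # (pid, soname) -> found?
    for line in ld_debug_output.splitlines():
        if m := FIND_RE.search(line):
            searches[(int(m.group(1)), m.group(2))] = False
        elif m := INIT_RE.search(line):
            pid, path = int(m.group(1)), m.group(2)
            # The init log reports an absolute path; reduce it to a soname.
            searches[(pid, path.rsplit("/", 1)[-1])] = True
    return {name for (_pid, name), found in searches.items() if not found}
```

As the doc comments above note, this only diagnoses `dlopen()`-time failures reliably: a missing linked dependency truncates the debug log, which is why `find_missing` consults `ldd` output first.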
Antonio Conte will make a late check on N'Golo Kanté before deciding whether or not to start the Chelsea midfielder against Roma. Chelsea can clinch qualification into the Champions League knockout stages by beating Roma at the Olympic Stadium on Tuesday night and Kanté is close to making a comeback. Kanté returned to training last Friday and travelled with the squad to Rome, but is still believed to be unsure whether the hamstring he injured while on international duty with France is fully recovered. Chelsea head coach Conte made it clear he would like Kanté to start against Roma, but is set to leave the final decision to the player. “About N'Golo Kanté, he has trained with us,” said Conte. “He trained also before the game against Bournemouth. I think it's very important, in this moment, to speak with the player and then see his sensation. “I was a player and you know very well that, after an injury, above all a muscular problem – a bad injury – it's very important to listen to the player and what are his sensations and then make the best decision for him, for the team. For sure, on Wednesday, we'll try to make the best decision.”
<filename>src/main/java/org/reasm/m68k/messages/DuplicateRegistersInRegisterListWarningMessage.java package org.reasm.m68k.messages; import org.reasm.AssemblyWarningMessage; /** * A warning message that is generated during an assembly when a register list in a <code>MOVEM</code> instruction or a * <code>REG</code> directive contains duplicate registers. * * @author <NAME> */ public class DuplicateRegistersInRegisterListWarningMessage extends AssemblyWarningMessage { /** * Initializes a new DuplicateRegistersInRegisterListWarningMessage. */ public DuplicateRegistersInRegisterListWarningMessage() { super("Duplicate registers in register list"); } }
import os import shutil import settings import utils MODULE_DIR_PATH = os.path.dirname(os.path.abspath(__file__)) DEFAULT_ROOT_DIR_PATH = os.path.join(MODULE_DIR_PATH, '../..') DEFAULT_TEMPLATES_DIR_PATH = os.path.join(MODULE_DIR_PATH, 'templates') # if possible use PROJECTS_ROOT var from settings, otherwise use the parent dir of `toolbox` instead ROOT_DIR = utils.Folder( settings.get_settings_prop('PROJECTS_ROOT', default_value=DEFAULT_ROOT_DIR_PATH) ) TEMPLATES_DIR = utils.Folder(DEFAULT_TEMPLATES_DIR_PATH) def init_project(name): paths = {'root': ROOT_DIR.abs_path} paths['project_folder'] = os.path.join(paths['root'], name) if os.path.exists(paths['project_folder']): print(f'Project {name} already exists! Doing nothing.') return False os.mkdir(paths['project_folder']) for subfolder in ['data', 'output']: paths[subfolder] = os.path.join(paths['project_folder'], subfolder) os.mkdir(paths[subfolder]) for i, file_name in enumerate(TEMPLATES_DIR.file_names()): paths[file_name] = os.path.join(paths['project_folder'], file_name) shutil.copyfile(TEMPLATES_DIR.file_paths(i), paths[file_name]) return True
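Stripped of the `settings`/`utils` helpers, the scaffolding pattern above — guard on existence, create the subfolders, copy every template file — looks like this. The explicit `root` and `templates` parameters are my substitution for the module-level `ROOT_DIR`/`TEMPLATES_DIR` objects; the folder names are the ones used above:

```python
import os
import shutil

def init_project(root: str, templates: str, name: str) -> bool:
    """Create root/name with data/ and output/ subfolders, copying in
    all template files; refuse to touch an existing project."""
    project = os.path.join(root, name)
    if os.path.exists(project):
        print(f'Project {name} already exists! Doing nothing.')
        return False
    os.mkdir(project)
    for subfolder in ('data', 'output'):
        os.mkdir(os.path.join(project, subfolder))
    for file_name in os.listdir(templates):
        src = os.path.join(templates, file_name)
        if os.path.isfile(src):
            shutil.copyfile(src, os.path.join(project, file_name))
    return True
```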
{-# LANGUAGE RankNTypes #-} -- | Generation of places from place kinds. module Game.LambdaHack.Server.DungeonGen.Place ( Place(..), TileMapEM, buildPlace, isChancePos, buildFenceRnd #ifdef EXPOSE_INTERNAL -- * Internal operations , placeCheck, interiorArea, pover, buildFence, buildFenceMap , tilePlace #endif ) where import Prelude () import Game.LambdaHack.Core.Prelude import qualified Data.Bits as Bits import qualified Data.EnumMap.Strict as EM import qualified Data.Text as T import Data.Word (Word32) import Game.LambdaHack.Common.Area import Game.LambdaHack.Common.Kind import Game.LambdaHack.Common.Point import qualified Game.LambdaHack.Common.Tile as Tile import Game.LambdaHack.Content.CaveKind import Game.LambdaHack.Content.PlaceKind import Game.LambdaHack.Content.TileKind (TileKind) import qualified Game.LambdaHack.Content.TileKind as TK import qualified Game.LambdaHack.Core.Dice as Dice import Game.LambdaHack.Core.Frequency import Game.LambdaHack.Core.Random import Game.LambdaHack.Definition.Defs import Game.LambdaHack.Server.DungeonGen.AreaRnd -- | The map of tile kinds in a place (and generally anywhere in a cave). -- The map is sparse. The default tile that eventually fills the empty spaces -- is specified in the cave kind specification with @cdefTile@. type TileMapEM = EM.EnumMap Point (ContentId TileKind) -- | The parameters of a place. All are immutable and rolled and fixed -- at the time when a place is generated. data Place = Place { qkind :: ContentId PlaceKind , qarea :: Area , qmap :: TileMapEM , qfence :: TileMapEM } deriving Show -- | For @CAlternate@ tiling, require the place be comprised -- of an even number of whole corners, with exactly one square -- overlap between consecutive corners and no trimming. -- For other tiling methods, check that the area is large enough for tiling -- the corner twice in each direction, with a possible one row/column overlap.
placeCheck :: Area -- ^ the area to fill -> PlaceKind -- ^ the kind of place to construct -> Bool placeCheck r pk@PlaceKind{..} = case interiorArea pk r of Nothing -> False Just area -> let (_, xspan, yspan) = spanArea area dxcorner = case ptopLeft of [] -> 0 ; l : _ -> T.length l dycorner = length ptopLeft wholeOverlapped d dcorner = d > 1 && dcorner > 1 && (d - 1) `mod` (2 * (dcorner - 1)) == 0 largeEnough = xspan >= 2 * dxcorner - 1 && yspan >= 2 * dycorner - 1 in case pcover of CAlternate -> wholeOverlapped xspan dxcorner && wholeOverlapped yspan dycorner CStretch -> largeEnough CReflect -> largeEnough CVerbatim -> True CMirror -> True -- | Calculate interior room area according to fence type, based on the -- total area for the room and its fence. This is used for checking -- if the room fits in the area, for digging up the place and the fence -- and for deciding if the room is dark or lit later in the dungeon -- generation process. interiorArea :: PlaceKind -> Area -> Maybe Area interiorArea kr r = let requiredForFence = case pfence kr of FWall -> 1 FFloor -> 1 FGround -> 1 FNone -> 0 in if pcover kr `elem` [CVerbatim, CMirror] then let (Point x0 y0, xspan, yspan) = spanArea r dx = case ptopLeft kr of [] -> error $ "" `showFailure` kr l : _ -> T.length l dy = length $ ptopLeft kr mx = (xspan - dx) `div` 2 my = (yspan - dy) `div` 2 in if mx < requiredForFence || my < requiredForFence then Nothing else toArea (x0 + mx, y0 + my, x0 + mx + dx - 1, y0 + my + dy - 1) else case requiredForFence of 0 -> Just r 1 -> shrink r _ -> error $ "" `showFailure` kr -- | Given a few parameters, roll and construct a 'Place' data structure -- and fill a cave section according to it.
buildPlace :: COps -- ^ the game content -> CaveKind -- ^ current cave kind -> Bool -- ^ whether the cave is dark -> ContentId TileKind -- ^ dark fence tile, if fence hollow -> ContentId TileKind -- ^ lit fence tile, if fence hollow -> Dice.AbsDepth -- ^ current level depth -> Dice.AbsDepth -- ^ absolute depth -> Word32 -- ^ secret tile seed -> Area -- ^ whole area of the place, fence included -> Maybe Area -- ^ whole inner area of the grid cell -> Freqs PlaceKind -- ^ optional fixed place freq -> Rnd Place buildPlace cops@COps{coplace, coTileSpeedup} kc@CaveKind{..} dnight darkCorTile litCorTile levelDepth@(Dice.AbsDepth ldepth) totalDepth@(Dice.AbsDepth tdepth) dsecret r minnerArea mplaceGroup = do let f !q !acc !p !pk !kind = let rarity = linearInterpolation ldepth tdepth (prarity kind) !fr = q * p * rarity in (fr, (pk, kind)) : acc g (placeGroup, q) = ofoldlGroup' coplace placeGroup (f q) [] pfreq = case mplaceGroup of [] -> cplaceFreq _ -> mplaceGroup placeFreq = concatMap g pfreq checkedFreq = filter (\(_, (_, kind)) -> placeCheck r kind) placeFreq freq = toFreq "buildPlace" checkedFreq let !_A = assert (not (nullFreq freq) `blame` (placeFreq, checkedFreq, r)) () (qkind, kr) <- frequency freq let smallPattern = pcover kr `elem` [CVerbatim, CMirror] && (length (ptopLeft kr) < 10 || T.length (head (ptopLeft kr)) < 10) -- Below we apply a heuristic to estimate if there are floor tiles -- in the place that are adjacent to floor tiles of the cave and so both -- should have the same lit condition. -- A false positive is walled staircases in LambdaHack, but it's OK.
dark <- if cpassable && not (dnight && Tile.isLit coTileSpeedup darkCorTile) -- the colonnade can be illuminated just as the trail is && (pfence kr `elem` [FFloor, FGround] || pfence kr == FNone && smallPattern) then return dnight else oddsDice levelDepth totalDepth cdarkOdds rBetter <- case minnerArea of Just innerArea | pcover kr `elem` [CVerbatim, CMirror] -> do -- A hack: if a verbatim place was rolled, redo computing the area -- taking into account that often much smaller portion is taken by place. let requiredForFence = case pfence kr of FWall -> 1 FFloor -> 1 FGround -> 1 FNone -> 0 sizeBetter = ( 2 * requiredForFence + T.length (head (ptopLeft kr)) , 2 * requiredForFence + length (ptopLeft kr) ) mkRoom sizeBetter sizeBetter innerArea _ -> return r let qarea = fromMaybe (error $ "" `showFailure` (kr, r)) $ interiorArea kr rBetter plegend = if dark then plegendDark kr else plegendLit kr mOneIn <- pover cops plegend cmap <- tilePlace qarea kr let lookupOneIn :: Point -> Char -> ContentId TileKind lookupOneIn xy c = let tktk = EM.findWithDefault (error $ "" `showFailure` (c, mOneIn)) c mOneIn in case tktk of (Just (k, n, tkSpice), _) | isChancePos k n dsecret xy -> tkSpice (_, tk) -> tk qmap = EM.mapWithKey lookupOneIn cmap qfence <- buildFence cops kc dnight darkCorTile litCorTile dark (pfence kr) qarea return $! Place {..} isChancePos :: Int -> Int -> Word32 -> Point -> Bool isChancePos k' n' dsecret (Point x' y') = k' > 0 && n' > 0 && let k = toEnum k' n = toEnum n' x = toEnum x' y = toEnum y' z = dsecret `Bits.rotateR` x' `Bits.xor` y + x in if k < n then z `mod` ((n + k) `divUp` k) == 0 else z `mod` ((n + k) `divUp` n) /= 0 -- This can't be optimized by memoization (storing these results per place), -- because it would fix random assignment of tiles to groups -- for all instances of a place throughout dungeon. Right now the assignment -- is fixed for any single place instance and it's consistent and interesting. 
-- Even fixing this per level would make levels less interesting. -- -- This could be precomputed for groups that contain only one tile, -- but for these, no random rolls are performed, so little would be saved. pover :: COps -> EM.EnumMap Char (GroupName TileKind) -> Rnd ( EM.EnumMap Char ( Maybe (Int, Int, ContentId TileKind) , ContentId TileKind ) ) pover COps{cotile} plegend = let assignKN :: GroupName TileKind -> ContentId TileKind -> ContentId TileKind -> (Int, Int, ContentId TileKind) assignKN cgroup tk tkSpice = -- Very likely that legends have spice. let n = fromMaybe (error $ show cgroup) (lookup cgroup (TK.tfreq (okind cotile tk))) k = fromMaybe (error $ show cgroup) (lookup cgroup (TK.tfreq (okind cotile tkSpice))) in (k, n, tkSpice) getLegend :: GroupName TileKind -> Rnd ( Maybe (Int, Int, ContentId TileKind) , ContentId TileKind ) getLegend cgroup = do mtkSpice <- opick cotile cgroup (Tile.kindHasFeature TK.Spice) tk <- fromMaybe (error $ "" `showFailure` (cgroup, plegend)) <$> opick cotile cgroup (not . Tile.kindHasFeature TK.Spice) return (assignKN cgroup tk <$> mtkSpice, tk) in mapM getLegend plegend -- | Construct a fence around a place. buildFence :: COps -> CaveKind -> Bool -> ContentId TileKind -> ContentId TileKind -> Bool -> Fence -> Area -> Rnd TileMapEM buildFence COps{cotile} CaveKind{ccornerTile, cwallTile} dnight darkCorTile litCorTile dark fence qarea = do qFWall <- fromMaybe (error $ "" `showFailure` cwallTile) <$> opick cotile cwallTile (const True) qFCorner <- fromMaybe (error $ "" `showFailure` ccornerTile) <$> opick cotile ccornerTile (const True) let qFFloor = if dark then darkCorTile else litCorTile qFGround = if dnight then darkCorTile else litCorTile return $! case fence of FWall -> buildFenceMap qFWall qFCorner qarea FFloor -> buildFenceMap qFFloor qFFloor qarea FGround -> buildFenceMap qFGround qFGround qarea FNone -> EM.empty -- | Construct a fence around an area, with the given tile kind. 
-- Corners have a different kind, e.g., to avoid putting doors there. buildFenceMap :: ContentId TileKind -> ContentId TileKind -> Area -> TileMapEM buildFenceMap wallId cornerId area = let (x0, y0, x1, y1) = fromArea area in EM.fromList $ [ (Point x y, wallId) | x <- [x0-1, x1+1], y <- [y0..y1] ] ++ [ (Point x y, wallId) | x <- [x0..x1], y <- [y0-1, y1+1] ] ++ [ (Point x y, cornerId) | x <- [x0-1, x1+1], y <- [y0-1, y1+1] ] -- | Construct a fence around an area, with the given tile group. buildFenceRnd :: COps -> GroupName TileKind -> GroupName TileKind -> GroupName TileKind -> GroupName TileKind -> Area -> Rnd TileMapEM buildFenceRnd COps{cotile} cfenceTileN cfenceTileE cfenceTileS cfenceTileW area = do let (x0, y0, x1, y1) = fromArea area allTheSame = all (== cfenceTileN) [cfenceTileE, cfenceTileS, cfenceTileW] fenceIdRnd couterFenceTile (xf, yf) = do let isCorner x y = x `elem` [x0-1, x1+1] && y `elem` [y0-1, y1+1] tileGroup | isCorner xf yf && not allTheSame = TK.S_BASIC_OUTER_FENCE | otherwise = couterFenceTile fenceId <- fromMaybe (error $ "" `showFailure` tileGroup) <$> opick cotile tileGroup (const True) return (Point xf yf, fenceId) pointListN = [(x, y0-1) | x <- [x0-1..x1+1]] pointListE = [(x1+1, y) | y <- [y0..y1]] pointListS = [(x, y1+1) | x <- [x0-1..x1+1]] pointListW = [(x0-1, y) | y <- [y0..y1]] fenceListN <- mapM (fenceIdRnd cfenceTileN) pointListN fenceListE <- mapM (fenceIdRnd cfenceTileE) pointListE fenceListS <- mapM (fenceIdRnd cfenceTileS) pointListS fenceListW <- mapM (fenceIdRnd cfenceTileW) pointListW return $! EM.fromList $ fenceListN ++ fenceListE ++ fenceListS ++ fenceListW -- | Create a place by tiling patterns. 
tilePlace :: Area -- ^ the area to fill -> PlaceKind -- ^ the place kind to construct -> Rnd (EM.EnumMap Point Char) tilePlace area pl@PlaceKind{..} = do let (Point x0 y0, xspan, yspan) = spanArea area dxcorner = case ptopLeft of [] -> error $ "" `showFailure` (area, pl) l : _ -> T.length l (dx, dy) = assert (xspan >= dxcorner && yspan >= length ptopLeft `blame` (area, pl)) (xspan, yspan) fromX (x2, y2) = map (`Point` y2) [x2..] fillInterior :: (Int -> String -> String) -> (Int -> [String] -> [String]) -> [(Point, Char)] fillInterior f g = let tileInterior (y, row) = let fx = f dx row xStart = x0 + ((xspan - length fx) `div` 2) in filter ((/= 'X') . snd) $ zip (fromX (xStart, y)) fx reflected = let gy = g dy $ map T.unpack ptopLeft yStart = y0 + ((yspan - length gy) `div` 2) in zip [yStart..] gy in concatMap tileInterior reflected tileReflect :: Int -> [a] -> [a] tileReflect d pat = let lstart = take (d `divUp` 2) pat lend = take (d `div` 2) pat in lstart ++ reverse lend interior <- case pcover of CAlternate -> do let tile :: Int -> [a] -> [a] tile _ [] = error $ "nothing to tile" `showFailure` pl tile d pat = take d (cycle $ init pat ++ init (reverse pat)) return $! fillInterior tile tile CStretch -> do let stretch :: Int -> [a] -> [a] stretch _ [] = error $ "nothing to stretch" `showFailure` pl stretch d pat = tileReflect d (pat ++ repeat (last pat)) return $! fillInterior stretch stretch CReflect -> do let reflect :: Int -> [a] -> [a] reflect d pat = tileReflect d (cycle pat) return $! fillInterior reflect reflect CVerbatim -> return $! fillInterior (\ _ x -> x) (\ _ x -> x) CMirror -> do mirror1 <- oneOf [id, reverse] mirror2 <- oneOf [id, reverse] return $! fillInterior (\_ l -> mirror1 l) (\_ l -> mirror2 l) return $! EM.fromList interior
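The tiling modes used by `tilePlace` (`CAlternate`, `CStretch`, `CReflect`) all reduce to taking a prefix of some pattern and mirroring it. A rough Python transcription of `tileReflect` and the per-mode pattern builders follows — the function and argument names are mine, but the behaviour mirrors the Haskell above, including the shared middle element for odd widths:

```python
def tile_reflect(d, pat):
    """First ceil(d/2) elements, then the first floor(d/2) reversed,
    so the middle element is shared when d is odd (tileReflect)."""
    return pat[: (d + 1) // 2] + pat[: d // 2][::-1]

def alternate(d, pat):
    # CAlternate: cycle the pattern minus its last element followed by
    # the reversed pattern minus its last element (one-square overlap).
    period = pat[:-1] + pat[::-1][:-1]
    return [period[i % len(period)] for i in range(d)]

def stretch(d, pat):
    # CStretch: pad with copies of the last element, then reflect.
    return tile_reflect(d, pat + [pat[-1]] * d)

def reflect(d, pat):
    # CReflect: cycle the pattern to the needed width, then reflect.
    return tile_reflect(d, [pat[i % len(pat)] for i in range(d)])
```

For example, reflecting the corner row `"abc"` into a width of 5 yields `"abcba"`, which is how a small corner pattern fills a wider room symmetrically.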
<gh_stars>0 package usecases_test import ( "testing" "github.com/janithl/paataka/database" "github.com/janithl/paataka/entities" "github.com/janithl/paataka/usecases" ) /** MockFeedReader */ type MockFeedReader struct { Posts []entities.Post } func (m MockFeedReader) Read(url string) ([]entities.Post, error) { return m.Posts, nil } func setupService(version string, reader usecases.FeedReader) *usecases.PublicationServiceImpl { repo := database.NewInMemoryPublicationRepository(version) search := usecases.NewSearchServiceImpl() return usecases.NewPublicationServiceImpl(search, repo, reader) } const version string = "Mock InMemoryRepository v1.0" var mockReader = MockFeedReader{Posts: nil} /* given PublicationService version is Mock InMemoryRepository v1.0 when GetRepositoryVersion is called then Mock InMemoryRepository v1.0 is returned */ func TestPublicationVersion(t *testing.T) { service := setupService(version, mockReader) got := service.GetRepositoryVersion() want := version if got != want { t.Errorf("got '%s' want '%s'", got, want) } } func TestPublicationAddAndList(t *testing.T) { service := setupService(version, mockReader) publications := make(map[string]entities.Publication) publications["pub-001"] = entities.Publication{ID: "pub-001", Title: "Alberta Blog", URL: "https://alberta.ca/blog"} publications["pub-002"] = entities.Publication{ID: "pub-002", Title: "Ben's Thoughts", URL: "https://ben-bert.me"} publications["pub-003"] = entities.Publication{ID: "pub-003", Title: "Cambrian Technical Group", URL: "http://blog.cambrian.tech"} service.Add(publications["pub-001"]) service.Add(publications["pub-002"]) service.Add(publications["pub-003"]) got := service.ListAll() want := publications ids := [3]string{"pub-001", "pub-002", "pub-003"} for _, id := range ids { if got[id].ID != want[id].ID { t.Errorf("got '%s' want '%s'", got[id].ID, want[id].ID) } if got[id].Title != want[id].Title { t.Errorf("got '%s' want '%s'", got[id].Title, want[id].Title) } if 
got[id].URL != want[id].URL { t.Errorf("got '%s' want '%s'", got[id].URL, want[id].URL) } } } /* given PublicationService with no publications when we try to find "pub-001" then PublicationNotFound error is thrown */ func TestPublicationFindFail(t *testing.T) { service := setupService(version, mockReader) if _, err := service.Find("pub-001"); err == nil { t.Errorf("got '%s' want '%s'", err, usecases.ErrPublicationNotFound) } } func TestPublicationFindAndUpdate(t *testing.T) { service := setupService(version, mockReader) // Initial add publication := entities.Publication{ID: "pub-010", Title: "Greenland Business Digest", URL: "https://gbd.org"} service.Add(publication) // Then find pub, err := service.Find("pub-010") if err != nil { t.Error(err) } // Verify got, want := pub.Title, publication.Title if got != want { t.Errorf("got '%s' want '%s'", got, want) } // Then update title, add (update) on service publication.Title = "Greenland Business Standard" service.Add(publication) // Find pub, err = service.Find("pub-010") if err != nil { t.Error(err) } // Verify got = pub.Title want = publication.Title if got != want { t.Errorf("got '%s' want '%s'", got, want) } } /* given 3 publications in the repository: pub 1 - fetched 1 hour ago pub 2 - fetched 30 minutes ago pub 3 - fetched 3 hours ago and fetch older than is 1 hour and fetch at a time limit is 10 when GetFetchable is called then pub 1 and pub 3 should be returned */ /* given publication is in repository: {Title: "Alberta Blog", URL: "https://alberta.ca/blog"} and publication's feed has 3 posts on it: {Title: "Hello World", URL: "https://alberta.ca/blog/001/hello-world"} {Title: "Yesterday", URL: "https://alberta.ca/blog/002/yesterday"} {Title: "Another Day", URL: "https://alberta.ca/blog/003/another-day"} when FetchPublicationPosts is called on it then the Posts should be added to the repository */ func TestFetchPublicationPostsAddAndListAll(t *testing.T) { mockFeedReader := MockFeedReader{} // create a slice of 
posts and assign it to the mock FeedReader mockFeedReader.Posts = []entities.Post{ entities.Post{Title: "Hello World", URL: "https://alberta.ca/blog/001/hello-world"}, entities.Post{Title: "Yesterday", URL: "https://alberta.ca/blog/002/yesterday"}, entities.Post{Title: "Another Day", URL: "https://alberta.ca/blog/003/another-day"}, } // setup service service := setupService(version, mockFeedReader) // create new publication publication := entities.Publication{Title: "Alberta Blog", URL: "https://alberta.ca/blog"} publication.ID = service.Add(publication) // fetch posts for the publication service.FetchPublicationPosts(publication) // find the publication, checking for errors if pub, err := service.Find(publication.ID); err != nil { t.Error(err) } else { t.Run("Make sure all the posts have been added in...", func(t *testing.T) { matches := 0 for _, gots := range pub.Posts { for _, wants := range mockFeedReader.Posts { if gots.Title == wants.Title && gots.URL == wants.URL { t.Logf("Got '%s', want '%s'", gots, wants) matches++ } } } // check if all the posts are matching got, want := matches, len(mockFeedReader.Posts) if got != want { t.Errorf("got '%d' matches, want '%d'", got, want) } }) } } /* given publication is in repository: {Title: "Alberta Blog", URL: "https://alberta.ca/blog"} and publication's feed has 3 posts on it: {Title: "Hello World", URL: "https://alberta.ca/blog/001/hello-world"} {Title: "Yesterday", URL: "https://alberta.ca/blog/002/yesterday"} {Title: "Another Day", URL: "https://alberta.ca/blog/003/another-day"} {Title: "Hello Japan", URL: "https://alberta.ca/blog/004/hello-japan"} and FetchPublicationPosts has been called when Search is called for Posts with query term "hello" then the Posts "Hello World" and "Hello Japan" should be returned */ func TestFetchPublicationPostsAddAndSearch(t *testing.T) { mockFeedReader := MockFeedReader{} // create a slice of posts and assign it to the mock FeedReader mockFeedReader.Posts = []entities.Post{ 
entities.Post{Title: "Hello World", URL: "https://alberta.ca/blog/001/hello-world"}, entities.Post{Title: "Yesterday", URL: "https://alberta.ca/blog/002/yesterday"}, entities.Post{Title: "Another Day", URL: "https://alberta.ca/blog/003/another-day"}, entities.Post{Title: "Hello Japan", URL: "https://alberta.ca/blog/004/hello-japan"}, } // setup service service := setupService(version, mockFeedReader) // create new publication publication := entities.Publication{Title: "Alberta Blog", URL: "https://alberta.ca/blog"} publication.ID = service.Add(publication) // fetch posts for the publication service.FetchPublicationPosts(publication) // search for hello results := service.Search("Post", "hello") // check if 2 posts are returned got, want := len(results), 2 if got != want { t.Errorf("got '%d' matches, want '%d'", got, want) } // search for yesterday results = service.Search("Post", "yesterday") // check if 1 post is returned got, want = len(results), 1 if got != want { t.Errorf("got '%d' matches, want '%d'", got, want) } }
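The commented-out `GetFetchable` scenario above — return publications whose last fetch is at least a cutoff in the past, capped at a limit per call — can be sketched like this in Python. The dictionary field names are hypothetical, chosen to match the scenario text:

```python
from datetime import datetime, timedelta

def get_fetchable(publications, older_than, limit, now):
    """Publications last fetched at least `older_than` ago,
    stalest first, at most `limit` per call."""
    due = [p for p in publications if now - p["fetched_at"] >= older_than]
    due.sort(key=lambda p: p["fetched_at"])  # stalest first
    return due[:limit]
```

With the scenario's data (fetched 1 hour, 30 minutes, and 3 hours ago, cutoff 1 hour), this returns publications 3 and 1, as the comment specifies.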
<reponame>oljapriya/Team_Profile_Generator<filename>node_modules/webdriverio/build/commands/browser/keys.d.ts /** * * Send a sequence of key strokes to the active element. You can also use characters like * "Left arrow" or "Back space". WebdriverIO will take care of translating them into unicode * characters. You’ll find all supported characters [here](https://w3c.github.io/webdriver/webdriver-spec.html#keyboard-actions). * To do that, the value has to correspond to a key from the table. * * Modifiers like Ctrl, Shift, Alt and Meta will stay pressed so you need to trigger them again to release them. * Modifying a click however requires you to use the WebDriver Actions API through the [performActions](https://webdriver.io/docs/api/webdriver#performactions) method. * * <example> :keys.js it('copies text out of active element', () => { // copies text from an input element const input = $('#username') input.setValue('anonymous') browser.keys(['Meta', 'a']) browser.keys(['Meta', 'c']) }); * </example> * * @param {String|String[]} value The sequence of keys to type. An array or string must be provided. * @see https://w3c.github.io/webdriver/#dispatching-actions * */ export default function keys(this: WebdriverIO.Browser, value: string | string[]): Promise<void>; //# sourceMappingURL=keys.d.ts.map
"""" One user repeatedly logs in, logs out. Suppress interleaving with ReadInt, UpdateInt actions """ from WebModel import Login, Logout, UpdateInt, ReadInt actions = (Login, Logout, UpdateInt, ReadInt) # interleave Initialize only initial = 0 accepting = (0,) graph = ((0, (Login, ( 'VinniPuhh', 'Correct' ), 'Success'), 1), (1, (Logout, ( 'VinniPuhh', ), None), 0))
/** * Functional tests for <code>DentistryLicenseService</code>. * * @author j3_guile * @version 1.0 */ @RunWith(BlockJUnit4ClassRunner.class) public class DentistryLicenseServiceTest extends SOAPInvocationTestCase { /** * Service end point. */ private String serviceURL; /** * Default empty constructor. */ public DentistryLicenseServiceTest() { } /** * Setup test class. * * @throws Exception to JUnit */ @Before public void setUp() throws Exception { serviceURL = getProperty("DentistryLicenseServiceEndPoint"); } /** * Destroy properties. */ @After public void tearDown() { serviceURL = null; } /** * Tests search functionality. Only last name is provided. * * @throws Exception to JUnit */ @Test public void testSearchByLastName() throws Exception { matchInvoke(serviceURL, "DENT_testSearchByLastName_req.xml", "DENT_testSearchByLastName_res.xml"); } /** * Tests search functionality. Only middle name is provided. * * @throws Exception to JUnit */ @Test public void testSearchByMiddleName() throws Exception { matchInvoke(serviceURL, "DENT_testSearchByMiddleName_req.xml", "DENT_testSearchByMiddleName_res.xml"); } /** * Tests search functionality. Only first name is provided. * * @throws Exception to JUnit */ @Test public void testSearchByFirstName() throws Exception { matchInvoke(serviceURL, "DENT_testSearchByFirstName_req.xml", "DENT_testSearchByFirstName_res.xml"); } /** * Tests search functionality. Only license number is provided. * * @throws Exception to JUnit */ @Test public void testSearchByLicenseNumber() throws Exception { matchInvoke(serviceURL, "DENT_testSearchByLicenseNumber_req.xml", "DENT_testSearchByLicenseNumber_res.xml"); } /** * Tests search functionality. License number and type are provided. * * @throws Exception to JUnit */ @Test public void testSearchByLicenseNumberAndType() throws Exception { matchInvoke(serviceURL, "DENT_testSearchByLicenseNumberAndType_req.xml", "DENT_testSearchByLicenseNumberAndType_res.xml"); } /** * Tests search functionality. 
Only city is provided. * * @throws Exception to JUnit */ @Test public void testSearchByCity() throws Exception { matchInvoke(serviceURL, "DENT_testSearchByCity_req.xml", "DENT_testSearchByCity_res.xml"); } /** * Tests search functionality. Invalid. * * @throws Exception to JUnit */ @Test public void testSearchInvalid() throws Exception { matchInvoke(serviceURL, "DENT_testSearch_invalid_req.xml", "DENT_testSearch_invalid_res.xml"); } /** * Tests search functionality. No match. * * @throws Exception to JUnit */ @Test public void testSearchNoMatch() throws Exception { matchInvoke(serviceURL, "DENT_testSearch_nomatch_req.xml", "DENT_testSearch_nomatch_res.xml"); } /** * Tests search functionality. Paginated. * * @throws Exception to JUnit */ @Test public void testSearchPaginated() throws Exception { matchInvoke(serviceURL, "DENT_testSearch_paginated_req.xml", "DENT_testSearch_paginated_res.xml"); } /** * Tests search functionality. Sorted. * * @throws Exception to JUnit */ @Test public void testSearchSorted() throws Exception { matchInvoke(serviceURL, "DENT_testSearch_sorted_req.xml", "DENT_testSearch_sorted_res.xml"); } }
// Static function.
// Return val in inches and true if ok, return false otherwise.  str
// is a floating point number possibly followed by "cm".
//
bool
plot_bag::get_dim(const char *str, double *val)
{
    char buf[128];
    int i = sscanf(str, "%lf%127s", val, buf);  // width limit avoids overflow
    if (i == 2 && buf[0] == 'c' && buf[1] == 'm')
        *val /= 2.54;  // centimeters to inches
    if (i > 0)
        return (true);
    return (false);
}
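The same parse-and-convert logic, sketched in Python under the assumption stated in the comment that the internal unit is inches (so a `cm` suffix divides by 2.54):

```python
def get_dim(s):
    """Parse a float optionally suffixed with 'cm'; return inches, or None on error."""
    s = s.strip()
    if s.endswith('cm'):
        num, scale = s[:-2], 1 / 2.54   # centimeters -> inches
    else:
        num, scale = s, 1.0             # already inches
    try:
        return float(num) * scale
    except ValueError:
        return None
```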
/** * Comparison of variable to chosen value/variable * * @date 25-02-2013 * @author Michal Wronski * */ public final class Relation implements Expression { public enum RelationType { EQ, NEQ, EL, EG, GT, LT, REGEX } private final Variable var; private final RelationType relation; private final Object value; private final Variable varValue; private final boolean omittable; private final boolean caseInsensitive; public Relation(SqlRecorder recorder, Object var, final RelationType relation, final Object value, final boolean omittable, final boolean caseInsensitive) { this.var = recorder.nextVariable(); this.relation = relation; varValue = recorder.nextVariable(); this.value = value; this.omittable = omittable; this.caseInsensitive = caseInsensitive; validateArgumentsTypes(); } public Variable getVar() { return var; } public RelationType getType() { return relation; } public Object getValue() { return value; } public Variable getVarValue() { return varValue; } public boolean hasVarValue() { return varValue != null; } @Override public boolean isNullOmittable() { return omittable; } @Override public boolean isNull() { return varValue ==null && value == null; } public boolean shouldBeOmitted() { return isNullOmittable() && getValue() == null && getVarValue() == null; } public boolean isCaseInsensitive() { return caseInsensitive; } /** * Check that proper argument types are used. Currently only checks that * String arguments are used for case insensitive relations. 
* * @throws RuntimeException * if argument types are incorrect */ private void validateArgumentsTypes() { if (caseInsensitive) { if (this.var.getType() != String.class) { throw new RuntimeException("Parameter of case insensitive relation must be String, is: " + this.var.getType()); } if (this.value != null) { if (!(value instanceof String)) { throw new RuntimeException("Value of case insensitive relation must be String, is: " + this.value.getClass()); } } else { if (this.varValue.getType() != String.class) { throw new RuntimeException("Value of case insensitive relation must be String, is: " + this.varValue.getType()); } } } } }
/** * virGetStream: * @conn: the hypervisor connection * * Allocates a new stream object. When the object is no longer needed, * virObjectUnref() must be called in order to not leak data. * * Returns a pointer to the stream object, or NULL on error. */ virStreamPtr virGetStream(virConnectPtr conn) { virStreamPtr ret = NULL; if (virDataTypesInitialize() < 0) return NULL; if (!(ret = virObjectNew(virStreamClass))) return NULL; ret->conn = virObjectRef(conn); return ret; }
<gh_stars>0 /* Copyright 2021 The Silkworm Authors Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ #ifndef SILKWORM_WASM_API_HPP_ #define SILKWORM_WASM_API_HPP_ // Preliminary Silkworm API for WebAssembly. // Currently it's unstable and is likely to change. // Used for https://torquem.ch/eth_tests.html #include <cstdbool> #include <cstdint> #include <evmc/evmc.h> #include <intx/intx.hpp> #include <silkworm/chain/blockchain.hpp> #include <silkworm/common/base.hpp> #include <silkworm/state/in_memory_state.hpp> #include <silkworm/types/account.hpp> #include <silkworm/types/transaction.hpp> #define SILKWORM_EXPORT __attribute__((visibility("default"))) extern "C" { SILKWORM_EXPORT void* new_buffer(size_t size); SILKWORM_EXPORT void delete_buffer(void* ptr); SILKWORM_EXPORT silkworm::Bytes* new_bytes_from_hex(const char* data, size_t size); SILKWORM_EXPORT void delete_bytes(silkworm::Bytes* x); SILKWORM_EXPORT uint8_t* bytes_data(silkworm::Bytes* str); SILKWORM_EXPORT size_t bytes_length(const silkworm::Bytes* str); // a + b*2^64 + c*2^128 + d*2^192 SILKWORM_EXPORT intx::uint256* new_uint256_le(uint64_t a, uint64_t b, uint64_t c, uint64_t d); SILKWORM_EXPORT void delete_uint256(intx::uint256* x); SILKWORM_EXPORT const silkworm::ChainConfig* lookup_config(uint64_t chain_id); SILKWORM_EXPORT silkworm::ChainConfig* new_config(uint64_t chain_id); SILKWORM_EXPORT void delete_config(silkworm::ChainConfig* x); SILKWORM_EXPORT void config_set_fork_block(silkworm::ChainConfig* config, evmc_revision fork, uint64_t 
block); SILKWORM_EXPORT void config_set_muir_glacier_block(silkworm::ChainConfig* config, uint64_t block); SILKWORM_EXPORT void config_set_dao_block(silkworm::ChainConfig* config, uint64_t block); // in_out: in parent difficulty, out current difficulty SILKWORM_EXPORT void difficulty(intx::uint256* in_out, uint64_t block_number, uint64_t block_timestamp, uint64_t parent_timestamp, bool parent_has_uncles, const silkworm::ChainConfig* config); SILKWORM_EXPORT silkworm::Transaction* new_transaction(const silkworm::Bytes* rlp); SILKWORM_EXPORT void delete_transaction(silkworm::Transaction* x); SILKWORM_EXPORT bool check_intrinsic_gas(const silkworm::Transaction* txn, bool homestead, bool istanbul); SILKWORM_EXPORT const uint8_t* recover_sender(silkworm::Transaction* txn); SILKWORM_EXPORT void keccak256(uint8_t* out, const silkworm::Bytes* in); SILKWORM_EXPORT silkworm::Account* new_account(uint64_t nonce, const intx::uint256* balance); SILKWORM_EXPORT void delete_account(silkworm::Account* x); SILKWORM_EXPORT uint64_t account_nonce(const silkworm::Account* a); SILKWORM_EXPORT intx::uint256* account_balance(silkworm::Account* a); SILKWORM_EXPORT uint8_t* account_code_hash(silkworm::Account* a); SILKWORM_EXPORT silkworm::Block* new_block(const silkworm::Bytes* rlp); SILKWORM_EXPORT void delete_block(silkworm::Block* x); SILKWORM_EXPORT silkworm::BlockHeader* block_header(silkworm::Block* b); SILKWORM_EXPORT uint64_t header_number(const silkworm::BlockHeader* header); SILKWORM_EXPORT uint8_t* header_state_root(silkworm::BlockHeader* header); SILKWORM_EXPORT void block_recover_senders(silkworm::Block* b); SILKWORM_EXPORT silkworm::InMemoryState* new_state(); SILKWORM_EXPORT void delete_state(silkworm::InMemoryState* x); SILKWORM_EXPORT size_t state_number_of_accounts(const silkworm::InMemoryState* state); SILKWORM_EXPORT size_t state_storage_size(const silkworm::InMemoryState* state, const uint8_t* address, const silkworm::Account* account); // Result has to be freed with 
delete_buffer SILKWORM_EXPORT uint8_t* state_root_hash_new(const silkworm::InMemoryState* state); // Result has to be freed with delete_account SILKWORM_EXPORT silkworm::Account* state_read_account_new(const silkworm::State* state, const uint8_t* address); // Result has to be freed with delete_bytes SILKWORM_EXPORT silkworm::Bytes* state_read_code_new(const silkworm::State* state, const uint8_t* code_hash); // Result has to be freed with delete_bytes SILKWORM_EXPORT silkworm::Bytes* state_read_storage_new(const silkworm::State* state, const uint8_t* address, const silkworm::Account* account, const silkworm::Bytes* location); SILKWORM_EXPORT void state_update_account(silkworm::State* state, const uint8_t* address, const silkworm::Account* current); SILKWORM_EXPORT void state_update_code(silkworm::State* state, const uint8_t* address, const silkworm::Account* account, const silkworm::Bytes* code); SILKWORM_EXPORT void state_update_storage(silkworm::State* state, const uint8_t* address, const silkworm::Account* account, const silkworm::Bytes* location, const silkworm::Bytes* value); SILKWORM_EXPORT silkworm::Blockchain* new_blockchain(silkworm::State* state, const silkworm::ChainConfig* config, const silkworm::Block* genesis_block); SILKWORM_EXPORT void delete_blockchain(silkworm::Blockchain* x); SILKWORM_EXPORT silkworm::ValidationResult blockchain_insert_block(silkworm::Blockchain* chain, silkworm::Block* block, bool check_state_root); } #endif // SILKWORM_WASM_API_HPP_
<filename>menus/org.lorainelab.igb.menu.api/src/main/java/org/lorainelab/igb/menu/api/model/ParentMenu.java /* * To change this license header, choose License Headers in Project Properties. * To change this template file, choose Tools | Templates * and open the template in the editor. */ package org.lorainelab.igb.menu.api.model; /** * ## MenuBarParentMenu * * This enum represents the parent menus that are available for * extension. * * * FILE * * EDIT * * VIEW * * TOOLS * * TABS * * GENOME * * HELP * * <img src="doc-files/toolbarParentMenus.png" alt="ToolBar Parent Menus"/> * * @author dcnorris * @module.info context-menu-api */ public enum ParentMenu { FILE("file"), EDIT("edit"), VIEW("view"), TOOLS("tools"), HELP("help"), GENOME("Genome"); private final String name; private ParentMenu(String name) { this.name = name; } public String getName() { return name; } }
import subprocess


def FixEndings(file, crlf, cr, lf):
    """Convert `file` to the line-ending style that already dominates it."""
    most = max(crlf, cr, lf)
    if most == crlf:
        result = subprocess.call('unix2dos.exe %s' % file, shell=True)
        if result:
            raise Error('Error running unix2dos.exe %s' % file)
    else:
        result = subprocess.call('dos2unix.exe %s' % file, shell=True)
        if result:
            raise Error('Error running dos2unix.exe %s' % file)
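A dependency-free sketch of the same normalization, assuming the goal is simply to rewrite the file with its majority ending (no `unix2dos`/`dos2unix` binaries and no error-raising, unlike the original):

```python
def fix_endings(path, crlf, cr, lf):
    """Rewrite `path` using whichever line ending already dominates it."""
    ending = b'\r\n' if max(crlf, cr, lf) == crlf else b'\n'
    with open(path, 'rb') as f:
        data = f.read()
    # Normalize every ending to \n first, then apply the winner.
    data = data.replace(b'\r\n', b'\n').replace(b'\r', b'\n')
    with open(path, 'wb') as f:
        f.write(data.replace(b'\n', ending))
```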
/**------------------------------------------------------
 * Processes the given command.
 * @param cmd the command to execute
 *------------------------------------------------------*/
public void processCommand(String cmd)
{
    MiUtility.pushMouseAppearance((MiPart )getTargetOfCommand(),
        MiiTypes.Mi_WAIT_CURSOR, "MiConnectMenuCommands");
    if (cmd.equalsIgnoreCase(Mi_CONNECT_COMMAND_NAME))
    {
        deleteSelectedObjects.processCommand(Mi_CONNECT_COMMAND_NAME);
    }
    else if (cmd.equalsIgnoreCase(Mi_DISCONNECT_COMMAND_NAME))
    {
        deleteSelectedObjects.processCommand(Mi_DISCONNECT_COMMAND_NAME);
    }
    else if (cmd.equalsIgnoreCase(Mi_COLLAPSE_COMMAND_NAME))
    {
        collapseExpandSelectedObjects.processCommand(Mi_COLLAPSE_COMMAND_NAME);
        handleSelectionState(1);
    }
    else if (cmd.equalsIgnoreCase(Mi_EXPAND_COMMAND_NAME))
    {
        collapseExpandSelectedObjects.processCommand(Mi_EXPAND_COMMAND_NAME);
        handleSelectionState(1);
    }
    MiUtility.popMouseAppearance((MiPart )getTargetOfCommand(), "MiConnectMenuCommands");
}
/**
 * Reads the source file and fills the ask & bid sets.
 */
public void readFile() {
    try {
        XMLStreamReader xmlr = XMLInputFactory.newInstance()
                .createXMLStreamReader(name, new FileInputStream(name));
        while (xmlr.hasNext()) {
            xmlr.next();
            if (xmlr.hasName()) {
                if (xmlr.getName().getLocalPart().equals("AddOrder")) {
                    if (xmlr.isStartElement()) {
                        Order order = new Order();
                        order.setName(xmlr.getAttributeValue(0));
                        order.setOperation(Operation.valueOf(xmlr.getAttributeValue(1)));
                        order.setValue(Double.valueOf(xmlr.getAttributeValue(2)));
                        order.setVolume(Integer.valueOf(xmlr.getAttributeValue(3)));
                        order.setId(Integer.valueOf(xmlr.getAttributeValue(4)));
                        if (order.getOperation() == Operation.BUY) {
                            bid.add(order);
                        } else {
                            ask.add(order);
                        }
                    }
                } else if (xmlr.getName().getLocalPart().equals("DeleteOrder")) {
                    if (xmlr.isStartElement()) {
                        Order order = new Order();
                        order.setName(xmlr.getAttributeValue(0));
                        order.setId(Integer.valueOf(xmlr.getAttributeValue(1)));
                        ask.remove(order);
                        bid.remove(order);
                    }
                }
            }
        }
    } catch (IOException | XMLStreamException ex) {
        ex.printStackTrace();
    }
}
<gh_stars>1-10 module String where import Data.Range.Range as R listToRanges :: [(Int,Int)] -> [Range Int] listToRanges [] = [] listToRanges ((lb, le):xs) = R.SpanRange lb le : listToRanges xs rangesToList :: [Range Int] -> [(Int,Int)] rangesToList [] = [] rangesToList ((R.SpanRange n m):xs) = (n, m) : rangesToList xs rangesToList ((R.SingletonRange s):xs) = (s, s) : rangesToList xs invertMatches :: Int -> Int -> [(Int,Int)] -> [(Int,Int)] invertMatches lb ub ms = rangesToList $ R.intersection [R.SpanRange lb ub] $ R.invert $ listToRanges ms {-| Splits the string into two strings at the specified index -} binarySplit :: String -> Int -> (String, String) binarySplit string index = (take index string, drop index string) {-| Splits the string into three strings at the specified indices -} ternarySplit :: String -> Int -> Int -> (String, String, String) ternarySplit string start end = (left, middle, right) where (left, right') = binarySplit string start (middle, right) = binarySplit right' (end - start) {-| Maps only substrings captured by indices in the specified tuples -} partiallyMapString :: String -> [(Int, Int)] -> (String -> String) -> Bool -> String partiallyMapString string ls f False = partiallyMapString' string ls f partiallyMapString string ls f True = partiallyMapString' string inverted f where inverted = invertMatches 0 (length string) ls partiallyMapString' :: String -> [(Int, Int)] -> (String -> String) -> String partiallyMapString' string [] f = string partiallyMapString' string ((start, end) : ts) f = left ++ (f middle) ++ (partiallyMapString' right ts' f) where (left, middle, right) = ternarySplit string start end -- split the current string ts' = map ( \(l, r) -> (l - end, r - end) ) -- reduce all the following indices ts
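For comparison, a rough Python transcription of `partiallyMapString`: apply `f` only to the substrings covered by the `(start, end)` index pairs, or, with `invert=True`, only to the gaps between them. As in the Haskell version, the spans are assumed sorted and non-overlapping.

```python
def partially_map_string(s, spans, f, invert=False):
    """Apply f to the slices named by (start, end) pairs; leave the rest alone."""
    if invert:
        # Complement the spans over [0, len(s)), mirroring invertMatches.
        bounds = [0] + [i for a, b in spans for i in (a, b)] + [len(s)]
        spans = [(bounds[i], bounds[i + 1]) for i in range(0, len(bounds), 2)]
        spans = [(a, b) for a, b in spans if a < b]  # drop empty gaps
    out, pos = [], 0
    for start, end in spans:
        out.append(s[pos:start])       # untouched prefix
        out.append(f(s[start:end]))    # mapped slice
        pos = end
    out.append(s[pos:])                # untouched tail
    return ''.join(out)
```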
# Find integers A, B with A**5 - B**5 = X.
X = int(input())

p5 = [i**5 for i in range(3000)]
p5s = set(p5)

A = 0
while True:
    B5 = abs(A**5 - X)
    if B5 in p5s:
        # B is non-negative when A**5 >= X, negative otherwise.
        B = p5.index(B5) if A**5 >= X else -p5.index(B5)
        print(A, B)
        exit()
    A += 1
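The same search as a self-contained function (the snippet reads `X` from stdin; here it is a parameter), which makes the sign handling easy to check:

```python
def fifth_power_pair(X):
    """Return (A, B) with A**5 - B**5 == X, scanning A upward from 0."""
    p5 = [i**5 for i in range(3000)]
    p5s = set(p5)
    A = 0
    while True:
        B5 = abs(A**5 - X)
        if B5 in p5s:
            # B is non-negative when A**5 >= X, negative otherwise.
            B = p5.index(B5) if A**5 >= X else -p5.index(B5)
            return A, B
        A += 1
```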
// Just dump a file out over the socket
void sendRawFile(SockWrapper const & wrapper, std::string const & filesend)
{
    struct stat fileInfo;
    if (stat(filesend.c_str(), &fileInfo) == -1) {
        throw SendError{"Invalid File to Send"};
    }
    std::ifstream file{filesend, std::ios::binary};
    constexpr std::size_t kChunk = 10000;
    std::vector<char> sendvec(kChunk);
    while (file) {
        file.read(sendvec.data(), kChunk);
        std::streamsize thisSent = file.gcount();
        if (thisSent <= 0)
            break;  // nothing read on the final pass; avoid sending an empty chunk
        // Send only the bytes actually read, keeping the buffer at full
        // size so the next read never writes past a shrunken vector.
        wrapper.sendM(std::vector<char>(sendvec.begin(), sendvec.begin() + thisSent));
    }
}
# Don't forget to properly setup: # /home/simulators/.spynnaker.cfg and /usr/local/lib/python2.7/dist-packages/spynnaker/spynnaker.cfg import spynnaker.pyNN as sim import spynnaker_external_devices_plugin.pyNN as ExternalDevices import spinnman.messages.eieio.eieio_type as eieio_type # Base code from: # http://spinnakermanchester.github.io/2015.005.Arbitrary/workshop_material/ sim.setup(timestep=1.0, min_delay=1.0, max_delay=144.0) num_of_neurons = 2560 # Parameter of the neuron model LIF with exponential currents cell_params_lif = {'cm' : 0.25, # nF 'i_offset' : 0.0, 'tau_m' : 2.0, 'tau_refrac': 2.0, 'tau_syn_E' : 1.0, 'tau_syn_I' : 1.0, 'v_reset' : -70.0, 'v_rest' : -65.0, 'v_thresh' : -50.0 } # Creates a population that will be sending the spikes through the ethernet pop_out = sim.Population(num_of_neurons, sim.IF_curr_exp, cell_params_lif, label='spikes_out') cell_params_spike_injector = { # The port on which the spiNNaker machine should listen for packets. # Packets to be injected should be sent to this port on the spiNNaker # machine 'port': 12346 } # Creates a population that will receive spikes from the ethernet # The label value is important as it will be used to find the association # between neuron index and SpiNNaker key inside the database file created # when the board is programmed. 
pop_in = sim.Population(num_of_neurons, ExternalDevices.SpikeInjector, cell_params_spike_injector, label='spikes_in') # Sets up the board to send all the spikes generated by pop_out to the IP=host (and port) ExternalDevices.activate_live_output_for(pop_out, port=12345, host='172.16.31.10', use_prefix=False, key_prefix=None, prefix_type=None, message_type=eieio_type.EIEIOType.KEY_32_BIT, payload_as_time_stamps=False, use_payload_prefix=False, payload_prefix=None) # Value used to interconnect the neuron populations (x1E-9) weight_to_spike = 20.0 # Connects the populations pop_in and pop_out (creates the synapses) sim.Projection(pop_in, pop_out, sim.OneToOneConnector(weights=weight_to_spike)) # sim.AllToAllConnector(weights=weight_to_spike)) # Initialises the membrane voltage (x1E-3) for the neurons in pop_out pop_out.initialize("v", [0]*num_of_neurons) # Starts the simulation inside SpiNNaker and runs it forever sim.run()
Daniel Craig will star in at least five James Bond films, taking his tenure through the 25th entry in the series...

5 Bonds For Craig
6th September 2012

Daniel Craig will play James Bond in at least five films in the EON Productions series, MI6 can confirm. He has already completed shooting on his third outing, "Skyfall", which will open nationwide in the UK on October 26th. His remaining contracted fourth and fifth films will be the 24th and 25th movies in the official series respectively.

Sony Pictures, through a new co-financing deal with MGM, are also set to continue the relationship that started with "Casino Royale" into Bond 24. Studio executives have been alluding to a return to the two-year cycle to produce the 007 adventures, which would peg Bond 24 as a late 2014 release. This schedule may be too aggressive for the artistic process at EON Productions, who have savoured the longer-than-usual break between films to craft the script for "Skyfall" and have other non-007 related projects in the works, too.

Craig himself is keen to have a breather before kicking off another Bond outing, telling press recently that he is not taking on any other film work until after all the "Skyfall" promotional work is over in the New Year.

Behind the scenes, EON recently started inking screenwriting contracts for Craig's next two Bond movies. Producer Michael G. Wilson also said publicly that he hopes Craig will go on to be the longest-serving actor in the role. A total of five outings would make him the third longest-serving James Bond actor in terms of films produced, but Craig has already clocked up six years in the role over three movies, longer than the time it took Sean Connery to make five.

Although Craig's contract has two more films yet to run, Bond actors have been released early from their commitments. Both Connery and Timothy Dalton left the role a film early, for completely different reasons.
A Brief History Of Bondage

Sean Connery originally signed up to play 007 in six films starting with "Dr No" (1962), but was released from his contract one film early following "You Only Live Twice" (1967). George Lazenby was then cast as the first replacement, completing "On Her Majesty's Secret Service" (1969) before walking away from a contract for more. Connery was lured back to the role for "Diamonds Are Forever" (1971) and a record fee of £1.25 million, which he donated to charity.

Roger Moore then took the helm with "Live And Let Die" (1973), the third successive film in the series with a change in the lead role. Moore steadied the ship and went on to complete a record total of seven outings, culminating in "A View To A Kill" (1985). His contract was originally for four films, but Moore and Broccoli negotiated on a per-film basis thereafter.

Timothy Dalton, who had previously been considered for the role, was cast for three films, starting with "The Living Daylights" (1987). After "Licence To Kill" (1989), a six-year hiatus followed due to studio legal disputes, and Dalton walked away from a possible third film.

Pierce Brosnan - who had narrowly missed out on the role in 1987 due to an NBC contract renewal - debuted as 007 in "GoldenEye" (1995). His tenure was set for three films with an option for a fourth, which he completed with "Die Another Day" in 2002. Brosnan was not recalled, and after a four-year break, Daniel Craig made his debut in "Casino Royale" (2006).
/* Do the first essential initializations that must precede all else. */ static inline void first_init (void) { __mach_init (); RUN_HOOK (_hurd_preinit_hook, ()); }
// MustConnect will panic if cannot connect to sql server func MustConnect(ctx context.Context, driver string, opt *options.ConnectOptions) *Client { conn, err := Connect(ctx, driver, opt) if err != nil { panic(err) } return conn }
import { FC } from 'react'
import { MdCancel } from 'react-icons/md'
import { setClipIndex, setCurrentClip } from 'src/state/reducer'
import { useStateValue } from 'src/state/state'

interface Props {
  filterSeen: boolean
  playlistVisible: boolean
  hidePlaylist: () => void
}

const Playlist: FC<Props> = ({ filterSeen, playlistVisible, hidePlaylist }) => {
  const [{ clips, clipIndex }, dispatch] = useStateValue()

  const playlistClips = filterSeen ? clips.filtered : clips.data

  const setNewClip = (index: number) => {
    const clipsData = filterSeen ? clips.filtered : clips.data
    dispatch(setCurrentClip(clipsData[index]))
    dispatch(setClipIndex(index))
  }

  return (
    <div className={`playlist-container ${playlistVisible ? 'is-visible' : ''}`}>
      <button onClick={hidePlaylist} className='btn-hide-playlist' title='Hide Playlist'>
        <MdCancel size={14} />
      </button>
      <ul className='playlist-list'>
        {playlistClips?.map(({ title }, index) => (
          <li
            className={`playlist-item ${clipIndex === index ? 'is-active' : ''}`}
            key={index}
            onClick={() => setNewClip(index)}
          >
            {title}
          </li>
        ))}
      </ul>
    </div>
  )
}

export default Playlist
<filename>gcc-cross/lib/gcc/i686-elf/4.9.1/plugin/include/cgraph.h<gh_stars>1-10 /* Callgraph handling code. Copyright (C) 2003-2014 Free Software Foundation, Inc. Contributed by <NAME> This file is part of GCC. GCC is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version. GCC is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with GCC; see the file COPYING3. If not see <http://www.gnu.org/licenses/>. */ #ifndef GCC_CGRAPH_H #define GCC_CGRAPH_H #include "is-a.h" #include "plugin-api.h" #include "vec.h" #include "basic-block.h" #include "function.h" #include "ipa-ref.h" /* Symbol table consists of functions and variables. TODO: add labels and CONST_DECLs. */ enum symtab_type { SYMTAB_SYMBOL, SYMTAB_FUNCTION, SYMTAB_VARIABLE }; /* Base of all entries in the symbol table. The symtab_node is inherited by cgraph and varpol nodes. */ class GTY((desc ("%h.type"), tag ("SYMTAB_SYMBOL"), chain_next ("%h.next"), chain_prev ("%h.previous"))) symtab_node { public: /* Return name. */ const char *name () const; /* Return asm name. */ const char * asm_name () const; /* Type of the symbol. */ ENUM_BITFIELD (symtab_type) type : 8; /* The symbols resolution. */ ENUM_BITFIELD (ld_plugin_symbol_resolution) resolution : 8; /*** Flags representing the symbol type. ***/ /* True when symbol corresponds to a definition in current unit. set via cgraph_finalize_function or varpool_finalize_decl */ unsigned definition : 1; /* True when symbol is an alias. Set by assemble_alias. */ unsigned alias : 1; /* True when alias is a weakref. 
*/ unsigned weakref : 1; /* C++ frontend produce same body aliases and extra name aliases for virtual functions and vtables that are obviously equivalent. Those aliases are bit special, especially because C++ frontend visibility code is so ugly it can not get them right at first time and their visibility needs to be copied from their "masters" at the end of parsing. */ unsigned cpp_implicit_alias : 1; /* Set once the definition was analyzed. The list of references and other properties are built during analysis. */ unsigned analyzed : 1; /*** Visibility and linkage flags. ***/ /* Set when function is visible by other units. */ unsigned externally_visible : 1; /* The symbol will be assumed to be used in an invisible way (like by an toplevel asm statement). */ unsigned force_output : 1; /* Like FORCE_OUTPUT, but in the case it is ABI requiring the symbol to be exported. Unlike FORCE_OUTPUT this flag gets cleared to symbols promoted to static and it does not inhibit optimization. */ unsigned forced_by_abi : 1; /* True when the name is known to be unique and thus it does not need mangling. */ unsigned unique_name : 1; /* True when body and other characteristics have been removed by symtab_remove_unreachable_nodes. */ unsigned body_removed : 1; /*** WHOPR Partitioning flags. These flags are used at ltrans stage when only part of the callgraph is available. ***/ /* Set when variable is used from other LTRANS partition. */ unsigned used_from_other_partition : 1; /* Set when function is available in the other LTRANS partition. During WPA output it is used to mark nodes that are present in multiple partitions. */ unsigned in_other_partition : 1; /*** other flags. ***/ /* Set when symbol has address taken. */ unsigned address_taken : 1; /* Ordering of all symtab entries. */ int order; /* Declaration representing the symbol. */ tree decl; /* Linked list of symbol table entries starting with symtab_nodes. 
*/ symtab_node *next; symtab_node *previous; /* Linked list of symbols with the same asm name. There may be multiple entries for single symbol name during LTO, because symbols are renamed only after partitioning. Because inline clones are kept in the assembler name has, they also produce duplicate entries. There are also several long standing bugs where frontends and builtin code produce duplicated decls. */ symtab_node *next_sharing_asm_name; symtab_node *previous_sharing_asm_name; /* Circular list of nodes in the same comdat group if non-NULL. */ symtab_node *same_comdat_group; /* Vectors of referring and referenced entities. */ struct ipa_ref_list ref_list; /* Alias target. May be either DECL pointer or ASSEMBLER_NAME pointer depending to what was known to frontend on the creation time. Once alias is resolved, this pointer become NULL. */ tree alias_target; /* File stream where this node is being written to. */ struct lto_file_decl_data * lto_file_data; PTR GTY ((skip)) aux; }; enum availability { /* Not yet set by cgraph_function_body_availability. */ AVAIL_UNSET, /* Function body/variable initializer is unknown. */ AVAIL_NOT_AVAILABLE, /* Function body/variable initializer is known but might be replaced by a different one from other compilation unit and thus needs to be dealt with a care. Like AVAIL_NOT_AVAILABLE it can have arbitrary side effects on escaping variables and functions, while like AVAILABLE it might access static variables. */ AVAIL_OVERWRITABLE, /* Function body/variable initializer is known and will be used in final program. */ AVAIL_AVAILABLE, /* Function body/variable initializer is known and all it's uses are explicitly visible within current unit (ie it's address is never taken and it is not exported to other units). Currently used only for functions. */ AVAIL_LOCAL }; /* This is the information that is put into the cgraph local structure to recover a function. 
*/ struct lto_file_decl_data; extern const char * const cgraph_availability_names[]; extern const char * const ld_plugin_symbol_resolution_names[]; /* Information about thunk, used only for same body aliases. */ struct GTY(()) cgraph_thunk_info { /* Information about the thunk. */ HOST_WIDE_INT fixed_offset; HOST_WIDE_INT virtual_value; tree alias; bool this_adjusting; bool virtual_offset_p; /* Set to true when alias node is thunk. */ bool thunk_p; }; /* Information about the function collected locally. Available after function is analyzed. */ struct GTY(()) cgraph_local_info { /* Set when function function is visible in current compilation unit only and its address is never taken. */ unsigned local : 1; /* False when there is something makes versioning impossible. */ unsigned versionable : 1; /* False when function calling convention and signature can not be changed. This is the case when __builtin_apply_args is used. */ unsigned can_change_signature : 1; /* True when the function has been originally extern inline, but it is redefined now. */ unsigned redefined_extern_inline : 1; /* True if the function may enter serial irrevocable mode. */ unsigned tm_may_enter_irr : 1; }; /* Information about the function that needs to be computed globally once compilation is finished. Available only with -funit-at-a-time. */ struct GTY(()) cgraph_global_info { /* For inline clones this points to the function they will be inlined into. */ struct cgraph_node *inlined_to; }; /* Information about the function that is propagated by the RTL backend. Available only for functions that has been already assembled. */ struct GTY(()) cgraph_rtl_info { unsigned int preferred_incoming_stack_boundary; }; /* Represent which DECL tree (or reference to such tree) will be replaced by another tree while versioning. */ struct GTY(()) ipa_replace_map { /* The tree that will be replaced. */ tree old_tree; /* The new (replacing) tree. 
*/ tree new_tree; /* Parameter number to replace, when old_tree is NULL. */ int parm_num; /* True when a substitution should be done, false otherwise. */ bool replace_p; /* True when we replace a reference to old_tree. */ bool ref_p; }; typedef struct ipa_replace_map *ipa_replace_map_p; struct GTY(()) cgraph_clone_info { vec<ipa_replace_map_p, va_gc> *tree_map; bitmap args_to_skip; bitmap combined_args_to_skip; }; enum cgraph_simd_clone_arg_type { SIMD_CLONE_ARG_TYPE_VECTOR, SIMD_CLONE_ARG_TYPE_UNIFORM, SIMD_CLONE_ARG_TYPE_LINEAR_CONSTANT_STEP, SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP, SIMD_CLONE_ARG_TYPE_MASK }; /* Function arguments in the original function of a SIMD clone. Supplementary data for `struct simd_clone'. */ struct GTY(()) cgraph_simd_clone_arg { /* Original function argument as it originally existed in DECL_ARGUMENTS. */ tree orig_arg; /* orig_arg's function (or for extern functions type from TYPE_ARG_TYPES). */ tree orig_type; /* If argument is a vector, this holds the vector version of orig_arg that after adjusting the argument types will live in DECL_ARGUMENTS. Otherwise, this is NULL. This basically holds: vector(simdlen) __typeof__(orig_arg) new_arg. */ tree vector_arg; /* vector_arg's type (or for extern functions new vector type. */ tree vector_type; /* If argument is a vector, this holds the array where the simd argument is held while executing the simd clone function. This is a local variable in the cloned function. Its content is copied from vector_arg upon entry to the clone. This basically holds: __typeof__(orig_arg) simd_array[simdlen]. */ tree simd_array; /* A SIMD clone's argument can be either linear (constant or variable), uniform, or vector. */ enum cgraph_simd_clone_arg_type arg_type; /* For arg_type SIMD_CLONE_ARG_TYPE_LINEAR_CONSTANT_STEP this is the constant linear step, if arg_type is SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP, this is index of the uniform argument holding the step, otherwise 0. 
*/ HOST_WIDE_INT linear_step; /* Variable alignment if available, otherwise 0. */ unsigned int alignment; }; /* Specific data for a SIMD function clone. */ struct GTY(()) cgraph_simd_clone { /* Number of words in the SIMD lane associated with this clone. */ unsigned int simdlen; /* Number of annotated function arguments in `args'. This is usually the number of named arguments in FNDECL. */ unsigned int nargs; /* Max hardware vector size in bits for integral vectors. */ unsigned int vecsize_int; /* Max hardware vector size in bits for floating point vectors. */ unsigned int vecsize_float; /* The mangling character for a given vector size. This is used to determine the ISA mangling bit as specified in the Intel Vector ABI. */ unsigned char vecsize_mangle; /* True if this is the masked, in-branch version of the clone, otherwise false. */ unsigned int inbranch : 1; /* True if this is a Cilk Plus variant. */ unsigned int cilk_elemental : 1; /* Doubly linked list of SIMD clones. */ struct cgraph_node *prev_clone, *next_clone; /* Original cgraph node the SIMD clones were created for. */ struct cgraph_node *origin; /* Annotated function arguments for the original function. */ struct cgraph_simd_clone_arg GTY((length ("%h.nargs"))) args[1]; }; /* The cgraph data structure. Each function decl has an assigned cgraph_node listing its callees and callers. */ struct GTY((tag ("SYMTAB_FUNCTION"))) cgraph_node : public symtab_node { public: struct cgraph_edge *callees; struct cgraph_edge *callers; /* List of edges representing indirect calls with a yet undetermined callee. */ struct cgraph_edge *indirect_calls; /* For nested functions points to the function the node is nested in. */ struct cgraph_node *origin; /* Points to first nested function, if any. */ struct cgraph_node *nested; /* Pointer to the next function with same origin, if any. */ struct cgraph_node *next_nested; /* Pointer to the next clone. 
*/ struct cgraph_node *next_sibling_clone; struct cgraph_node *prev_sibling_clone; struct cgraph_node *clones; struct cgraph_node *clone_of; /* For functions with many call sites it holds a map from call expression to the edge, to speed up the cgraph_edge function. */ htab_t GTY((param_is (struct cgraph_edge))) call_site_hash; /* Declaration this node used to be a clone of. */ tree former_clone_of; /* If this is a SIMD clone, this points to the SIMD specific information for it. */ struct cgraph_simd_clone *simdclone; /* If this function has SIMD clones, this points to the first clone. */ struct cgraph_node *simd_clones; /* Interprocedural passes scheduled to have their transform functions applied next time we execute a local pass on them. We maintain it per-function in order to allow IPA passes to introduce new functions. */ vec<ipa_opt_pass> GTY((skip)) ipa_transforms_to_apply; struct cgraph_local_info local; struct cgraph_global_info global; struct cgraph_rtl_info rtl; struct cgraph_clone_info clone; struct cgraph_thunk_info thunk; /* Expected number of executions: calculated in profile.c. */ gcov_type count; /* How to scale counts at materialization time; used to merge LTO units with different numbers of profile runs. */ int count_materialization_scale; /* Unique id of the node. */ int uid; /* ID assigned by the profiling. */ unsigned int profile_id; /* Time profiler: first run of the function. */ int tp_first_run; /* Set when decl is an abstract function pointed to by the ABSTRACT_DECL_ORIGIN of a reachable function. */ unsigned used_as_abstract_origin : 1; /* Set once the function is lowered (i.e. its CFG is built). */ unsigned lowered : 1; /* Set once the function has been instantiated and its callee lists created. */ unsigned process : 1; /* How commonly executed the node is. Initialized during branch probabilities pass. */ ENUM_BITFIELD (node_frequency) frequency : 2; /* True when the function can only be called at startup (from a static ctor). 
*/ unsigned only_called_at_startup : 1; /* True when the function can only be called at exit (from a static dtor). */ unsigned only_called_at_exit : 1; /* True when the function is the transactional clone of a function which is called only from inside transactions. */ /* ?? We should be able to remove this. We have enough bits in cgraph to calculate it. */ unsigned tm_clone : 1; /* True if this decl is a dispatcher for function versions. */ unsigned dispatcher_function : 1; /* True if this decl calls a COMDAT-local function. This is set up in compute_inline_parameters and inline_call. */ unsigned calls_comdat_local : 1; }; typedef struct cgraph_node *cgraph_node_ptr; /* Function Multiversioning info. */ struct GTY(()) cgraph_function_version_info { /* The cgraph_node for which the function version info is stored. */ struct cgraph_node *this_node; /* Chains all the semantically identical function versions. The first function in this chain is the version_info node of the default function. */ struct cgraph_function_version_info *prev; /* If this version node corresponds to a dispatcher for function versions, this points to the version info node of the default function, the first node in the chain. */ struct cgraph_function_version_info *next; /* If this node corresponds to a function version, this points to the dispatcher function decl, which is the function that must be called to execute the right function version at run-time. If this cgraph node is a dispatcher (if dispatcher_function is true, in the cgraph_node struct) for function versions, this points to the resolver function, which holds the function body of the dispatcher. The dispatcher decl is an alias to the resolver function decl. */ tree dispatcher_resolver; }; /* Get the cgraph_function_version_info node corresponding to node. 
*/ struct cgraph_function_version_info * get_cgraph_node_version (struct cgraph_node *node); /* Insert a new cgraph_function_version_info node into cgraph_fnver_htab corresponding to cgraph_node NODE. */ struct cgraph_function_version_info * insert_new_cgraph_node_version (struct cgraph_node *node); /* Record that DECL1 and DECL2 are semantically identical function versions. */ void record_function_versions (tree decl1, tree decl2); /* Remove the cgraph_function_version_info and cgraph_node for DECL. This DECL is a duplicate declaration. */ void delete_function_version (tree decl); /* A cgraph node set is a collection of cgraph nodes. A cgraph node can appear in multiple sets. */ struct cgraph_node_set_def { struct pointer_map_t *map; vec<cgraph_node_ptr> nodes; }; class varpool_node; typedef varpool_node *varpool_node_ptr; /* A varpool node set is a collection of varpool nodes. A varpool node can appear in multiple sets. */ struct varpool_node_set_def { struct pointer_map_t * map; vec<varpool_node_ptr> nodes; }; typedef struct cgraph_node_set_def *cgraph_node_set; typedef struct varpool_node_set_def *varpool_node_set; /* Iterator structure for cgraph node sets. */ struct cgraph_node_set_iterator { cgraph_node_set set; unsigned index; }; /* Iterator structure for varpool node sets. */ struct varpool_node_set_iterator { varpool_node_set set; unsigned index; }; #define DEFCIFCODE(code, type, string) CIF_ ## code, /* Reasons for inlining failures. */ enum cgraph_inline_failed_t { #include "cif-code.def" CIF_N_REASONS }; enum cgraph_inline_failed_type_t { CIF_FINAL_NORMAL = 0, CIF_FINAL_ERROR }; /* Structure containing additional information about an indirect call. */ struct GTY(()) cgraph_indirect_call_info { /* When polymorphic is set, this field contains the offset where the object which was actually used in the polymorphic call resides within a larger structure. 
If agg_contents is set, the field contains the offset within the aggregate from which the address to call was loaded. */ HOST_WIDE_INT offset; /* OBJ_TYPE_REF_TOKEN of a polymorphic call (if polymorphic is set). */ HOST_WIDE_INT otr_token; /* Type of the object from OBJ_TYPE_REF_OBJECT. */ tree otr_type, outer_type; /* Index of the parameter that is called. */ int param_index; /* ECF flags determined from the caller. */ int ecf_flags; /* Profile id of the common target obtained from the profile. */ int common_target_id; /* Probability that the call will land in the function with COMMON_TARGET_ID. */ int common_target_probability; /* Set when the call is a virtual call with the parameter being the associated object pointer rather than a simple direct call. */ unsigned polymorphic : 1; /* Set when the call is a call of a pointer loaded from contents of an aggregate at offset. */ unsigned agg_contents : 1; /* Set when this is a call through a member pointer. */ unsigned member_ptr : 1; /* When the previous bit is set, this one determines whether the destination is loaded from a parameter passed by reference. */ unsigned by_ref : 1; unsigned int maybe_in_construction : 1; unsigned int maybe_derived_type : 1; }; struct GTY((chain_next ("%h.next_caller"), chain_prev ("%h.prev_caller"))) cgraph_edge { /* Expected number of executions: calculated in profile.c. */ gcov_type count; struct cgraph_node *caller; struct cgraph_node *callee; struct cgraph_edge *prev_caller; struct cgraph_edge *next_caller; struct cgraph_edge *prev_callee; struct cgraph_edge *next_callee; gimple call_stmt; /* Additional information about an indirect call. Not cleared when an edge becomes direct. */ struct cgraph_indirect_call_info *indirect_info; PTR GTY ((skip (""))) aux; /* When equal to CIF_OK, inline this call. Otherwise, points to the explanation why the function was not inlined. */ enum cgraph_inline_failed_t inline_failed; /* The stmt_uid of call_stmt. 
This is used by LTO to recover the call_stmt when the function is serialized in. */ unsigned int lto_stmt_uid; /* Expected frequency of executions within the function. When set to CGRAPH_FREQ_BASE, the edge is expected to be called once per function call. The range is 0 to CGRAPH_FREQ_MAX. */ int frequency; /* Unique id of the edge. */ int uid; /* Whether this edge was made direct by indirect inlining. */ unsigned int indirect_inlining_edge : 1; /* Whether this edge describes an indirect call with an undetermined callee. */ unsigned int indirect_unknown_callee : 1; /* True if the corresponding CALL stmt cannot be inlined. */ unsigned int call_stmt_cannot_inline_p : 1; /* Can this call throw externally? */ unsigned int can_throw_external : 1; /* Edges with the SPECULATIVE flag represent indirect calls that were speculatively turned into direct ones (i.e. by profile feedback). The final code sequence will have the form: if (call_target == expected_fn) expected_fn (); else call_target (); Every speculative call is represented by three components attached to the same call statement: 1) a direct call (to expected_fn) 2) an indirect call (to call_target) 3) an IPA_REF_ADDR reference to expected_fn. Optimizers may later redirect the direct call to a clone, so 1) and 3) do not necessarily need to agree with the destination. */ unsigned int speculative : 1; }; #define CGRAPH_FREQ_BASE 1000 #define CGRAPH_FREQ_MAX 100000 typedef struct cgraph_edge *cgraph_edge_p; /* The varpool data structure. Each static variable decl has an assigned varpool_node. */ class GTY((tag ("SYMTAB_VARIABLE"))) varpool_node : public symtab_node { public: /* Set when the variable is scheduled to be assembled. */ unsigned output : 1; /* Set if the variable is dynamically initialized, except for function local statics. */ unsigned dynamically_initialized : 1; }; /* Every top level asm statement is put into an asm_node. */ struct GTY(()) asm_node { /* Next asm node. 
*/ struct asm_node *next; /* String for this asm node. */ tree asm_str; /* Ordering of all cgraph nodes. */ int order; }; /* Report whether or not THIS symtab node is a function, aka cgraph_node. */ template <> template <> inline bool is_a_helper <cgraph_node>::test (symtab_node *p) { return p->type == SYMTAB_FUNCTION; } /* Report whether or not THIS symtab node is a variable, aka varpool_node. */ template <> template <> inline bool is_a_helper <varpool_node>::test (symtab_node *p) { return p->type == SYMTAB_VARIABLE; } extern GTY(()) symtab_node *symtab_nodes; extern GTY(()) int cgraph_n_nodes; extern GTY(()) int cgraph_max_uid; extern GTY(()) int cgraph_edge_max_uid; extern bool cgraph_global_info_ready; enum cgraph_state { /* Frontend is parsing and finalizing functions. */ CGRAPH_STATE_PARSING, /* Callgraph is being constructed. It is safe to add new functions. */ CGRAPH_STATE_CONSTRUCTION, /* Callgraph is being streamed at LTO time. */ CGRAPH_LTO_STREAMING, /* Callgraph is built and IPA passes are being run. */ CGRAPH_STATE_IPA, /* Callgraph is built and all functions are transformed to SSA form. */ CGRAPH_STATE_IPA_SSA, /* Functions are now ordered and being passed to RTL expanders. */ CGRAPH_STATE_EXPANSION, /* All cgraph expansion is done. */ CGRAPH_STATE_FINISHED }; extern enum cgraph_state cgraph_state; extern bool cgraph_function_flags_ready; extern cgraph_node_set cgraph_new_nodes; extern GTY(()) struct asm_node *asm_nodes; extern GTY(()) int symtab_order; extern bool cpp_implicit_aliases_done; /* Classification of symbols WRT partitioning. */ enum symbol_partitioning_class { /* External declarations are ignored by partitioning algorithms and they are added into the boundary later via compute_ltrans_boundary. */ SYMBOL_EXTERNAL, /* Partitioned symbols are put into one of the partitions. */ SYMBOL_PARTITION, /* Duplicated symbols (such as comdat or constant pool references) are copied into every node needing them via add_symbol_to_partition. 
*/ SYMBOL_DUPLICATE }; /* In symtab.c */ void symtab_register_node (symtab_node *); void symtab_unregister_node (symtab_node *); void symtab_remove_from_same_comdat_group (symtab_node *); void symtab_remove_node (symtab_node *); symtab_node *symtab_get_node (const_tree); symtab_node *symtab_node_for_asm (const_tree asmname); void symtab_insert_node_to_hashtable (symtab_node *); void symtab_add_to_same_comdat_group (symtab_node *, symtab_node *); void symtab_dissolve_same_comdat_group_list (symtab_node *node); void dump_symtab (FILE *); void debug_symtab (void); void dump_symtab_node (FILE *, symtab_node *); void debug_symtab_node (symtab_node *); void dump_symtab_base (FILE *, symtab_node *); void verify_symtab (void); void verify_symtab_node (symtab_node *); bool verify_symtab_base (symtab_node *); bool symtab_used_from_object_file_p (symtab_node *); void symtab_make_decl_local (tree); symtab_node *symtab_alias_ultimate_target (symtab_node *, enum availability *avail = NULL); bool symtab_resolve_alias (symtab_node *node, symtab_node *target); void fixup_same_cpp_alias_visibility (symtab_node *node, symtab_node *target); bool symtab_for_node_and_aliases (symtab_node *, bool (*) (symtab_node *, void *), void *, bool); symtab_node *symtab_nonoverwritable_alias (symtab_node *); enum availability symtab_node_availability (symtab_node *); bool symtab_semantically_equivalent_p (symtab_node *, symtab_node *); enum symbol_partitioning_class symtab_get_symbol_partitioning_class (symtab_node *); /* In cgraph.c */ void dump_cgraph (FILE *); void debug_cgraph (void); void dump_cgraph_node (FILE *, struct cgraph_node *); void debug_cgraph_node (struct cgraph_node *); void cgraph_remove_edge (struct cgraph_edge *); void cgraph_remove_node (struct cgraph_node *); void cgraph_release_function_body (struct cgraph_node *); void release_function_body (tree); void cgraph_node_remove_callees (struct cgraph_node *node); struct cgraph_edge *cgraph_create_edge (struct cgraph_node *, 
struct cgraph_node *, gimple, gcov_type, int); struct cgraph_edge *cgraph_create_indirect_edge (struct cgraph_node *, gimple, int, gcov_type, int); struct cgraph_indirect_call_info *cgraph_allocate_init_indirect_info (void); struct cgraph_node * cgraph_create_node (tree); struct cgraph_node * cgraph_create_empty_node (void); struct cgraph_node * cgraph_get_create_node (tree); struct cgraph_node * cgraph_same_body_alias (struct cgraph_node *, tree, tree); struct cgraph_node * cgraph_add_thunk (struct cgraph_node *, tree, tree, bool, HOST_WIDE_INT, HOST_WIDE_INT, tree, tree); struct cgraph_node *cgraph_node_for_asm (tree); struct cgraph_edge *cgraph_edge (struct cgraph_node *, gimple); void cgraph_set_call_stmt (struct cgraph_edge *, gimple, bool update_speculative = true); void cgraph_update_edges_for_call_stmt (gimple, tree, gimple); struct cgraph_local_info *cgraph_local_info (tree); struct cgraph_global_info *cgraph_global_info (tree); struct cgraph_rtl_info *cgraph_rtl_info (tree); struct cgraph_node *cgraph_create_function_alias (tree, tree); void cgraph_call_node_duplication_hooks (struct cgraph_node *, struct cgraph_node *); void cgraph_call_edge_duplication_hooks (struct cgraph_edge *, struct cgraph_edge *); void cgraph_redirect_edge_callee (struct cgraph_edge *, struct cgraph_node *); struct cgraph_edge *cgraph_make_edge_direct (struct cgraph_edge *, struct cgraph_node *); bool cgraph_only_called_directly_p (struct cgraph_node *); bool cgraph_function_possibly_inlined_p (tree); void cgraph_unnest_node (struct cgraph_node *); enum availability cgraph_function_body_availability (struct cgraph_node *); void cgraph_add_new_function (tree, bool); const char* cgraph_inline_failed_string (cgraph_inline_failed_t); cgraph_inline_failed_type_t cgraph_inline_failed_type (cgraph_inline_failed_t); void cgraph_set_nothrow_flag (struct cgraph_node *, bool); void cgraph_set_const_flag (struct cgraph_node *, bool, bool); void cgraph_set_pure_flag (struct cgraph_node *, 
bool, bool); bool cgraph_node_cannot_return (struct cgraph_node *); bool cgraph_edge_cannot_lead_to_return (struct cgraph_edge *); bool cgraph_will_be_removed_from_program_if_no_direct_calls (struct cgraph_node *node); bool cgraph_can_remove_if_no_direct_calls_and_refs_p (struct cgraph_node *node); bool cgraph_can_remove_if_no_direct_calls_p (struct cgraph_node *node); bool resolution_used_from_other_file_p (enum ld_plugin_symbol_resolution); bool cgraph_for_node_thunks_and_aliases (struct cgraph_node *, bool (*) (struct cgraph_node *, void *), void *, bool); bool cgraph_for_node_and_aliases (struct cgraph_node *, bool (*) (struct cgraph_node *, void *), void *, bool); vec<cgraph_edge_p> collect_callers_of_node (struct cgraph_node *node); void verify_cgraph (void); void verify_cgraph_node (struct cgraph_node *); void cgraph_mark_address_taken_node (struct cgraph_node *); typedef void (*cgraph_edge_hook)(struct cgraph_edge *, void *); typedef void (*cgraph_node_hook)(struct cgraph_node *, void *); typedef void (*varpool_node_hook)(varpool_node *, void *); typedef void (*cgraph_2edge_hook)(struct cgraph_edge *, struct cgraph_edge *, void *); typedef void (*cgraph_2node_hook)(struct cgraph_node *, struct cgraph_node *, void *); struct cgraph_edge_hook_list; struct cgraph_node_hook_list; struct varpool_node_hook_list; struct cgraph_2edge_hook_list; struct cgraph_2node_hook_list; struct cgraph_edge_hook_list *cgraph_add_edge_removal_hook (cgraph_edge_hook, void *); void cgraph_remove_edge_removal_hook (struct cgraph_edge_hook_list *); struct cgraph_node_hook_list *cgraph_add_node_removal_hook (cgraph_node_hook, void *); void cgraph_remove_node_removal_hook (struct cgraph_node_hook_list *); struct varpool_node_hook_list *varpool_add_node_removal_hook (varpool_node_hook, void *); void varpool_remove_node_removal_hook (struct varpool_node_hook_list *); struct cgraph_node_hook_list *cgraph_add_function_insertion_hook (cgraph_node_hook, void *); void 
cgraph_remove_function_insertion_hook (struct cgraph_node_hook_list *); struct varpool_node_hook_list *varpool_add_variable_insertion_hook (varpool_node_hook, void *); void varpool_remove_variable_insertion_hook (struct varpool_node_hook_list *); void cgraph_call_function_insertion_hooks (struct cgraph_node *node); struct cgraph_2edge_hook_list *cgraph_add_edge_duplication_hook (cgraph_2edge_hook, void *); void cgraph_remove_edge_duplication_hook (struct cgraph_2edge_hook_list *); struct cgraph_2node_hook_list *cgraph_add_node_duplication_hook (cgraph_2node_hook, void *); void cgraph_remove_node_duplication_hook (struct cgraph_2node_hook_list *); gimple cgraph_redirect_edge_call_stmt_to_callee (struct cgraph_edge *); struct cgraph_node * cgraph_function_node (struct cgraph_node *, enum availability *avail = NULL); bool cgraph_get_body (struct cgraph_node *node); struct cgraph_edge * cgraph_turn_edge_to_speculative (struct cgraph_edge *, struct cgraph_node *, gcov_type, int); void cgraph_speculative_call_info (struct cgraph_edge *, struct cgraph_edge *&, struct cgraph_edge *&, struct ipa_ref *&); extern bool gimple_check_call_matching_types (gimple, tree, bool); /* In cgraphunit.c */ struct asm_node *add_asm_node (tree); extern FILE *cgraph_dump_file; void cgraph_finalize_function (tree, bool); void finalize_compilation_unit (void); void compile (void); void init_cgraph (void); void cgraph_process_new_functions (void); void cgraph_process_same_body_aliases (void); void fixup_same_cpp_alias_visibility (symtab_node *, symtab_node *target, tree); /* Initialize datastructures so DECL is a function in lowered gimple form. IN_SSA is true if the gimple is in SSA. 
*/ basic_block init_lowered_empty_function (tree, bool); void cgraph_reset_node (struct cgraph_node *); bool expand_thunk (struct cgraph_node *, bool); /* In cgraphclones.c */ struct cgraph_edge * cgraph_clone_edge (struct cgraph_edge *, struct cgraph_node *, gimple, unsigned, gcov_type, int, bool); struct cgraph_node * cgraph_clone_node (struct cgraph_node *, tree, gcov_type, int, bool, vec<cgraph_edge_p>, bool, struct cgraph_node *, bitmap); tree clone_function_name (tree decl, const char *); struct cgraph_node * cgraph_create_virtual_clone (struct cgraph_node *old_node, vec<cgraph_edge_p>, vec<ipa_replace_map_p, va_gc> *tree_map, bitmap args_to_skip, const char *clone_name); struct cgraph_node *cgraph_find_replacement_node (struct cgraph_node *); bool cgraph_remove_node_and_inline_clones (struct cgraph_node *, struct cgraph_node *); void cgraph_set_call_stmt_including_clones (struct cgraph_node *, gimple, gimple, bool update_speculative = true); void cgraph_create_edge_including_clones (struct cgraph_node *, struct cgraph_node *, gimple, gimple, gcov_type, int, cgraph_inline_failed_t); void cgraph_materialize_all_clones (void); struct cgraph_node * cgraph_copy_node_for_versioning (struct cgraph_node *, tree, vec<cgraph_edge_p>, bitmap); struct cgraph_node *cgraph_function_versioning (struct cgraph_node *, vec<cgraph_edge_p>, vec<ipa_replace_map_p, va_gc> *, bitmap, bool, bitmap, basic_block, const char *); void tree_function_versioning (tree, tree, vec<ipa_replace_map_p, va_gc> *, bool, bitmap, bool, bitmap, basic_block); struct cgraph_edge *cgraph_resolve_speculation (struct cgraph_edge *, tree); /* In cgraphbuild.c */ unsigned int rebuild_cgraph_edges (void); void cgraph_rebuild_references (void); int compute_call_stmt_bb_frequency (tree, basic_block bb); void record_references_in_initializer (tree, bool); void ipa_record_stmt_references (struct cgraph_node *, gimple); /* In ipa.c */ bool symtab_remove_unreachable_nodes (bool, FILE *); cgraph_node_set 
cgraph_node_set_new (void); cgraph_node_set_iterator cgraph_node_set_find (cgraph_node_set, struct cgraph_node *); void cgraph_node_set_add (cgraph_node_set, struct cgraph_node *); void cgraph_node_set_remove (cgraph_node_set, struct cgraph_node *); void dump_cgraph_node_set (FILE *, cgraph_node_set); void debug_cgraph_node_set (cgraph_node_set); void free_cgraph_node_set (cgraph_node_set); void cgraph_build_static_cdtor (char which, tree body, int priority); varpool_node_set varpool_node_set_new (void); varpool_node_set_iterator varpool_node_set_find (varpool_node_set, varpool_node *); void varpool_node_set_add (varpool_node_set, varpool_node *); void varpool_node_set_remove (varpool_node_set, varpool_node *); void dump_varpool_node_set (FILE *, varpool_node_set); void debug_varpool_node_set (varpool_node_set); void free_varpool_node_set (varpool_node_set); void ipa_discover_readonly_nonaddressable_vars (void); bool varpool_externally_visible_p (varpool_node *); /* In predict.c */ bool cgraph_maybe_hot_edge_p (struct cgraph_edge *e); bool cgraph_optimize_for_size_p (struct cgraph_node *); /* In varpool.c */ varpool_node *varpool_create_empty_node (void); varpool_node *varpool_node_for_decl (tree); varpool_node *varpool_node_for_asm (tree asmname); void varpool_mark_needed_node (varpool_node *); void debug_varpool (void); void dump_varpool (FILE *); void dump_varpool_node (FILE *, varpool_node *); void varpool_finalize_decl (tree); enum availability cgraph_variable_initializer_availability (varpool_node *); void cgraph_make_node_local (struct cgraph_node *); bool cgraph_node_can_be_local_p (struct cgraph_node *); void varpool_remove_node (varpool_node *node); void varpool_finalize_named_section_flags (varpool_node *node); bool varpool_output_variables (void); bool varpool_assemble_decl (varpool_node *node); void varpool_analyze_node (varpool_node *); varpool_node * varpool_extra_name_alias (tree, tree); varpool_node * varpool_create_variable_alias (tree, tree); 
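/* The node-set API declared above (new/find/add/remove plus the set's `nodes' vector) pairs a pointer map with an insertion-ordered vector, so membership tests are O(1) while iteration stays deterministic. The following is a minimal self-contained sketch of that design, using standard containers and a hypothetical `node_set' type rather than GCC's own structures. */

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

// Hypothetical analogue of cgraph_node_set_def: a map from node to
// its slot in `nodes', plus the insertion-ordered vector itself.
struct node_set {
  std::unordered_map<const void *, std::size_t> map;
  std::vector<int *> nodes;
};

// Add NODE unless it is already present (the shape of
// cgraph_node_set_add).
void node_set_add (node_set &s, int *node)
{
  if (s.map.count (node))
    return;
  s.map[node] = s.nodes.size ();
  s.nodes.push_back (node);
}

// O(1) membership test via the map (the shape of
// cgraph_node_set_find followed by csi_end_p).
bool node_set_contains (const node_set &s, int *node)
{
  return s.map.count (node) != 0;
}

// Remove NODE by swapping the last element into its slot, keeping
// both the map and the vector consistent (swap-and-pop).
void node_set_remove (node_set &s, int *node)
{
  auto it = s.map.find (node);
  if (it == s.map.end ())
    return;
  std::size_t slot = it->second;
  int *last = s.nodes.back ();
  s.nodes[slot] = last;
  s.map[last] = slot;
  s.nodes.pop_back ();
  s.map.erase (node);
}
```

/* The swap-and-pop removal keeps both containers in sync without shifting the vector, at the cost of not preserving insertion order after a removal. */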
void varpool_reset_queue (void); tree ctor_for_folding (tree); bool varpool_for_node_and_aliases (varpool_node *, bool (*) (varpool_node *, void *), void *, bool); void varpool_add_new_variable (tree); void symtab_initialize_asm_name_hash (void); void symtab_prevail_in_asm_name_hash (symtab_node *node); void varpool_remove_initializer (varpool_node *); /* In cgraph.c */ extern void change_decl_assembler_name (tree, tree); /* Return the callgraph node for the given symbol and check that it is a function. */ static inline struct cgraph_node * cgraph (symtab_node *node) { gcc_checking_assert (!node || node->type == SYMTAB_FUNCTION); return (struct cgraph_node *)node; } /* Return the varpool node for the given symbol and check that it is a variable. */ static inline varpool_node * varpool (symtab_node *node) { gcc_checking_assert (!node || node->type == SYMTAB_VARIABLE); return (varpool_node *)node; } /* Return the callgraph node for the given declaration and check that it is a function. */ static inline struct cgraph_node * cgraph_get_node (const_tree decl) { gcc_checking_assert (TREE_CODE (decl) == FUNCTION_DECL); return cgraph (symtab_get_node (decl)); } /* Return the varpool node for the given declaration and check that it is a variable. */ static inline varpool_node * varpool_get_node (const_tree decl) { gcc_checking_assert (TREE_CODE (decl) == VAR_DECL); return varpool (symtab_get_node (decl)); } /* Walk all symbols. */ #define FOR_EACH_SYMBOL(node) \ for ((node) = symtab_nodes; (node); (node) = (node)->next) /* Return first variable. */ static inline varpool_node * varpool_first_variable (void) { symtab_node *node; for (node = symtab_nodes; node; node = node->next) if (varpool_node *vnode = dyn_cast <varpool_node> (node)) return vnode; return NULL; } /* Return next variable after NODE. 
*/ static inline varpool_node * varpool_next_variable (varpool_node *node) { symtab_node *node1 = node->next; for (; node1; node1 = node1->next) if (varpool_node *vnode1 = dyn_cast <varpool_node> (node1)) return vnode1; return NULL; } /* Walk all variables. */ #define FOR_EACH_VARIABLE(node) \ for ((node) = varpool_first_variable (); \ (node); \ (node) = varpool_next_variable ((node))) /* Return first reachable static variable with initializer. */ static inline varpool_node * varpool_first_static_initializer (void) { symtab_node *node; for (node = symtab_nodes; node; node = node->next) { varpool_node *vnode = dyn_cast <varpool_node> (node); if (vnode && DECL_INITIAL (node->decl)) return vnode; } return NULL; } /* Return next reachable static variable with initializer after NODE. */ static inline varpool_node * varpool_next_static_initializer (varpool_node *node) { symtab_node *node1 = node->next; for (; node1; node1 = node1->next) { varpool_node *vnode1 = dyn_cast <varpool_node> (node1); if (vnode1 && DECL_INITIAL (node1->decl)) return vnode1; } return NULL; } /* Walk all static variables with initializer set. */ #define FOR_EACH_STATIC_INITIALIZER(node) \ for ((node) = varpool_first_static_initializer (); (node); \ (node) = varpool_next_static_initializer (node)) /* Return first defined variable. */ static inline varpool_node * varpool_first_defined_variable (void) { symtab_node *node; for (node = symtab_nodes; node; node = node->next) { varpool_node *vnode = dyn_cast <varpool_node> (node); if (vnode && vnode->definition) return vnode; } return NULL; } /* Return next defined variable after NODE. 
*/ static inline varpool_node * varpool_next_defined_variable (varpool_node *node) { symtab_node *node1 = node->next; for (; node1; node1 = node1->next) { varpool_node *vnode1 = dyn_cast <varpool_node> (node1); if (vnode1 && vnode1->definition) return vnode1; } return NULL; } /* Walk all variables with definitions in current unit. */ #define FOR_EACH_DEFINED_VARIABLE(node) \ for ((node) = varpool_first_defined_variable (); (node); \ (node) = varpool_next_defined_variable (node)) /* Return first function with body defined. */ static inline struct cgraph_node * cgraph_first_defined_function (void) { symtab_node *node; for (node = symtab_nodes; node; node = node->next) { cgraph_node *cn = dyn_cast <cgraph_node> (node); if (cn && cn->definition) return cn; } return NULL; } /* Return next function with body defined after NODE. */ static inline struct cgraph_node * cgraph_next_defined_function (struct cgraph_node *node) { symtab_node *node1 = node->next; for (; node1; node1 = node1->next) { cgraph_node *cn1 = dyn_cast <cgraph_node> (node1); if (cn1 && cn1->definition) return cn1; } return NULL; } /* Walk all functions with body defined. */ #define FOR_EACH_DEFINED_FUNCTION(node) \ for ((node) = cgraph_first_defined_function (); (node); \ (node) = cgraph_next_defined_function ((node))) /* Return first function. */ static inline struct cgraph_node * cgraph_first_function (void) { symtab_node *node; for (node = symtab_nodes; node; node = node->next) if (cgraph_node *cn = dyn_cast <cgraph_node> (node)) return cn; return NULL; } /* Return next function. */ static inline struct cgraph_node * cgraph_next_function (struct cgraph_node *node) { symtab_node *node1 = node->next; for (; node1; node1 = node1->next) if (cgraph_node *cn1 = dyn_cast <cgraph_node> (node1)) return cn1; return NULL; } /* Walk all functions. 
*/ #define FOR_EACH_FUNCTION(node) \ for ((node) = cgraph_first_function (); (node); \ (node) = cgraph_next_function ((node))) /* Return true when NODE is a function with a Gimple body defined in the current unit. Functions can also be defined externally, or they can be thunks with no Gimple representation. Note that at WPA stage, the function body may not be present in memory. */ static inline bool cgraph_function_with_gimple_body_p (struct cgraph_node *node) { return node->definition && !node->thunk.thunk_p && !node->alias; } /* Return first function with a Gimple body. */ static inline struct cgraph_node * cgraph_first_function_with_gimple_body (void) { symtab_node *node; for (node = symtab_nodes; node; node = node->next) { cgraph_node *cn = dyn_cast <cgraph_node> (node); if (cn && cgraph_function_with_gimple_body_p (cn)) return cn; } return NULL; } /* Return next function with a Gimple body after NODE. */ static inline struct cgraph_node * cgraph_next_function_with_gimple_body (struct cgraph_node *node) { symtab_node *node1 = node->next; for (; node1; node1 = node1->next) { cgraph_node *cn1 = dyn_cast <cgraph_node> (node1); if (cn1 && cgraph_function_with_gimple_body_p (cn1)) return cn1; } return NULL; } /* Walk all functions with a Gimple body. */ #define FOR_EACH_FUNCTION_WITH_GIMPLE_BODY(node) \ for ((node) = cgraph_first_function_with_gimple_body (); (node); \ (node) = cgraph_next_function_with_gimple_body (node)) /* Create a new static variable of type TYPE. */ tree add_new_static_var (tree type); /* Return true if iterator CSI points to nothing. */ static inline bool csi_end_p (cgraph_node_set_iterator csi) { return csi.index >= csi.set->nodes.length (); } /* Advance iterator CSI. */ static inline void csi_next (cgraph_node_set_iterator *csi) { csi->index++; } /* Return the node pointed to by CSI. 
*/
static inline struct cgraph_node *
csi_node (cgraph_node_set_iterator csi)
{
  return csi.set->nodes[csi.index];
}

/* Return an iterator to the first node in SET.  */
static inline cgraph_node_set_iterator
csi_start (cgraph_node_set set)
{
  cgraph_node_set_iterator csi;
  csi.set = set;
  csi.index = 0;
  return csi;
}

/* Return true if SET contains NODE.  */
static inline bool
cgraph_node_in_set_p (struct cgraph_node *node, cgraph_node_set set)
{
  cgraph_node_set_iterator csi;
  csi = cgraph_node_set_find (set, node);
  return !csi_end_p (csi);
}

/* Return number of nodes in SET.  */
static inline size_t
cgraph_node_set_size (cgraph_node_set set)
{
  return set->nodes.length ();
}

/* Return true if iterator VSI points to nothing.  */
static inline bool
vsi_end_p (varpool_node_set_iterator vsi)
{
  return vsi.index >= vsi.set->nodes.length ();
}

/* Advance iterator VSI.  */
static inline void
vsi_next (varpool_node_set_iterator *vsi)
{
  vsi->index++;
}

/* Return the node pointed to by VSI.  */
static inline varpool_node *
vsi_node (varpool_node_set_iterator vsi)
{
  return vsi.set->nodes[vsi.index];
}

/* Return an iterator to the first node in SET.  */
static inline varpool_node_set_iterator
vsi_start (varpool_node_set set)
{
  varpool_node_set_iterator vsi;
  vsi.set = set;
  vsi.index = 0;
  return vsi;
}

/* Return true if SET contains NODE.  */
static inline bool
varpool_node_in_set_p (varpool_node *node, varpool_node_set set)
{
  varpool_node_set_iterator vsi;
  vsi = varpool_node_set_find (set, node);
  return !vsi_end_p (vsi);
}

/* Return number of nodes in SET.  */
static inline size_t
varpool_node_set_size (varpool_node_set set)
{
  return set->nodes.length ();
}

/* Uniquize all constants that appear in memory.
   Each constant in memory thus far output is recorded
   in `const_desc_table'.  */

struct GTY(()) constant_descriptor_tree {
  /* A MEM for the constant.  */
  rtx rtl;

  /* The value of the constant.  */
  tree value;

  /* Hash of value.
     Computing the hash from value each time hashfn is called can't work
     properly, as that means recursive use of the hash table during
     hash table expansion.  */
  hashval_t hash;
};

/* Return true if set is nonempty.  */
static inline bool
cgraph_node_set_nonempty_p (cgraph_node_set set)
{
  return !set->nodes.is_empty ();
}

/* Return true if set is nonempty.  */
static inline bool
varpool_node_set_nonempty_p (varpool_node_set set)
{
  return !set->nodes.is_empty ();
}

/* Return true when function NODE is only called directly or it has alias.
   i.e. it is not externally visible, its address was not taken and
   it is not used in any other non-standard way.  */
static inline bool
cgraph_only_called_directly_or_aliased_p (struct cgraph_node *node)
{
  gcc_assert (!node->global.inlined_to);
  return (!node->force_output && !node->address_taken
          && !node->used_from_other_partition
          && !DECL_VIRTUAL_P (node->decl)
          && !DECL_STATIC_CONSTRUCTOR (node->decl)
          && !DECL_STATIC_DESTRUCTOR (node->decl)
          && !node->externally_visible);
}

/* Return true when variable NODE can be removed from the variable pool
   if all references to it are eliminated.  */
static inline bool
varpool_can_remove_if_no_refs (varpool_node *node)
{
  if (DECL_EXTERNAL (node->decl))
    return true;
  return (!node->force_output && !node->used_from_other_partition
          && ((DECL_COMDAT (node->decl)
               && !node->forced_by_abi
               && !symtab_used_from_object_file_p (node))
              || !node->externally_visible
              || DECL_HAS_VALUE_EXPR_P (node->decl)));
}

/* Return true when all references to VNODE must be visible in ipa_ref_list.
   i.e. if the variable is not externally visible or not used in some magic
   way (asm statement or such).
   The magic uses are all summarized in force_output flag.  */
static inline bool
varpool_all_refs_explicit_p (varpool_node *vnode)
{
  return (vnode->definition
          && !vnode->externally_visible
          && !vnode->used_from_other_partition
          && !vnode->force_output);
}

/* Constant pool accessor function.
*/
htab_t constant_pool_htab (void);

/* FIXME: inappropriate dependency of cgraph on IPA.  */
#include "ipa-ref-inline.h"

/* Return node that alias N is aliasing.  */
static inline symtab_node *
symtab_alias_target (symtab_node *n)
{
  struct ipa_ref *ref;
  ipa_ref_list_reference_iterate (&n->ref_list, 0, ref);
  gcc_checking_assert (ref->use == IPA_REF_ALIAS);
  return ref->referred;
}

static inline struct cgraph_node *
cgraph_alias_target (struct cgraph_node *n)
{
  return dyn_cast <cgraph_node> (symtab_alias_target (n));
}

static inline varpool_node *
varpool_alias_target (varpool_node *n)
{
  return dyn_cast <varpool_node> (symtab_alias_target (n));
}

/* Given NODE, walk the alias chain to return the function NODE is alias of.
   Do not walk through thunks.
   When AVAILABILITY is non-NULL, get minimal availability in the chain.  */
static inline struct cgraph_node *
cgraph_function_or_thunk_node (struct cgraph_node *node,
                               enum availability *availability = NULL)
{
  struct cgraph_node *n;
  n = dyn_cast <cgraph_node> (symtab_alias_ultimate_target (node,
                                                            availability));
  if (!n && availability)
    *availability = AVAIL_NOT_AVAILABLE;
  return n;
}

/* Given NODE, walk the alias chain to return the variable NODE is alias of.
   When AVAILABILITY is non-NULL, get minimal availability in the chain.  */
static inline varpool_node *
varpool_variable_node (varpool_node *node,
                       enum availability *availability = NULL)
{
  varpool_node *n;
  if (node)
    n = dyn_cast <varpool_node> (symtab_alias_ultimate_target (node,
                                                               availability));
  else
    n = NULL;
  if (!n && availability)
    *availability = AVAIL_NOT_AVAILABLE;
  return n;
}

/* Return true when the edge E represents a direct recursion.
*/
static inline bool
cgraph_edge_recursive_p (struct cgraph_edge *e)
{
  struct cgraph_node *callee = cgraph_function_or_thunk_node (e->callee, NULL);
  if (e->caller->global.inlined_to)
    return e->caller->global.inlined_to->decl == callee->decl;
  else
    return e->caller->decl == callee->decl;
}

/* Return true if the TM_CLONE bit is set for a given FNDECL.  */
static inline bool
decl_is_tm_clone (const_tree fndecl)
{
  struct cgraph_node *n = cgraph_get_node (fndecl);
  if (n)
    return n->tm_clone;
  return false;
}

/* Likewise indicate that a node is needed, i.e. reachable via some
   external means.  */
static inline void
cgraph_mark_force_output_node (struct cgraph_node *node)
{
  node->force_output = 1;
  gcc_checking_assert (!node->global.inlined_to);
}

/* Return true when the symbol is real symbol, i.e. it is not inline clone
   or abstract function kept for debug info purposes only.  */
static inline bool
symtab_real_symbol_p (symtab_node *node)
{
  struct cgraph_node *cnode;

  if (DECL_ABSTRACT (node->decl))
    return false;
  if (!is_a <cgraph_node> (node))
    return true;
  cnode = cgraph (node);
  if (cnode->global.inlined_to)
    return false;
  return true;
}

/* Return true if NODE can be discarded by linker from the binary.  */
static inline bool
symtab_can_be_discarded (symtab_node *node)
{
  return (DECL_EXTERNAL (node->decl)
          || (DECL_ONE_ONLY (node->decl)
              && node->resolution != LDPR_PREVAILING_DEF
              && node->resolution != LDPR_PREVAILING_DEF_IRONLY
              && node->resolution != LDPR_PREVAILING_DEF_IRONLY_EXP));
}

/* Return true if NODE is local to a particular COMDAT group, and must not
   be named from outside the COMDAT.  This is used for C++ decloned
   constructors.  */
static inline bool
symtab_comdat_local_p (symtab_node *node)
{
  return (node->same_comdat_group && !TREE_PUBLIC (node->decl));
}

/* Return true if ONE and TWO are part of the same COMDAT group.
*/
static inline bool
symtab_in_same_comdat_p (symtab_node *one, symtab_node *two)
{
  return DECL_COMDAT_GROUP (one->decl) == DECL_COMDAT_GROUP (two->decl);
}

#endif  /* GCC_CGRAPH_H  */
// -----------------------------------------------------------------------------
// The index manager tracks the current tips of each index by using a parent
// bucket that contains an entry for each index.
//
// The serialized format for an index's tips is:
//
//   <num tips><block hash1><block hash2>...
//
//   Field         Type            Size
//   num tips      uint32          4 bytes
//   block hashes  chainhash.Hash  chainhash.HashSize * num tips
// -----------------------------------------------------------------------------

// dbPutIndexerTips stores the current tip hashes for the index identified by
// idxKey in the index tips bucket.
func dbPutIndexerTips(dbTx database.Tx, idxKey []byte, tipHashes []*chainhash.Hash) error {
	serialized := make([]byte, chainhash.HashSize*len(tipHashes)+4)
	byteOrder.PutUint32(serialized[:], uint32(len(tipHashes)))
	for i, hash := range tipHashes {
		start := (chainhash.HashSize * i) + 4
		copy(serialized[start:], hash[:])
	}

	indexesBucket := dbTx.Metadata().Bucket(indexTipsBucketName)
	return indexesBucket.Put(idxKey, serialized)
}
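To make the byte layout above concrete, here is a small, self-contained Python sketch of the same `<num tips>` + hashes encoding and its inverse. This is illustrative only: the real code is Go, and little-endian order plus a 32-byte hash size are assumptions made here, not taken from the snippet.

```python
import struct

HASH_SIZE = 32  # assumed value of chainhash.HashSize

def serialize_tips(tip_hashes):
    """Mirror the layout written by dbPutIndexerTips: <num tips><hash1><hash2>..."""
    out = bytearray(struct.pack("<I", len(tip_hashes)))  # assuming little-endian byteOrder
    for h in tip_hashes:
        assert len(h) == HASH_SIZE
        out += h
    return bytes(out)

def deserialize_tips(data):
    """Recover the list of tip hashes from the serialized form."""
    (n,) = struct.unpack_from("<I", data, 0)
    return [data[4 + i * HASH_SIZE:4 + (i + 1) * HASH_SIZE] for i in range(n)]

# Round-trip three dummy 32-byte hashes.
tips = [bytes([i]) * HASH_SIZE for i in range(3)]
assert deserialize_tips(serialize_tips(tips)) == tips
```

The 4-byte count up front is what lets a reader slice out exactly `num tips` fixed-width hashes without any delimiter bytes.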
def _ops_from_givens_rotations_circuit_description(
        qubits: Sequence[cirq.Qid],
        circuit_description: Iterable[Iterable[
            Union[str, Tuple[int, int, float, float]]]]) -> cirq.OP_TREE:
    """Yield the operations of a Givens-rotation circuit description."""
    for parallel_ops in circuit_description:
        for op in parallel_ops:
            if op == 'pht':
                # Particle-hole transformation on the last qubit.
                yield cirq.X(qubits[-1])
            else:
                i, j, theta, phi = cast(Tuple[int, int, float, float], op)
                yield Ryxxy(theta).on(qubits[i], qubits[j])
                yield cirq.Z(qubits[j])**(phi / numpy.pi)
def sendServerPlayerStatsEvent(self, player):
    logging.info("sendServerStatsEvent: %s %f %d %s"
                 % (player.name, player.playtime,
                    player.killcount, player.killedcount))
    self.callRemote(ServerPlayerStatsEvent,
                    playername=player.name,
                    playtime=player.playtime,
                    killcount=player.killcount,
                    killedcount=player.killedcount)
from pyethereum import utils
import random


def sha3(x):
    return utils.decode_int(utils.zunpad(utils.sha3(x)))


class SeedObj():
    def __init__(self, seed):
        self.seed = seed
        self.a = 3**160
        self.c = 7**80
        self.n = 2**256 - 4294968273  # secp256k1n, why not

    def rand(self, r):
        self.seed = (self.seed * self.a + self.c) % self.n
        return self.seed % r


def encode_int(x):
    o = ''
    for _ in range(8):
        o = chr(x % 256) + o
        x //= 256
    return o


ops = {
    "plus": lambda x, y: (x + y) % 2**64,
    "times": lambda x, y: (x * y) % 2**64,
    "xor": lambda x, y: x ^ y,
    "and": lambda x, y: x & y,
    "or": lambda x, y: x | y,
    "not": lambda x, y: 2**64 - 1 - x,
    "nxor": lambda x, y: (2**64 - 1 - x) ^ y,
    "rshift": lambda x, y: x >> (y % 64)
}


def gentape(W, H, SEED):
    s = SeedObj(SEED)
    tape = []
    # dict views are not indexable in Python 3, so materialize the key list;
    # insertion order is preserved, keeping tape generation deterministic
    opnames = list(ops.keys())
    for i in range(H):
        op = opnames[s.rand(len(opnames))]
        r = s.rand(100)
        if r < 20 and i > 20:
            x1 = tape[-r]["x1"]
        else:
            x1 = s.rand(W)
        x2 = s.rand(W)
        tape.append({"op": op, "x1": x1, "x2": x2})
    return tape


def runtape(TAPE, SEED, params):
    s = SeedObj(SEED)
    # Fill up tape inputs
    mem = [0] * params["w"]
    for i in range(params["w"]):
        mem[i] = s.rand(2**64)
    # Direction variable (run forwards or backwards)
    direction = 1
    # Run the tape on the inputs
    for i in range(params["h"] // 100):
        for j in (range(100) if direction == 1 else range(99, -1, -1)):
            t = TAPE[i * 100 + j]
            mem[t["x1"]] = ops[t["op"]](mem[t["x1"]], mem[t["x2"]])
            # ~16% of the time (6 residues out of 37), we flip the order of
            # the next 100 ops; this adds in conditional branching
            if 2 < mem[t["x1"]] % 37 < 9:
                direction *= -1
    return sha3(''.join(encode_int(x) for x in mem))


def PoWVerify(header, nonce, params):
    tape = gentape(params["w"], params["h"],
                   sha3(header + encode_int(nonce // params["n"])))
    h = runtape(tape, sha3(header + encode_int(nonce)), params)
    print(h)
    return h < 2**256 // params["diff"]


def PoWRun(header, params):
    # actual randomness, so that miners don't overlap
    nonce = random.randrange(2**50) * params["n"]
    tape = None
    while 1:
        print(nonce)
        if nonce % params["n"] == 0:
            tape = gentape(params["w"], params["h"],
                           sha3(header + encode_int(nonce // params["n"])))
        h = runtape(tape, sha3(header + encode_int(nonce)), params)
        if h < 2**256 // params["diff"]:
            return nonce
        else:
            nonce += 1


params = {
    "w": 100,
    "h": 15000,   # generally, w*log(w) at least
    "diff": 2**24,  # initial
    "n": 1000
}
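The whole scheme hinges on `gentape` being a pure function of the seed, so that a verifier regenerates exactly the tape the miner used. A minimal self-contained sketch of just that property, replaying the same LCG update as `SeedObj` above:

```python
# Same constants as SeedObj: multiplier, increment, and secp256k1-style modulus.
A, C, N = 3**160, 7**80, 2**256 - 4294968273

def lcg_draws(seed, bound, count):
    """Replay SeedObj.rand: seed <- (seed*A + C) mod N, emit seed mod bound."""
    out = []
    for _ in range(count):
        seed = (seed * A + C) % N
        out.append(seed % bound)
    return out

# Same seed -> identical draws, so miner and verifier build identical tapes.
assert lcg_draws(42, 100, 8) == lcg_draws(42, 100, 8)
# Every draw is reduced into the requested range.
assert all(0 <= d < 100 for d in lcg_draws(42, 100, 8))
```

A cryptographically weak LCG is acceptable here only because the seed itself comes from `sha3(header + nonce)`; the LCG just stretches that seed deterministically.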
I own the “Band of Brothers” series on Blu-ray and it is truly outstanding! Here is some insight into the interviews, given to me by the man who conducted them with the men of Easy Company, Mark Cowen.

In 2012, I took a filmmaking class with Mark Cowen, who directed the Emmy-nominated “We Stand Alone Together: The Men of Easy Company”. During the class, he described to us what it was like interviewing the veterans of Easy Company.

In order to get access to these men, he had to go through the “Biggest Brother”, Major Richard “Dick” Winters. Mark said that, even after so many years, Major Winters still commanded the respect of his troops and that they would do what he asked. Major Winters got on the phone and made some calls that went something like this: “This is Winters. I’m sending a man over to interview you. I want you to tell him everything he wants to know”, or words to that effect. Mark said that this was the only way he could have gotten access to these men and gotten them to tell their stories for the interviews.

Mark faced a difficult problem before any of the interviews started: how could he make them open up to his questions and speak freely about these often painful experiences and memories? He couldn’t just go in and say, “Can you tell me what you did during the war?” Knowing that these men wouldn’t want to talk about themselves, he came up with an idea that worked very well. He started each interview by asking, “Who was your best friend during the war? What was he like?” That is how he got these brave men to speak freely and express themselves as openly as they did on camera.

Many of the men Mark interviewed had never told anyone about their combat experiences during the war, not even their families. While relating some of their stories, the brave veterans would sometimes break down and cry. Mark told us he often found himself crying along with them. During one of the interviews, an old veteran slowly came out and sat down.
He started speaking about the war and his time with Easy Company. As the camera rolled and the interview progressed, Mark could hear the veteran’s family come up from behind to watch and listen to their loved one relate stories of bravery, of death, of friendship and of pain, which they had never heard. When he finished the interview, Mark turned to find not only the veteran’s family but also many of their neighbors standing there. Some were weeping quietly while others struggled to hold back sobs. Scenes like this became common during the interviews he did with these brave old warriors.

I often think of what Mark Cowen told us that day about his interviews for “We Stand Alone Together: The Men of Easy Company”. I wanted to get together with him again to hear more about these interviews but sadly, he passed away shortly thereafter, on September 10, 2012.
/**
 * Frees the resources used by this helper class. This method can be called by a
 * {@code tearDown()} method of a unit test class.
 *
 * @throws Exception if an error occurs
 */
public void tearDown() throws Exception {
    if (datasource != null) {
        datasource.getConnection().close();
    }
    hsqlDB.close();
}
/**
 * Returns a {@code ModelManager} with the data from {@code storage}'s tea pet and {@code userPrefs}. <br>
 * The data from the sample tea pet will be used instead if {@code storage}'s tea pet is not found,
 * or an empty tea pet will be used instead if errors occur when reading {@code storage}'s tea pet.
 */
private Model initModelManager(Storage storage, ReadOnlyUserPrefs userPrefs) {
    Optional<ReadOnlyTeaPet> teaPetOptional;
    Optional<ReadOnlyAcademics> academicsOptional;
    Optional<ReadOnlyEvents> eventsOptional;
    Optional<ReadOnlyNotes> notesManagerOptional;
    Optional<ReadOnlyAdmin> adminOptional;
    ReadOnlyTeaPet initialData;
    ReadOnlyAcademics initialAcademics;
    ReadOnlyAdmin initialAdmin;
    ReadOnlyEvents initialEvents;
    ReadOnlyNotes initialNotesManager;

    try {
        teaPetOptional = storage.readTeaPet();
        academicsOptional = storage.readAcademics();
        adminOptional = storage.readAdmin();
        eventsOptional = storage.readEvents();
        notesManagerOptional = storage.readNotesManager();
        if (!teaPetOptional.isPresent()) {
            logger.info("Data file not found. Will be starting with a sample TeaPet");
            new File("data").mkdir();
        }
        if (!academicsOptional.isPresent()) {
            logger.info("Academics file not found. Will be starting with a sample Academics.");
        }
        if (!adminOptional.isPresent()) {
            logger.info("Admin file not found. Will be starting with a sample Admin.");
        }
        if (!eventsOptional.isPresent()) {
            logger.info("Events file not found. Will be starting with a sample Events.");
        }
        if (!notesManagerOptional.isPresent()) {
            logger.info("Notes file not found. Will be starting with a sample NotesManager.");
        }
        initialData = teaPetOptional.orElseGet(SampleDataUtil::getSampleTeaPet);
        initialAcademics = academicsOptional.orElseGet(SampleDataUtil::getSampleAcademics);
        initialAdmin = adminOptional.orElseGet(SampleDataUtil::getSampleAdmin);
        initialEvents = eventsOptional.orElseGet(SampleDataUtil::getSampleEvents);
        initialNotesManager = notesManagerOptional.orElseGet(SampleDataUtil::getSampleNotesManager);
    } catch (DataConversionException e) {
        logger.warning("Data file not in the correct format. Will be starting with an empty TeaPet");
        initialData = new TeaPet();
        initialAcademics = new Academics();
        initialAdmin = new Admin();
        initialEvents = new EventHistory();
        initialNotesManager = new NotesManager();
    } catch (IOException e) {
        logger.warning("Problem while reading from the file. Will be starting with an empty TeaPet");
        initialData = new TeaPet();
        initialAcademics = new Academics();
        initialAdmin = new Admin();
        initialEvents = new EventHistory();
        initialNotesManager = new NotesManager();
    }

    return new ModelManager(initialData, initialAcademics, initialAdmin, initialNotesManager,
            initialEvents, userPrefs);
}
/* Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once

#ifdef PADDLE_EXECUTOR_MULTITHREAD

#include <algorithm>  // for std::find
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>
#include "framework/operator.h"

namespace paddle_mobile {

class depCore {
 public:
  // Build the dependency graph of the operator list: an op depends on the
  // most recent earlier op that wrote one of its inputs.
  template <typename Dtype>
  void analysisDep(
      const std::vector<std::shared_ptr<framework::OperatorBase<Dtype>>>& ops) {
    std::unordered_map<std::string, int> vars;  // variable -> last writer
    size_t nop = ops.size();
    deps.resize(nop);
    next.resize(nop);
    for (size_t i = 0; i < nop; i++) {
      const auto& op = ops[i];
      for (const auto& kv : op->Inputs()) {
        for (const auto& v : kv.second) {
          if (vars.find(v) == vars.end()) {
            continue;
          }
          int di = vars[v];
          // Skip self-dependencies and duplicates.
          if (di == static_cast<int>(i)) {
            continue;
          }
          if (std::find(deps[i].begin(), deps[i].end(), di) != deps[i].end()) {
            continue;
          }
          deps[i].push_back(di);
          next[di].push_back(i);
        }
      }
      for (const auto& kv : op->Outputs()) {
        for (const auto& v : kv.second) {
          vars[v] = i;
        }
      }
    }
  }
  const std::vector<int>& getNext(int i) { return next[i]; }
  const std::vector<int>& getDeps(int i) { return deps[i]; }
  std::vector<std::vector<int>> deps;
  std::vector<std::vector<int>> next;
};

}  // namespace paddle_mobile

#endif
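The last-writer bookkeeping in `analysisDep` can be restated in a few lines of Python to see what the `deps`/`next` tables end up containing. The `(inputs, outputs)` tuple representation and the op names below are invented for illustration only:

```python
def analyse_deps(ops):
    """ops: list of (inputs, outputs) name tuples; mirrors depCore.analysisDep."""
    last_writer = {}                      # variable name -> index of last op that wrote it
    deps = [[] for _ in ops]              # deps[i]: ops that op i must wait for
    nxt = [[] for _ in ops]               # nxt[i]: ops unblocked after op i finishes
    for i, (ins, outs) in enumerate(ops):
        for v in ins:
            if v not in last_writer:
                continue                  # input never written -> external, no edge
            di = last_writer[v]
            if di == i or di in deps[i]:
                continue                  # skip self-deps and duplicates
            deps[i].append(di)
            nxt[di].append(i)
        for v in outs:
            last_writer[v] = i            # this op becomes v's last writer
    return deps, nxt

# op0 writes a; op1 reads a, writes b; op2 reads a and b.
ops = [((), ("a",)), (("a",), ("b",)), (("a", "b"), ())]
deps, nxt = analyse_deps(ops)
assert deps == [[], [0], [0, 1]]
assert nxt == [[1, 2], [2], []]
```

With the tables built this way, a multithreaded executor can launch any op whose `deps` entries have all completed, then use `next` to find candidates to schedule afterwards.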
/**
 * Merge the two candidate clusters Ci and Cj, then update the models and distances.
 *
 * @throws DiarizationException the diarization exception
 * @throws IOException Signals that an I/O exception has occurred.
 */
public void mergeCandidates() throws DiarizationException, IOException {
    updateOrderOfCandidates();
    mergeClusters();
    updateDistanceMatrixSize();
    updateGmms();
    updateClusterAndGMM();
    updateDistances();
}
#include <stdio.h>

int main(void) {
    int n, m, i, a, b;
    scanf("%d %d", &n, &m);

    /* x[n] and y[m] hold the two endpoints; the candidate points follow. */
    int x[n + 1], y[m + 1];
    scanf("%d %d", &x[n], &y[m]);
    a = x[n];
    b = y[m];

    /* a = maximum of x[n] and all n candidates on the X side */
    for (i = 0; i < n; i++) {
        scanf("%d", &x[i]);
        if (x[i] > a)
            a = x[i];
    }

    /* b = minimum of y[m] and all m candidates on the Y side */
    for (i = 0; i < m; i++) {
        scanf("%d", &y[i]);
        if (y[i] < b)
            b = y[i];
    }

    /* War breaks out when the two ranges overlap. */
    printf(a >= b ? "War\n" : "No War\n");
    return 0;
}
The Grammy-winning singer revealed her condition to AARP Friday.

Linda Ronstadt accepts the Trailblazer award at the National Council of La Raza ALMA Awards in Pasadena, Calif. Ronstadt broke barriers for women as one of the top-selling artists of her generation. She has revealed she has Parkinson's disease. (Photo: Matt Sayles, AP)

Story Highlights
- The 11-time Grammy winner was diagnosed eight months ago but may have had symptoms for years
- Ronstadt's memoir, Simple Dreams, comes out Sept. 17
- Her hits include "You're No Good," "Don't Know Much" and "Hurt So Bad"

Parkinson's disease has left Linda Ronstadt unable to sing. The 67-year-old singer, who will publish her memoir, Simple Dreams, next month, revealed her condition Friday in an interview with AARP.

The singer of such '70s and '80s hits as You're No Good, Hurt So Bad and Don't Know Much now uses poles to assist her when walking on uneven ground and travels with a wheelchair. She says she was diagnosed with the neurological disorder eight months ago, though she began experiencing symptoms, including hand tremors and trouble controlling the muscles that let her sing, several years ago. She says she initially attributed her problems to the residual effects of a tick bite and shoulder surgery.

"I think I've had it for seven or eight years already, because of the symptoms that I've had," the 11-time Grammy winner tells interviewer Alanna Nash.

Ronstadt's last album was 2006's Adieu False Heart with Cajun musician Ann Savoy.

"No one can sing with Parkinson's disease," she says. "No matter how hard you try."
def call_async(self, msg, clb=None):
    urlpath = self._make_url(msg.svcname)
    _future = self.__controllerAsync.run_job(msg, urlpath, clb=clb)
    return AsyncHandler(_future)
On the improvement of the metrological properties of manganin sensors

The paper presents recent progress in improving the accuracy of manganin pressure sensors at the Warsaw University of Technology (WUT) and at the Institute of Nuclear Energy. The authors' efforts were concentrated on three problems:

(1) Improvement of the temperature characteristics, through the design of a manganin sensor with temperature compensation achieved by combining two manganin coils with two different temperature characteristics.

(2) Improvement of the calibration process, by (a) using a high-pressure deadweight piston gauge built at WUT (with correction of the effective-area-versus-pressure dependence), (b) measuring the manganin resistance as a function of temperature and pressure with a precise multimeter, and (c) applying statistical methods to long series of measurements.

(3) Modification of the thermal characteristic by implantation of bismuth and krypton ions.
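The two-coil compensation in point (1) can be illustrated with a simple model (a generic sketch, not the paper's actual derivation). Write each coil's relative resistance change as a pressure term plus a temperature term:

```latex
\frac{\Delta R_k}{R_k} = k_k\,p + \alpha_k\,\Delta T, \qquad k = 1, 2,
```

where $k_k$ is the pressure coefficient and $\alpha_k$ the temperature coefficient of coil $k$. For two coils in series, the effective temperature coefficient is the resistance-weighted average,

```latex
\alpha_{\mathrm{eff}} = \frac{R_1\alpha_1 + R_2\alpha_2}{R_1 + R_2},
\qquad
\alpha_{\mathrm{eff}} = 0 \;\Longleftrightarrow\; \frac{R_1}{R_2} = -\frac{\alpha_2}{\alpha_1},
```

which is achievable when the two coils' temperature coefficients have opposite signs, while the effective pressure sensitivity $k_{\mathrm{eff}} = (R_1 k_1 + R_2 k_2)/(R_1 + R_2)$ remains nonzero.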
By Radio Rahim New York - On Saturday night at the Nassau Coliseum, middleweight contender Daniel Jacobs (33-2, 29 KOs) made his return to the ring and won a dominating twelve round unanimous decision over previously undefeated Luis Arias. Jacobs controlled the fight from start to finish and it was hard to give Arias many of the rounds. Now Jacobs is setting his sights on a world title opportunity in the coming months. On December 16th, WBO middleweight champion Billy Joe Saunders (25-0, 12 KOs) will defend his world title against former IBF beltholder David Lemieux (38-3, 33 KOs). Saunders vs. Lemieux takes place at Place Bell in Laval, Canada. HBO will televise the fight. Jacobs is targeting the winner of that contest. In the opinion of Jacobs, both Saunders and Lemieux are not in his class - and they are also not in the class of top middleweight fighters like Canelo Alvarez and Gennady Golovkin. "No [they are not in our class], absolutely not. I think they are maybe B... B-level type of guys. No disrespect to them because they are dangerous at what they're dangerous at. BJ Saunders is a great boxer and Lemieux is a great puncher, but those guys to me are very limited," Jacobs said to BoxingScene.com. "I think I have more attributes, between my power and between my skills, and the variation of different things that I can do inside the ring. I think that I'm a little notch above those guys." Jacobs will travel to Canada next month to sit ringside at the event. He plans to challenge the winner in the aftermath. "I'm demanding every fight, but obviously that's the one that makes the most sense. They are fighting right now. And when we go up there to Montreal, we are going to call out who the winner is," Jacobs said. "When I called out David Lemieux and BJ Saunders [in the past], they said 'not right now, we have future plans.' Ok, go handle your business but know that I'm next. And if you avoid me now, then I know you're just scared."
// A binary search based function that returns index of a peak element
int findPeakUtil(int arr[], int low, int high, int n)
{
    // Find index of middle element
    int mid = low + (high - low) / 2;  /* (low + high)/2 */

    // Compare middle element with its neighbours (if neighbours exist)
    if ((mid == 0 || arr[mid - 1] <= arr[mid]) &&
        (mid == n - 1 || arr[mid + 1] <= arr[mid]))
        return mid;

    // If middle element is not peak and its left neighbor is greater
    // than it, then left half must have a peak element
    else if (mid > 0 && arr[mid - 1] > arr[mid])
        return findPeakUtil(arr, low, (mid - 1), n);

    // If middle element is not peak and its right neighbor is greater
    // than it, then right half must have a peak element
    else
        return findPeakUtil(arr, (mid + 1), high, n);
}

// A wrapper over recursive function findPeakUtil()
int findPeak(int arr[], int n)
{
    return findPeakUtil(arr, 0, n - 1, n);
}
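As a quick cross-check of the recursion above, here is the same divide-and-conquer restated iteratively in Python (an illustrative sketch, not part of the original source):

```python
def find_peak(arr):
    """Return an index i where arr[i] >= each existing neighbour, in O(log n)."""
    lo, hi, n = 0, len(arr) - 1, len(arr)
    while True:
        mid = lo + (hi - lo) // 2
        left_ok = (mid == 0 or arr[mid - 1] <= arr[mid])
        right_ok = (mid == n - 1 or arr[mid + 1] <= arr[mid])
        if left_ok and right_ok:
            return mid              # mid is a peak
        if mid > 0 and arr[mid - 1] > arr[mid]:
            hi = mid - 1            # a peak must exist in the left half
        else:
            lo = mid + 1            # a peak must exist in the right half

assert find_peak([1, 3, 20, 4, 1, 0]) == 2   # 20 is a peak
```

The halving works because walking uphill from `mid` toward a greater neighbour must eventually stop at an element at least as large as both of its neighbours, so the chosen half always contains a peak.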
/**
 * @param index the index to check
 * @return true if the index corresponds to the custom order sort
 */
public static boolean isCustomOrder(int index) {
    return fromInt(index) == CUSTOM_ORDER;
}
OBJECTIVE: To test the general hypothesis of executive deficits in language production in schizophrenia, as well as the more specific hypothesis that this deficit is more pronounced when the demand on executive functions is higher.

MATERIAL AND METHODS: Twenty-five patients with schizophrenia and twenty-seven healthy controls were asked to tell a story based on a series of pictures and then to give an oral composition on a given topic.

RESULTS AND CONCLUSION: Schizophrenia patients, compared to controls, demonstrated poorer programming as well as shorter text and phrase length in both tasks. Oral composition on the given topic in patients was characterized by the presence of agrammatism, a need for leading questions due to difficulties in generating a story plot, and higher variance in syntactic complexity and text length. The authors thus revealed an executive deficit in language production in schizophrenia patients, more pronounced in the task with fewer external cues for planning and sequential text explication.
def work_dir_file(relfile, release_dir, working_dir):
    path = os.path.relpath(relfile, release_dir)
    dest_file = os.path.join(working_dir, path)
    return dest_file
The Labour party may have avoided a divisive vote on Trident this week, but that doesn’t mean it can always avoid working out whether it should have a new position. Last night Maria Eagle, the Shadow Defence Secretary, told a fringe meeting that though she had made up her mind in 2007 that she was in favour of renewing the nuclear deterrent, she wouldn’t have resigned had there been a vote calling for Trident to be scrapped at this conference.

She said she’d reminded Corbyn when he offered her the job that she was pro-Trident: ‘I thought I need to make sure he remembers what my position is on this, because I don’t want him to appoint me without remembering, and he did remember, he knew what my position was and he still offered me the job and I accepted the job on the basis that we are going to have a debate, and I think this exemplifies this new politics’.

She argued that the party’s policymaking processes meant that ‘resolutions that are passed do not automatically become party policy, they are passed on as having been carried by conference to the National Policy Forum’. The shadow minister added: ‘So even if a contemporary resolution saying we’re against Trident had been passed by the conference this week, by the way I don’t think it would have been, but even if it had been, that would not have changed party policy: party policy, call me old-fashioned, is what has been decided to be thus far and that is that we are in favour of having a nuclear deterrent, a continuous at-sea deterrent…’

The party has ‘our ways of coming to a collective decision about this’, she added. She made the same point when asked about the potential for British involvement in air strikes against the so-called Islamic State in Syria: Jeremy takes one view, but that doesn’t necessarily mean it is the party’s view. Though on the basis of what Jeremy Corbyn and his frontbenchers have said this week, the new leader has his ways of holding strong beliefs while not trying to press them on his party.
/**
 * Removes the enclosing brackets from the specified text.
 * The bracket pairs which can be removed are specified by the second argument.
 * If the specified text is not enclosed by brackets, this method returns it unchanged.
 * The result text is trimmed automatically.
 *
 * @param text the target text
 * @param pairs the array of the bracket pairs which can be removed
 * @return the result text
 * @see de.slopjong.erwiz.plain.BracketPair
 */
static String removeEnclosingBrackets(String text, BracketPair... pairs) {
    for (BracketPair pair : pairs) {
        if (isEnclosedByAnyBrackets(text, pair)) {
            return text.substring(1, text.length() - 1).trim();
        }
    }
    return text;
}