With the trade deadline passed and just more than one month left in the regular season, we’re officially into the Bubble Zone. That’s the section of the schedule where we obsessively chart the ups and downs of the various teams fighting for the final playoff spots.
How exactly do you define the bubble? In today’s NHL, with its loser points and fake parity, teams don’t actually move around all that much, and breathless declarations about how fascinating the playoff race will be are often an exercise in optimism. It’s basically a little fake excitement before the dust settles and we realize the same teams have been holding down spots since November.
But that kind of realism is no fun, so let’s pick an arbitrary number instead. How does 10 points sound? That’s a nice round number, so let’s go with that. Any team within 10 points of a playoff spot as of today is officially in bubble territory.
Based on that, we can go ahead and declare that any team more than 10 points clear of ninth place in its conference is a mortal lock, which means the following teams are in: Nashville, Montreal, Anaheim, St. Louis, Tampa Bay, both New York teams, Detroit, Pittsburgh, and Chicago. And we can pour one out for the following teams, which are at least 10 points back of the final spot and therefore have no hope: Buffalo, Arizona, Edmonton, Carolina, Toronto, and Columbus.
That leaves us with 14 teams fighting for six spots, which is pretty similar to what we’ve had in past years. Here’s how those teams would appear to shake out.
Group 1: Should Feel Pretty Safe
Washington Capitals
Current status: IN (35-20-10, 80 points, 11 points up on ninth)
Remaining schedule: They start a five-game homestand tonight, and nothing the rest of the way really stands out as especially difficult, with the possible exception of April, by which point they should have things wrapped up. They don’t have any games left against the ninth-place Panthers, but they do play the eighth-place Bruins twice.
The optimist’s view: The Caps have been rolling since December, and with a nine-point lead, it would take an epic collapse to cost them a spot. Those types of collapses do happen, but there haven’t been any signs that Washington is vulnerable.
The pessimist’s view: Braden Holtby has pretty much been a one-man show in goal, so if he ever got hurt, they’d be in trouble. That’s about all I can come up with.
Worth noting: While they’re sitting in a wild-card spot, they’re actually far closer to winning the Metro than they are to missing the playoffs.
Winnipeg Jets
Current status: IN (32-21-12, 76 points, four points up on ninth, although that’s a bit misleading)
Remaining schedule: It’s rough. They begin a four-game road trip this weekend that includes stops in Nashville, St. Louis, and Tampa, and they close the month with home games against Montreal, Chicago, and the New York Rangers.
The optimist’s view: Few thought the Jets would be a playoff team, and we’ve all spent the season waiting for them to drop out of the race, but they just keep banking points. Those points should come in handy now; with a four-point cushion and multiple teams between them and the last spot, they could survive a mini cold streak or two. As long as Michael Hutchinson keeps playing well (and getting starts), they should avoid the kind of losing streak that would drop them out of the race.
The pessimist’s view: The Wild are on fire, and if they pass Winnipeg, the Jets suddenly start to look vulnerable. Everyone still expects the Kings to find a way in, which would leave the Jets needing to fend off the Flames and Sharks, two teams that have been just about impossible to predict all season. And with Dustin Byfuglien getting hurt last night — at this point, we don’t know how badly — this whole thing doesn’t look quite so comfortable any more.
Worth noting: The Kings and Sharks are right behind them in the wild-card standings, but the lead is a little safer than it looks; both California teams would actually pass the Flames (and maybe Canucks) in the Pacific standings first.
Group 2: Would Need a Miracle
New Jersey Devils
Current status: OUT (27-27-10, 64 points, seven points back of eighth)
Remaining schedule: Five of their next seven come against non-playoff teams, including games with the Sabres and Coyotes. They’ll need to bank those points, because after that it gets tougher.
The optimist’s view: Sometimes you just need to get hot at the right time, and they’ve been reasonably hot lately, including Tuesday’s win over the Predators.
The pessimist’s view: They’re seven back, they don’t have games in hand, they don’t play the team they’re chasing (Boston), and they have to pass two other teams. That’s … not good.
Worth noting: This would be the third consecutive year the Devils miss the playoffs. Since their first playoff appearance in 1988, they’d missed the playoffs only three other times total.
Philadelphia Flyers
Current status: OUT (27-25-12, 66 points, five points back of eighth)
Remaining schedule: They face some tough teams, but with enough weaker ones mixed in that it all evens out. If they can somehow hang in until April, they finish with four straight at home.
The optimist’s view: They’re just five points back, and they only have to catch two teams, so on paper it’s not impossible.
The pessimist’s view: They’re done, and they should be focusing on the future rather than on a desperate playoff chase. Wait, sorry — that’s the general manager’s view, at least based on this week’s trade deadline. When your own front office isn’t even pretending you’re really in the race, you’re not in the race.
Worth noting: The Bruins also hold two games in hand.
Dallas Stars
Current status: OUT (28-26-10, 66 points, nine points back of eighth)
Remaining schedule: March is tough, but April looks brutal, with the Sharks, Blues, Ducks, and a pair against the Predators.
The optimist’s view: There’s still lots of time to make up ground. (In this case, we’re assuming our optimist thinks the season is 120 games long.)
The pessimist’s view: This was a fun team and they made a run, but shaky goaltending pushed the Stars out of the race, and the injury to Tyler Seguin basically killed any comeback hopes. They’re done.
Worth noting: This could be the sixth year in the last seven the Stars finish .500 or better in terms of points earned, yet still miss the playoffs.
Colorado Avalanche
Current status: OUT (28-25-11, 67 points, eight points back of eighth)
Remaining schedule: March looks relatively tough, although they’ll face six tough games in April if they’re still in the hunt (they won’t be).
The optimist’s view: There’s still a ton of talent here, and they shocked everyone with a fantastic season last year, so maybe they can do it again (they can’t).
The pessimist’s view: Even during last year’s run, there was plenty of reason to suspect the Avs weren’t really an especially good team. They’ve largely confirmed that this year despite a handful of strong stretches. If and when the Avalanche are officially eliminated, here’s hoping the stats-based naysayers can resist the urge to take a victory lap (they won’t resist).
Worth noting: Look, I’m just going to throw it against the wall and see if it sticks: Patrick Roy, Team Canada coach for the world championships.
Group 3: The True Bubble Teams
Vancouver Canucks
Current status: IN (36-24-3, 75 points, three points up on ninth)
Remaining schedule: There are lots of theoretically easy wins, including three games against the Coyotes plus visits by the Leafs and Oilers. They also get the Kings three times, so they should control their destiny.
The optimist’s view: Despite an unimpressive record, they’re sitting in second place in the Pacific, meaning at least two division rivals would need to pass them. With the Flames reeling from the Mark Giordano injury and the Sharks spiraling off toward who knows what, that seems unlikely, so they’d be safe even if the Kings get hot again.
The pessimist’s view: Their starting goalie is hurt, the backup has been just OK, and their three-point lead looks nice but far from unshakable. They’re in good shape, sure, but not home free yet.
Worth noting: With 32 ROW, they’re in great shape for a tiebreaker.
Ottawa Senators
Current status: OUT (28-23-11, 67 points, four points back of eighth)
Remaining schedule: Not bad. They get a chance to make up ground by hosting Boston twice in March, and they have a combined five games against the East’s weakling trio of the Sabres, Hurricanes, and Leafs.
The optimist’s view: After having been counted out just two weeks ago, the Senators caught fire and won five straight before Tuesday’s overtime loss to the Wild. Much of that streak was fueled by rookie call-up Andrew Hammond, who’s had one of the NHL’s best seven-game starts to a career. Every few years, it seems like some goalie comes out of nowhere to take the league by storm. Maybe Hammond is the guy this year.
The pessimist’s view: Hammond’s a 27-year-old career minor leaguer, so the odds the Senators have uncovered the next Ken Dryden seem slim. More importantly, their weak first half probably left them with just too much ground to make up. They’ve definitely got that pesky vibe going, though …
Worth noting: The Senators were the only team not to make a trade leading up to the deadline. They haven’t made a deal since sending Jason Spezza to the Stars last July.
Minnesota Wild
Current status: IN (34-22-7, 75 points, three points up on ninth)
Remaining schedule: There are two notably tough stretches coming up. The first, starting next week, sees them face the Blues twice along with the Ducks, Caps, and Predators over a five-game stretch. And their last seven games are downright brutal, including a season-ending three-game road trip through Chicago, Nashville, and St. Louis.
The optimist’s view: Their overall record may not be impressive, but the Wild have basically been a different team since acquiring Devan Dubnyk in mid-January. At the time, they were struggling badly and looked like a sure thing to miss the playoffs. Since, they’ve gone 16-3-2, the best record in the league over that stretch. Anything even close to that pace over the last month gets them in easily.
The pessimist’s view: Dubnyk has started 21 straight games, and at some point you have to wonder if he starts to wear down. And of course, if he ever got hurt, they’d be right back to the goaltending mess that torpedoed the first half of their season.
Worth noting: The same caveats about the Kings and Sharks catching up to the Pacific teams first also apply to the Wild.
Calgary Flames
Current status: IN (34-25-4, 72 points, tied for third in the Pacific but hold the tiebreaker)
Remaining schedule: Mixed. There’s a five-game homestand this month that will probably feature only one playoff team, so that’s an opportunity to bank points.
The optimist’s view: Everyone’s already written the Flames off based on the news that Giordano will miss the rest of the season. Which is fine, since everyone wrote them off before the season started and kept on writing them off all year long. If you love an old-school “nobody believes in us” story, this is your team. Guts! Character! Determination! Also, any hockey fan with a heart will be cheering them on.
The pessimist’s view: All the guts in the world can’t make up for the loss of a Norris favorite, especially for a team that was never all that strong on paper. Giordano’s injury was probably the most devastating of the season for any team, and even a group as impressively resilient as this year’s Flames will have a hard time staying in the mix.
Worth noting: If they can stay in the race until the end, they may end up controlling their own destiny; they finish the season with games against the Kings and Jets.
Boston Bruins
Current status: IN (31-22-9, 71 points, two points up on ninth)
Remaining schedule: Middle of the road. Most notably, it still features three matchups with Florida, the team that’s chasing them for the last spot. They also hold two games in hand over the Panthers and Flyers.
The optimist’s view: They’re the Bruins, right? Even if injuries and depth issues mean they’re not the dominant team they’ve been in years past, they’re still a playoff team.
The pessimist’s view: We’ve been waiting for the Bruins to snap out of it all season, and it just hasn’t happened. The trade deadline was shaping up to be a turning point, with aggressive moves to bring in reinforcements expected. But they mostly came up empty, while the team chasing them was adding a Hall of Famer, and the one guy they did add, Brett Connolly, is hurt now.
Worth noting: CEO Charlie Jacobs has already said he’d consider it “absolutely unacceptable” if the team missed the playoffs.
Florida Panthers
Current status: OUT (28-23-13, 69 points, two points back of eighth)
Remaining schedule: It’s not all that daunting. There are tough teams, but they’re broken up by some weaker opponents like the Leafs and Hurricanes, and the Panthers finish with a five-game homestand.
The optimist’s view: With three games left against the Bruins, the Panthers control their destiny. Adding Jaromir Jagr was a nice show of confidence by management that could give the team a boost, and as long as they’ve got Roberto Luongo back there, they can beat any team on any given night.
The pessimist’s view: I wrote that line about Luongo on Tuesday; sorry, Panthers fans. We don’t know yet how serious Luongo’s injury is, but if he’s out for any length of time, they’re in big trouble. With backup Al Montoya also hurt, the team may be faced with either relying on Dan Ellis or airlifting in a free agent.
Worth noting: The Panthers have stayed in the race largely on the strength of a league-high 13 loser points.
San Jose Sharks
Current status: OUT (32-25-8, 72 points, tied for third in the Pacific but lose the tiebreaker)
Remaining schedule: On the tough side. There’s a tough four-game homestand coming up that includes the Penguins, Hawks, and Predators, and that’s followed by a seven-game road trip. But if they’re still in it by April, they get the Coyotes twice and the Oilers once in their last five.
The optimist’s view: The Sharks are a good team. It’s been easy to lose sight of that in all the weirdness that followed last year’s playoff collapse, but there’s still a ton of talent here. They just need to beat out the Kings and Flames for third in the Pacific, and if they can’t do that, there’s still a shot at a wild-card berth.
The pessimist’s view: It’s hard to shake the feeling that this is a team just waiting for a wrecking ball. The goalie is on his way out. The coach probably is, too, barring a deep playoff run. They tried and failed to move their franchise player last offseason. They barely did anything at the deadline. And as they battle with Los Angeles for what could be the final spot, there’s a nagging feeling out there that the Sharks just can’t beat the Kings when it counts.
Worth noting: Their last game is against — who else? — the Kings.
Los Angeles Kings
Current status: OUT (30-21-12, 72 points, tied for third in the Pacific but lose the tiebreaker)
Remaining schedule: Not especially intimidating, although they play eight of their last 11 on the road, where they’ve struggled, including a tough five-game trip that features the Islanders, Rangers, and Blackhawks. The final weeks feature games with the Canucks, Flames, and Sharks, teams they could be battling with for a final spot.
The optimist’s view: This is just what the Kings do. First they struggle to make the playoffs. Then they win the Stanley Cup. We saw it in 2012, we saw it last year, and we’re probably seeing it all play out again. They’ll flip the switch. They always do. And in fact, they probably already have, with a recent eight-game win streak demonstrating just how good this team can be. Oh, and they bulked up at the trade deadline, adding Andrej Sekera to help a sagging blue line.
The pessimist’s view: That eight-game win streak didn’t really create any sort of cushion, so there’s not much margin of error to work with. And after last year’s Cup run, you have to wonder if fatigue is an issue down the stretch.
Worth noting: In a coaching career that stretches back to 1992, Darryl Sutter has never coached a full season and missed the playoffs.
Morphogenesis and Cell Fate Determination within the Adaxial Cell Equivalence Group of the Zebrafish Myotome
One of the central questions of developmental biology is how cells of equivalent potential—an equivalence group—come to adopt specific cellular fates. In this study we have used a combination of live imaging, single cell lineage analyses, and perturbation of specific signaling pathways to dissect the specification of the adaxial cells of the zebrafish embryo. We show that the adaxial cells are myogenic precursors that form a cell fate equivalence group of approximately 20 cells that consequently give rise to two distinct sub-types of muscle fibers: the superficial slow muscle fibers (SSFs) and muscle pioneer cells (MPs), distinguished by specific gene expression and cell behaviors. Using a combination of live imaging, retrospective and indicative fate mapping, and genetic studies, we show that MP and SSF precursors segregate at the beginning of segmentation and that they arise from distinct regions along the anterior-posterior (AP) and dorsal-ventral (DV) axes of the adaxial cell compartment. FGF signaling restricts MP cell fate in the anterior-most adaxial cells in each somite, while BMP signaling restricts this fate to the middle of the DV axis. Thus our results reveal that the synergistic actions of HH, FGF, and BMP signaling independently create a three-dimensional (3D) signaling milieu that coordinates cell fate within the adaxial cell equivalence group.
Introduction
The mechanisms that are utilised to generate individual cell types from a set of equivalently fated precursors remain a central experimental focus of developmental biology. Studies from invertebrate systems have defined the concept of an equivalence group, where small clusters of lineage-related cells are determined by a combination of inductive and intrinsic signals to adopt individual fates. This concept faces many difficulties when applied to complex three-dimensional tissues such as those that typify vertebrate development, where the direct lineage relationships of many cells remain ill defined and the complicated morphogenesis of many tissues precludes definition of models of equivalence.
Zebrafish provides perhaps one of the most tractable contexts in which to examine concepts of cell fate determination in a vertebrate embryo, as a variety of lineage tracing techniques can be deployed in different genetic contexts in real time within an optically accessible embryo. One lineage that has been examined in some detail is the embryonic myotome of zebrafish. As in all vertebrates, the majority of skeletal muscle in zebrafish forms from precursor cells present in the somites, which arise by segmentation of the paraxial mesoderm in a rostral to caudal progression on either side of the neural tube and notochord along the main body axis of the embryo. This process, referred to as myogenesis, gives rise to distinct slow and fast twitch muscle populations that differ in contraction speeds, metabolic activities and motoneuron innervation. In zebrafish, the locations and origins of these two different cell populations are topographically separable. The early differentiating slow-muscle cells arise from a particular subset of presomitic mesodermal cells, termed the adaxial cells, which at the end of gastrulation align medially against the notochord. These precursors initially adopt a pseudo-epithelial morphology but, shortly after their incorporation within the formed somite, undergo stereotypic morphogenetic cell shape changes, moving from their columnar shape to flatten and interleave, adopting a triangular shape that upon further differentiation results in single adaxial cells extending from one somite boundary to the other. These cells collectively flatten medio-laterally to form a set of elongated myocytes that span the somite, positioned against the notochord.
Ultimately, adaxial cells give rise to two distinct sub-types of slow muscle fibers: the superficial slow-twitch muscle fibers (SSFs) and the muscle pioneer cells (MPs). SSFs and MPs possess distinct morphological, molecular and functional properties. After undergoing the initial morphogenetic cell shape changes described above, SSFs migrate from their notochord-associated midline position to traverse the entire extent of the forming myotome and come to lie at its most lateral surface. There, the SSF precursors complete their differentiation to form a monolayer of approximately 20 slow twitch muscle fibers. By contrast, MPs (2 to 6 per somite) do not migrate from the midline and are the first cells of the zebrafish myotome to differentiate, forming slow twitch muscle fibers immediately adjacent to the notochord. All slow fibers are mononucleated cells that express slow isoforms of myosin heavy chain (SMyHC) as well as the homeodomain protein Prox1. MPs, in addition, express high levels of homeodomain-containing Engrailed proteins. By contrast to slow precursors, differentiating fast precursors originate from the lateral somite and fuse to form multinucleated fibers subsequent to SSF migration, and are distinguished by their expression of fast MyHC. A subset of these fibers, known as medial fast fibers (MFFs), also expresses Engrailed, at lower levels than MPs. The timing of the fate determination of these distinct cell types has been examined by rigorous in vivo transplantation assays. By interchanging slow and fast muscle precursors at specific points in their development, it has been demonstrated that at the time of gastrulation, although slow and fast muscle precursors are already spatially segregated, they remain uncommitted to their individual fates until they have entered into the segmental plate. Furthermore, the subdivision of the adaxial compartment into MP and non-MP cell fates occurs at a similar period of development, with MPs becoming irreversibly fated within the posterior part of the segmental plate during early somite formation.
In vertebrates, the specification and differentiation of the somite into specific cell types is under the influence of inductive signals from the somite itself or from the surrounding tissues. In the case of zebrafish myogenesis, by far the best understood inductive signals controlling myogenesis are the Hedgehog (HH) family of secreted glycoproteins, which emanate from the embryonic midline. Numerous studies over the last two decades have demonstrated that HH is necessary and sufficient for induction of the slow twitch muscle fate. Indeed, analysis of loss-of-function mutants in HH pathway genes and the use of the HH pathway inhibitor cyclopamine have demonstrated that the timing and the level of HH signaling are critical for the formation of different muscle identities, including the MP cells, which require the highest level of HH signaling for their formation. However, even though HH over-expression can induce supernumerary MP cells, it is not sufficient to convert the dorsal and ventral extremes of the myotome into MP cells, suggesting that other signals either induce the MP fate in the midline region or repress MP differentiation in the dorsal and ventral muscle cells.
A further complication of these analyses is that they fail to explain how the symmetry of the adaxial cell compartment is initially broken to generate the dichotomy of MP and SSF fates within equivalent sets of cells. As the adaxial cells flank the notochord and floorplate, the sources of HH peptide secretion, all adaxial cells would initially be exposed to the same level of secreted HH peptides. Hence, it is unclear how different levels of HH could act to generate the MP cell fate within a subset of adaxial cells, and this suggests that additional signals must influence adaxial cell fate. Recent studies have shed some light on the nature of other secreted signals that may act to influence muscle cell formation. Several studies have shown that manipulation of BMP signaling can alter MP number. Furthermore, Smad5, a downstream effector of BMP signaling, has been shown to be activated in the dorsal and ventral adaxial cells and to be absent within the central region of the somite. In addition, Smad binding sites have been shown to regulate activity of the eng2a promoter, the eng gene expressed earliest within MP precursors. Collectively, these studies suggest that BMP signaling can influence the number of different cell types within the embryonic zebrafish myotome, but exactly how this is achieved has yet to be determined mechanistically.
In this study, we utilize a combination of live imaging, retrospective and indicative fate mapping, and molecular and genetic studies to demonstrate that MP and SSF precursors arise from distinct regions along the anterior-posterior (AP) and dorsal-ventral (DV) axes of the adaxial cell compartment. Uniquely, this regionalization is controlled by the action of different signal transduction pathways that act specifically to direct specification in distinct axial dimensions. We demonstrate that sprouty4-mediated inhibition of FGF signaling induces MP cell fate in the anterior-most adaxial cells in each somite and that radar-mediated BMP signaling restricts this fate to the middle of the DV axis. Our results indicate that HH, FGF and BMP signaling synergize to determine cell fate within the adaxial cell equivalence group.
Results
Superficial slow twitch muscle and muscle pioneer precursors arise from distinct locations within the adaxial compartment
In order to understand the origins of SSF and MP precursors from within the adaxial cell compartment (Figure 1A-1B), we examined adaxial cell behaviors during the first phase of their differentiation via continuous 4D time-lapse analysis and retrospective fate map analysis of the entire forming myotome. The position and shape of the adaxial cells were followed using a membrane-bound GFP and a nuclear-localized mCherry, whose expression in all cells was achieved by mRNA injection at the 1-cell stage. This analysis identified that the first adaxial cells to initiate differentiation and elongation arise adjacent to the anterior border of each somite at its DV mid-point (Figure 1C-1M and Video S1). These cells are most likely MPs, which have previously been shown to differentiate precociously. To confirm this, we analyzed the expression of the MP marker gene engrailed2a (eng2a) during early somitogenesis by in situ hybridization.
Author Summary
How specific genes and signals act on initially identical cells to generate the different tissues of the body remains one of the central questions of developmental genetics. Zebrafish are a useful model system to tackle this question, as the optically clear embryo allows direct imaging of forming tissues and tracking of individual cells in a myriad of different genetic contexts. The zebrafish myotome, the compartment of the embryo that gives rise to skeletal muscle, is subdivided into a number of specific cell types, one of which, the adaxial cells, gives rise exclusively to muscle of the "slow twitch" class. The adaxial cells give rise to two types of slow muscle cell types, muscle pioneer cells and non-muscle pioneer slow cells, distinguished by gene expression and different cellular behaviours. In this study we use lineage tracing, live imaging and the manipulation of distinct genetic pathways to demonstrate that the adaxial cells form a cell fate "equivalence group" that is specified by separate signaling pathways operating in distinct dimensions.
At the 10-somite stage, eng2a transcripts were detected within newly formed somites exclusively within a subset of adaxial cells, adjacent to the anterior somitic border and located precisely at the mid-point of the DV axis of the somite (Figure 1N). To more precisely localize eng2a expression within the somite, we undertook dual in situ hybridization with myod, which marks the adaxial cells and the posterior aspect of the lateral somite, which contains the differentiating fast muscle progenitors (Figure 1O). This analysis confirmed that the expression of eng2a initiates specifically in the anterior-most cells of the newly formed somites. The positioning of the cells initiating eng2a expression at the dorsal-ventral midline of the forming myotome was confirmed in transverse sections of similarly staged embryos individually stained for eng2a and slow myosin heavy chain 1 (smyhc1) gene expression (Figure 1P, 1Q).
Collectively, these results suggest that SSF and MP precursors arise from distinct positions within the adaxial equivalence group. To test this hypothesis, we fate mapped the entire adaxial compartment by systematic iontophoretic injection of tetramethylrhodamine dextran (TMRD) lineage tracer dye into individual adaxial cells located at various AP and DV positions. Adaxial cells were labeled within the three most newly formed somites at the 10-15-somite stage, and the fates of individually labeled cells were analyzed after the muscle fibers had terminally differentiated at 30 hpf. Individual injected embryos were sequentially incubated and imaged, first with an anti-Eng antibody and second with an anti-SMyHC antibody, to unambiguously determine the fate of marked cells. This analysis confirmed that MP cells arise from the anterior-most adaxial cells at the dorso-ventral midline of the somite (n = 8/8, Figure 2A, 2C, 2B, 2H), while posterior adaxial cells at this DV level make SSFs (n = 32/32, Figure 2A, 2D, 2E). Furthermore, we found that, based on the initial position of a SSF precursor within the adaxial cell pool, we could predict its final location within the post-migratory slow muscle palisade, such that the dorsal- and ventral-most adaxial cells generate the dorsal- and ventral-most post-migratory differentiated slow fibers respectively (n = 83, Figure 2B, 2F-2G, 2I-2J). This analysis not only demonstrates that MP and SSF precursors segregate at the beginning of somitogenesis but also determines the exact position of the precursors of every slow fiber. To further validate the fate of the adaxial cells located in the anterior somite at the DV mid-point, we examined their behaviour during the migration period. We thus performed a time-lapse analysis over a 20-hour period on embryos that had been injected with a DNA construct containing the GFP gene under the control of the slow-twitch muscle-specific slow myosin heavy chain 1 (smyhc1) promoter. When located in the anterior margin of the somite, the transgenically labeled adaxial cell elongates in an anterior to posterior movement but remains adjacent to the notochord, identifying the labeled cell as an MP (Video S2).
FGF signaling controls the AP positioning of muscle pioneer precursors
We next turned our attention to the molecular basis of the adaxial cell fate specification events that we had defined by our fate mapping strategies. A candidate approach, examining AP-restricted inductive signals within the myotome, highlighted the FGF pathway as a putative regulator of AP patterning in the adaxial progenitors. Indeed, in zebrafish, at least two of the genes encoding fgf ligands, fgf8 and fgf17b, have been shown to be restricted in expression to the anterior somite. However, an analysis of the expression of the downstream targets of the FGF cascade, erm and pea3, surprisingly revealed that asymmetric FGF responses occur specifically within the adaxial cells, such that the anterior-most cells lose expression of FGF target genes during somite formation (Figure 3A, 3B-3B′ and data not shown). The temporal and spatial regulation of FGF signal activation during zebrafish myogenesis suggests a simple hypothesis: distinct levels of FGF activation along the AP axis of the somite inform the adaxial cells of their position within this axis and consequently control their fate. In order to test this hypothesis we disrupted FGF signaling by the addition of the pharmacological inhibitor SU5402, a drug that blocks the phosphorylation of FGF receptors (FGFRs) and so prevents downstream signaling, as revealed by the downregulation of the target genes erm, pea3 and spry4 in SU5402-treated embryos (Figure 4A-4C). SU5402 treatment at the 6-somite stage did not affect the number of slow muscle fibers (Table S1) but instead increased the number of MPs at the expense of SSFs, as revealed by a failure of slow-muscle fiber migration to the surface of the myotome and a corresponding increase in Engrailed-positive MP cells evident at the midline (Figure 4D, 4E). Furthermore, FGF inhibition does not alter the number of En-positive medial fast fibers (Figure S1). The increase in MP number is foreshadowed by an expansion of the eng2a expression domain throughout the AP dimension of the adaxial cell compartment at the 10-somite stage (Figure 4F). Furthermore, heat shock-induced expression of a dominant-negative form of FGFR1 that blocks the FGF/ERK signaling cascade also causes a similar increase of eng2a expression at the expense of SSF migration at 1 dpf (Figure 4G-4J). Collectively, these results show that FGF inhibition promotes the specification of the MP fate. Importantly, delayed addition of SU5402 until the 10-somite stage revealed that the more rostral 5-6 somites, which had already formed at the time of treatment, remained unaffected, revealing a discrete temporal window of action for FGF signaling in MP specification within the newly formed somite (Figure 4K). This correlates specifically with the period of development when cuboidal cells are arrayed along the AP axis, prior to their differentiation (Figure 1A-1B). These data show that inhibition of FGF signaling specifies anterior identity and consequently MP fate within the adaxial cell equivalence group.
FGF signaling patterns the adaxial cells independently from HH signaling
As described above, the adaxial cells, and thus the slow twitch muscle lineage, are highly dependent on Hedgehog (HH) signaling, with the MP fate requiring higher levels of and longer exposure to HH for proper specification than SSFs. To test a possible cross talk between FGF and HH signaling, we analyzed the expression of the HH target gene patched1 (ptc1). However, ptc1 expression remains unaffected by SU5402 treatment (Figure S2A). FGF signaling was also recently shown to control the length of motile cilia within Kupffer's vesicle. Although non-motile cilia are a distinct class of cell organelle, one possible mechanism for FGF action could be to regulate HH signal reception through the length or number of primary cilia on adaxial cells, as reception and activation of the HH pathway is controlled within the primary cilia in vertebrate cells. However, our analysis suggests that SU5402 treatment does not affect the length or the number of primary cilia within the adaxial cells (Figure S2B). Therefore, the effect of FGF signaling on MP specification cannot be explained by modulation of HH transduction within adaxial cells.
Sprouty4 controls muscle pioneer fate specification through FGF signal inhibition
To understand how the precise spatial activity of FGF is regulated to control the dichotomy of the cell fate decision evident within the adaxial cells, we systematically examined known inhibitors of the FGF pathway for their expression within the adaxial cells. This analysis revealed that sprouty4 (spry4), which encodes a known intracellular inhibitor of receptor tyrosine kinases (RTKs), including the Fgfrs, becomes specifically activated in the anterior adaxial cells. Furthermore, the loss of expression of the FGF target gene erm in the anterior adaxial cells correlates spatially and temporally with the induction of expression of the spry4 gene in the identical cells (Figure 3C, 3D-3D′). To test whether spry4 expression influences MP and SSF fate specification, we ectopically expressed it within the adaxial cell compartment. Mosaic overexpression of spry4 from the promoter of the smyhc1 gene (smyhc1:spry4-IRES-GFP), which drives expression throughout the adaxial cell compartment, doubled the number of MP cells (47.29% of transgenic fibres, n fibres = 143) within the embryo compared to control embryos expressing GFP alone (24.9%, n fibres = 521) (Figure 4L-4N). Furthermore, over-expression of spry4 induces a third population of transgenic fibers that possess attributes of both MPs and SSFs. These rare fibers (4.19%, n fibres = 143) are able to migrate to the surface of the myotome and express Engrailed, a unique behavior never observed in control embryos (untreated or smyhc:GFP injected) (Figure 4L, 4O, 4O′ and 4O″). Reciprocally, when we expressed a dominant-negative form of spry4 using the identical smyhc1 promoter (smyhc1:dn-spry4-IRES-GFP), cell-autonomous loss of spry4 function led to a loss of MP identity: adaxial cells that express dn-spry4 are incapable of making MPs (0% of transgenic fibres, n fibres = 48, Figure 4L, 4P).
We next analyzed muscle development in mutants in which the spry4 gene has been inactivated. spry4 fh117 mutants carry a single A-to-T transversion, which introduces a stop codon early in the ORF of the gene (Figure 5A). The mutant allele encodes a truncated protein, which lacks the putative activation domain involved in FGF signaling inhibition and is consequently predicted to engender a full loss of function in spry4 (Figure 5B). Maternal zygotic (MZ) spry4 fh117 homozygous mutant embryos, but not heterozygous or zygotic (Z) mutants, exhibit a marked increase in FGF target gene expression (erm, n = 9/9; dpERK, n = 13/13; and spry4, n = 8/8, Figure 5C-5F and 5G-5H and data not shown), showing that FGF activity is increased in MZ spry4 fh117 mutants. Furthermore, while we could show that both the number of slow fibers and their position were unaffected in MZ spry4 fh117 mutant embryos (Figure 5I-5J, Table S1 and Figure S3), the number of MPs was less than half that of controls (n = 31, Figure 5K, 5L and 5N), a deficit that was rescued by SU5402 treatment (n = 32, Figure 5K, 5M and 5O), indicating that the deficiency of MPs associated with the loss of spry4 is directly due to FGF overactivation and not modulation of other RTKs.
Radar-mediated Bmp signaling coordinates MP and SSF fate specification synergistically with FGF signaling
Although the regional inhibition of FGF signaling can explain the localization of the MP precursors to anterior adaxial cells, it cannot explain the positioning of these progenitors at the DV midline of the somite. Several recent studies have shown that manipulation of BMP signaling can alter MP number, and these studies also show that Smad5, a downstream effector of BMP signaling, is activated in the dorsal and ventral adaxial cells but not within cells of the central region of the compartment. Furthermore, Smad binding sites have been shown to regulate activity of the eng2a promoter. This has led to the suggestion that BMP activity could influence the fate of the myotome along the DV axis, although direct evidence for this assertion is lacking. Furthermore, Smad5 is also known to be activated by the TGF-β signaling pathway in many biological systems, and a number of tgf-β genes are expressed during zebrafish myogenesis, complicating interpretation of these data. To visualize BMP signaling more specifically, we generated a transgenic line that expresses GFP under the control of a BMP Responsive Element containing 5 tandem BRE elements derived from the Xenopus vent2 gene coupled to a minimal Xenopus id3 promoter, promoter elements known to respond specifically to BMP signal transduction. The activation of this transgene (Tg(5XBRE:-20lid3:GFP)) has been shown to occur specifically via the BMP signaling pathway, and not via other TGF-β-related ligands (Figure 6A-6D). The expression of GFP in Tg(5XBRE:-20lid3:GFP) embryos correlates with the distribution of phospho-Smad5 (Figure 6F). By early somitogenesis, BMP signaling is activated in the adaxial cells specifically in cells of the dorsal and ventral edges of the myotome, and reporter expression decreases in the midline (n = 14/14, Figure 6A, 6B, 6D, 6H), where MP precursor formation occurs (Figure 7C). Subsequent activity of the transgene is restricted to migrating adaxial cells and is not detected in MPs (n = 12/12, Figure 6C and 6G). These data suggest that the different levels of BMP activation along the DV axis could control the dichotomy of the MP/SSF cell fate choice.
As mentioned above, several BMP-like ligands are present in the tissues surrounding the myotome. gdf6a/radar exhibits polarized expression in the DV axis, with expression evident in the dorsal neural tube, hypochord, and the primitive gut endothelium. The specific temporal and spatial aspects of its expression suggest that radar/gdf6a is the most likely BMP ligand to influence the DV patterning of the zebrafish myotome. To examine this question, we genetically down-regulated radar/gdf6a by the injection of antisense morpholinos specifically targeted to the zygotic radar/gdf6a mRNA (rdr MO, Figure 7A). Loss of zygotic radar/gdf6a function in Tg(5XBre:-201id3:gfp) embryos causes a reduction of the BMP activation evident within this line (n = 5, Figure 6E versus 6A), and a concomitant medio-ventral expansion of both the MP precursor domain (n = 6/6, Figure 7D versus 7C, Figure S5) and the number of differentiated MP cells at 24 hpf (n somite = 21, Figure 7B and 7F versus 7E, Figure S5), consistent with previously reported results. To confirm the specific effect of the rdr MO, we generated p53 and radar double morphants in which the number of MPs was similarly increased (Figure 7B and 7G) but non-Eng-positive SSFs now migrated properly at 24 hpf compared to the single radar morphant (Figure 7M versus 7K, 7L). Furthermore, the phenotype of the p53/rdr MO injected embryos was identical to that of homozygous rdr s327 mutant embryos (n somite = 17, Figure 7B and 7H), a phenotype that could be reversed by careful titration with WT rdr mRNA injection (Figure S4). Embryos treated with Dorsomorphin (DM) (n somite = 17), a specific pharmacological inhibitor of BMP signaling, exhibited a dose-dependent increase in MP number (Figure 7B and 7I, 7N-7Q) and a concomitant reduction of GFP expression in Tg(5XBre:-201id3:gfp) embryos (Figure 7R, 7S). A similar increase in MP number is also seen when adaxial cells are cell-autonomously inhibited from responding to BMP-like ligands through use of a dominant-negative form of the BMP receptor (dnbmpr) expressed from the adaxial-specific smyhc promoter (smyhc:dn-BMPr GFP) (39.01% of transgenic fibres, n fibres = 326, Figure 4L and 4Q).
To elucidate whether FGF and BMP signaling co-operate to control adaxial cell fate, we examined the formation of MPs and SSFs when both pathways were simultaneously knocked down. rdr morpholino injections into SU5402-treated embryos caused an increase in MPs and eng2a expression compared to controls (DMSO, SU5402 treatment or rdr MO alone) that was essentially additive (n somite = 17, Figure 7B, 7J and 7T-7W), demonstrating that FGF and BMP cooperate to control the MP/SSF decision, and do so independently of one another.
FGF and BMP signaling independently coordinate specification of adaxial cells in the AP and DV planes
While the experiments outlined above, together with those of previously published studies, clearly show that BMP and FGF signaling can influence MP formation, they do not provide direct evidence for a role in DV or AP axis specification. It is possible that these signals could influence proliferation of MP precursors or recruitment to the adaxial cell compartment. In order to examine these issues more directly, we fate mapped the adaxial cell compartment using iontophoresis of TMRD in embryos in which FGF signaling (SU5402 treatment) or both FGF and BMP signaling (SU5402+DM treatment) had been inhibited (Figure 8A). According to our model, the MP domain should expand in the AP axis without FGF signaling, and along both the AP and DV axes in the absence of both signals. Consistent with these predictions, we found that MPs in SU5402-treated embryos could be derived from posterior adaxial cells (n = 8/12), a situation never observed in untreated embryos, but remained restricted to the mid-point of the DV axis (Figure 8B-8D). MPs in SU5402+DM-treated embryos arose from a pool of progenitors expanded in both the DV and AP axes of the adaxial cell equivalence group (n = 7/11, Figure 8E). Collectively, these results demonstrate that FGF and BMP signaling synergize to control the specification of adaxial cells in the AP and DV axes, respectively.
A fate map for the HH-dependent adaxial cell compartment
At the beginning of segmentation, all adaxial cells are columnar-shaped, epithelial-like precursors that align medially along the notochord and display no morphological asymmetry. By initially undertaking fate map analyses of the entire forming myotome, we have defined the adaxial cell compartment as a cell fate equivalence group that gives rise to two specific slow muscle cell fates, the MPs and the SSFs. We have further defined mechanistically how these precursors are induced to give rise to these two distinct populations. The adaxial cells differentiate asynchronously within newly formed somites, with the cells adjacent to the anterior somitic border and located at the mid-point of the DV axis of the somite being the first to initiate the morphogenetic and differentiation movements we have previously described. This morphogenetic asymmetry is mirrored at the molecular level, where the same cells that undergo precocious differentiation simultaneously initiate expression of the MP-specific marker gene eng2a. This analysis suggests that these cells are the progenitors of the MP cells. In order to examine this question directly, we generated a fate map of the adaxial compartment and found that each slow muscle fiber type (SSF and MP) arose from a specific region of the adaxial cell array. While the anterior adaxial cells at the DV mid-point of the somite give rise to MPs within the midline, the non-MP precursor adaxial cells go on to form the SSF palisade at the lateral surface of the myotome, in direct topographical reflection of their position in the pre-migratory adaxial compartment. These data indicate that both dorso-ventral and anterior-posterior identities need to be determined coordinately within the adaxial cell equivalence group for cell fate determination to occur correctly.
Previous analyses have indicated that HH signaling is required to specify the adaxial cells prior to the onset of segmentation and that levels of HH influence the fate of these cells. However, in the absence of HH signaling, cells with a distinct morphology still form adjacent to the notochord, indicating that not all aspects of adaxial cell morphogenesis are controlled by HH signal transduction. In the absence of HH signal activation, a fast twitch muscle gene expression profile is activated within these cells instead of genes indicative of the slow muscle lineage. Consequently, these cells differentiate as fast MyHC-expressing cells stochastically dispersed throughout the myotome. Despite the ability of HH signaling to control the determination of the slow muscle fate, the three HH ligands expressed in the embryonic midline (ehh/ihhb, shh/shha, twhh/shhb) are not restricted in the anterior-posterior direction, nor is there any indication that HH target genes are asymmetrically activated within the nascent adaxial cell compartment in either the anterior-posterior or dorso-ventral planes. Furthermore, we could also find no variation in the length of the primary cilia in adaxial cells, in line with the lack of modulation of HH target gene expression within adaxial cells. Thus, a model involving distinct regulators of cell fate needed to be invoked in order to conceptually generate the MP fate from the anterior-most cells of the dorso-ventral midline of the adaxial cell equivalence group.
A lack of FGF signaling in anterior adaxial cells induces MP fate
Many studies have examined the role of FGF signaling during myogenesis in vitro, where it has been shown to promote cell proliferation and repress myoblast differentiation. It has also been shown that early myoblast precursors require FGF in order to subsequently express their myogenic phenotype. However, despite these extensive in vitro studies, the exact function of FGF in the activation or the repression of muscle differentiation in vivo is controversial and often appears to contradict the simple repressive role defined in vitro. For example, zebrafish Fgf8-mediated signaling has been shown to drive the terminal differentiation of fast-twitch but not slow-twitch muscle fibers, and simultaneously to control proliferation of the external cell progenitor layer, the equivalent of the amniote dermomyotome. In amniote embryos, FGF signaling has been implicated in myogenesis in vivo, both in promoting progenitor cell proliferation and in promoting their differentiation. In chick embryos, most, if not all, replicating myoblasts present within the skeletal muscle masses of the limb express high levels of the FGF receptor FREK/FGFR4, and the inhibition of FgfR4 leads to a dramatic loss of limb muscle. Conversely, over-expression of FGF in the chick somite leads to muscle differentiation, suggesting that, as in the zebrafish lateral myotome, myogenic differentiation is positively controlled by FGF signaling. This is consistent with observations in mouse where ectopic expression of the cell-autonomous negative regulator of FGF signaling sprouty2 in myogenic progenitors inhibits their differentiation.
Here we show that the FGF pathway does play a role in muscle formation, but one that is downstream of the HH-dependent process of slow-twitch fiber specification. FGF signaling is asymmetrically activated in the adaxial cells. Specifically, within anterior adaxial cells it is strongly reduced, to the point of complete inhibition of specific FGF target genes. We have shown, using a combination of genetic and pharmacological approaches, that down-regulation of the FGF pathway promotes MP formation at the expense of SSFs within the adaxial cell compartment. This does not appear to be driven by the restriction of the expression of FGF ligands, since the FGF-encoding genes Fgf8a and Fgf17 are both localized to the anterior somite. Rather, FGF signaling in anterior adaxial cells is inhibited by a cell-autonomous negative regulator of the FGF signaling cascade, spry4. spry4 expression is induced by FGF signaling and has been shown to act in a negative feedback loop on the FGF pathway in a number of contexts (this study and others). The direct role of spry4 in MP formation is demonstrated by data showing that ectopic expression of spry4 in the adaxial cells induces MPs, while its inactivation in spry4 mutant embryos inhibits this fate. Therefore, our results suggest a model in which spry4 is activated within the anterior adaxial cell compartment in response to high levels of adjacent FGF ligands and ultimately suppresses FGF signaling within these cells, thereby breaking equivalence in the anterior-posterior dimension. This role appears to be more analogous to that played by FGF signaling during organogenesis than to those outlined above for myogenesis; there, the fates of various stem and progenitor cells are partitioned by activation or inhibition of FGF signaling in organs as diverse as the liver and pancreas, ear and teeth, often in conjunction with opposing cell fate determining signals, including BMP signaling.
BMP signaling determines dorso-ventral identity of the adaxial cell equivalence group
While FGF signaling restricts the MP fate to the anterior-most adaxial cells of the myotome, a second signal is needed to restrict the positioning of these cells in the dorso-ventral dimension. Recently, studies have demonstrated that the downstream effector of BMP signaling, p-Smad5, is specifically restricted to the dorsal and ventral adaxial cells and is absent from cells of the dorso-ventral midline of the myotome. Furthermore, several previous studies have shown that manipulation of BMP signaling can influence the number of engrailed-positive MPs. Indeed, the ectopic expression of chick Dorsalin-1, a BMP-like family member, in the zebrafish notochord inhibits MP development. More recent studies have shown that inhibition of BMP, via use of the small molecule inhibitor Dorsomorphin or morpholinos against the BMP receptor bmpr1ba, results in an increase of MPs. However, exactly how BMP influences the formation of these muscle subtypes has remained unclear.
Here we show that the fate of the adaxial cells is specified in the DV axis by radar-mediated BMP signaling. This statement is supported by several lines of evidence. Firstly, a transgenic reporter line specific for BMP signaling reveals that, at the onset of segmentation, BMP signaling is active in the dorsal- and ventral-most adaxial cells but absent from the DV mid-point of the forming myotome. This region of low BMP activity correlates with the location of MP precursor specification, as specifically determined via our fate map analysis. Secondly, BMP signaling is mediated by gdf6a/radar in the adaxial cells, and knockdown of BMP activity modifies the fate of muscle precursors in the adaxial compartment and promotes MP formation in a dose-dependent manner.
Previous analysis of the activity of BMP signaling during muscle formation in amniotes has provided evidence that it negatively regulates the myogenic program, a role it also appears to play in controlling the proliferation and the onset of myogenesis within the external progenitor cell layer of the zebrafish myotome. However, in the context of the adaxial cells it does not appear to influence the proliferation of these progenitors, the timing of entry of these cells into myogenesis, or the differentiation of the adaxial cells themselves. Our lineage analysis specifically illustrates that it alters the fate of this progenitor compartment.
The activities of the HH, FGF, and BMP signaling pathways specify MP identity
In this study we show that, in contrast to HH signal transduction, FGF and BMP signaling have no effect on the slow muscle fate but instead regulate the decision of adaxial cell progenitors to become either SSF or MP cells. Indeed, as discussed above, the activation of these signaling pathways promotes SSF formation, while their decrease or absence promotes MP formation. Modulation of FGF or BMP signals does not affect HH signaling, and the consequences of their knockdown on adaxial fate are additive (this study and others). Similarly, manipulation of the level of HH signaling (mutants within the HH pathway or cyclopamine treatment) does not affect the expression pattern of phospho-Smad5, suggesting that HH signaling does not influence cell fate indirectly through BMP signaling. Thus the FGF and BMP signals act independently of, and synergistically with, each other to control the SSF/MP cell fate dichotomy. Intriguingly, the application of both FGF and BMP is required for the induction of a specific muscle cell fate, the Pax7-positive satellite cell progenitors, in Xenopus animal caps. This suggests that the synergistic action of BMP and FGF may operate to specify other muscle cell types.
While HH and BMP signalling have been demonstrated to coordinate cell fate determination in the chick neural tube, and HH, BMP and FGF signalling collectively control the specification of numerous cell types in vertebrate and invertebrate systems, the majority of these studies do not examine the fate of individual cells in real time. The developmental paradigm of the adaxial cells allows single cells to be labelled and tracked, and their fate determined, within a genetically defined cellular equivalence group in the living animal, a set of attributes that is, to our knowledge, unique in vertebrate developmental systems. We therefore believe that our study suggests that the adaxial cells of zebrafish could emerge as a paradigmatic example of a vertebrate cell fate equivalence group, in the same manner as the Drosophila neuroectoderm, parasegment and imaginal disc and the C. elegans vulva, which have provided exquisite cellular and genetic resolution to generate a detailed understanding of cell specification mechanisms within invertebrate systems.
Our results also demonstrate an integrated signaling milieu that coordinates the specification of muscle cell fates within the adaxial cell compartment. The adaxial cell pool is initially specified in the somitic region adjacent to the notochord by HH signal transduction from the embryonic midline. This, together with regional inhibition of FGF in the anterior-most adaxial cells and a lack of BMP signaling at the DV mid-point of the somite, creates a three-dimensional network of signals that restricts the MP fate to the most anterior cells within a specific cellular equivalence group in the developing myotome (Figure 9). These signals act independently of each other to determine fate and, uniquely, MP specification is controlled by the action of different signal transduction pathways that act specifically to direct specification in distinct axial dimensions. This essentially Cartesian system of cell fate determination is somewhat reminiscent of that deployed during the development of the ventral nerve cord of Drosophila, where a complex series of patterning genes is deployed in gradients along the DV and AP axes to induce specific fate-determining genes within individual neuroblasts of the neuroectodermal sheet. However, in the case of the adaxial cells there is no evidence for a role of lateral inhibition, which in the Drosophila ventral neuroectoderm is required for the expression of individual proneural genes and the adoption of specific fates. Furthermore, our results reveal that individual secreted signals act in specific dimensions within this Cartesian system, rather than in a cooperative or mutually exclusive manner to specify cell fate, the prevalent ways by which cells are determined in vertebrate systems.
Figure 9. Model of the synergistic action of FGF and BMP signaling on adaxial cell specification. The diagram represents the adaxial compartment in the zebrafish somite, adjacent to the notochord (n), at the beginning of segmentation. The anterior-posterior (A/P) and dorsal-ventral (D/V) axes are shown. The adaxial cell pool is initially specified by HH signal transduction from the embryonic axial structures. Spry4-mediated regional inhibition of FGF restricts the MP fate to the most anterior adaxial cells. By contrast, the absence of Radar-mediated BMP signaling at the dorso-ventral mid-point of the somite restricts the MP fate to the dorso-ventral mid-point of the myotome. The combination of the three signals forms a 3D patterning system that coordinates the specification of adaxial cells into muscle pioneer cells and superficial slow fibers. doi:10.1371/journal.pgen.1003014.g009
Statistical analysis
Counts of differentiated MPs or SSFs were performed in the yolk extension region of 6 to 15 embryos. Analysis of variance (ANOVA) was used to determine the statistical significance of differences at a 95% confidence level. The following statistics were applied in specific figures: Figure 4L, ANOVA; Figure 5K, ANOVA; Figure 7B, ANOVA; Figures S1, S3 and S5, two-tailed unpaired Student's t-test; Table S1, ANOVA.
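As an illustration only, the following Python sketch shows how a comparison of this kind could be reproduced; it is not part of the original analysis, the group names and per-embryo counts are invented, and it assumes SciPy is installed.

from scipy import stats

# Hypothetical per-embryo muscle pioneer (MP) counts for three treatment groups.
mp_counts = {
    "control":      [4, 5, 4, 5, 4, 6, 5],
    "SU5402":       [7, 8, 7, 9, 8, 7, 8],
    "dorsomorphin": [2, 3, 2, 2, 3, 2, 3],
}

# One-way ANOVA across all groups (significance assessed at the 95% level).
f_stat, p_value = stats.f_oneway(*mp_counts.values())
print("ANOVA: F = %.2f, p = %.4f" % (f_stat, p_value))

# Two-tailed, unpaired Student's t-test between two groups, as used for the
# supplementary figures.
t_stat, p_pair = stats.ttest_ind(mp_counts["control"], mp_counts["SU5402"])
print("t-test: t = %.2f, p = %.4f" % (t_stat, p_pair))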
Assembly of DNA constructs and RNA for live imaging
All constructs were assembled from entry clones using the Tol2kit (Kwan et al., 2007). For transcription of RNA for whole-somite imaging, we assembled CMV/SP6-EGFPcaax and CMV/SP6-H2A.F/Z-mCherry. Plasmids were linearized with NotI before transcription of capped RNA using an mMessage mMachine kit (Ambion). Vectors used for mosaic analysis of single cells were smyhc1:spry4-IRES-EGFP, smyhc1:EGFP, smyhc1:dnBMPr-GFP and smyhc1:dnspry4-IRES-EGFP. The new entry clone p5E-smyhc1 was made by subcloning the smyhc1 promoter from the plasmid p9.7kbsmyhc1:GFP-I-SceI (Elworthy et al., 2008) into p5E-MCS (Kwan et al., 2007). The pME-spry4 clone was made by cloning the full-length spry4 ORF into pDONR221. Similarly, the ORFs of the Xenopus type Ia BMP receptor truncated at the C terminus (BMPrΔC), from the BMPR22 construct, and of the dominant-negative form of spry4 (spry4Y52A), from the pCS2-spry4Y52A construct, were also cloned into pDONR221.
Injections, drug treatments, and heat shock inductions
Injections were performed as described previously. 40 ng/µl of DNA encoding smyhc1:spry4-IRES-EGFP or smyhc1:EGFP was injected at the one-cell stage. Adaxial cells were imaged in embryos in which 25 ng/µl of both the CAAX-GFP- and NLS-mCherry-encoding mRNAs had been injected at the one-cell stage. 3 ng/µl of radar morpholino (5'-GCAATACAAACCTTTTCCCTTGTCC-3'), alone or in combination with 3 ng/µl of p53 morpholino (5'-GCGCCATTGCTTTGCAAGAATTG-3'), was injected at the one-cell stage. SU5402 (Calbiochem) was added to the embryo medium at gastrulation, or between the 6- and 10-somite stages, at a final concentration of 80 µM and maintained until the appropriate stage. 10 to 50 µM dorsomorphin (Sigma) was applied to similarly staged embryos. Heat-shock induction of dn-fgfr1 expression was carried out at the 6-somite stage: Tg(hsp70:dnFgfr1-EGFP)pd1 transgenic embryos were placed, in their plate, at 38°C for 2 hours. GFP expression was visualized immediately after heat shock to confirm expression of the transgenic protein.
Iontophoresis injections
Iontophoresis injections were performed as described previously, with the following modifications: rhodamine dextran (10,000 MW, Molecular Probes, 5 mg/ml) combined with biotin dextran (10,000 MW, Molecular Probes, 1.5 mg/ml) was injected into cells of agarose-embedded, 10- to 15-somite stage embryos. Adaxial cell labelings were positioned on the dorso-ventral axis by reference to adjacent tissue landmarks within injected embryos and were imaged as previously described. The labeled embryo was dissected free of agarose and allowed to develop until 30 hpf; it was then remounted in a 3% solution of methylcellulose (Sigma) and imaged. Subsequently, the embryo was fixed for 2 h in 4% paraformaldehyde and sequentially stained for Engrailed and sMyhc as described above.

Table S1 Manipulation of FGF and/or BMP signaling pathways does not affect slow-twitch lineage specification. The table represents the number of slow muscle cells per somite. These cells were counted using the expression of sMyHC or Prox1 in the yolk extension region. Values represent the means ± standard error of the mean (s.e.m.) and the total number of somites counted for the experiment. Analysis of variance (ANOVA) shows no statistical difference within a 95% confidence interval between the treatments/genotypes. (DOC)

Video S1 Anterior adaxial cells differentiate and elongate first. This movie shows the behaviour of adaxial cells during their very first differentiation phase. Embryos were labelled with a membrane-bound GFP (green) and a nuclear-localised mCherry (red) and imaged in a continuous 4D time-lapse analysis covering a period of 30 minutes. The first part of the movie corresponds to a dorsal view in the dorso-ventral midline focal plane between 0 min and 30 min. The second part is an overview of the focal planes above and below the DV mid-point at 30 min. This movie reveals that the anterior-most adaxial cell at the dorso-ventral midline is the first to differentiate; adaxial cells above and below it are less differentiated.
/**
 * A function generating sensor data for carbon dioxide.
 * @param x The x position of the measurement.
 * @param y The y position of the measurement.
 * @param graphPosition The geographic position of the measurement.
 * @param time The time of the measurement.
* @return A measurement of carbon dioxide at the given position and time.
*/
@Override
public byte[] generateData(int x, int y, GeoPosition graphPosition, Time time) {
double result = CarbonDioxideDataGenerator.generateData(x, y);
return new byte[]{(byte) Math.floorMod((int) Math.round(result), 255)};
}
import sys
# return the dict of letters seen in the window, a flag that is True when a duplicate letter is found, and the count of '?' characters
def letters_missing(s):
d = {}
qmks = 0
for c in s:
if c == "?":
qmks += 1
elif c in d:
return d, True, qmks
else:
d[c] = 1
return d, False, qmks
def print_soln(substr, start, d):
alpha = set(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"))
keys = set(d.keys())
toadd = list(alpha - keys)
new = []
for i, c in enumerate(substr):
if c in alpha: new.append(c)
elif c == "?":
if i >= start and i < start + 26:
new.append(toadd.pop())
else:
new.append("A")
print "".join(new)
s = raw_input()
n = len(s)
if n < 26:
    print(-1)
sys.exit(0)
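# Slide a 26-character window across the string: a window can be completed to
# contain every letter A-Z exactly once if its fixed letters are all distinct
# and the number of '?' marks equals the number of missing letters.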
done = False
for i in range(n - 25):
d, has_dupl, qmks = letters_missing(s[i : i + 26])
if not has_dupl and (qmks == 26 - len(d.items())):
# found a solution
print_soln(s, i, d)
done = True
break
if not done:
    print(-1)
|
"""
Copyright 2013, 2014 <NAME>
Copyright 2013, 2014 <NAME>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
__author__ = '<EMAIL>'
USER_K = 'username'
# ### JSON keys for encoding/decoding information related to any segment
SEGMENT_ID_K = 'segment_id'
# ### JSON keys for encoding/decoding GroundStation dictionaries
GS_ID_K = 'groundstation_id'
GS_LATLON_K = 'groundstation_latlon'
GS_ALTITUDE_K = 'groundstation_altitude'
GS_CALLSIGN_K = 'groundstation_callsign'
GS_ELEVATION_K = 'groundstation_elevation'
# ### JSON keys for encoding/decoding Spacecraft dictionaries
SC_ID_K = 'spacecraft_id'
SC_CALLSIGN_K = 'spacecraft_callsign'
SC_TLE_ID_K = 'spacecraft_tle_id'
def serialize_sc_configuration(sc):
"""
Internal method for serializing the complete configuration of a
SpacecraftConfiguration object.
:param sc: The object to be serialized.
:return: The serializable version of the object.
"""
return {
SC_ID_K: sc.identifier,
SC_CALLSIGN_K: sc.callsign,
SC_TLE_ID_K: sc.tle.identifier,
USER_K: sc.user.username
}
def deserialize_sc_configuration(configuration):
"""
    This method de-serializes the parameters for a Spacecraft as provided
    in the input configuration parameter.
    :param configuration: Structure with the configuration parameters for the
    Spacecraft.
    :return: The parameters returned as a 2-tuple (callsign, tle_id)
"""
callsign = None
tle_id = None
if SC_CALLSIGN_K in configuration:
callsign = configuration[SC_CALLSIGN_K]
if SC_TLE_ID_K in configuration:
tle_id = configuration[SC_TLE_ID_K]
return callsign, tle_id
def serialize_gs_configuration(gs):
"""
Internal method for serializing the complete configuration of a
GroundStationConfiguration object.
:param gs: The object to be serialized.
:return: The serializable version of the object.
"""
return {
GS_ID_K: gs.identifier,
GS_CALLSIGN_K: gs.callsign,
GS_ELEVATION_K: gs.contact_elevation,
GS_LATLON_K: [gs.latitude, gs.longitude],
GS_ALTITUDE_K: gs.altitude,
USER_K: gs.user.username
}
def deserialize_gs_configuration(configuration):
"""
This method de-serializes the parameters for a Ground Station as provided
in the input configuration parameter.
:param configuration: Structure with the configuration parameters for the
Ground Station.
    :return: All the parameters returned as an N-tuple.
"""
callsign = None
contact_elevation = None
latitude = None
longitude = None
if GS_CALLSIGN_K in configuration:
callsign = configuration[GS_CALLSIGN_K]
if GS_ELEVATION_K in configuration:
contact_elevation = configuration[GS_ELEVATION_K]
if GS_LATLON_K in configuration:
latlon = configuration[GS_LATLON_K]
latitude = latlon[0]
longitude = latlon[1]
return callsign, contact_elevation, latitude, longitude
|
// SetInputState will set the input state of one of the inputs that is identified by ID 'id'
func (b *Instance) SetInputState(id int64, state bool) {
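	// Note: this assumes b.inputs stores pointers to input objects; if it
	// stored plain values, mutating input.state below would only change a copy.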
input, contains := b.inputs[id]
if contains {
input.state = state
}
}
// maximum-non-negative-product-in-a-matrix.cpp
class Solution {
public:
int maxProductPath(vector<vector<int>>& grid) {
int m=grid.size(), n=grid[0].size(), MOD = 1e9+7;
// we use long long to avoid overflow
vector<vector<long long>>mx(m,vector<long long>(n)), mn(m,vector<long long>(n));
mx[0][0]=mn[0][0]=grid[0][0];
// initialize the top and left sides
for(int i=1; i<m; i++){
mn[i][0] = mx[i][0] = mx[i-1][0] * grid[i][0];
}
for(int j=1; j<n; j++){
mn[0][j] = mx[0][j] = mx[0][j-1] * grid[0][j];
}
for(int i=1; i<m; i++){
for(int j=1; j<n; j++){
if(grid[i][j] < 0){ // minimum product * negative number = new maximum product
mx[i][j] = (min(mn[i-1][j], mn[i][j-1]) * grid[i][j]);
mn[i][j] = (max(mx[i-1][j], mx[i][j-1]) * grid[i][j]);
}
else{ // maximum product * positive number = new maximum product
mx[i][j] = (max(mx[i-1][j], mx[i][j-1]) * grid[i][j]);
mn[i][j] = (min(mn[i-1][j], mn[i][j-1]) * grid[i][j]);
}
}
}
int ans = mx[m-1][n-1] % MOD;
return ans < 0 ? -1 : ans;
}
};
|
// Copyright 2020 <NAME>
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
use crate::*;
use read::ReadGuard;
use upgrade::UpgradeFuture;
pub struct UpgradableReadFuture<'a, T: ?Sized> {
rwlock: &'a RwLock<T>,
lock_acquired: bool,
}
impl<'a, T: ?Sized> From<&'a RwLock<T>> for UpgradableReadFuture<'a, T> {
fn from(rwlock: &'a RwLock<T>) -> Self {
Self {
rwlock,
lock_acquired: false,
}
}
}
impl<'a, T: ?Sized> Future for UpgradableReadFuture<'a, T> {
type Output = UpgradableReadGuard<'a, T>;
fn poll(
mut self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Self::Output> {
if self.try_acquire_upgradable_reader() {
self.lock_acquired = true;
Poll::Ready(UpgradableReadGuard {
rwlock: self.rwlock,
upgrade_called: false,
})
} else {
self.rwlock.store_waker(cx.waker());
Poll::Pending
}
}
}
impl<'a, T: ?Sized> UpgradableReadFuture<'a, T> {
fn try_acquire_upgradable_reader(&self) -> bool {
let state_ = self.rwlock.state.load(Ordering::Relaxed);
if state_ & (WRITER_FLAG | UPGRADABLE_FLAG) == 0 {
let new_state = state_
.checked_add(ONE_READER | UPGRADABLE_FLAG)
.expect("RwLock reader count overflow");
self.rwlock
.state
.compare_exchange_weak(state_, new_state, Ordering::Acquire, Ordering::Relaxed)
.is_ok()
} else {
false
}
}
}
pub struct UpgradableReadGuard<'a, T: ?Sized> {
rwlock: &'a RwLock<T>,
upgrade_called: bool,
}
impl<'a, T: ?Sized> From<&'a RwLock<T>> for UpgradableReadGuard<'a, T> {
fn from(rwlock: &'a RwLock<T>) -> Self {
Self {
rwlock,
upgrade_called: false,
}
}
}
impl<'a, T: ?Sized> UpgradableReadGuard<'a, T> {
/// Downgrades an upgradable lock to a shared lock.
pub fn downgrade_upgradable(&self) -> ReadGuard<'a, T> {
self.rwlock
.state
.fetch_sub(UPGRADABLE_FLAG, Ordering::Release);
ReadGuard(self.rwlock)
}
/// Upgrades an upgradable lock to an exclusive lock.
pub fn upgrade(mut self) -> UpgradeFuture<'a, T> {
self.upgrade_called = true;
UpgradeFuture::from(self.rwlock)
}
}
impl<'a, T: ?Sized> std::ops::Deref for UpgradableReadGuard<'a, T> {
type Target = T;
fn deref(&self) -> &Self::Target {
unsafe { &*self.rwlock.data.get() }
}
}
impl<'a, T: ?Sized> Drop for UpgradableReadGuard<'a, T> {
fn drop(&mut self) {
if !self.upgrade_called {
self.rwlock.unlock_upgradable_reader();
}
}
}
|
//Function to update the entry in the redeem table
func UpdateRedeem(db *sql.DB, adminentry RedeemAdminJSON, entry RedeemJSON) bool {
id := adminentry.Index
status := adminentry.Status
coinsToRedeem := entry.Coins
rollno := entry.Rollno
ctx := context.Background()
tx, err := db.BeginTx(ctx, nil)
CheckError(err)
log.Println("Processing a Redeem Request")
if status {
res, err := tx.ExecContext(ctx, "UPDATE users SET coins = coins - ? WHERE rollno=? AND coins - ? >=0 ", coinsToRedeem, rollno, coinsToRedeem)
CheckError(err)
rows_affected, err := res.RowsAffected()
if err != nil || rows_affected != 1 {
tx.Rollback()
status = false
tx, err = db.BeginTx(ctx, nil)
CheckError(err)
}
}
statusredeem := "Rejected"
if status {
statusredeem = "Approved"
}
res, err := tx.ExecContext(ctx, "UPDATE redeem SET status = ? WHERE id = ? ", statusredeem, id)
rows_affected, _ := res.RowsAffected()
if err != nil || rows_affected != 1 {
tx.Rollback()
}
err = tx.Commit()
CheckError(err)
if status {
log.Println("Redeem completed")
return true
} else {
return false
}
}
Using Multiple Biomarker Parameters to Quantitatively Unravel Mixed Oils from Different Sources: An Example from the Slope of the Qikou Depression, Bohai Bay Basin, China
L. ZHANG*, G. BAI, X. ZHAO, L. ZHOU, S. ZHOU, W. JIANG, Z. WANG
1 Key Laboratory of Petroleum Resource, Institute of Geology and Geophysics, CAS, Beijing 100029, China (*correspondence: [email protected])
2 State Key Laboratory of Petroleum Resources and Prospecting, China University of Petroleum, Beijing 102249, China
3 Dagang Oilfield Company of PetroChina, Tianjin 300280, China
/**
* Created by jialechan on 2017/2/22.
*/
public abstract class MultipartItem {
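    // Each concrete subclass supplies, for a single multipart/form-data part,
    // the Content-Disposition header, the Content-Type header, and the body
    // bytes written by genBody().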
String key;
public abstract String genContentDispositionStr();
public abstract String genContentType();
public abstract void genBody(OutputStream out) throws UnsupportedEncodingException;
public String getKey() {
return key;
}
public void setKey(String key) {
this.key = key;
}
}
/**
* This implementation simple logs to a commons logger at debug level, for all events. It's mainly
* for testing. It isn't very useful otherwise.
*/
public class CacheEventLoggerDebugLogger
implements ICacheEventLogger
{
/** This is the name of the category. */
private String logCategoryName = CacheEventLoggerDebugLogger.class.getName();
/** The logger. This is recreated on set logCategoryName */
private Log log = LogFactory.getLog( logCategoryName );
/**
* @param source
* @param region
* @param eventName
* @param optionalDetails
* @param key
* @return ICacheEvent
*/
@Override
public <T> ICacheEvent<T> createICacheEvent( String source, String region, String eventName,
String optionalDetails, T key )
{
ICacheEvent<T> event = new CacheEvent<T>();
event.setSource( source );
event.setRegion( region );
event.setEventName( eventName );
event.setOptionalDetails( optionalDetails );
event.setKey( key );
return event;
}
/**
* @param source
* @param eventName
* @param optionalDetails
*/
@Override
public void logApplicationEvent( String source, String eventName, String optionalDetails )
{
if ( log.isDebugEnabled() )
{
log.debug( source + " | " + eventName + " | " + optionalDetails );
}
}
/**
* @param source
* @param eventName
* @param errorMessage
*/
@Override
public void logError( String source, String eventName, String errorMessage )
{
if ( log.isDebugEnabled() )
{
log.debug( source + " | " + eventName + " | " + errorMessage );
}
}
/**
* @param event
*/
@Override
public <T> void logICacheEvent( ICacheEvent<T> event )
{
if ( log.isDebugEnabled() )
{
log.debug( event );
}
}
/**
* @param logCategoryName
*/
public synchronized void setLogCategoryName( String logCategoryName )
{
if ( logCategoryName != null && !logCategoryName.equals( this.logCategoryName ) )
{
this.logCategoryName = logCategoryName;
log = LogFactory.getLog( logCategoryName );
}
}
}
def check_queue_values(queue):
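    # The queue is assumed to hold tuples whose second element is an event
    # type ('progress', 'success' or 'redirect') and whose third element is a
    # (current, total) progress pair; the assertions below rely on that shape.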
assert len(queue) >= 3
assert queue[0][1] == 'progress'
assert queue[-2][1] == 'progress'
assert queue[-1][1] in ('success', 'redirect')
assert queue[0][2][0] == 0
    assert queue[-2][2][0] == queue[-2][2][1]
/// Writes the bulk of the YAML data to the provided output buffer.
fn render_yaml_out<T: Write>(&self, out: &mut T) -> Result<(), TableError> {
let data = self.create_output_data();
serde_yaml::to_writer(out, &data)?;
Ok(())
}
"""Evolution of a circular patch of incompressible fluid. (60 seconds)
This shows how one can explicitly setup equations and the solver instead of
using a scheme.
"""
from __future__ import print_function
from numpy import ones_like, mgrid
# PySPH base and carray imports
from pysph.base.utils import get_particle_array_wcsph
from pysph.base.kernels import Gaussian
from pysph.solver.solver import Solver
from pysph.sph.integrator import EPECIntegrator
from pysph.sph.integrator_step import WCSPHStep
from pysph.sph.equation import Group
from pysph.sph.basic_equations import XSPHCorrection, ContinuityEquation
from pysph.sph.wc.basic import TaitEOS, MomentumEquation
from pysph.examples.elliptical_drop import EllipticalDrop as EDScheme
class EllipticalDrop(EDScheme):
def create_scheme(self):
# Don't create a scheme as done in the parent example class.
return None
def create_particles(self):
"""Create the circular patch of fluid."""
dx = self.dx
hdx = self.hdx
ro = self.ro
name = 'fluid'
x, y = mgrid[-1.05:1.05+1e-4:dx, -1.05:1.05+1e-4:dx]
# Get the particles inside the circle.
condition = ~((x*x + y*y - 1.0) > 1e-10)
x = x[condition].ravel()
y = y[condition].ravel()
m = ones_like(x)*dx*dx*ro
h = ones_like(x)*hdx*dx
rho = ones_like(x) * ro
u = -100*x
v = 100*y
pa = get_particle_array_wcsph(x=x, y=y, m=m, rho=rho, h=h,
u=u, v=v, name=name)
print("Elliptical drop :: %d particles" %
(pa.get_number_of_particles()))
# add requisite variables needed for this formulation
for name in ('arho', 'au', 'av', 'aw', 'ax', 'ay', 'az', 'rho0', 'u0',
'v0', 'w0', 'x0', 'y0', 'z0'):
pa.add_property(name)
# set the output property arrays
pa.set_output_arrays(['x', 'y', 'u', 'v', 'rho', 'm',
'h', 'p', 'pid', 'tag', 'gid'])
return [pa]
def create_solver(self):
print("Create our own solver.")
kernel = Gaussian(dim=2)
integrator = EPECIntegrator(fluid=WCSPHStep())
dt = 5e-6
tf = 0.0076
solver = Solver(kernel=kernel, dim=2, integrator=integrator,
dt=dt, tf=tf, adaptive_timestep=True,
cfl=0.3, n_damp=50,
output_at_times=[0.0008, 0.0038])
return solver
def create_equations(self):
print("Create our own equations.")
equations = [
Group(
equations=[
TaitEOS(
dest='fluid', sources=None, rho0=self.ro,
c0=self.co, gamma=7.0
),
],
real=False
),
Group(equations=[
ContinuityEquation(dest='fluid', sources=['fluid']),
MomentumEquation(
dest='fluid', sources=['fluid'],
alpha=self.alpha, beta=0.0, c0=self.co
),
XSPHCorrection(dest='fluid', sources=['fluid']),
]),
]
return equations
if __name__ == '__main__':
app = EllipticalDrop()
app.run()
app.post_process(app.info_filename)
|
package io.sniffy.socket;
import org.junit.BeforeClass;
import org.junit.Rule;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.Socket;
import java.net.UnknownHostException;
import static org.junit.Assert.*;
public class BaseSocketTest {
protected final static byte[] RESPONSE = new byte[]{9,8,7,6,5,4,3,2};
protected final static byte[] REQUEST = new byte[]{1, 2, 3, 4};
protected static InetAddress localhost;
@Rule
public EchoServerRule echoServerRule = new EchoServerRule(RESPONSE);
@BeforeClass
public static void resolveLocalhost() throws UnknownHostException {
localhost = InetAddress.getByName(null);
}
protected void performSocketOperation() {
try {
Socket socket = new Socket(localhost, echoServerRule.getBoundPort());
socket.setReuseAddress(true);
assertTrue(socket.isConnected());
OutputStream outputStream = socket.getOutputStream();
outputStream.write(REQUEST);
outputStream.flush();
socket.shutdownOutput();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
InputStream inputStream = socket.getInputStream();
int read;
while ((read = inputStream.read()) != -1) {
baos.write(read);
}
socket.shutdownInput();
echoServerRule.joinThreads();
assertArrayEquals(REQUEST, echoServerRule.pollReceivedData());
assertArrayEquals(RESPONSE, baos.toByteArray());
} catch (IOException e) {
fail(e.getMessage());
}
}
}
|
package com.wk.project.weixin.util;
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ConnectException;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.wk.project.weixin.pojo.AccessToken;
import com.wk.project.weixin.pojo.Menu;
import net.sf.json.JSONObject;
/**
 * Utility class for the common APIs of the WeChat public platform.
*
*/
public class WeixinUtil {
private static Logger log = LoggerFactory.getLogger(WeixinUtil.class);
/**
 * Issues an HTTPS request and returns the result.
 *
 * @param requestUrl the request URL
 * @param requestMethod the request method (GET or POST)
 * @param outputStr the data to submit
 * @return JSONObject (read a property of the JSON object via JSONObject.get(key))
*/
public static JSONObject httpRequest(String requestUrl, String requestMethod, String outputStr) {
JSONObject jsonObject = null;
StringBuffer buffer = new StringBuffer();
try {
			// Create an SSLContext object and initialize it with our specified trust manager
TrustManager[] tm = { new MyX509TrustManager() };
SSLContext sslContext = SSLContext.getInstance("SSL", "SunJSSE");
sslContext.init(null, tm, new java.security.SecureRandom());
			// Obtain an SSLSocketFactory from the SSLContext above
SSLSocketFactory ssf = sslContext.getSocketFactory();
//URL url = new URL(requestUrl);
URL url= new URL(null, requestUrl, new sun.net.www.protocol.https.Handler());
HttpsURLConnection httpUrlConn = (HttpsURLConnection) url.openConnection();
httpUrlConn.setSSLSocketFactory(ssf);
httpUrlConn.setDoOutput(true);
httpUrlConn.setDoInput(true);
httpUrlConn.setUseCaches(false);
			// Set the request method (GET/POST)
httpUrlConn.setRequestMethod(requestMethod);
if ("GET".equalsIgnoreCase(requestMethod))
httpUrlConn.connect();
			// If there is data to submit
if (null != outputStr) {
OutputStream outputStream = httpUrlConn.getOutputStream();
				// Use UTF-8 encoding to avoid garbled Chinese characters
outputStream.write(outputStr.getBytes("UTF-8"));
outputStream.close();
}
			// Convert the returned input stream into a string
InputStream inputStream = httpUrlConn.getInputStream();
InputStreamReader inputStreamReader = new InputStreamReader(inputStream, "utf-8");
BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
String str = null;
while ((str = bufferedReader.readLine()) != null) {
buffer.append(str);
}
bufferedReader.close();
inputStreamReader.close();
			// Release resources
inputStream.close();
inputStream = null;
httpUrlConn.disconnect();
jsonObject = JSONObject.fromObject(buffer.toString());
} catch (ConnectException ce) {
log.error("Weixin server connection timed out.");
} catch (Exception e) {
log.error("https request error:{}", e);
}
return jsonObject;
}
	// URL for fetching the access_token (GET); limited to 200 calls per day
public final static String access_token_url = "https://api.weixin.qq.com/cgi-bin/token?grant_type=client_credential&appid=APPID&secret=APPSECRET";
/**
 * Fetches the access_token.
 *
 * @param appid the credential (AppID)
 * @param appsecret the secret (AppSecret)
* @return
*/
public static AccessToken getAccessToken(String appid, String appsecret) {
AccessToken accessToken = null;
String requestUrl = access_token_url.replace("APPID", appid).replace("APPSECRET", appsecret);
JSONObject jsonObject = httpRequest(requestUrl, "GET", null);
System.out.println("获取凭证:"+jsonObject.toString());
// 如果请求成功
if (null != jsonObject) {
try {
accessToken = new AccessToken();
accessToken.setToken(jsonObject.getString("access_token"));
accessToken.setExpiresIn(jsonObject.getInt("expires_in"));
// System.out.println(accessToken.getExpiresIn());
} catch (Exception e) {
accessToken = null;
				// Failed to fetch the token
				log.error("Failed to fetch token errcode:{} errmsg:{}", jsonObject.getInt("errcode"), jsonObject.getString("errmsg"));
}
}
return accessToken;
}
	// Menu creation (POST); limited to 100 calls per day
public static String menu_create_url = "https://api.weixin.qq.com/cgi-bin/menu/create?access_token=ACCESS_TOKEN";
/**
 * Creates a menu.
 *
 * @param menu the menu instance
 * @param accessToken a valid access_token
 * @return 0 on success; any other value indicates failure
*/
public static int createMenu(Menu menu, String accessToken) {
int result = 0;
		// Assemble the menu-creation URL
String url = menu_create_url.replace("ACCESS_TOKEN", accessToken);
		// Convert the menu object into a JSON string
String jsonMenu = JSONObject.fromObject(menu).toString();
System.out.println(jsonMenu);
		// Call the API to create the menu
JSONObject jsonObject = httpRequest(url, "POST", jsonMenu);
System.out.println("创建菜单:"+jsonObject.toString());
if (null != jsonObject) {
if (0 != jsonObject.getInt("errcode")) {
result = jsonObject.getInt("errcode");
log.error("创建菜单失败 errcode:{} errmsg:{}", jsonObject.getInt("errcode"), jsonObject.getString("errmsg"));
}
}
return result;
}
}
|
// ts/src/rpc.ts
import camelCase from "camelcase";
import EventEmitter from "eventemitter3";
import * as bs58 from "bs58";
import {
Account,
AccountMeta,
PublicKey,
ConfirmOptions,
SystemProgram,
Transaction,
TransactionSignature,
TransactionInstruction,
SYSVAR_RENT_PUBKEY,
Commitment,
} from "@solana/web3.js";
import Provider from "./provider";
import {
Idl,
IdlAccount,
IdlInstruction,
IdlAccountItem,
IdlStateMethod,
} from "./idl";
import { IdlError, ProgramError } from "./error";
import Coder, {
ACCOUNT_DISCRIMINATOR_SIZE,
accountDiscriminator,
stateDiscriminator,
accountSize,
} from "./coder";
/**
* Dynamically generated rpc namespace.
*/
export interface Rpcs {
[key: string]: RpcFn;
}
/**
* Dynamically generated instruction namespace.
*/
export interface Ixs {
[key: string]: IxFn;
}
/**
* Dynamically generated transaction namespace.
*/
export interface Txs {
[key: string]: TxFn;
}
/**
* Accounts is a dynamically generated object to fetch any given account
* of a program.
*/
export interface Accounts {
[key: string]: AccountFn;
}
/**
* RpcFn is a single rpc method generated from an IDL.
*/
export type RpcFn = (...args: any[]) => Promise<TransactionSignature>;
/**
* Ix is a function to create a `TransactionInstruction` generated from an IDL.
*/
export type IxFn = IxProps & ((...args: any[]) => any);
type IxProps = {
accounts: (ctx: RpcAccounts) => any;
};
/**
 * Tx is a function to create a `Transaction` generated from an IDL.
*/
export type TxFn = (...args: any[]) => Transaction;
/**
* Account is a function returning a deserialized account, given an address.
*/
export type AccountFn<T = any> = AccountProps & ((address: PublicKey) => T);
/**
* Deserialized account owned by a program.
*/
export type ProgramAccount<T = any> = {
publicKey: PublicKey;
account: T;
};
/**
 * Non-function properties on the account namespace.
*/
type AccountProps = {
size: number;
all: (filter?: Buffer) => Promise<ProgramAccount<any>[]>;
subscribe: (address: PublicKey, commitment?: Commitment) => EventEmitter;
unsubscribe: (address: PublicKey) => void;
createInstruction: (account: Account) => Promise<TransactionInstruction>;
};
/**
* Options for an RPC invocation.
*/
export type RpcOptions = ConfirmOptions;
/**
* RpcContext provides all arguments for an RPC/IX invocation that are not
* covered by the instruction enum.
*/
type RpcContext = {
// Accounts the instruction will use.
accounts?: RpcAccounts;
remainingAccounts?: AccountMeta[];
// Instructions to run *before* the specified rpc instruction.
instructions?: TransactionInstruction[];
// Accounts that must sign the transaction.
signers?: Array<Account>;
// RpcOptions.
options?: RpcOptions;
__private?: { logAccounts: boolean };
};
/**
* Dynamic object representing a set of accounts given to an rpc/ix invocation.
* The name of each key should match the name for that account in the IDL.
*/
type RpcAccounts = {
[key: string]: PublicKey | RpcAccounts;
};
export type State = {
address: () => Promise<PublicKey>;
rpc: Rpcs;
};
// Tracks all subscriptions.
const subscriptions: Map<string, Subscription> = new Map();
/**
* RpcFactory builds an Rpcs object for a given IDL.
*/
export class RpcFactory {
/**
* build dynamically generates RPC methods.
*
* @returns an object with all the RPC methods attached.
*/
public static build(
idl: Idl,
coder: Coder,
programId: PublicKey,
provider: Provider
): [Rpcs, Ixs, Txs, Accounts, State] {
const idlErrors = parseIdlErrors(idl);
const rpcs: Rpcs = {};
const ixFns: Ixs = {};
const txFns: Txs = {};
const state = RpcFactory.buildState(
idl,
coder,
programId,
idlErrors,
provider
);
idl.instructions.forEach((idlIx) => {
const name = camelCase(idlIx.name);
// Function to create a raw `TransactionInstruction`.
const ix = RpcFactory.buildIx(idlIx, coder, programId);
      // Function to create a `Transaction`.
const tx = RpcFactory.buildTx(idlIx, ix);
// Function to invoke an RPC against a cluster.
const rpc = RpcFactory.buildRpc(idlIx, tx, idlErrors, provider);
rpcs[name] = rpc;
ixFns[name] = ix;
txFns[name] = tx;
});
const accountFns = idl.accounts
? RpcFactory.buildAccounts(idl, coder, programId, provider)
: {};
return [rpcs, ixFns, txFns, accountFns, state];
}
// Builds the state namespace.
private static buildState(
idl: Idl,
coder: Coder,
programId: PublicKey,
idlErrors: Map<number, string>,
provider: Provider
): State | undefined {
if (idl.state === undefined) {
return undefined;
}
// Fetches the state object from the blockchain.
const state = async (): Promise<any> => {
const addr = await programStateAddress(programId);
const accountInfo = await provider.connection.getAccountInfo(addr);
if (accountInfo === null) {
throw new Error(`Account does not exist ${addr.toString()}`);
}
// Assert the account discriminator is correct.
const expectedDiscriminator = await stateDiscriminator(
idl.state.struct.name
);
if (expectedDiscriminator.compare(accountInfo.data.slice(0, 8))) {
throw new Error("Invalid account discriminator");
}
return coder.state.decode(accountInfo.data);
};
// Namespace with all rpc functions.
const rpc: Rpcs = {};
const ix: Ixs = {};
idl.state.methods.forEach((m: IdlStateMethod) => {
const accounts = async (accounts: RpcAccounts): Promise<any> => {
const keys = await stateInstructionKeys(
programId,
provider,
m,
accounts
);
return keys.concat(RpcFactory.accountsArray(accounts, m.accounts));
};
const ixFn = async (...args: any[]): Promise<TransactionInstruction> => {
const [ixArgs, ctx] = splitArgsAndCtx(m, [...args]);
return new TransactionInstruction({
keys: await accounts(ctx.accounts),
programId,
data: coder.instruction.encodeState(
m.name,
toInstruction(m, ...ixArgs)
),
});
};
ixFn["accounts"] = accounts;
ix[m.name] = ixFn;
rpc[m.name] = async (...args: any[]): Promise<TransactionSignature> => {
const [_, ctx] = splitArgsAndCtx(m, [...args]);
const tx = new Transaction();
if (ctx.instructions !== undefined) {
tx.add(...ctx.instructions);
}
tx.add(await ix[m.name](...args));
try {
const txSig = await provider.send(tx, ctx.signers, ctx.options);
return txSig;
} catch (err) {
let translatedErr = translateError(idlErrors, err);
if (translatedErr === null) {
throw err;
}
throw translatedErr;
}
};
});
state["rpc"] = rpc;
state["instruction"] = ix;
// Calculates the address of the program's global state object account.
state["address"] = async (): Promise<PublicKey> =>
programStateAddress(programId);
// Subscription singleton.
let sub: null | Subscription = null;
// Subscribe to account changes.
state["subscribe"] = (commitment?: Commitment): EventEmitter => {
if (sub !== null) {
return sub.ee;
}
const ee = new EventEmitter();
state["address"]().then((address) => {
const listener = provider.connection.onAccountChange(
address,
(acc) => {
const account = coder.state.decode(acc.data);
ee.emit("change", account);
},
commitment
);
sub = {
ee,
listener,
};
});
return ee;
};
// Unsubscribe from account changes.
state["unsubscribe"] = () => {
if (sub !== null) {
provider.connection
.removeAccountChangeListener(sub.listener)
.then(async () => {
sub = null;
})
.catch(console.error);
}
};
return state;
}
  // Builds the instruction namespace.
private static buildIx(
idlIx: IdlInstruction,
coder: Coder,
programId: PublicKey
): IxFn {
if (idlIx.name === "_inner") {
throw new IdlError("the _inner name is reserved");
}
const ix = (...args: any[]): TransactionInstruction => {
const [ixArgs, ctx] = splitArgsAndCtx(idlIx, [...args]);
validateAccounts(idlIx.accounts, ctx.accounts);
validateInstruction(idlIx, ...args);
const keys = RpcFactory.accountsArray(ctx.accounts, idlIx.accounts);
if (ctx.remainingAccounts !== undefined) {
keys.push(...ctx.remainingAccounts);
}
if (ctx.__private && ctx.__private.logAccounts) {
console.log("Outgoing account metas:", keys);
}
return new TransactionInstruction({
keys,
programId,
data: coder.instruction.encode(
idlIx.name,
toInstruction(idlIx, ...ixArgs)
),
});
};
// Utility fn for ordering the accounts for this instruction.
ix["accounts"] = (accs: RpcAccounts) => {
return RpcFactory.accountsArray(accs, idlIx.accounts);
};
return ix;
}
private static accountsArray(
ctx: RpcAccounts,
accounts: IdlAccountItem[]
): any {
return accounts
.map((acc: IdlAccountItem) => {
// Nested accounts.
// @ts-ignore
const nestedAccounts: IdlAccountItem[] | undefined = acc.accounts;
if (nestedAccounts !== undefined) {
const rpcAccs = ctx[acc.name] as RpcAccounts;
return RpcFactory.accountsArray(rpcAccs, nestedAccounts).flat();
} else {
const account: IdlAccount = acc as IdlAccount;
return {
pubkey: ctx[acc.name],
isWritable: account.isMut,
isSigner: account.isSigner,
};
}
})
.flat();
}
// Builds the rpc namespace.
private static buildRpc(
idlIx: IdlInstruction,
txFn: TxFn,
idlErrors: Map<number, string>,
provider: Provider
): RpcFn {
const rpc = async (...args: any[]): Promise<TransactionSignature> => {
const tx = txFn(...args);
const [_, ctx] = splitArgsAndCtx(idlIx, [...args]);
try {
const txSig = await provider.send(tx, ctx.signers, ctx.options);
return txSig;
} catch (err) {
console.log("Translating error", err);
let translatedErr = translateError(idlErrors, err);
if (translatedErr === null) {
throw err;
}
throw translatedErr;
}
};
return rpc;
}
// Builds the transaction namespace.
private static buildTx(idlIx: IdlInstruction, ixFn: IxFn): TxFn {
const txFn = (...args: any[]): Transaction => {
const [_, ctx] = splitArgsAndCtx(idlIx, [...args]);
const tx = new Transaction();
if (ctx.instructions !== undefined) {
tx.add(...ctx.instructions);
}
tx.add(ixFn(...args));
return tx;
};
return txFn;
}
// Returns the generated accounts namespace.
private static buildAccounts(
idl: Idl,
coder: Coder,
programId: PublicKey,
provider: Provider
): Accounts {
const accountFns: Accounts = {};
idl.accounts.forEach((idlAccount) => {
const name = camelCase(idlAccount.name);
// Fetches the decoded account from the network.
const accountsNamespace = async (address: PublicKey): Promise<any> => {
const accountInfo = await provider.connection.getAccountInfo(address);
if (accountInfo === null) {
throw new Error(`Account does not exist ${address.toString()}`);
}
// Assert the account discriminator is correct.
const discriminator = await accountDiscriminator(idlAccount.name);
if (discriminator.compare(accountInfo.data.slice(0, 8))) {
throw new Error("Invalid account discriminator");
}
return coder.accounts.decode(idlAccount.name, accountInfo.data);
};
// Returns the size of the account.
// @ts-ignore
accountsNamespace["size"] =
ACCOUNT_DISCRIMINATOR_SIZE + accountSize(idl, idlAccount);
// Returns an instruction for creating this account.
// @ts-ignore
accountsNamespace["createInstruction"] = async (
account: Account,
sizeOverride?: number
): Promise<TransactionInstruction> => {
// @ts-ignore
const size = accountsNamespace["size"];
return SystemProgram.createAccount({
fromPubkey: provider.wallet.publicKey,
newAccountPubkey: account.publicKey,
space: sizeOverride ?? size,
lamports: await provider.connection.getMinimumBalanceForRentExemption(
sizeOverride ?? size
),
programId,
});
};
// Subscribes to all changes to this account.
// @ts-ignore
accountsNamespace["subscribe"] = (
address: PublicKey,
commitment?: Commitment
): EventEmitter => {
if (subscriptions.get(address.toString())) {
return subscriptions.get(address.toString()).ee;
}
const ee = new EventEmitter();
const listener = provider.connection.onAccountChange(
address,
(acc) => {
const account = coder.accounts.decode(idlAccount.name, acc.data);
ee.emit("change", account);
},
commitment
);
subscriptions.set(address.toString(), {
ee,
listener,
});
return ee;
};
// Unsubscribes to account changes.
// @ts-ignore
accountsNamespace["unsubscribe"] = (address: PublicKey) => {
        let sub = subscriptions.get(address.toString());
        if (sub) {
provider.connection
.removeAccountChangeListener(sub.listener)
.then(() => {
subscriptions.delete(address.toString());
})
.catch(console.error);
}
};
// Returns all instances of this account type for the program.
// @ts-ignore
accountsNamespace["all"] = async (
filter?: Buffer
): Promise<ProgramAccount<any>[]> => {
let bytes = await accountDiscriminator(idlAccount.name);
if (filter !== undefined) {
bytes = Buffer.concat([bytes, filter]);
}
// @ts-ignore
let resp = await provider.connection._rpcRequest("getProgramAccounts", [
programId.toBase58(),
{
commitment: provider.connection.commitment,
filters: [
{
memcmp: {
offset: 0,
bytes: bs58.encode(bytes),
},
},
],
},
]);
if (resp.error) {
console.error(resp);
throw new Error("Failed to get accounts");
}
return (
resp.result
// @ts-ignore
.map(({ pubkey, account: { data } }) => {
data = bs58.decode(data);
return {
publicKey: new PublicKey(pubkey),
account: coder.accounts.decode(idlAccount.name, data),
};
})
);
};
accountFns[name] = accountsNamespace;
});
return accountFns;
}
}
type Subscription = {
listener: number;
ee: EventEmitter;
};
function translateError(
idlErrors: Map<number, string>,
err: any
): Error | null {
// TODO: don't rely on the error string. web3.js should preserve the error
// code information instead of giving us an untyped string.
let components = err.toString().split("custom program error: ");
if (components.length === 2) {
try {
const errorCode = parseInt(components[1]);
let errorMsg = idlErrors.get(errorCode);
if (errorMsg === undefined) {
// Unexpected error code so just throw the untranslated error.
return null;
}
return new ProgramError(errorCode, errorMsg);
} catch (parseErr) {
// Unable to parse the error. Just return the untranslated error.
return null;
}
  }
  return null;
}
function parseIdlErrors(idl: Idl): Map<number, string> {
const errors = new Map();
if (idl.errors) {
idl.errors.forEach((e) => {
let msg = e.msg ?? e.name;
errors.set(e.code, msg);
});
}
return errors;
}
function splitArgsAndCtx(
idlIx: IdlInstruction,
args: any[]
): [any[], RpcContext] {
let options = {};
const inputLen = idlIx.args ? idlIx.args.length : 0;
if (args.length > inputLen) {
if (args.length !== inputLen + 1) {
throw new Error("provided too many arguments ${args}");
}
options = args.pop();
}
return [args, options];
}
// Allow either IdlInstruction or IdlStateMethod since the types share fields.
function toInstruction(idlIx: IdlInstruction | IdlStateMethod, ...args: any[]) {
if (idlIx.args.length != args.length) {
throw new Error("Invalid argument length");
}
const ix: { [key: string]: any } = {};
let idx = 0;
idlIx.args.forEach((ixArg) => {
ix[ixArg.name] = args[idx];
idx += 1;
});
return ix;
}
// Throws error if any account required for the `ix` is not given.
function validateAccounts(ixAccounts: IdlAccountItem[], accounts: RpcAccounts) {
ixAccounts.forEach((acc) => {
// @ts-ignore
if (acc.accounts !== undefined) {
// @ts-ignore
validateAccounts(acc.accounts, accounts[acc.name]);
} else {
if (accounts[acc.name] === undefined) {
throw new Error(`Invalid arguments: ${acc.name} not provided.`);
}
}
});
}
// Throws error if any argument required for the `ix` is not given.
function validateInstruction(ix: IdlInstruction, ...args: any[]) {
// todo
}
// Calculates the deterministic address of the program's "state" account.
async function programStateAddress(programId: PublicKey): Promise<PublicKey> {
let [registrySigner, _nonce] = await PublicKey.findProgramAddress(
[],
programId
);
return PublicKey.createWithSeed(registrySigner, "unversioned", programId);
}
// Returns the common keys that are prepended to all instructions targeting
// the "state" of a program.
async function stateInstructionKeys(
programId: PublicKey,
provider: Provider,
m: IdlStateMethod,
accounts: RpcAccounts
) {
if (m.name === "new") {
// Ctor `new` method.
const [programSigner, _nonce] = await PublicKey.findProgramAddress(
[],
programId
);
return [
{
pubkey: provider.wallet.publicKey,
isWritable: false,
isSigner: true,
},
{
pubkey: await programStateAddress(programId),
isWritable: true,
isSigner: false,
},
{ pubkey: programSigner, isWritable: false, isSigner: false },
{
pubkey: SystemProgram.programId,
isWritable: false,
isSigner: false,
},
{ pubkey: programId, isWritable: false, isSigner: false },
{
pubkey: SYSVAR_RENT_PUBKEY,
isWritable: false,
isSigner: false,
},
];
} else {
validateAccounts(m.accounts, accounts);
return [
{
pubkey: await programStateAddress(programId),
isWritable: true,
isSigner: false,
},
];
}
}
|
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.brixcms.plugin.menu.editor.cell;
import com.inmethod.grid.column.AbstractColumn;
import org.apache.wicket.Component;
import org.apache.wicket.markup.html.WebMarkupContainer;
import org.apache.wicket.model.IModel;
import org.apache.wicket.model.PropertyModel;
import org.brixcms.plugin.menu.Menu;
import org.brixcms.plugin.site.picker.reference.ReferenceEditorConfiguration;
import org.brixcms.web.reference.Reference;
/**
* Created by IntelliJ IDEA. User: korbinianbachl Date: 08.09.2010 Time: 21:11:23
*/
public class SwitcherColumn extends AbstractColumn {
ReferenceEditorConfiguration conf;
public SwitcherColumn(String id, IModel<String> displayModel, ReferenceEditorConfiguration conf) {
super(id, displayModel);
this.conf = conf;
}
/**
* {@inheritDoc}
*/
@Override
public Component newCell(WebMarkupContainer parent, String componentId, final IModel rowModel) {
IModel<Menu.ChildEntry.MenuType> typeModel = new PropertyModel<Menu.ChildEntry.MenuType>(rowModel, "entry.menuType");
IModel<Reference> referenceModel = new PropertyModel<Reference>(rowModel, "entry.reference");
IModel<String> labelOrCodeModel = new PropertyModel<String>(rowModel, "entry.labelOrCode");
return new SwitcherCellPanel(componentId, typeModel, referenceModel, labelOrCodeModel, conf) {
@Override
boolean isEditing() {
return getGrid().isItemEdited(rowModel);
}
};
}
}
|
Ruptured mycotic common femoral artery pseudoaneurysm: fatal pulmonary embolism after emergency stent-grafting in a drug abuser.
The rupture of a mycotic femoral artery pseudoaneurysm in an intravenous drug abuser is a limb- and life-threatening condition that necessitates emergency intervention. Emergency stent-grafting appears to be a viable, minimally invasive alternative, or a bridge, to subsequent open surgery. Caution is required in cases of suspected concomitant deep vein thrombosis in order to minimize the possibility of massive pulmonary embolism during stent-grafting, perhaps by omitting stent-graft postdilation or by inserting an inferior vena cava filter first. We describe the emergency endovascular management, in a 60-year-old male intravenous drug abuser, of a ruptured mycotic femoral artery pseudoaneurysm, which was complicated by a fatal pulmonary embolism. |
The most common heart valve defect among elderly and very old patients is aortic valve stenosis. The traditional method of treating severe aortic valve stenosis is open surgery to replace the aortic valve, while a more modern, minimally invasive method of correcting aortic stenosis is transcatheter aortic valve implantation (TAVI). This intervention is primarily indicated for older patients with severe chronic heart failure associated with aortic stenosis who are at high surgical risk. TAVI has evolved from a complex and dangerous procedure into an effective and safe method of treatment thanks to the development of a new generation of devices. Nevertheless, open questions remain about the use of TAVI in particular clinical situations (TAVI in the elderly (60-75 years), in patients aged 90 years or more, in frail patients, and the feasibility of performing TAVI in patients at low surgical risk), as well as about the longevity of the valves used for TAVI and the prognosis in terms of quality of life and life expectancy.
def _unique_match_hashes(self, id, hits, mode):
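        # Summary (inferred from the code below): keep only the hits that
        # belong to `id` and whose time offset lies within self.window of
        # `mode`, pack each (otime, hash) pair into a single integer so
        # np.unique can de-duplicate them, then unpack the unique pairs back
        # into [otime, hash] rows.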
allids = hits[:, 0]
alltimes = hits[:, 1]
allhashes = hits[:, 2].astype(np.int64)
allotimes = hits[:, 3]
timebits = max(1, encpowerof2(np.amax(allotimes)))
matchix = np.nonzero(
np.logical_and(allids == id, np.less_equal(np.abs(alltimes - mode),
self.window)))[0]
matchhasheshash = np.unique(allotimes[matchix]
+ (allhashes[matchix] << timebits))
timemask = (1 << timebits) - 1
matchhashes = np.c_[matchhasheshash & timemask,
matchhasheshash >> timebits]
        return matchhashes
def _logger_callback(level, c_msg):
msg = ffi.string(c_msg).decode(errors="ignore")
m = {
lib.WGPULogLevel_Error: logger.error,
lib.WGPULogLevel_Warn: logger.warning,
lib.WGPULogLevel_Info: logger.info,
lib.WGPULogLevel_Debug: logger.debug,
lib.WGPULogLevel_Trace: logger.debug,
}
func = m.get(level, logger.warning)
    func(msg)
Environmental xenobiotics and the antihormones cyproterone acetate and spironolactone use the nuclear hormone pregnenolone X receptor to activate the CYP3A23 hormone response element.
The pregnenolone X receptor (PXR), a new member of the nuclear hormone receptor superfamily, was recently demonstrated to mediate glucocorticoid agonist and antagonist activation of a hormone response element spaced by three nucleotides (DR-3) within the rat CYP3A23 promoter. Because many other steroids and xenobiotics can up-regulate CYP3A23 expression, we determined whether some of these other regulators used PXR to activate the CYP3A23 DR-3. Transient co-transfection of LLC-PK1 cells with (CYP3A23)2-tk-CAT and mouse PXR demonstrated that the organochlorine pesticides transnonachlor and chlordane and the nonplanar polychlorinated biphenyls (PCBs) each induced the CYP3A23 DR-3 element, and this activation required PXR. Additionally, this study found that PXR is activated to induce (CYP3A23)2-tk-CAT by antihormones of several steroid classes including the antimineralocorticoid spironolactone and the antiandrogen cyproterone acetate. These studies reveal that PXR is involved in the induction of CYP3A23 by pharmacologically and structurally distinct steroids and xenobiotics. Moreover, PXR-mediated PCB activation of the (CYP3A23)2-tk-CAT may serve as a rapid assay for effects of nonplanar PCBs. |
/**
* Test of deleteAddress method, of class PersonController.
*
* @throws java.lang.Exception
*/
@Test
public void testDeleteAddress() throws Exception {
long id = 10001L;
Address ad1 = new Address(1L, "1 court", "Tallaght", "Dublin", "24");
Person person = new Person(id, "Achille", "Nisengwe");
person.addAddress(ad1);
when(this.personDataService.retrievePerson(id)).thenReturn(person);
when(personDataService.createPerson(person)).thenReturn(person);
RequestBuilder request = MockMvcRequestBuilders
.delete("/person/" + id + "/address/1")
.accept(MediaType.APPLICATION_JSON);
String expectedResult = "{id:" + id + ",address:[]}";
MvcResult result = mockMvc.perform(request)
.andExpect(status().isOk()).andReturn();
JSONAssert.assertEquals(expectedResult, result.getResponse().getContentAsString(), false);
    }
#!/usr/bin/python3 -S
# -*- coding: utf-8 -*-
"""
`Redis Dict Tests`
--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--·--
2015 <NAME> © The MIT License (MIT)
http://github.com/jaredlunde
"""
import datetime
import time
import pickle
import unittest
from redis_structures.debug import RandData, gen_rand_str
from redis_structures import StrictRedis, RedisDict
class TestJSONRedisDict(unittest.TestCase):
dict = RedisDict("json_dict", prefix="rs:unit_tests:", serialize=True)
is_str = False
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.addCleanup(self.dict.clear)
def cast(self, obj):
return str(obj) if self.is_str else obj
def reset(self, count=10, type=int):
self.dict.clear()
self.data = RandData(type).dict(count, 1)
self.data_len = len(self.data)
self.dict.update(self.data)
def test_prefix(self):
self.assertEqual(self.dict.prefix, 'rs:unit_tests')
self.assertEqual(self.dict.name, 'json_dict')
self.assertEqual(self.dict.key_prefix, 'rs:unit_tests:json_dict')
def test_incr_decr(self):
self.reset()
self.dict.incr('views', 1)
self.assertEqual(self.dict['views'], self.cast(1))
self.dict.incr('views', 3)
self.assertEqual(self.dict['views'], self.cast(4))
self.dict.decr('views', 1)
self.assertEqual(self.dict['views'], self.cast(3))
def test_get(self):
self.reset()
self.dict["hello"] = "world"
self.assertEqual(self.dict.get("hello"), 'world')
self.assertEqual(self.dict.get('world', 'hello'), 'hello')
def test_get_key(self):
self.assertEqual(
self.dict.get_key('views'),
"{}:{}:{}".format(self.dict.prefix, self.dict.name, 'views'))
def test_items(self):
self.reset()
self.assertDictEqual(
{k: v for k, v in self.dict.items()},
{k: self.cast(v) for k, v in self.data.items()})
def test_values(self):
self.reset()
self.assertSetEqual(
set(self.dict.values()),
set(map(self.cast, self.data.values())))
def test_iter(self):
self.reset()
self.assertSetEqual(
set(k for k in self.dict.iter()),
set(self.cast(k) for k in self.data.keys()))
def test_iter_match(self):
self.reset(count=10)
self.assertSetEqual(
set(k for k in self.dict.iter("a*")),
set(self.cast(k) for k in self.data.keys() if k.startswith('a')))
def test_mget(self):
self.reset(0)
self.dict.update({
'test1': 1,
'test2': 2,
'test3': 3,
'test4': 4,
'test5': 5})
self.assertListEqual(
self.dict.mget('test2', 'test3', 'test4'),
[self.cast(2), self.cast(3), self.cast(4)])
def test_pop(self):
self.reset()
self.dict['hello'] = 'world'
self.assertEqual(self.dict.pop('hello'), 'world')
self.assertNotIn('hello', self.dict)
def test_delete(self):
self.reset()
self.dict['hello'] = 'world'
self.assertEqual(self.dict['hello'], 'world')
del self.dict['hello']
self.assertNotIn('hello', self.dict)
def test_scan(self):
self.reset()
new_keys = []
cursor = '0'
while cursor:
cursor, keys = self.dict.scan(count=1, cursor=int(cursor))
if keys:
new_keys.extend(keys)
self.assertSetEqual(
set(self.dict.get_key(k) for k in self.data.keys()), set(new_keys))
def test_set(self):
self.reset()
self.dict.set("hello", "world")
self.assertIn("hello", self.dict)
def test_len(self):
self.reset(100)
self.assertEqual(len(self.dict), self.data_len)
self.reset(1000)
self.assertEqual(len(self.dict), self.data_len)
rem = [k for k in list(self.dict)[:250]]
self.dict.remove(*rem)
self.assertEqual(len(self.dict), self.data_len - len(rem))
class TestPickledRedisDict(TestJSONRedisDict):
dict = RedisDict("pickled_dict", prefix="rs:unit_tests:", serializer=pickle)
def test_prefix(self):
self.assertEqual(self.dict.prefix, 'rs:unit_tests')
self.assertEqual(self.dict.name, 'pickled_dict')
self.assertEqual(self.dict.key_prefix, 'rs:unit_tests:pickled_dict')
def test_incr_decr(self):
self.reset()
self.dict.incr('views', 1)
self.assertEqual(self.dict['views'], str(1))
self.dict.incr('views', 3)
self.assertEqual(self.dict['views'], str(4))
self.dict.decr('views', 1)
self.assertEqual(self.dict['views'], str(3))
class TestUnserializedRedisDict(TestJSONRedisDict):
dict = RedisDict("unserialized_dict", prefix="rs:unit_tests:")
is_str = True
def test_prefix(self):
self.assertEqual(self.dict.prefix, 'rs:unit_tests')
self.assertEqual(self.dict.name, 'unserialized_dict')
self.assertEqual(
self.dict.key_prefix, 'rs:unit_tests:unserialized_dict')
if __name__ == '__main__':
unittest.main()
|
// src/LogDispatcher.cpp
#include <gtk-accounting/log/LogDispatcher.h>
#include <iostream>
namespace acc {
LogDispatcher LogDispatcher::instance = LogDispatcher();
LogDispatcher::LogDispatcher() : m_dispatcher(m_dispatcherImpl) {}
LogDispatcher &LogDispatcher::getInstance() { return instance; }
void LogDispatcher::queueMessage(const std::string &message) {
m_dispatcher.queueEvent([this, message](){ printMessge(message); });
}
void LogDispatcher::printMessge(const std::string &message)
{
std::cerr << message << "\n";
}
}
|
def create_compound(attributes):
data = {
"color": attributes.get("color", (0, 0, 0)),
"properties": {
"structure": attributes.get("structure", (0, 0)),
},
}
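    # Copy every supplied attribute onto the top level as well; note that this
    # overwrites the "color" default set above when the caller provides one.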
for k, v in attributes.items():
data[k] = v
    return data
def _load(self):
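        # Read the JSON database from disk; if the file does not exist yet,
        # fall back to a deep copy of DEFAULT_DATA as the initial contents.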
try:
with open(self._db_path(), "r") as fd:
return json.load(fd)
except FileNotFoundError:
logger.debug("initializing database")
            return copy.deepcopy(DEFAULT_DATA)
""" Tests for CLI sync subparser (__main__.py) """
import os
from os.path import extsep
from tempfile import TemporaryDirectory
from unittest import TestCase
from cdd import __version__
from cdd.tests.mocks.argparse import argparse_func_str
from cdd.tests.mocks.classes import class_str
from cdd.tests.mocks.methods import class_with_method_types_str
from cdd.tests.utils_for_tests import run_cli_test, unittest_main
class TestCliSync(TestCase):
"""Test class for __main__.py"""
def test_version(self) -> None:
"""Tests CLI interface gives version"""
run_cli_test(
self,
["--version"],
exit_code=0,
output=__version__,
output_checker=lambda output: output[output.rfind(" ") + 1 :][:-1],
)
def test_args_example0(self) -> None:
"""Tests CLI interface sets namespace correctly"""
with TemporaryDirectory() as tempdir:
filename = os.path.join(
os.path.realpath(tempdir),
"delete_this_0{}".format(os.path.basename(__file__)),
)
with open(filename, "wt") as f:
f.write(class_str)
try:
_, args = run_cli_test(
self,
[
"sync",
"--class",
filename,
"--class-name",
"ConfigClass",
"--argparse-function",
filename,
"--argparse-function-name",
"set_cli_args",
"--truth",
"class",
],
exit_code=None,
output=None,
return_args=True,
)
finally:
if os.path.isfile(filename):
os.remove(filename)
self.assertListEqual(args.argparse_functions, [filename])
self.assertListEqual(args.argparse_function_names, ["set_cli_args"])
self.assertListEqual(args.classes, [filename])
self.assertListEqual(args.class_names, ["ConfigClass"])
self.assertEqual(args.truth, "class")
def test_args_example1(self) -> None:
"""Tests CLI interface sets namespace correctly"""
with TemporaryDirectory() as tempdir:
argparse_filename = os.path.join(
os.path.realpath(tempdir),
"argparse{extsep}py".format(extsep=extsep),
)
class_filename = os.path.join(
os.path.realpath(tempdir),
"class_{extsep}py".format(extsep=extsep),
)
method_filename = os.path.join(
os.path.realpath(tempdir),
"method{extsep}py".format(extsep=extsep),
)
with open(argparse_filename, "wt") as f:
f.write(argparse_func_str)
with open(class_filename, "wt") as f:
f.write(class_str)
with open(method_filename, "wt") as f:
f.write(class_with_method_types_str)
_, args = run_cli_test(
self,
[
"sync",
"--class",
class_filename,
"--class-name",
"ConfigClass",
"--function",
method_filename,
"--function-name",
"train",
"--argparse-function",
argparse_filename,
"--argparse-function-name",
"set_cli_args",
"--truth",
"function",
],
exit_code=None,
output=None,
return_args=True,
)
self.assertListEqual(args.argparse_functions, [argparse_filename])
self.assertListEqual(args.argparse_function_names, ["set_cli_args"])
self.assertListEqual(args.classes, [class_filename])
self.assertListEqual(args.class_names, ["ConfigClass"])
self.assertEqual(args.truth, "function")
def test_non_existent_file_fails(self) -> None:
"""Tests nonexistent file throws the right error"""
with TemporaryDirectory() as tempdir:
filename = os.path.join(
os.path.realpath(tempdir),
"delete_this_1{}".format(os.path.basename(__file__)),
)
run_cli_test(
self,
[
"sync",
"--argparse-function",
filename,
"--class",
filename,
"--truth",
"class",
],
exit_code=2,
output="--truth must be an existent file. Got: {filename!r}\n".format(
filename=filename
),
)
def test_missing_argument_fails(self) -> None:
"""Tests missing argument throws the right error"""
run_cli_test(
self,
["sync", "--truth", "class"],
exit_code=2,
output="--truth must be an existent file. Got: None\n",
)
def test_missing_argument_fails_insufficient_args(self) -> None:
"""Tests missing argument throws the right error"""
with TemporaryDirectory() as tempdir:
filename = os.path.join(
os.path.realpath(tempdir),
"delete_this_2{}".format(os.path.basename(__file__)),
)
with open(filename, "wt") as f:
f.write(class_str)
run_cli_test(
self,
["sync", "--truth", "class", "--class", filename],
exit_code=2,
output="Two or more of `--argparse-function`, `--class`, and `--function` must be specified\n",
)
def test_incorrect_arg_fails(self) -> None:
"""Tests CLI interface failure cases"""
run_cli_test(
self,
["sync", "--wrong"],
exit_code=2,
output="the following arguments are required: --truth\n",
)
unittest_main()
|
package net.n2oapp.security.admin.sso.keycloak;
import net.n2oapp.security.admin.api.provider.SsoUserRoleProvider;
import net.n2oapp.security.admin.sso.keycloak.AdminSsoKeycloakProperties;
import net.n2oapp.security.admin.sso.keycloak.KeycloakRestRoleService;
import net.n2oapp.security.admin.sso.keycloak.KeycloakRestUserService;
import net.n2oapp.security.admin.sso.keycloak.KeycloakSsoUserRoleProvider;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.security.oauth2.client.OAuth2RestOperations;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.token.grant.client.ClientCredentialsResourceDetails;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;
import org.springframework.web.client.RestOperations;
@TestConfiguration
public class SsoKeycloakTestConfiguration {
@Bean
SsoUserRoleProvider ssoUserRoleProvider(@Qualifier("adminSsoKeycloakProperties") AdminSsoKeycloakProperties properties) {
return new KeycloakSsoUserRoleProvider(properties);
}
@Bean
KeycloakRestRoleService keycloakRestRoleService(@Qualifier("adminSsoKeycloakProperties") AdminSsoKeycloakProperties properties,
@Qualifier("keycloakRestTemplate") RestOperations template) {
return new KeycloakRestRoleService(properties, template);
}
@Bean
KeycloakRestUserService keycloakRestUserService(@Qualifier("adminSsoKeycloakProperties") AdminSsoKeycloakProperties properties,
@Qualifier("keycloakRestTemplate") RestOperations template,
KeycloakRestRoleService roleService) {
return new KeycloakRestUserService(properties, template, roleService);
}
@Bean
OAuth2RestOperations keycloakRestTemplate(@Qualifier("adminSsoKeycloakProperties") AdminSsoKeycloakProperties properties) {
ClientCredentialsResourceDetails resource = new ClientCredentialsResourceDetails();
resource.setClientId(properties.getAdminClientId());
resource.setClientSecret(properties.getAdminClientSecret());
resource.setAccessTokenUri(String.format("%s/realms/%s/protocol/openid-connect/token", properties.getServerUrl(), properties.getRealm()));
return new OAuth2RestTemplate(resource);
}
@Bean
public TransactionTemplate transactionTemplate(PlatformTransactionManager transactionManager) {
return new TransactionTemplate(transactionManager);
}
}
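
// --- Hypothetical usage sketch (not part of the original file) ---
// A test class could pull these beans in with standard Spring Boot test
// annotations; the class and field names below are illustrative only.
//
//   @SpringBootTest
//   @Import(SsoKeycloakTestConfiguration.class)
//   public class KeycloakRestUserServiceTest {
//
//       @Autowired
//       private KeycloakRestUserService userService;
//
//       // test methods exercising userService ...
//   }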
|
/**
 * @param textToSearch the text to search
 * @param substring    the string to look for in the text
 * @return {@code true} when the given text contains the given substring, {@code false} otherwise;
 *         a {@code null} {@code textToSearch} simply yields {@code false}
 */
public static boolean contains(String textToSearch, String substring) {
    // Null-safe on the text side; delegates the actual check to String#contains.
    return textToSearch != null && textToSearch.contains(substring);
}
|
/*
* Copyright (c) 2003 by Hewlett-Packard Company. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: )\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "")
#if defined(CPPCHECK)
void list_atomic(void);
void char_list_atomic(void);
void short_list_atomic(void);
void int_list_atomic(void);
void double_list_atomic(void);
#endif
void test_atomic(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap) \
|| defined(AO_HAVE_double_load) \
|| defined(AO_HAVE_double_store)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double) \
|| defined(AO_HAVE_compare_double_and_swap_double) \
|| defined(AO_HAVE_double_compare_and_swap)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic();
char_list_atomic();
short_list_atomic();
int_list_atomic();
double_list_atomic();
# endif
# if defined(AO_HAVE_nop)
AO_nop();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load)
TA_assert(AO_load(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
# if defined(AO_HAVE_test_and_set)
TA_assert(AO_test_and_set(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set(&z) == AO_TS_SET);
TA_assert(AO_test_and_set(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add)
TA_assert(AO_fetch_and_add(&x, 42) == 13);
TA_assert(AO_fetch_and_add(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1)
TA_assert(AO_fetch_and_add1(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1)
TA_assert(AO_fetch_and_sub1(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load)
TA_assert(AO_short_load(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add)
TA_assert(AO_short_fetch_and_add(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1)
TA_assert(AO_short_fetch_and_add1(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1)
TA_assert(AO_short_fetch_and_sub1(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load)
TA_assert(AO_char_load(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add)
TA_assert(AO_char_fetch_and_add(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1)
TA_assert(AO_char_fetch_and_add1(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1)
TA_assert(AO_char_fetch_and_sub1(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load)
TA_assert(AO_int_load(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add)
TA_assert(AO_int_fetch_and_add(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1)
TA_assert(AO_int_fetch_and_add1(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1)
TA_assert(AO_int_fetch_and_sub1(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap)
TA_assert(!AO_compare_and_swap(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or)
AO_or(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor)
AO_xor(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and)
AO_and(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap)
TA_assert(AO_fetch_compare_and_swap(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap)
TA_assert(!AO_short_compare_and_swap(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or)
AO_short_or(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor)
AO_short_xor(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and)
AO_short_and(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap)
TA_assert(AO_short_fetch_compare_and_swap(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap)
TA_assert(!AO_char_compare_and_swap(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or)
AO_char_or(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor)
AO_char_xor(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and)
AO_char_and(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap)
TA_assert(AO_char_fetch_compare_and_swap(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap)
TA_assert(!AO_int_compare_and_swap(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or)
AO_int_or(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor)
AO_int_xor(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and)
AO_int_and(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap)
TA_assert(AO_int_fetch_compare_and_swap(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load) || defined(AO_HAVE_double_store)
/* Initialize old_w even for store to workaround MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load)
new_w = AO_double_load(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
# if defined(AO_HAVE_compare_double_and_swap_double)
TA_assert(!AO_compare_double_and_swap_double(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double)
TA_assert(!AO_compare_and_swap_double(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
/*
* Copyright (c) 2003 by <NAME>. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: _release)\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "_release")
#if defined(CPPCHECK)
void list_atomic_release(void);
void char_list_atomic_release(void);
void short_list_atomic_release(void);
void int_list_atomic_release(void);
void double_list_atomic_release(void);
#endif
void test_atomic_release(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set_release)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap_release) \
|| defined(AO_HAVE_double_load_release) \
|| defined(AO_HAVE_double_store_release)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double_release) \
|| defined(AO_HAVE_compare_double_and_swap_double_release) \
|| defined(AO_HAVE_double_compare_and_swap_release)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic_release();
char_list_atomic_release();
short_list_atomic_release();
int_list_atomic_release();
double_list_atomic_release();
# endif
# if defined(AO_HAVE_nop_release)
AO_nop_release();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store_release)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store_release(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load_release)
TA_assert(AO_load_release(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
# if defined(AO_HAVE_test_and_set_release)
TA_assert(AO_test_and_set_release(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set_release(&z) == AO_TS_SET);
TA_assert(AO_test_and_set_release(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add_release)
TA_assert(AO_fetch_and_add_release(&x, 42) == 13);
TA_assert(AO_fetch_and_add_release(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1_release)
TA_assert(AO_fetch_and_add1_release(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1_release)
TA_assert(AO_fetch_and_sub1_release(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store_release)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store_release(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load_release)
TA_assert(AO_short_load(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add_release)
TA_assert(AO_short_fetch_and_add_release(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add_release(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1_release)
TA_assert(AO_short_fetch_and_add1_release(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1_release)
TA_assert(AO_short_fetch_and_sub1_release(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store_release)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store_release(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load_release)
TA_assert(AO_char_load(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add_release)
TA_assert(AO_char_fetch_and_add_release(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add_release(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1_release)
TA_assert(AO_char_fetch_and_add1_release(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1_release)
TA_assert(AO_char_fetch_and_sub1_release(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store_release)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store_release(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load_release)
TA_assert(AO_int_load(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add_release)
TA_assert(AO_int_fetch_and_add_release(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add_release(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1_release)
TA_assert(AO_int_fetch_and_add1_release(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1_release)
TA_assert(AO_int_fetch_and_sub1_release(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap_release)
TA_assert(!AO_compare_and_swap_release(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap_release(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or_release)
AO_or_release(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor_release)
AO_xor_release(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and_release)
AO_and_release(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap_release)
TA_assert(AO_fetch_compare_and_swap_release(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap_release(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap_release)
TA_assert(!AO_short_compare_and_swap_release(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap_release(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or_release)
AO_short_or_release(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor_release)
AO_short_xor_release(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and_release)
AO_short_and_release(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap_release)
TA_assert(AO_short_fetch_compare_and_swap_release(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap_release(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap_release)
TA_assert(!AO_char_compare_and_swap_release(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap_release(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or_release)
AO_char_or_release(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor_release)
AO_char_xor_release(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and_release)
AO_char_and_release(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap_release)
TA_assert(AO_char_fetch_compare_and_swap_release(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap_release(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap_release)
TA_assert(!AO_int_compare_and_swap_release(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap_release(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or_release)
AO_int_or_release(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor_release)
AO_int_xor_release(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and_release)
AO_int_and_release(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap_release)
TA_assert(AO_int_fetch_compare_and_swap_release(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap_release(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load_release) || defined(AO_HAVE_double_store_release)
/* Initialize old_w even for store to workaround MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load_release)
new_w = AO_double_load_release(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store_release)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store_release(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store_release(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store_release(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
# if defined(AO_HAVE_compare_double_and_swap_double_release)
TA_assert(!AO_compare_double_and_swap_double_release(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double_release(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_release(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_release(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_release(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double_release(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double_release(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double_release)
TA_assert(!AO_compare_and_swap_double_release(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double_release(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_release(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_release(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double_release(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double_release(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap_release)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap_release(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap_release(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap_release(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap_release(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap_release(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap_release(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap_release(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
/*
* Copyright (c) 2003 by <NAME>. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: _acquire)\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "_acquire")
#if defined(CPPCHECK)
void list_atomic_acquire(void);
void char_list_atomic_acquire(void);
void short_list_atomic_acquire(void);
void int_list_atomic_acquire(void);
void double_list_atomic_acquire(void);
#endif
void test_atomic_acquire(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set_acquire)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap_acquire) \
|| defined(AO_HAVE_double_load_acquire) \
|| defined(AO_HAVE_double_store_acquire)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double_acquire) \
|| defined(AO_HAVE_compare_double_and_swap_double_acquire) \
|| defined(AO_HAVE_double_compare_and_swap_acquire)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic_acquire();
char_list_atomic_acquire();
short_list_atomic_acquire();
int_list_atomic_acquire();
double_list_atomic_acquire();
# endif
# if defined(AO_HAVE_nop_acquire)
AO_nop_acquire();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store_acquire)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store_acquire(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load_acquire)
TA_assert(AO_load_acquire(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
# if defined(AO_HAVE_test_and_set_acquire)
TA_assert(AO_test_and_set_acquire(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set_acquire(&z) == AO_TS_SET);
TA_assert(AO_test_and_set_acquire(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add_acquire)
TA_assert(AO_fetch_and_add_acquire(&x, 42) == 13);
TA_assert(AO_fetch_and_add_acquire(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1_acquire)
TA_assert(AO_fetch_and_add1_acquire(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1_acquire)
TA_assert(AO_fetch_and_sub1_acquire(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store_acquire)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store_acquire(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load_acquire)
TA_assert(AO_short_load(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add_acquire)
TA_assert(AO_short_fetch_and_add_acquire(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add_acquire(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1_acquire)
TA_assert(AO_short_fetch_and_add1_acquire(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1_acquire)
TA_assert(AO_short_fetch_and_sub1_acquire(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store_acquire)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store_acquire(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load_acquire)
TA_assert(AO_char_load(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add_acquire)
TA_assert(AO_char_fetch_and_add_acquire(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add_acquire(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1_acquire)
TA_assert(AO_char_fetch_and_add1_acquire(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1_acquire)
TA_assert(AO_char_fetch_and_sub1_acquire(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store_acquire)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store_acquire(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load_acquire)
TA_assert(AO_int_load(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add_acquire)
TA_assert(AO_int_fetch_and_add_acquire(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add_acquire(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1_acquire)
TA_assert(AO_int_fetch_and_add1_acquire(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1_acquire)
TA_assert(AO_int_fetch_and_sub1_acquire(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap_acquire)
TA_assert(!AO_compare_and_swap_acquire(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap_acquire(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or_acquire)
AO_or_acquire(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor_acquire)
AO_xor_acquire(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and_acquire)
AO_and_acquire(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap_acquire)
TA_assert(AO_fetch_compare_and_swap_acquire(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap_acquire(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap_acquire)
TA_assert(!AO_short_compare_and_swap_acquire(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap_acquire(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or_acquire)
AO_short_or_acquire(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor_acquire)
AO_short_xor_acquire(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and_acquire)
AO_short_and_acquire(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap_acquire)
TA_assert(AO_short_fetch_compare_and_swap_acquire(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap_acquire(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap_acquire)
TA_assert(!AO_char_compare_and_swap_acquire(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap_acquire(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or_acquire)
AO_char_or_acquire(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor_acquire)
AO_char_xor_acquire(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and_acquire)
AO_char_and_acquire(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap_acquire)
TA_assert(AO_char_fetch_compare_and_swap_acquire(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap_acquire(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap_acquire)
TA_assert(!AO_int_compare_and_swap_acquire(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap_acquire(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or_acquire)
AO_int_or_acquire(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor_acquire)
AO_int_xor_acquire(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and_acquire)
AO_int_and_acquire(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap_acquire)
TA_assert(AO_int_fetch_compare_and_swap_acquire(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap_acquire(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load_acquire) || defined(AO_HAVE_double_store_acquire)
/* Initialize old_w even for store to work around an MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load_acquire)
new_w = AO_double_load_acquire(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store_acquire)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store_acquire(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store_acquire(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store_acquire(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
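/* Double-width CAS checks: compare_double_and_swap_double compares both
 * halves of the AO_double_t, compare_and_swap_double compares only the
 * first half (AO_val1), and double_compare_and_swap takes whole
 * AO_double_t values; in each case the new pair is stored only when the
 * comparison succeeds, as the assertions below verify.
 */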
# if defined(AO_HAVE_compare_double_and_swap_double_acquire)
TA_assert(!AO_compare_double_and_swap_double_acquire(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double_acquire(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_acquire(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_acquire(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_acquire(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double_acquire(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double_acquire(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double_acquire)
TA_assert(!AO_compare_and_swap_double_acquire(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double_acquire(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_acquire(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_acquire(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double_acquire(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double_acquire(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap_acquire)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap_acquire(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap_acquire(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap_acquire(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap_acquire(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap_acquire(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap_acquire(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap_acquire(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
/*
* Copyright (c) 2003 by Hewlett-Packard Company. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: _read)\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "_read")
#if defined(CPPCHECK)
void list_atomic_read(void);
void char_list_atomic_read(void);
void short_list_atomic_read(void);
void int_list_atomic_read(void);
void double_list_atomic_read(void);
#endif
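/* Exercises the "_read" variants of the primitives: each operation below
 * is called when its _read form is available; otherwise a plain or
 * volatile fallback is used and the omission is reported via MISSING.
 */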
void test_atomic_read(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set_read)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap_read) \
|| defined(AO_HAVE_double_load_read) \
|| defined(AO_HAVE_double_store_read)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double_read) \
|| defined(AO_HAVE_compare_double_and_swap_double_read) \
|| defined(AO_HAVE_double_compare_and_swap_read)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic_read();
char_list_atomic_read();
short_list_atomic_read();
int_list_atomic_read();
double_list_atomic_read();
# endif
# if defined(AO_HAVE_nop_read)
AO_nop_read();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store_read(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load_read)
TA_assert(AO_load_read(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
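/* The test-and-set calls below return the previous flag state: AO_TS_CLEAR
 * on the first (successful) set and AO_TS_SET on later attempts, with
 * AO_CLEAR resetting the flag.  Illustrative usage sketch only (not part
 * of this test; assumes the acquire variant is provided on the target):
 *
 *   static AO_TS_t lock = AO_TS_INITIALIZER;
 *   while (AO_test_and_set_acquire(&lock) == AO_TS_SET) { }  (spin)
 *   ... critical section ...
 *   AO_CLEAR(&lock);
 */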
# if defined(AO_HAVE_test_and_set_read)
TA_assert(AO_test_and_set_read(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set_read(&z) == AO_TS_SET);
TA_assert(AO_test_and_set_read(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add_read)
TA_assert(AO_fetch_and_add_read(&x, 42) == 13);
TA_assert(AO_fetch_and_add_read(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1_read)
TA_assert(AO_fetch_and_add1_read(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1_read)
TA_assert(AO_fetch_and_sub1_read(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store_read(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load_read)
TA_assert(AO_short_load(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add_read)
TA_assert(AO_short_fetch_and_add_read(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add_read(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1_read)
TA_assert(AO_short_fetch_and_add1_read(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1_read)
TA_assert(AO_short_fetch_and_sub1_read(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store_read(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load_read)
TA_assert(AO_char_load(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add_read)
TA_assert(AO_char_fetch_and_add_read(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add_read(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1_read)
TA_assert(AO_char_fetch_and_add1_read(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1_read)
TA_assert(AO_char_fetch_and_sub1_read(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store_read(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load_read)
TA_assert(AO_int_load(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add_read)
TA_assert(AO_int_fetch_and_add_read(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add_read(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1_read)
TA_assert(AO_int_fetch_and_add1_read(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1_read)
TA_assert(AO_int_fetch_and_sub1_read(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap_read)
TA_assert(!AO_compare_and_swap_read(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap_read(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or_read)
AO_or_read(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor_read)
AO_xor_read(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and_read)
AO_and_read(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap_read)
TA_assert(AO_fetch_compare_and_swap_read(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap_read(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap_read)
TA_assert(!AO_short_compare_and_swap_read(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap_read(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or_read)
AO_short_or_read(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor_read)
AO_short_xor_read(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and_read)
AO_short_and_read(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap_read)
TA_assert(AO_short_fetch_compare_and_swap_read(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap_read(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap_read)
TA_assert(!AO_char_compare_and_swap_read(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap_read(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or_read)
AO_char_or_read(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor_read)
AO_char_xor_read(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and_read)
AO_char_and_read(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap_read)
TA_assert(AO_char_fetch_compare_and_swap_read(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap_read(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap_read)
TA_assert(!AO_int_compare_and_swap_read(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap_read(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or_read)
AO_int_or_read(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor_read)
AO_int_xor_read(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and_read)
AO_int_and_read(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap_read)
TA_assert(AO_int_fetch_compare_and_swap_read(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap_read(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load_read) || defined(AO_HAVE_double_store_read)
/* Initialize old_w even for store to work around an MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load_read)
new_w = AO_double_load_read(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store_read)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
# if defined(AO_HAVE_compare_double_and_swap_double_read)
TA_assert(!AO_compare_double_and_swap_double_read(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double_read(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_read(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_read(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_read(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double_read(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double_read(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double_read)
TA_assert(!AO_compare_and_swap_double_read(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double_read(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_read(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_read(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double_read(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double_read(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap_read)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap_read(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
/*
* Copyright (c) 2003 by Hewlett-Packard Company. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: _write)\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "_write")
#if defined(CPPCHECK)
void list_atomic_write(void);
void char_list_atomic_write(void);
void short_list_atomic_write(void);
void int_list_atomic_write(void);
void double_list_atomic_write(void);
#endif
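/* Same checks as above, using the "_write" variants of the primitives. */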
void test_atomic_write(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set_write)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap_write) \
|| defined(AO_HAVE_double_load_write) \
|| defined(AO_HAVE_double_store_write)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double_write) \
|| defined(AO_HAVE_compare_double_and_swap_double_write) \
|| defined(AO_HAVE_double_compare_and_swap_write)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic_write();
char_list_atomic_write();
short_list_atomic_write();
int_list_atomic_write();
double_list_atomic_write();
# endif
# if defined(AO_HAVE_nop_write)
AO_nop_write();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store_write)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store_write(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load_write)
TA_assert(AO_load_write(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
# if defined(AO_HAVE_test_and_set_write)
TA_assert(AO_test_and_set_write(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set_write(&z) == AO_TS_SET);
TA_assert(AO_test_and_set_write(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add_write)
TA_assert(AO_fetch_and_add_write(&x, 42) == 13);
TA_assert(AO_fetch_and_add_write(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1_write)
TA_assert(AO_fetch_and_add1_write(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1_write)
TA_assert(AO_fetch_and_sub1_write(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store_write)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store_write(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load_write)
TA_assert(AO_short_load(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add_write)
TA_assert(AO_short_fetch_and_add_write(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add_write(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1_write)
TA_assert(AO_short_fetch_and_add1_write(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1_write)
TA_assert(AO_short_fetch_and_sub1_write(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store_write)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store_write(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load_write)
TA_assert(AO_char_load(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add_write)
TA_assert(AO_char_fetch_and_add_write(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add_write(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1_write)
TA_assert(AO_char_fetch_and_add1_write(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1_write)
TA_assert(AO_char_fetch_and_sub1_write(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store_write)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store_write(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load_write)
TA_assert(AO_int_load(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add_write)
TA_assert(AO_int_fetch_and_add_write(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add_write(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1_write)
TA_assert(AO_int_fetch_and_add1_write(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1_write)
TA_assert(AO_int_fetch_and_sub1_write(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap_write)
TA_assert(!AO_compare_and_swap_write(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap_write(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or_write)
AO_or_write(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor_write)
AO_xor_write(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and_write)
AO_and_write(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap_write)
TA_assert(AO_fetch_compare_and_swap_write(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap_write(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap_write)
TA_assert(!AO_short_compare_and_swap_write(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap_write(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or_write)
AO_short_or_write(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor_write)
AO_short_xor_write(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and_write)
AO_short_and_write(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap_write)
TA_assert(AO_short_fetch_compare_and_swap_write(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap_write(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap_write)
TA_assert(!AO_char_compare_and_swap_write(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap_write(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or_write)
AO_char_or_write(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor_write)
AO_char_xor_write(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and_write)
AO_char_and_write(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap_write)
TA_assert(AO_char_fetch_compare_and_swap_write(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap_write(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap_write)
TA_assert(!AO_int_compare_and_swap_write(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap_write(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or_write)
AO_int_or_write(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor_write)
AO_int_xor_write(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and_write)
AO_int_and_write(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap_write)
TA_assert(AO_int_fetch_compare_and_swap_write(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap_write(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load_write) || defined(AO_HAVE_double_store_write)
/* Initialize old_w even for store to work around an MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load_write)
new_w = AO_double_load_write(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store_write)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store_write(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store_write(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store_write(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
# if defined(AO_HAVE_compare_double_and_swap_double_write)
TA_assert(!AO_compare_double_and_swap_double_write(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double_write(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_write(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_write(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_write(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double_write(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double_write(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double_write)
TA_assert(!AO_compare_and_swap_double_write(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double_write(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_write(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_write(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double_write(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double_write(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap_write)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap_write(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
/*
* Copyright (c) 2003 by Hewlett-Packard Company. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: _full)\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "_full")
#if defined(CPPCHECK)
void list_atomic_full(void);
void char_list_atomic_full(void);
void short_list_atomic_full(void);
void int_list_atomic_full(void);
void double_list_atomic_full(void);
#endif
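/* Same checks as above, using the "_full" (full-barrier) variants. */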
void test_atomic_full(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set_full)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap_full) \
|| defined(AO_HAVE_double_load_full) \
|| defined(AO_HAVE_double_store_full)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double_full) \
|| defined(AO_HAVE_compare_double_and_swap_double_full) \
|| defined(AO_HAVE_double_compare_and_swap_full)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic_full();
char_list_atomic_full();
short_list_atomic_full();
int_list_atomic_full();
double_list_atomic_full();
# endif
# if defined(AO_HAVE_nop_full)
AO_nop_full();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store_full)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store_full(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load_full)
TA_assert(AO_load_full(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
# if defined(AO_HAVE_test_and_set_full)
TA_assert(AO_test_and_set_full(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set_full(&z) == AO_TS_SET);
TA_assert(AO_test_and_set_full(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add_full)
TA_assert(AO_fetch_and_add_full(&x, 42) == 13);
TA_assert(AO_fetch_and_add_full(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1_full)
TA_assert(AO_fetch_and_add1_full(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1_full)
TA_assert(AO_fetch_and_sub1_full(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store_full)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store_full(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load_full)
TA_assert(AO_short_load(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add_full)
TA_assert(AO_short_fetch_and_add_full(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add_full(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1_full)
TA_assert(AO_short_fetch_and_add1_full(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1_full)
TA_assert(AO_short_fetch_and_sub1_full(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store_full)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store_full(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load_full)
TA_assert(AO_char_load(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add_full)
TA_assert(AO_char_fetch_and_add_full(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add_full(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1_full)
TA_assert(AO_char_fetch_and_add1_full(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1_full)
TA_assert(AO_char_fetch_and_sub1_full(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store_full)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store_full(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load_full)
TA_assert(AO_int_load(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add_full)
TA_assert(AO_int_fetch_and_add_full(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add_full(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1_full)
TA_assert(AO_int_fetch_and_add1_full(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1_full)
TA_assert(AO_int_fetch_and_sub1_full(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap_full)
TA_assert(!AO_compare_and_swap_full(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap_full(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or_full)
AO_or_full(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor_full)
AO_xor_full(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and_full)
AO_and_full(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap_full)
TA_assert(AO_fetch_compare_and_swap_full(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap_full(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap_full)
TA_assert(!AO_short_compare_and_swap_full(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap_full(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or_full)
AO_short_or_full(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor_full)
AO_short_xor_full(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and_full)
AO_short_and_full(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap_full)
TA_assert(AO_short_fetch_compare_and_swap_full(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap_full(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap_full)
TA_assert(!AO_char_compare_and_swap_full(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap_full(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or_full)
AO_char_or_full(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor_full)
AO_char_xor_full(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and_full)
AO_char_and_full(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap_full)
TA_assert(AO_char_fetch_compare_and_swap_full(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap_full(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap_full)
TA_assert(!AO_int_compare_and_swap_full(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap_full(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or_full)
AO_int_or_full(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor_full)
AO_int_xor_full(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and_full)
AO_int_and_full(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap_full)
TA_assert(AO_int_fetch_compare_and_swap_full(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap_full(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load_full) || defined(AO_HAVE_double_store_full)
/* Initialize old_w even for store to work around an MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load_full)
new_w = AO_double_load_full(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store_full)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store_full(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store_full(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store_full(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
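/* The double-width CAS tests below first attempt swaps in which one or */
/* both halves of the comparand mismatch (these must fail and leave the */
/* location untouched), then perform the matching swap, and finally */
/* restore the location to (0, 0) for the next group of tests. */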
# if defined(AO_HAVE_compare_double_and_swap_double_full)
TA_assert(!AO_compare_double_and_swap_double_full(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double_full(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_full(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_full(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_full(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double_full(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double_full(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double_full)
TA_assert(!AO_compare_and_swap_double_full(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double_full(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_full(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_full(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double_full(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double_full(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap_full)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap_full(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap_full(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap_full(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap_full(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap_full(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap_full(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap_full(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
/*
* Copyright (c) 2003 by Hewlett-Packard Company. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: _release_write)\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "_release_write")
#if defined(CPPCHECK)
void list_atomic_release_write(void);
void char_list_atomic_release_write(void);
void short_list_atomic_release_write(void);
void int_list_atomic_release_write(void);
void double_list_atomic_release_write(void);
#endif
void test_atomic_release_write(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set_release_write)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap_release_write) \
|| defined(AO_HAVE_double_load_release_write) \
|| defined(AO_HAVE_double_store_release_write)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double_release_write) \
|| defined(AO_HAVE_compare_double_and_swap_double_release_write) \
|| defined(AO_HAVE_double_compare_and_swap_release_write)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic_release_write();
char_list_atomic_release_write();
short_list_atomic_release_write();
int_list_atomic_release_write();
double_list_atomic_release_write();
# endif
# if defined(AO_HAVE_nop_release_write)
AO_nop_release_write();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store_release_write)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store_release_write(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load_release_write)
TA_assert(AO_load_release_write(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
# if defined(AO_HAVE_test_and_set_release_write)
TA_assert(AO_test_and_set_release_write(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set_release_write(&z) == AO_TS_SET);
TA_assert(AO_test_and_set_release_write(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add_release_write)
TA_assert(AO_fetch_and_add_release_write(&x, 42) == 13);
TA_assert(AO_fetch_and_add_release_write(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1_release_write)
TA_assert(AO_fetch_and_add1_release_write(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1_release_write)
TA_assert(AO_fetch_and_sub1_release_write(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store_release_write)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store_release_write(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load_release_write)
TA_assert(AO_short_load_release_write(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add_release_write)
TA_assert(AO_short_fetch_and_add_release_write(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add_release_write(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1_release_write)
TA_assert(AO_short_fetch_and_add1_release_write(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1_release_write)
TA_assert(AO_short_fetch_and_sub1_release_write(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store_release_write)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store_release_write(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load_release_write)
TA_assert(AO_char_load_release_write(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add_release_write)
TA_assert(AO_char_fetch_and_add_release_write(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add_release_write(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1_release_write)
TA_assert(AO_char_fetch_and_add1_release_write(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1_release_write)
TA_assert(AO_char_fetch_and_sub1_release_write(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store_release_write)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store_release_write(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load_release_write)
TA_assert(AO_int_load_release_write(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add_release_write)
TA_assert(AO_int_fetch_and_add_release_write(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add_release_write(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1_release_write)
TA_assert(AO_int_fetch_and_add1_release_write(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1_release_write)
TA_assert(AO_int_fetch_and_sub1_release_write(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap_release_write)
TA_assert(!AO_compare_and_swap_release_write(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap_release_write(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or_release_write)
AO_or_release_write(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor_release_write)
AO_xor_release_write(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and_release_write)
AO_and_release_write(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap_release_write)
TA_assert(AO_fetch_compare_and_swap_release_write(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap_release_write(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap_release_write)
TA_assert(!AO_short_compare_and_swap_release_write(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap_release_write(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or_release_write)
AO_short_or_release_write(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor_release_write)
AO_short_xor_release_write(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and_release_write)
AO_short_and_release_write(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap_release_write)
TA_assert(AO_short_fetch_compare_and_swap_release_write(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap_release_write(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap_release_write)
TA_assert(!AO_char_compare_and_swap_release_write(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap_release_write(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or_release_write)
AO_char_or_release_write(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor_release_write)
AO_char_xor_release_write(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and_release_write)
AO_char_and_release_write(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap_release_write)
TA_assert(AO_char_fetch_compare_and_swap_release_write(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap_release_write(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap_release_write)
TA_assert(!AO_int_compare_and_swap_release_write(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap_release_write(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or_release_write)
AO_int_or_release_write(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor_release_write)
AO_int_xor_release_write(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and_release_write)
AO_int_and_release_write(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap_release_write)
TA_assert(AO_int_fetch_compare_and_swap_release_write(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap_release_write(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load_release_write) || defined(AO_HAVE_double_store_release_write)
/* Initialize old_w even for store to work around an MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load_release_write)
new_w = AO_double_load_release_write(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store_release_write)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store_release_write(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store_release_write(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store_release_write(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
# if defined(AO_HAVE_compare_double_and_swap_double_release_write)
TA_assert(!AO_compare_double_and_swap_double_release_write(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double_release_write(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_release_write(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_release_write(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_release_write(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double_release_write(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double_release_write(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double_release_write)
TA_assert(!AO_compare_and_swap_double_release_write(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double_release_write(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_release_write(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_release_write(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double_release_write(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double_release_write(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap_release_write)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap_release_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap_release_write(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap_release_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap_release_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap_release_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap_release_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap_release_write(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
/*
* Copyright (c) 2003 by Hewlett-Packard Company. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: _acquire_read)\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "_acquire_read")
#if defined(CPPCHECK)
void list_atomic_acquire_read(void);
void char_list_atomic_acquire_read(void);
void short_list_atomic_acquire_read(void);
void int_list_atomic_acquire_read(void);
void double_list_atomic_acquire_read(void);
#endif
void test_atomic_acquire_read(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set_acquire_read)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap_acquire_read) \
|| defined(AO_HAVE_double_load_acquire_read) \
|| defined(AO_HAVE_double_store_acquire_read)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double_acquire_read) \
|| defined(AO_HAVE_compare_double_and_swap_double_acquire_read) \
|| defined(AO_HAVE_double_compare_and_swap_acquire_read)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic_acquire_read();
char_list_atomic_acquire_read();
short_list_atomic_acquire_read();
int_list_atomic_acquire_read();
double_list_atomic_acquire_read();
# endif
# if defined(AO_HAVE_nop_acquire_read)
AO_nop_acquire_read();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store_acquire_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store_acquire_read(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load_acquire_read)
TA_assert(AO_load_acquire_read(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
# if defined(AO_HAVE_test_and_set_acquire_read)
TA_assert(AO_test_and_set_acquire_read(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set_acquire_read(&z) == AO_TS_SET);
TA_assert(AO_test_and_set_acquire_read(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add_acquire_read)
TA_assert(AO_fetch_and_add_acquire_read(&x, 42) == 13);
TA_assert(AO_fetch_and_add_acquire_read(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1_acquire_read)
TA_assert(AO_fetch_and_add1_acquire_read(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1_acquire_read)
TA_assert(AO_fetch_and_sub1_acquire_read(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store_acquire_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store_acquire_read(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load_acquire_read)
TA_assert(AO_short_load_acquire_read(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add_acquire_read)
TA_assert(AO_short_fetch_and_add_acquire_read(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add_acquire_read(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1_acquire_read)
TA_assert(AO_short_fetch_and_add1_acquire_read(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1_acquire_read)
TA_assert(AO_short_fetch_and_sub1_acquire_read(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store_acquire_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store_acquire_read(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load_acquire_read)
TA_assert(AO_char_load_acquire_read(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add_acquire_read)
TA_assert(AO_char_fetch_and_add_acquire_read(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add_acquire_read(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1_acquire_read)
TA_assert(AO_char_fetch_and_add1_acquire_read(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1_acquire_read)
TA_assert(AO_char_fetch_and_sub1_acquire_read(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store_acquire_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store_acquire_read(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load_acquire_read)
TA_assert(AO_int_load_acquire_read(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add_acquire_read)
TA_assert(AO_int_fetch_and_add_acquire_read(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add_acquire_read(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1_acquire_read)
TA_assert(AO_int_fetch_and_add1_acquire_read(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1_acquire_read)
TA_assert(AO_int_fetch_and_sub1_acquire_read(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap_acquire_read)
TA_assert(!AO_compare_and_swap_acquire_read(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap_acquire_read(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or_acquire_read)
AO_or_acquire_read(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor_acquire_read)
AO_xor_acquire_read(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and_acquire_read)
AO_and_acquire_read(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap_acquire_read)
TA_assert(AO_fetch_compare_and_swap_acquire_read(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap_acquire_read(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap_acquire_read)
TA_assert(!AO_short_compare_and_swap_acquire_read(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap_acquire_read(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or_acquire_read)
AO_short_or_acquire_read(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor_acquire_read)
AO_short_xor_acquire_read(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and_acquire_read)
AO_short_and_acquire_read(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap_acquire_read)
TA_assert(AO_short_fetch_compare_and_swap_acquire_read(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap_acquire_read(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap_acquire_read)
TA_assert(!AO_char_compare_and_swap_acquire_read(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap_acquire_read(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or_acquire_read)
AO_char_or_acquire_read(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor_acquire_read)
AO_char_xor_acquire_read(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and_acquire_read)
AO_char_and_acquire_read(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap_acquire_read)
TA_assert(AO_char_fetch_compare_and_swap_acquire_read(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap_acquire_read(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap_acquire_read)
TA_assert(!AO_int_compare_and_swap_acquire_read(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap_acquire_read(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or_acquire_read)
AO_int_or_acquire_read(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor_acquire_read)
AO_int_xor_acquire_read(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and_acquire_read)
AO_int_and_acquire_read(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap_acquire_read)
TA_assert(AO_int_fetch_compare_and_swap_acquire_read(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap_acquire_read(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load_acquire_read) || defined(AO_HAVE_double_store_acquire_read)
/* Initialize old_w even for store to work around an MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load_acquire_read)
new_w = AO_double_load_acquire_read(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store_acquire_read)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store_acquire_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store_acquire_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store_acquire_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
# if defined(AO_HAVE_compare_double_and_swap_double_acquire_read)
TA_assert(!AO_compare_double_and_swap_double_acquire_read(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double_acquire_read(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_acquire_read(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_acquire_read(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_acquire_read(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double_acquire_read(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double_acquire_read(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double_acquire_read)
TA_assert(!AO_compare_and_swap_double_acquire_read(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double_acquire_read(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_acquire_read(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_acquire_read(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double_acquire_read(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double_acquire_read(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap_acquire_read)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap_acquire_read(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
/*
* Copyright (c) 2003 by Hewlett-Packard Company. All rights reserved.
*
* This file is covered by the GNU general public license, version 2.
* see COPYING for details.
*/
/* Some basic sanity tests. These do not test the barrier semantics. */
#undef TA_assert
#define TA_assert(e) \
if (!(e)) { fprintf(stderr, "Assertion failed %s:%d (barrier: _dd_acquire_read)\n", \
__FILE__, __LINE__), exit(1); }
#undef MISSING
#define MISSING(name) \
printf("Missing: %s\n", #name "_dd_acquire_read")
#if defined(CPPCHECK)
void list_atomic_dd_acquire_read(void);
void char_list_atomic_dd_acquire_read(void);
void short_list_atomic_dd_acquire_read(void);
void int_list_atomic_dd_acquire_read(void);
void double_list_atomic_dd_acquire_read(void);
#endif
void test_atomic_dd_acquire_read(void)
{
AO_t x;
unsigned char b;
unsigned short s;
unsigned int zz;
# if defined(AO_HAVE_test_and_set_dd_acquire_read)
AO_TS_t z = AO_TS_INITIALIZER;
# endif
# if defined(AO_HAVE_double_compare_and_swap_dd_acquire_read) \
|| defined(AO_HAVE_double_load_dd_acquire_read) \
|| defined(AO_HAVE_double_store_dd_acquire_read)
static AO_double_t old_w; /* static to avoid misalignment */
AO_double_t new_w;
# endif
# if defined(AO_HAVE_compare_and_swap_double_dd_acquire_read) \
|| defined(AO_HAVE_compare_double_and_swap_double_dd_acquire_read) \
|| defined(AO_HAVE_double_compare_and_swap_dd_acquire_read)
static AO_double_t w; /* static to avoid misalignment */
w.AO_val1 = 0;
w.AO_val2 = 0;
# endif
# if defined(CPPCHECK)
list_atomic_dd_acquire_read();
char_list_atomic_dd_acquire_read();
short_list_atomic_dd_acquire_read();
int_list_atomic_dd_acquire_read();
double_list_atomic_dd_acquire_read();
# endif
# if defined(AO_HAVE_nop_dd_acquire_read)
AO_nop_dd_acquire_read();
# elif !defined(AO_HAVE_nop) || !defined(AO_HAVE_nop_full) \
|| !defined(AO_HAVE_nop_read) || !defined(AO_HAVE_nop_write)
MISSING(AO_nop);
# endif
# if defined(AO_HAVE_store_dd_acquire_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile AO_t *)&x = 0; /* initialize to avoid false warning */
# endif
AO_store_dd_acquire_read(&x, 13);
TA_assert(x == 13);
# else
# if !defined(AO_HAVE_store) || !defined(AO_HAVE_store_full) \
|| !defined(AO_HAVE_store_release) \
|| !defined(AO_HAVE_store_release_write) \
|| !defined(AO_HAVE_store_write)
MISSING(AO_store);
# endif
x = 13;
# endif
# if defined(AO_HAVE_load_dd_acquire_read)
TA_assert(AO_load_dd_acquire_read(&x) == 13);
# elif !defined(AO_HAVE_load) || !defined(AO_HAVE_load_acquire) \
|| !defined(AO_HAVE_load_acquire_read) \
|| !defined(AO_HAVE_load_dd_acquire_read) \
|| !defined(AO_HAVE_load_full) || !defined(AO_HAVE_load_read)
MISSING(AO_load);
# endif
# if defined(AO_HAVE_test_and_set_dd_acquire_read)
TA_assert(AO_test_and_set_dd_acquire_read(&z) == AO_TS_CLEAR);
TA_assert(AO_test_and_set_dd_acquire_read(&z) == AO_TS_SET);
TA_assert(AO_test_and_set_dd_acquire_read(&z) == AO_TS_SET);
AO_CLEAR(&z);
# else
MISSING(AO_test_and_set);
# endif
# if defined(AO_HAVE_fetch_and_add_dd_acquire_read)
TA_assert(AO_fetch_and_add_dd_acquire_read(&x, 42) == 13);
TA_assert(AO_fetch_and_add_dd_acquire_read(&x, (AO_t)(-42)) == 55);
# else
MISSING(AO_fetch_and_add);
# endif
# if defined(AO_HAVE_fetch_and_add1_dd_acquire_read)
TA_assert(AO_fetch_and_add1_dd_acquire_read(&x) == 13);
# else
MISSING(AO_fetch_and_add1);
++x;
# endif
# if defined(AO_HAVE_fetch_and_sub1_dd_acquire_read)
TA_assert(AO_fetch_and_sub1_dd_acquire_read(&x) == 14);
# else
MISSING(AO_fetch_and_sub1);
--x;
# endif
# if defined(AO_HAVE_short_store_dd_acquire_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile short *)&s = 0;
# endif
AO_short_store_dd_acquire_read(&s, 13);
# else
# if !defined(AO_HAVE_short_store) || !defined(AO_HAVE_short_store_full) \
|| !defined(AO_HAVE_short_store_release) \
|| !defined(AO_HAVE_short_store_release_write) \
|| !defined(AO_HAVE_short_store_write)
MISSING(AO_short_store);
# endif
s = 13;
# endif
# if defined(AO_HAVE_short_load_dd_acquire_read)
TA_assert(AO_short_load_dd_acquire_read(&s) == 13);
# elif !defined(AO_HAVE_short_load) || !defined(AO_HAVE_short_load_acquire) \
|| !defined(AO_HAVE_short_load_acquire_read) \
|| !defined(AO_HAVE_short_load_dd_acquire_read) \
|| !defined(AO_HAVE_short_load_full) \
|| !defined(AO_HAVE_short_load_read)
MISSING(AO_short_load);
# endif
# if defined(AO_HAVE_short_fetch_and_add_dd_acquire_read)
TA_assert(AO_short_fetch_and_add_dd_acquire_read(&s, 42) == 13);
TA_assert(AO_short_fetch_and_add_dd_acquire_read(&s, (unsigned short)-42) == 55);
# else
MISSING(AO_short_fetch_and_add);
# endif
# if defined(AO_HAVE_short_fetch_and_add1_dd_acquire_read)
TA_assert(AO_short_fetch_and_add1_dd_acquire_read(&s) == 13);
# else
MISSING(AO_short_fetch_and_add1);
++s;
# endif
# if defined(AO_HAVE_short_fetch_and_sub1_dd_acquire_read)
TA_assert(AO_short_fetch_and_sub1_dd_acquire_read(&s) == 14);
# else
MISSING(AO_short_fetch_and_sub1);
--s;
# endif
TA_assert(*(volatile short *)&s == 13);
# if defined(AO_HAVE_char_store_dd_acquire_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile char *)&b = 0;
# endif
AO_char_store_dd_acquire_read(&b, 13);
# else
# if !defined(AO_HAVE_char_store) || !defined(AO_HAVE_char_store_full) \
|| !defined(AO_HAVE_char_store_release) \
|| !defined(AO_HAVE_char_store_release_write) \
|| !defined(AO_HAVE_char_store_write)
MISSING(AO_char_store);
# endif
b = 13;
# endif
# if defined(AO_HAVE_char_load_dd_acquire_read)
TA_assert(AO_char_load_dd_acquire_read(&b) == 13);
# elif !defined(AO_HAVE_char_load) || !defined(AO_HAVE_char_load_acquire) \
|| !defined(AO_HAVE_char_load_acquire_read) \
|| !defined(AO_HAVE_char_load_dd_acquire_read) \
|| !defined(AO_HAVE_char_load_full) || !defined(AO_HAVE_char_load_read)
MISSING(AO_char_load);
# endif
# if defined(AO_HAVE_char_fetch_and_add_dd_acquire_read)
TA_assert(AO_char_fetch_and_add_dd_acquire_read(&b, 42) == 13);
TA_assert(AO_char_fetch_and_add_dd_acquire_read(&b, (unsigned char)-42) == 55);
# else
MISSING(AO_char_fetch_and_add);
# endif
# if defined(AO_HAVE_char_fetch_and_add1_dd_acquire_read)
TA_assert(AO_char_fetch_and_add1_dd_acquire_read(&b) == 13);
# else
MISSING(AO_char_fetch_and_add1);
++b;
# endif
# if defined(AO_HAVE_char_fetch_and_sub1_dd_acquire_read)
TA_assert(AO_char_fetch_and_sub1_dd_acquire_read(&b) == 14);
# else
MISSING(AO_char_fetch_and_sub1);
--b;
# endif
TA_assert(*(volatile char *)&b == 13);
# if defined(AO_HAVE_int_store_dd_acquire_read)
# if (defined(AO_MEMORY_SANITIZER) || defined(LINT2)) \
&& defined(AO_PREFER_GENERALIZED)
*(volatile int *)&zz = 0;
# endif
AO_int_store_dd_acquire_read(&zz, 13);
# else
# if !defined(AO_HAVE_int_store) || !defined(AO_HAVE_int_store_full) \
|| !defined(AO_HAVE_int_store_release) \
|| !defined(AO_HAVE_int_store_release_write) \
|| !defined(AO_HAVE_int_store_write)
MISSING(AO_int_store);
# endif
zz = 13;
# endif
# if defined(AO_HAVE_int_load_dd_acquire_read)
TA_assert(AO_int_load_dd_acquire_read(&zz) == 13);
# elif !defined(AO_HAVE_int_load) || !defined(AO_HAVE_int_load_acquire) \
|| !defined(AO_HAVE_int_load_acquire_read) \
|| !defined(AO_HAVE_int_load_dd_acquire_read) \
|| !defined(AO_HAVE_int_load_full) || !defined(AO_HAVE_int_load_read)
MISSING(AO_int_load);
# endif
# if defined(AO_HAVE_int_fetch_and_add_dd_acquire_read)
TA_assert(AO_int_fetch_and_add_dd_acquire_read(&zz, 42) == 13);
TA_assert(AO_int_fetch_and_add_dd_acquire_read(&zz, (unsigned int)-42) == 55);
# else
MISSING(AO_int_fetch_and_add);
# endif
# if defined(AO_HAVE_int_fetch_and_add1_dd_acquire_read)
TA_assert(AO_int_fetch_and_add1_dd_acquire_read(&zz) == 13);
# else
MISSING(AO_int_fetch_and_add1);
++zz;
# endif
# if defined(AO_HAVE_int_fetch_and_sub1_dd_acquire_read)
TA_assert(AO_int_fetch_and_sub1_dd_acquire_read(&zz) == 14);
# else
MISSING(AO_int_fetch_and_sub1);
--zz;
# endif
TA_assert(*(volatile int *)&zz == 13);
# if defined(AO_HAVE_compare_and_swap_dd_acquire_read)
TA_assert(!AO_compare_and_swap_dd_acquire_read(&x, 14, 42));
TA_assert(x == 13);
TA_assert(AO_compare_and_swap_dd_acquire_read(&x, 13, 42));
TA_assert(x == 42);
# else
MISSING(AO_compare_and_swap);
if (*(volatile AO_t *)&x == 13) x = 42;
# endif
# if defined(AO_HAVE_or_dd_acquire_read)
AO_or_dd_acquire_read(&x, 66);
TA_assert(x == 106);
# else
# if !defined(AO_HAVE_or) || !defined(AO_HAVE_or_acquire) \
|| !defined(AO_HAVE_or_acquire_read) || !defined(AO_HAVE_or_full) \
|| !defined(AO_HAVE_or_read) || !defined(AO_HAVE_or_release) \
|| !defined(AO_HAVE_or_release_write) || !defined(AO_HAVE_or_write)
MISSING(AO_or);
# endif
x |= 66;
# endif
# if defined(AO_HAVE_xor_dd_acquire_read)
AO_xor_dd_acquire_read(&x, 181);
TA_assert(x == 223);
# else
# if !defined(AO_HAVE_xor) || !defined(AO_HAVE_xor_acquire) \
|| !defined(AO_HAVE_xor_acquire_read) || !defined(AO_HAVE_xor_full) \
|| !defined(AO_HAVE_xor_read) || !defined(AO_HAVE_xor_release) \
|| !defined(AO_HAVE_xor_release_write) || !defined(AO_HAVE_xor_write)
MISSING(AO_xor);
# endif
x ^= 181;
# endif
# if defined(AO_HAVE_and_dd_acquire_read)
AO_and_dd_acquire_read(&x, 57);
TA_assert(x == 25);
# else
# if !defined(AO_HAVE_and) || !defined(AO_HAVE_and_acquire) \
|| !defined(AO_HAVE_and_acquire_read) || !defined(AO_HAVE_and_full) \
|| !defined(AO_HAVE_and_read) || !defined(AO_HAVE_and_release) \
|| !defined(AO_HAVE_and_release_write) || !defined(AO_HAVE_and_write)
MISSING(AO_and);
# endif
x &= 57;
# endif
# if defined(AO_HAVE_fetch_compare_and_swap_dd_acquire_read)
TA_assert(AO_fetch_compare_and_swap_dd_acquire_read(&x, 14, 117) == 25);
TA_assert(x == 25);
TA_assert(AO_fetch_compare_and_swap_dd_acquire_read(&x, 25, 117) == 25);
# else
MISSING(AO_fetch_compare_and_swap);
if (x == 25) x = 117;
# endif
TA_assert(x == 117);
# if defined(AO_HAVE_short_compare_and_swap_dd_acquire_read)
TA_assert(!AO_short_compare_and_swap_dd_acquire_read(&s, 14, 42));
TA_assert(s == 13);
TA_assert(AO_short_compare_and_swap_dd_acquire_read(&s, 13, 42));
TA_assert(s == 42);
# else
MISSING(AO_short_compare_and_swap);
if (*(volatile short *)&s == 13) s = 42;
# endif
# if defined(AO_HAVE_short_or_dd_acquire_read)
AO_short_or_dd_acquire_read(&s, 66);
TA_assert(s == 106);
# else
# if !defined(AO_HAVE_short_or) || !defined(AO_HAVE_short_or_acquire) \
|| !defined(AO_HAVE_short_or_acquire_read) \
|| !defined(AO_HAVE_short_or_full) || !defined(AO_HAVE_short_or_read) \
|| !defined(AO_HAVE_short_or_release) \
|| !defined(AO_HAVE_short_or_release_write) \
|| !defined(AO_HAVE_short_or_write)
MISSING(AO_short_or);
# endif
s |= 66;
# endif
# if defined(AO_HAVE_short_xor_dd_acquire_read)
AO_short_xor_dd_acquire_read(&s, 181);
TA_assert(s == 223);
# else
# if !defined(AO_HAVE_short_xor) || !defined(AO_HAVE_short_xor_acquire) \
|| !defined(AO_HAVE_short_xor_acquire_read) \
|| !defined(AO_HAVE_short_xor_full) \
|| !defined(AO_HAVE_short_xor_read) \
|| !defined(AO_HAVE_short_xor_release) \
|| !defined(AO_HAVE_short_xor_release_write) \
|| !defined(AO_HAVE_short_xor_write)
MISSING(AO_short_xor);
# endif
s ^= 181;
# endif
# if defined(AO_HAVE_short_and_dd_acquire_read)
AO_short_and_dd_acquire_read(&s, 57);
TA_assert(s == 25);
# else
# if !defined(AO_HAVE_short_and) || !defined(AO_HAVE_short_and_acquire) \
|| !defined(AO_HAVE_short_and_acquire_read) \
|| !defined(AO_HAVE_short_and_full) \
|| !defined(AO_HAVE_short_and_read) \
|| !defined(AO_HAVE_short_and_release) \
|| !defined(AO_HAVE_short_and_release_write) \
|| !defined(AO_HAVE_short_and_write)
MISSING(AO_short_and);
# endif
s &= 57;
# endif
# if defined(AO_HAVE_short_fetch_compare_and_swap_dd_acquire_read)
TA_assert(AO_short_fetch_compare_and_swap_dd_acquire_read(&s, 14, 117) == 25);
TA_assert(s == 25);
TA_assert(AO_short_fetch_compare_and_swap_dd_acquire_read(&s, 25, 117) == 25);
# else
MISSING(AO_short_fetch_compare_and_swap);
if (s == 25) s = 117;
# endif
TA_assert(s == 117);
# if defined(AO_HAVE_char_compare_and_swap_dd_acquire_read)
TA_assert(!AO_char_compare_and_swap_dd_acquire_read(&b, 14, 42));
TA_assert(b == 13);
TA_assert(AO_char_compare_and_swap_dd_acquire_read(&b, 13, 42));
TA_assert(b == 42);
# else
MISSING(AO_char_compare_and_swap);
if (*(volatile char *)&b == 13) b = 42;
# endif
# if defined(AO_HAVE_char_or_dd_acquire_read)
AO_char_or_dd_acquire_read(&b, 66);
TA_assert(b == 106);
# else
# if !defined(AO_HAVE_char_or) || !defined(AO_HAVE_char_or_acquire) \
|| !defined(AO_HAVE_char_or_acquire_read) \
|| !defined(AO_HAVE_char_or_full) || !defined(AO_HAVE_char_or_read) \
|| !defined(AO_HAVE_char_or_release) \
|| !defined(AO_HAVE_char_or_release_write) \
|| !defined(AO_HAVE_char_or_write)
MISSING(AO_char_or);
# endif
b |= 66;
# endif
# if defined(AO_HAVE_char_xor_dd_acquire_read)
AO_char_xor_dd_acquire_read(&b, 181);
TA_assert(b == 223);
# else
# if !defined(AO_HAVE_char_xor) || !defined(AO_HAVE_char_xor_acquire) \
|| !defined(AO_HAVE_char_xor_acquire_read) \
|| !defined(AO_HAVE_char_xor_full) || !defined(AO_HAVE_char_xor_read) \
|| !defined(AO_HAVE_char_xor_release) \
|| !defined(AO_HAVE_char_xor_release_write) \
|| !defined(AO_HAVE_char_xor_write)
MISSING(AO_char_xor);
# endif
b ^= 181;
# endif
# if defined(AO_HAVE_char_and_dd_acquire_read)
AO_char_and_dd_acquire_read(&b, 57);
TA_assert(b == 25);
# else
# if !defined(AO_HAVE_char_and) || !defined(AO_HAVE_char_and_acquire) \
|| !defined(AO_HAVE_char_and_acquire_read) \
|| !defined(AO_HAVE_char_and_full) || !defined(AO_HAVE_char_and_read) \
|| !defined(AO_HAVE_char_and_release) \
|| !defined(AO_HAVE_char_and_release_write) \
|| !defined(AO_HAVE_char_and_write)
MISSING(AO_char_and);
# endif
b &= 57;
# endif
# if defined(AO_HAVE_char_fetch_compare_and_swap_dd_acquire_read)
TA_assert(AO_char_fetch_compare_and_swap_dd_acquire_read(&b, 14, 117) == 25);
TA_assert(b == 25);
TA_assert(AO_char_fetch_compare_and_swap_dd_acquire_read(&b, 25, 117) == 25);
# else
MISSING(AO_char_fetch_compare_and_swap);
if (b == 25) b = 117;
# endif
TA_assert(b == 117);
# if defined(AO_HAVE_int_compare_and_swap_dd_acquire_read)
TA_assert(!AO_int_compare_and_swap_dd_acquire_read(&zz, 14, 42));
TA_assert(zz == 13);
TA_assert(AO_int_compare_and_swap_dd_acquire_read(&zz, 13, 42));
TA_assert(zz == 42);
# else
MISSING(AO_int_compare_and_swap);
if (*(volatile int *)&zz == 13) zz = 42;
# endif
# if defined(AO_HAVE_int_or_dd_acquire_read)
AO_int_or_dd_acquire_read(&zz, 66);
TA_assert(zz == 106);
# else
# if !defined(AO_HAVE_int_or) || !defined(AO_HAVE_int_or_acquire) \
|| !defined(AO_HAVE_int_or_acquire_read) \
|| !defined(AO_HAVE_int_or_full) || !defined(AO_HAVE_int_or_read) \
|| !defined(AO_HAVE_int_or_release) \
|| !defined(AO_HAVE_int_or_release_write) \
|| !defined(AO_HAVE_int_or_write)
MISSING(AO_int_or);
# endif
zz |= 66;
# endif
# if defined(AO_HAVE_int_xor_dd_acquire_read)
AO_int_xor_dd_acquire_read(&zz, 181);
TA_assert(zz == 223);
# else
# if !defined(AO_HAVE_int_xor) || !defined(AO_HAVE_int_xor_acquire) \
|| !defined(AO_HAVE_int_xor_acquire_read) \
|| !defined(AO_HAVE_int_xor_full) || !defined(AO_HAVE_int_xor_read) \
|| !defined(AO_HAVE_int_xor_release) \
|| !defined(AO_HAVE_int_xor_release_write) \
|| !defined(AO_HAVE_int_xor_write)
MISSING(AO_int_xor);
# endif
zz ^= 181;
# endif
# if defined(AO_HAVE_int_and_dd_acquire_read)
AO_int_and_dd_acquire_read(&zz, 57);
TA_assert(zz == 25);
# else
# if !defined(AO_HAVE_int_and) || !defined(AO_HAVE_int_and_acquire) \
|| !defined(AO_HAVE_int_and_acquire_read) \
|| !defined(AO_HAVE_int_and_full) || !defined(AO_HAVE_int_and_read) \
|| !defined(AO_HAVE_int_and_release) \
|| !defined(AO_HAVE_int_and_release_write) \
|| !defined(AO_HAVE_int_and_write)
MISSING(AO_int_and);
# endif
zz &= 57;
# endif
# if defined(AO_HAVE_int_fetch_compare_and_swap_dd_acquire_read)
TA_assert(AO_int_fetch_compare_and_swap_dd_acquire_read(&zz, 14, 117) == 25);
TA_assert(zz == 25);
TA_assert(AO_int_fetch_compare_and_swap_dd_acquire_read(&zz, 25, 117) == 25);
# else
MISSING(AO_int_fetch_compare_and_swap);
if (zz == 25) zz = 117;
# endif
TA_assert(zz == 117);
# if defined(AO_HAVE_double_load_dd_acquire_read) || defined(AO_HAVE_double_store_dd_acquire_read)
/* Initialize old_w even for store to workaround MSan warning. */
old_w.AO_val1 = 3316;
old_w.AO_val2 = 2921;
# endif
# if defined(AO_HAVE_double_load_dd_acquire_read)
new_w = AO_double_load_dd_acquire_read(&old_w);
TA_assert(new_w.AO_val1 == 3316 && new_w.AO_val2 == 2921);
# elif !defined(AO_HAVE_double_load) \
|| !defined(AO_HAVE_double_load_acquire) \
|| !defined(AO_HAVE_double_load_acquire_read) \
|| !defined(AO_HAVE_double_load_dd_acquire_read) \
|| !defined(AO_HAVE_double_load_full) \
|| !defined(AO_HAVE_double_load_read)
MISSING(AO_double_load);
# endif
# if defined(AO_HAVE_double_store_dd_acquire_read)
new_w.AO_val1 = 1375;
new_w.AO_val2 = 8243;
AO_double_store_dd_acquire_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
AO_double_store_dd_acquire_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 1375 && old_w.AO_val2 == 8243);
new_w.AO_val1 ^= old_w.AO_val1;
new_w.AO_val2 ^= old_w.AO_val2;
AO_double_store_dd_acquire_read(&old_w, new_w);
TA_assert(old_w.AO_val1 == 0 && old_w.AO_val2 == 0);
# elif !defined(AO_HAVE_double_store) \
|| !defined(AO_HAVE_double_store_full) \
|| !defined(AO_HAVE_double_store_release) \
|| !defined(AO_HAVE_double_store_release_write) \
|| !defined(AO_HAVE_double_store_write)
MISSING(AO_double_store);
# endif
# if defined(AO_HAVE_compare_double_and_swap_double_dd_acquire_read)
TA_assert(!AO_compare_double_and_swap_double_dd_acquire_read(&w, 17, 42, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_double_and_swap_double_dd_acquire_read(&w, 0, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_dd_acquire_read(&w, 12, 14, 64, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_dd_acquire_read(&w, 11, 13, 85, 82));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_double_and_swap_double_dd_acquire_read(&w, 13, 12, 17, 42));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_double_and_swap_double_dd_acquire_read(&w, 12, 13, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_double_and_swap_double_dd_acquire_read(&w, 17, 42, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_double_and_swap_double);
# endif
# if defined(AO_HAVE_compare_and_swap_double_dd_acquire_read)
TA_assert(!AO_compare_and_swap_double_dd_acquire_read(&w, 17, 12, 13));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_compare_and_swap_double_dd_acquire_read(&w, 0, 12, 13));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_dd_acquire_read(&w, 13, 12, 33));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(!AO_compare_and_swap_double_dd_acquire_read(&w, 1213, 48, 86));
TA_assert(w.AO_val1 == 12 && w.AO_val2 == 13);
TA_assert(AO_compare_and_swap_double_dd_acquire_read(&w, 12, 17, 42));
TA_assert(w.AO_val1 == 17 && w.AO_val2 == 42);
TA_assert(AO_compare_and_swap_double_dd_acquire_read(&w, 17, 0, 0));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_compare_and_swap_double);
# endif
# if defined(AO_HAVE_double_compare_and_swap_dd_acquire_read)
old_w.AO_val1 = 4116;
old_w.AO_val2 = 2121;
new_w.AO_val1 = 8537;
new_w.AO_val2 = 6410;
TA_assert(!AO_double_compare_and_swap_dd_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
TA_assert(AO_double_compare_and_swap_dd_acquire_read(&w, w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = 29;
new_w.AO_val1 = 820;
new_w.AO_val2 = 5917;
TA_assert(!AO_double_compare_and_swap_dd_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = 11;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 3552;
new_w.AO_val2 = 1746;
TA_assert(!AO_double_compare_and_swap_dd_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 8537;
new_w.AO_val1 = 4116;
new_w.AO_val2 = 2121;
TA_assert(!AO_double_compare_and_swap_dd_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 8537 && w.AO_val2 == 6410);
old_w.AO_val1 = old_w.AO_val2;
old_w.AO_val2 = 6410;
new_w.AO_val1 = 1;
TA_assert(AO_double_compare_and_swap_dd_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 1 && w.AO_val2 == 2121);
old_w.AO_val1 = new_w.AO_val1;
old_w.AO_val2 = w.AO_val2;
new_w.AO_val1--;
new_w.AO_val2 = 0;
TA_assert(AO_double_compare_and_swap_dd_acquire_read(&w, old_w, new_w));
TA_assert(w.AO_val1 == 0 && w.AO_val2 == 0);
# else
MISSING(AO_double_compare_and_swap);
# endif
}
|
import fs from 'fs'
import { isEmpty, reject } from 'lodash';
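// Parses newline-separated commands ("forward N", "down N", "up N") from
// ./in/input2.txt. "down"/"up" adjust the aim, and "forward" advances the
// horizontal position while increasing depth by aim * amount; the product
// horizontal * depth is printed at the end.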
try {
const input = fs.readFileSync('./in/input2.txt', 'utf8');
const commands = reject(input.split('\n'), isEmpty);
var horizontal = 0;
var depth = 0;
var aim = 0;
commands.forEach((command: string) => {
const partialCommand = command.split(' ');
const direction = partialCommand[0];
const amount = parseInt(partialCommand[1], 10);
if (direction === 'forward') {
horizontal += amount;
depth += aim * amount;
} else if (direction === 'down') {
aim += amount;
} else if (direction === 'up') {
aim -= amount;
} else {
console.error(`bad command ${direction}`)
}
})
const multiplied = horizontal * depth;
console.log(multiplied);
} catch (err) {
console.error(err);
}
|
Development and Validation of a Clinical and Computerised Decision Support System for Management of Hypertension (DSS-HTN) at a Primary Health Care (PHC) Setting
Background: Hypertension remains the top global cause of disease burden. Decision support systems (DSS) could provide an adequate and cost-effective means to improve the management of hypertension at the primary health care (PHC) level in a developing country; nevertheless, evidence in this regard is rather limited. Methods: Development of the DSS software was based on an algorithmic approach for (a) evaluation of a hypertensive patient, (b) risk stratification, (c) drug management and (d) lifestyle interventions, based on the Indian guidelines for hypertension II (2007). The beta testing of the DSS software involved feedback from the end users of the system on the contents of the user interface. Software validation and piloting were done in the field, wherein the virtual recommendations and advice given by the DSS were compared with those of two independent experts (government doctors from non-participating PHC centers). Results: The overall percent agreement between the DSS and the independent experts among 60 hypertensives on drug management was 85% (95% CI: 83.61-85.25). The kappa statistic for overall agreement on drug management was 0.659 (95% CI: 0.457-0.862), indicating a substantial degree of agreement beyond chance at an alpha fixed at 0.05 with 80% power. The receiver operator curve (ROC) showed a good accuracy for the DSS, wherein the area under the curve (AUC) was 0.848 (95% CI: 0.741-0.948). Sensitivity and specificity of the DSS were 83.33% and 85.71% respectively when compared with the independent experts. Conclusion: A point of care, pilot tested and validated DSS for the management of hypertension has been developed in a resource constrained low and middle income setting and could contribute to improved management of hypertension at the primary health care level.
Introduction
Hypertension exerts a substantial public health burden on cardiovascular health status and health care systems in India. The pooled prevalence of hypertension is estimated to be 25% (95% CI: 11.66-44.8 in males and 13.68-44.5 in females) and 10% (95% CI: 3.7-24 in males and 3.69-17 in females) in urban and rural areas of India respectively. By 2025, the rate of hypertension (in %) has been projected to rise to around 22.9 and 23.6 from the existing rates of 20.6 and 20.9 (in 2000) for Indian males and females respectively. However, only one fourth of Indian patients on antihypertensives achieve blood pressure control. Recent studies have shown that physician adherence to evidence based and standardized medical care results in achieving adequate blood pressure control among hypertensive patients. Clinical guidelines and algorithms at the point of health care delivery, in the form of decision making aids for health care providers, such as clinical and/or computerised decision support systems (DSS), are a possible way to improve the standard of care delivery, more so in a resource constrained primary health care setting.
Clinical and computerised decision support systems have been developed, validated and field tested in the western world for the management of hypertension during the last decade. Studies of DSS for the management of hypertension in the developed world have shown mixed results for patient outcomes, but have shown that they may improve physician performance. An improvement in the quality of antihypertensive treatment, concurrently leading to a considerable reduction in drug costs, has been shown for DSS. There are no studies in a low and middle income country (LMIC) in which a clinical decision support system, either computerised or non-computerised, has been shown to aid clinical decision making and the management of hypertension.
Hence, we performed a study to find out the ease of building a clinical decision support system, to establish its validity, and to assess the utility of a DSS in managing hypertension at the primary health care level in an LMIC (India). The primary purpose of developing the DSS software was to help the end users (health care providers - physicians serving at the primary health care level) to (a) undertake a thorough evaluation of risk factors for hypertension and future cardiovascular diseases, (b) classify the risk levels for progression to future cardiovascular diseases, (c) follow a software prompted, algorithmic, guideline based drug management (developed based on the Indian Hypertension guidelines II, 2007), and (d) give alerts on counseling on lifestyle changes and adherence to medication. The aim was to develop, pilot test and validate a decision support system for hypertensive patients. Improvement in patient outcomes (reduction in blood pressure and improvement in BP control rates) and improvement in physician skills and practitioners' performance (uptake of evidence based guidelines for hypertension by the primary care physicians) were the two main issues that we attempted to address by developing and deploying a DSS in PHCs.
Methods
The phases of DSS development and validation are summarized in Figure 1.
Phase I -Development of DSS software
The knowledge base for the evaluation, staging and risk stratification of a hypertensive patient, the algorithmic drug based management and the lifestyle interventions in the DSS was developed based on the Indian Hypertension II (2007) guidelines, which have been developed by the Association of Physicians of India (API) and endorsed by the Cardiological Society of India, the Hypertension Society of India and the API. Stakeholder and situational analysis formed a major part of the development exercise. Focus group discussions and semi structured questionnaires were conducted with the 34 consenting Primary Health Care (PHC) doctors and nurse assistants in the participating PHCs. Algorithms were developed by the software developers (Data Template, Bangalore, India) to help build the inferential engine base of the DSS. Medical language and software coding and machine language development were done by the medical software developers (Data Template) using "open source" platforms (JAVA and MySQL). Figure 2 details the architecture of the built DSS. Further details are mentioned in File S1. Caution was taken to ensure that the prepared 'scenarios', 'risk stratifications' and 'drug algorithms' mirrored the Indian Hypertension II guidelines. The prepared "rules and logic" sheets were reviewed independently by two physician experts in the management of hypertension (government doctors from the non-participating PHC centers).
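As a rough illustration of how one entry from such "rules and logic" sheets can be expressed as an executable if-then rule, the short Python sketch below stages a blood pressure reading; the thresholds follow the common JNC-7-style staging and are illustrative assumptions, not a transcription of the Indian Hypertension II guidelines, and the function name is hypothetical.

def stage_blood_pressure(systolic, diastolic):
    # Illustrative if-then staging rule; thresholds are assumptions.
    if systolic >= 160 or diastolic >= 100:
        return 'Stage 2 hypertension'
    if systolic >= 140 or diastolic >= 90:
        return 'Stage 1 hypertension'
    if systolic >= 120 or diastolic >= 80:
        return 'Prehypertension'
    return 'Normal'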
Phase II -Beta testing of DSS software
The contents of the user interface (UI) were shown to randomly selected physicians (from a line listed sample frame) who were working in the PHCs of Mahabubnagar district. The acceptability and validity of the questionnaire, the reasoning for the questions, suggestions for improving data capture from the drop down menus, inputs on how to structure the summary page, views on which comprehensive elements needed to be stressed in the tailor made recommendations, and the locally applicable and relevant lifestyle advice that pops up in the DSS software upon entry of the data were field tested in 10% of physicians (n = 10) from the primary health care centers and from the community health care centers (n = 8). The key feedback gained from the beta testing phase is summarised below. (A) Clinical support required for management of hypertension. "Clear definitions of risk and staging of blood pressure, guidelines on effective lifestyle counseling in the local language, advice on the best drugs from the ones available in the PHCs, information on the side effects, contraindications and benefits of each drug class among the antihypertensive medications, information buttons to cross check the recommendations and clear cut guidelines on when to refer a patient to the next level of care" were requested during the beta testing phase. (B) Feasibility. Almost all the physicians and nurses felt that this was feasible provided it did not interrupt their daily workflow patterns. Key elements that were important to them were the accuracy of the recommendations, the contents of the output, the time taken for the input and the speed of the output from the system. Given the heavy workload of the outpatient departments in the PHCs, all agreed that a 10 minute window was the ceiling limit for the time taken between data entry and output of patient specific recommendations by the DSS.
(C) Operational issues. "Easily navigable, highly visible and understandable guidelines" were the main requests from all the participants. "Maintenance of the knowledge base and subsequent incorporation of new guidelines as and when they arose" were also requested.
In addition to the lifestyle counseling on the benefits of losing weight and walking for at least 30 minutes a day, quitting smoking and avoiding alcohol, and adherence to medications, end users of the system specifically requested the incorporation of locally applicable and relevant lifestyle advice in the DSS. This included advice on the harmful effects of locally prevalent forms of oral tobacco consumption: (a) khaini (tobacco with slaked lime paste and areca nuts), (b) zarda (tobacco, lime, spices, vegetable dyes and areca nut), and (c) pan masala (betel leaf quid). Along with lifestyle advice to reduce the addition of excess salt to prepared food, end users of the system also requested the incorporation of counseling on reducing papads (locally prepared, highly salted snacks of seasoned dough made from lentils, chickpeas, rice or potato, fried or cooked with dry heat) and pickles. Most users felt that figurative explanations of portion sizes for fruit and vegetable consumption would enable them to counsel better. Hence we defined portions as follows: vegetables (fresh, raw, tinned or frozen), 1 portion = 3 tablespoons; salad, 1 portion = 1 bowl; fresh fruit, 1 portion = 1 medium apple or one banana; fruit juice (excluding cordials, fruit drinks and squashes), 1 portion = 1 small glass or more.
The time taken for completing the electronic data capture by the participating physicians was noted (mean time: 10 minutes, SD: 3 min), so as to achieve a consensus among them that it did not affect their daily workflow patterns. This critical feedback from the field on the developed user interface was relayed to the developers. To understand the field level difficulties and the paucity of technical resources in a PHC, and to gauge the OPD burden per day per center, visits were undertaken for 'passive observation' along with the technical personnel.
Phase III -Validation of DSS in field settings
We retrieved the system's risk staging (screenshot of the DSS shown in Figure 3) and the tailor made recommendations and advice (screenshot of the results page in Figure 4) given to the patient (based on the clinical signs, symptoms and detailed history notes that the doctor entered in the netbook) from the field sites during the testing phase, and compared them with the recommendations and advice given by two independent experts who were distinct from the two government doctors involved in phase one (experienced government physicians from the non-participating PHC centers). The information and the reasoning logic displayed on the 'info' buttons of the DSS output page (screenshot of the info page in Figure 5) were corroborated with the 2007 Indian hypertension guidelines.
Process and quality assurance
The quality process and the development of the DSS software are summarised in Table 1 and explained in detail in File S1. Testing was included in every iteration to ensure the quality of deliverables during the DSS development phase. The principal measure of progress was the delivery of 'working' software. Late changes in requirements were also welcomed, as there was close and daily cooperation between the business development people and the developers of the DSS application. Face-to-face conversation (co-location), continuous attention to technical excellence and good design, and simple self-organising teams with adaptability to changing circumstances were the key quality assurance norms followed during the DSS development, beta testing, field testing, pilot testing and finally during the implementation phase.
Statistical Methods
Sample size: the number of subjects required in a 2-rater study to detect a statistically significant (p<0.05) difference in kappa of 0.40 on a dichotomous variable, with 80% power, at various proportions of positive diagnoses, is 50 for a two tailed test. Assuming a response rate of 80%, the final sample size was adjusted to 60 Indian hypertensive patients. The detailed calculations for the overall agreement, kappa and 95% CI values are mentioned in File S1.
Results
The overall percent agreement (calculated as the sum of all agreements divided by the total number of observations) between the DSS software and the independent experts (not from the study team) on the stage of the blood pressure (BP) measurement, risk, drug management, side effects and adverse interactions, lifestyle advice and follow up advice was 90% (95% CI: 88.52-90.16); 91.67% (95% CI: 90.16-). Based on the risk category, staging of BP, and presence or absence of associated clinical conditions and target organ damage, the DSS suggested drug management for 39 out of 60 HTN patients, whereas the independent experts opined that 42 out of 60 would need drug management (Table 3 and Table S1 in File S1). The positive and negative percent agreements were 85.71% and 83.33% respectively. The overall percent agreement (Po) between the DSS and the experts was 85% (95% CI: 83.61%-85.25%). The receiver operator curve (ROC) showed a good accuracy for the DSS, wherein the area under the curve (AUC) was 0.848 (95% CI: 0.741-0.948). Sensitivity and specificity of the DSS were 83.33% and 85.71% respectively when compared with the independent experts (Figure 6). The kappa statistic (observed agreement beyond chance divided by the maximum agreement beyond chance) for overall agreement on drug management was 0.659 (95% CI: 0.457-0.862), indicating a substantial degree of agreement beyond chance at an alpha fixed at 0.05 with 80% power. The prevalence index was 0.31 and the bias index (the extent to which the raters disagree on the proportion of positive or negative cases) was 0.05.
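As a rough illustration of how the agreement statistics reported above are obtained, the Python sketch below computes the overall percent agreement, Cohen's kappa, sensitivity and specificity from a two-by-two table of DSS versus expert decisions; the function and the counts in the example call are hypothetical and are not the study data.

def agreement_stats(a, b, c, d):
    # a = both recommend drugs, b = DSS only, c = expert only, d = neither.
    n = a + b + c + d
    po = (a + d) / n                                  # overall percent agreement
    p_dss = (a + b) / n                               # proportion positive by the DSS
    p_exp = (a + c) / n                               # proportion positive by the expert
    pe = p_dss * p_exp + (1 - p_dss) * (1 - p_exp)    # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    sensitivity = a / (a + c)                         # DSS positive given expert positive
    specificity = d / (b + d)                         # DSS negative given expert negative
    return po, kappa, sensitivity, specificity

print(agreement_stats(45, 10, 5, 40))  # hypothetical counts, not study data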
Discussion
We built a clinical decision support system, based on the 2007 Indian Hypertension II guidelines, for staging and risk stratification of hypertension and for suggesting evidence based recommendations on drug management and lifestyle advice, in order to better manage hypertensive patients at the primary health care level. We report a good accuracy of the built DSS, with an AUC of 0.848 (95% CI: 0.74-0.94) and sensitivity and specificity values of 0.83 and 0.85 respectively. A moderate to substantial agreement of 0.66 (estimate for kappa) on drug management of hypertensive patients was noticed between the DSS and an independent expert after adjusting for the occurrence of chance agreement. The 95% CI for kappa fell in the fair to substantial agreement range (0.43 to 0.89). 31% of agreements on the positive classification differed from that on the negative classification, and the disagreement between the DSS (virtual) and the independent expert (real) on the proportion of positive or negative cases was 5%. The prevalence and bias indices, the 95% CI for kappa and the positive and negative agreements of 0.91 and 0.73 indicate the robustness of the yielded kappa.
Comparison with previous literature
One of the first specific DSS built for managing hypertension, the ATHENA-Hypertension system (Assessment and Treatment of Hypertension: Evidence-based Automation, built by Stanford Medical Informatics), a knowledge-based DSS similar to ours, showed that implementation and deployment of clinical decision support was feasible in large clinical settings. Differences between the ATHENA system and our DSS in drug management, prescription of antihypertensives, availability of physiological testing and risk classification are explained in detail in File S1. The clinical data visualizations and evidence to support specific recommendations in ATHENA were more comprehensive, and ATHENA went further than the physician in adding, substituting or increasing drug therapy where the criteria were clear in the pre-defined rules. Our study showed a moderate to substantial agreement on drug management of hypertensive patients between the DSS and the independent physician evaluators. Care was taken to ensure that testing data came from real patients and from representative physician evaluators who were familiar with the clinical settings, similar to the offline ATHENA system testing study (a 'physician-evaluator who was a representative of the end-user population' validated the system).
The concordance rates for definite indication and absolute contraindication for drug management in hypertension were 85% and 100%, respectively, when the knowledge base for a hypertension management DSS (LIGHT) was verified and validated. We report similar findings from our study, i.e., positive and negative percent agreements of 85.71% and 83.33% respectively, and an overall percent agreement (Po) between the DSS and the experts of 85% (95% CI: 83.61%-85.25%). In an on-demand DSS study for primary care management of hypertension (clinical settings similar to the primary health care setting in our study), physicians were more willing to use the DSS in complex clinical situations when the reasoning logic was clearly demarcated. Our DSS has display buttons for information on the logic and engine rules that help in arriving at decisions based on the patient profile and clinical history.
A recent paper, 'Analysis on the accuracy of a decision support system for hypertension monitoring', on a developed DSS (the WeHealth system) proposed a theoretical method to evaluate the accuracy of the WeHealth hypertension monitoring system by linking the system accuracy with the distribution of the sensors' errors (systolic and diastolic BP) and the errors of context (entry risk factors, target organ damage and complications). The difference in accuracy was less than 1% when traditional physician review was compared with the WeHealth DSS. The difference in accuracy between our DSS and the independent experts was between 2% and 4% (Table S2 in File S1). Moreover, the AUC value of 0.848 with a tight 95% CI (0.74-0.94) suggests a good accuracy of our DSS.
Strengths
Our DSS has user friendly and properly structured (recommendation and reasoning info buttons) pull-down lists; consistent use of information and of symbols and color for improving visibility and speed of navigation; a clinical user interface that mimics its paper predecessors; and standardized, evidence based risk stratification, staging of BP, and guidelines and recommendations for drug, lifestyle and follow up advice for Indian patients suffering from hypertension. The decision algorithm is also visible as a pop up menu for the clinicians, so that they can see the logic behind each decision. Moreover, we have taken into confidence and involved the end users of the DSS (clinicians and treating physicians at the primary health care level) at every stage of the development, pilot testing and validation, so that a consistent understanding of the purpose of the DSS system and the functionality of the user interface is in place during the implementation phase. Incomplete or inaccurate data entry has been prevented, as the ceiling (maximum and minimum permissible) limits for each variable were defined during the coding process. A summary sheet highlighting the patient specific key risk factors, the stage of BP and any co-morbid conditions is in place, so that the cognitive burden of absorbing the information does not prevent the end users from thinking about what the information means. We have validated the DSS by attempting to simulate a real life scenario, bringing in independent evaluators who were not otherwise involved in the DSS project but were part of the government run primary health care system. The SAGE (Standards-Based Sharable Active Guideline Environment) consortium project recommends that a DSS should have "a complex clinical guideline as a series of recommendation sets". Our DSS takes into account the context, decision, action and route to create a standards-based decision support system. We followed the SAGE guidelines model, which suggests that a DSS (a) must be delivered through features available within the existing clinical information systems, (b) must facilitate clinical workflow non-intrusively and (c) must be efficient and allow easy inspection of the underlying clinical logic. The fundamental principle involved in evaluating methods in medical informatics is to do a comprehensive evaluation of the consistency, depth and coverage of the knowledge encoded in the system. Each of these areas was tested in our DSS validation. Finally, implementation of clinical guidelines through the DSS acts as a teaching tool for the treating physicians and also ensures adherence to current guidelines, resulting in quality health care services.
Miller et al. underlined the importance of thinking through the necessary key features during the process of developing a medical diagnostic and treatment algorithm. The validation of the system performance was based on what clinical practitioners would use or require during actual practice. The boundaries and limitations of the knowledge base and the available system functions were specified upfront. Particular attention was paid to addressing the system-related (unambiguous and easily navigable end user interface), user-related (lack of training with the system, failure to understand key system functions, lack of medical knowledge, etc.) and external (lack of available gold standards, quality of independent reviewers) influences on the validation process.
Limitations
Developing a standard for comparing the DSS recommendations turned out to be a challenge. Although physician review has traditionally served as the gold standard, errors owing to the large and voluminous data analysis (60 patients' history and physical findings) may have limited the validity. Similarly, the authors of the ATHENA-DSS study also acknowledge that an evolved consensus between the physician review and the recommendations put forth by the system could turn out to be a better gold standard. In the CHAID (Chi-squared Automatic Interaction Detection) DSS for hypertension management, built using a data mining approach, clustering and association rules were used for validating decisions made in hypertension management. More specifically, a data warehouse architecture was used to collect and integrate relevant data from hospital clinical information systems. Our study, being a pilot study to test the feasibility of converting clinical practice guidelines into implementable, hands on decision rules, does not integrate all the data from patient electronic records and hospital clinical information systems, as the infrastructure for a health management information system is still at a nascent stage in India.
The knowledge based engine in our system, built mostly on "if-then" scenarios, limits itself to the management of hypertension at primary care settings. Similarly, potential interactions with other drugs that would have had an effect on blood pressure have not been built into the system. Referral scenarios are suggested when the reasoning engine is confronted with complex data that can be managed only at secondary and tertiary care settings. However, since the issue of random agreement purely by chance has been adequately addressed, we believe that our finding of moderate to substantial agreement between the virtual (DSS) and the real (physician review) recommendations is valid (since we report the AUC and its 95% CI in the ROC curve, the 95% CI for kappa, and the prevalence and bias indices).
DSS used in the developed world for the management of hypertension have shown success when the DSS blends seamlessly into the daily work patterns of the end users, without burdening them on the cognitive or time scales, and improves their work efficiency. The time spent on manual data entry, the loss of opportunity for decision, and the onus or responsibility in the event of an error are major areas that need to be addressed in future DSS studies. We have followed a systematic approach to this DSS validation study, wherein feasibility, reliability in performance, testing of the DSS components and evaluation of the DSS in the context in which it was developed were carried out before a randomised controlled trial was planned. The results of the just completed randomised trial will help us to undertake a formal evaluation of the DSS on patient specific outcomes.
Conclusion
A point of care, pilot tested and validated virtual DSS that matches the real life scenario for the management of hypertension has been developed for improved management of hypertension at the primary health care level in a low and middle income setting. Public health policy decision makers could use the innovative DSS platform for (a) delivering evidence based non communicable disease (NCD) health care delivery models (promotion, prevention and treatment), (b) improving health system efficiency, and (c) reducing health disparities in primary care settings in low and middle income countries (LMICs). |
/*B 16/17, Homework 2, Task 4
NOTE: the public autotests are also part of the assignment
Autotests by <NAME>. Send all questions, suggestions
and bug reports to: <EMAIL>
*/
#include<iostream>
#include<string>
#include<vector>
#include<stdexcept>
#include<new>
using std::cin;
using std::cout;
using std::endl;
using std::string;
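// PotencijalniKrivci: copies the names from the vector into a newly allocated
// array of dynamically allocated C strings, stores it via "ref" and returns
// the number of entries. Rethrows std::bad_alloc after cleaning up.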
int PotencijalniKrivci(char **&ref, std::vector<std::string>v){
char **niz(nullptr);
try{
niz=new char*[v.size()];
}
catch(std::bad_alloc){
delete[] niz;
throw;
}
try{
for(int i=0; i<v.size(); i++)
niz[i]=nullptr;
for(int i=0; i<v.size(); i++)
niz[i]=new char[v[i].size()+1];
for(int i=0; i<v.size(); i++){
for(int j=0; j<v[i].size(); j++){
niz[i][j]=v[i][j];
if(j==v[i].size()-1)
niz[i][j+1]='\0';
}
}
ref=niz;
return v.size();
}
catch(std::bad_alloc){
for(int i=0; i<v.size(); i++)
delete[] niz[i];
delete[] niz;
throw;
}
}
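// OdbaciOptuzbu: frees every entry equal to "s" (leaving a null pointer in its
// place) and throws std::domain_error if no entry matched. If more than ten
// entries were removed, the array is compacted into a smaller allocation.
// Returns the (possibly reduced) number of entries.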
int OdbaciOptuzbu(char **&ref, int n, std::string s){
int br=0;
for(int i=0; i<n; i++){
char *r=ref[i];
int j=0;
while(*r!='\0' && j<s.size()){
if(*r==s[j]){
j++;
r++;
}
else break;
}
if(j==s.size() && *r=='\0'){
delete[] ref[i];
ref[i]=nullptr;
br++;
}
}
if(br==0) throw std::domain_error("Osoba sa imenom "+s+" nije bila optuzena");
if(br>10){
char**niz=nullptr;
try{
niz=new char*[n-br];
}
        catch(std::bad_alloc){
            delete[] niz;
            throw;
        }
try{
int j(0);
for(int i=0; i<n-br; i++)
niz[i]=nullptr;
for(int i=0; i<n; i++){
if(ref[i]!=nullptr){
char *r=ref[i];
char*e=ref[i];
int b=0;
while(*r!='\0'){ b++; r++;}
niz[j]=new char[b+1];
char *p=niz[j];
while(*p++=*e++);
j++;
}
}
for(int i=0; i<n; i++)
delete[] ref[i];
ref=niz;
return n-br;
}
        catch(std::bad_alloc){
            for(int i=0; i<n-br; i++)
                delete[] niz[i];
            delete[] niz;
            throw;
        }
}
return n;
}
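// DodajOptuzbu: copies "s" into a new C string and stores it in the first free
// (null) slot; if there is none, the array is reallocated with room for one
// more entry. Returns the new number of entries.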
int DodajOptuzbu(char **&ref, int n, std::string s){
char *optuzeni=nullptr;
try{
optuzeni=new char[s.size()+1];
for(int i=0; i<s.size(); i++){
optuzeni[i]=s[i];
if(i==s.size()-1) optuzeni[i+1]='\0';
}
}
catch(std::bad_alloc){
delete[] optuzeni;
throw;
}
int br(0);
for(int i=0; i<n; i++){
if(ref[i]==nullptr){
ref[i]=&optuzeni[0];
br=1;
break;
}
}
if(br==0){
char **niz=nullptr;
try{
niz=new char*[n+1];
}
catch(std::bad_alloc){
delete[] niz;
throw;
}
try{
            for(int i=0; i<n+1; i++)
                niz[i]=nullptr;
for(int i=0; i<n; i++){
char*r=ref[i];
int b=0;
while(*r!='\0'){ b++; r++;}
niz[i]=new char[b+1];
int j(0);
for( ; ;){
niz[i][j]=ref[i][j];
if(ref[i][j]=='\0') break;
j++;
}
}
niz[n]=&optuzeni[0];
for(int i=0; i<n; i++)
delete[] ref[i];
delete[] ref;
ref=niz;
return n+1;
}
        catch(std::bad_alloc){
            for(int i=0; i<n+1; i++)
                delete[] niz[i];
            delete[] niz;
            delete[] optuzeni;
            throw;
        }
}
return n;
}
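// IzlistajOptuzbu: prints all non-null entries, one per line.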
void IzlistajOptuzbu(char **ref, int n){
cout<<endl;
for(int i=0; i<n; i++){
char *r=ref[i];
if(ref[i]==nullptr) continue;
while(*r!='\0'){
cout<<*r;
r++;
}
if(i!=n-1) cout<<endl;
}
}
int main ()
{
cout<<"Koliko potencijalnih krivaca zelite unijeti? ";
int n;
char **ref;
cin>>n;
cout<<endl<<"Unesite potencijalne krivce: ";
std::cin.ignore(10000,'\n');
try{
std::vector<std::string>v(n);
for(int i=0; i<n; i++)
cin>>v[i];
n=PotencijalniKrivci(ref, v);
int x;
std::string s;
for( ; ; ){
cout<<endl<<"Odaberite opciju: 1 za unos novog optuzenog, 2 za brisanje nekog optuzenog 3 za izlistavanje optuzenih, 0 za kraj: ";
cin>>x;
if(x==0) break;
if(x==1){
cout<<endl<<"Unesite ime novog optuzenog: ";
std::cin.ignore(10000,'\n');
std::getline(cin, s);
n=DodajOptuzbu(ref, n, s);
}
if(x==2){
cout<<endl<<"Unesite ime koje zelite izbaciti: ";
std::cin.ignore(10000,'\n');
std::getline(cin, s);
n=OdbaciOptuzbu(ref, n, s);
}
if(x==3) IzlistajOptuzbu(ref,n);
}
}
catch(std::domain_error &izuzetak){
cout<<izuzetak.what()<<endl;
for(int i=0; i<n; i++)
delete[] ref[i];
delete[] ref;
return 0;
}
catch(std::bad_alloc){
for(int i=0; i<n; i++)
delete[] ref[i];
delete[] ref;
return 0;
}
for(int i=0; i<n; i++)
delete[] ref[i];
delete[] ref;
return 0;
}
|
<filename>chromium/chrome/browser/chromeos/certificate_provider/certificate_info.h
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_CHROMEOS_CERTIFICATE_PROVIDER_CERTIFICATE_INFO_H_
#define CHROME_BROWSER_CHROMEOS_CERTIFICATE_PROVIDER_CERTIFICATE_INFO_H_
#include <stddef.h>
#include <vector>
#include "base/memory/ref_counted.h"
#include "net/cert/x509_certificate.h"
#include "net/ssl/ssl_private_key.h"
namespace chromeos {
namespace certificate_provider {
// Holds all information of a certificate that must be synchronously available
// to implement net::SSLPrivateKey.
struct CertificateInfo {
CertificateInfo();
~CertificateInfo();
net::SSLPrivateKey::Type type = net::SSLPrivateKey::Type::RSA;
size_t max_signature_length_in_bytes = 0;
scoped_refptr<net::X509Certificate> certificate;
std::vector<net::SSLPrivateKey::Hash> supported_hashes;
};
using CertificateInfoList = std::vector<CertificateInfo>;
} // namespace certificate_provider
} // namespace chromeos
#endif // CHROME_BROWSER_CHROMEOS_CERTIFICATE_PROVIDER_CERTIFICATE_INFO_H_
|
Effect of rate and timing of potassium chloride application on the yield and quality of potato (Solanum tuberosum L. ‘Russet Burbank’)
Mohr, R. M. and Tomasiewicz, D. J. 2012. Effect of rate and timing of potassium chloride application on the yield and quality of potato ( Solanum tuberosum L. ‘Russet Burbank’). Can. J. Plant Sci. 92: 783-794. Potassium is frequently applied to irrigated potato in Manitoba. Field experiments were conducted at two sites in each of 2006, 2007 and 2008 to assess effects of rate and timing of potassium chloride (KCl) application on the yield, quality, and nutrient status of irrigated potato (Solanum tuberosum ‘Russet Burbank’) in southern Manitoba. Preplant application of KCl increased total and marketable yield at one site, and tended (0.05 < P ≤ 0.10) to increase total and marketable yield at three additional sites. At three of the four K-responsive sites, soil test K levels were <200 mg NH4OAc-extractable K kg-1, the level below which K fertilizer is recommended based on existing guidelines. Effects of timing of KCl application on total and marketable yield were limited although, averaged across sites, KCl applied at hilling reduced the yield of small tubers (<85 g) and increased the proportion of larger tubers (170 to 340 g) compared with preplant application. Averaged across sites, KCl applied preplant or at hilling reduced specific gravity compared with the 0 KCl treatments. Improvements in fry colour with KCl application were evident at only one site. Petiole and tuber K and Cl- concentration, K and Cl- removal in harvested tubers, and post-harvest soil test K concentration increased with KCl application. However, petiole K concentration measured 82 to 85 d after planting predicted only 24% of the variability in relative marketable yield for sites containing between 164 and 632 mg NH4OAc-extractable K kg-1 to 15 cm. Results demonstrate the potential for yield increases and specific gravity declines with KCl application under Manitoba conditions, but suggest that further research will be required to better predict the potential for yield responses using soil and petiole testing. |
<filename>internal/data/no_op.go
package data
// NoOp implements the Null Object pattern, providing a non-nil result that does nothing.
type NoOp struct {
IOutput
}
// Deleted represents a valid deletion.
type Deleted struct {
NoOp
}
|
Platelet selenium as indicator of wheat selenium intake.
The effect of an increased intake of wheat selenium (Se) on platelet Se, serum Se, whole-blood Se, and glutathione peroxidase (GSH-Px) levels was investigated in 14 healthy Norwegian females (age 21-53 years). The intake of 60 micrograms Se per day as wheat Se, for six weeks, significantly increased the platelet Se (mean +/- SEM) from 9.1 +/- 1.1 µmol/L to 11.4 +/- 0.9 µmol/L, the serum Se from 1.43 +/- 0.18 µmol/L to 1.63 +/- 0.25 µmol/L, and the whole blood Se from 1.77 +/- 0.18 µmol/L to 2.01 +/- 0.18 µmol/L. The increase in percent of initial Se values was twice as high for platelets as for serum and whole blood. The GSH-Px levels were not altered during the experiment. Platelet Se was not significantly correlated to the Se intake initially. At the end of the experimental period, the Se in platelets reflected the total Se intake, but not with a simple linear correlation. No significant correlation between the total Se intake and the Se concentration in whole blood or serum was found. |
<reponame>mahak/akka
/*
* Copyright (C) 2020-2021 Lightbend Inc. <https://www.lightbend.com>
*/
package jdocs.akka.persistence.typed;
import akka.actor.typed.ActorRef;
import akka.actor.typed.Behavior;
import akka.persistence.testkit.query.javadsl.PersistenceTestKitReadJournal;
import akka.persistence.typed.ReplicaId;
import akka.persistence.typed.ReplicationId;
import akka.persistence.typed.crdt.ORSet;
import akka.persistence.typed.javadsl.CommandHandler;
import akka.persistence.typed.javadsl.EventHandler;
import akka.persistence.typed.javadsl.ReplicatedEventSourcedBehavior;
import akka.persistence.typed.javadsl.ReplicatedEventSourcing;
import akka.persistence.typed.javadsl.ReplicationContext;
import java.util.Collections;
import java.util.Set;
interface ReplicatedMovieExample {
// #movie-entity
public final class MovieWatchList
extends ReplicatedEventSourcedBehavior<MovieWatchList.Command, ORSet.DeltaOp, ORSet<String>> {
interface Command {}
public static class AddMovie implements Command {
public final String movieId;
public AddMovie(String movieId) {
this.movieId = movieId;
}
}
public static class RemoveMovie implements Command {
public final String movieId;
public RemoveMovie(String movieId) {
this.movieId = movieId;
}
}
public static class GetMovieList implements Command {
public final ActorRef<MovieList> replyTo;
public GetMovieList(ActorRef<MovieList> replyTo) {
this.replyTo = replyTo;
}
}
public static class MovieList {
public final Set<String> movieIds;
public MovieList(Set<String> movieIds) {
this.movieIds = Collections.unmodifiableSet(movieIds);
}
}
public static Behavior<Command> create(
String entityId, ReplicaId replicaId, Set<ReplicaId> allReplicas) {
return ReplicatedEventSourcing.commonJournalConfig(
new ReplicationId("movies", entityId, replicaId),
allReplicas,
PersistenceTestKitReadJournal.Identifier(),
MovieWatchList::new);
}
private MovieWatchList(ReplicationContext replicationContext) {
super(replicationContext);
}
@Override
public ORSet<String> emptyState() {
return ORSet.empty(getReplicationContext().replicaId());
}
@Override
public CommandHandler<Command, ORSet.DeltaOp, ORSet<String>> commandHandler() {
return newCommandHandlerBuilder()
.forAnyState()
.onCommand(
AddMovie.class, (state, command) -> Effect().persist(state.add(command.movieId)))
.onCommand(
RemoveMovie.class,
(state, command) -> Effect().persist(state.remove(command.movieId)))
.onCommand(
GetMovieList.class,
(state, command) -> {
command.replyTo.tell(new MovieList(state.getElements()));
return Effect().none();
})
.build();
}
@Override
public EventHandler<ORSet<String>, ORSet.DeltaOp> eventHandler() {
return newEventHandlerBuilder().forAnyState().onAnyEvent(ORSet::applyOperation);
}
}
// #movie-entity
}
|
# `utils.get_tensor_shape` and `calc_integral_image` are assumed to be defined
# elsewhere in the surrounding project; only the TensorFlow import is standard.
import tensorflow as tf


def calc_cumsum_2d(image, box):
  # Sums image values inside each box via the integral-image trick.
  # image: [batch, height, width, channels]; box: [batch, num_boxes, 4]
  # holding (ymin, xmin, ymax, xmax) indices into the integral image.
  b, n, m, c = utils.get_tensor_shape(image)
  _, p, _ = utils.get_tensor_shape(box)
  cumsum = calc_integral_image(image)
  ymin, xmin, ymax, xmax = tf.unstack(box, axis=-1)
  # Pair every box with its batch index for gather_nd.
  i = tf.range(tf.cast(b, tf.int64), dtype=tf.int64)
  i = tf.tile(tf.expand_dims(i, axis=-1), [1, p])
  # Inclusion-exclusion over the four corners of each box.
  i_a = tf.gather_nd(cumsum, tf.stack([i, ymin, xmin], axis=-1))
  i_b = tf.gather_nd(cumsum, tf.stack([i, ymin, xmax], axis=-1))
  i_c = tf.gather_nd(cumsum, tf.stack([i, ymax, xmin], axis=-1))
  i_d = tf.gather_nd(cumsum, tf.stack([i, ymax, xmax], axis=-1))
  return i_d + i_a - i_b - i_c
<filename>lib/version.py
"""
Versioning
Implements `semantic versioning <https://semver.org/>`_. It implements the
entire specification. This implementation also implements 'Calendar versioning
<https://calver.org/>`_ and supports hybrid schemes that use elements from both
versioning schemes.
.. only:: development_administrator
Module management
Created on Apr. 26, 2020
@author: <NAME>
"""
import re
from typing import Optional, Sequence, MutableSequence, Union, MutableSet, Set
from lib.gvClasses import Counter
class PreRelease(object):
    """
    Implements the pre-release identifier component of a semantic based version
    """
    _choices: MutableSet[str] = {'alpha', 'beta', 'rc'}

    @classmethod
    def augmentChoices(cls,
                       choices: Set[str]) -> None:
        cls._choices |= set(choices)

    def __init__(self: 'PreRelease',
                 description: Union[str, Set[str]] = frozenset(),
                 counter: Optional[Counter] = None):
        self._description: MutableSet[str] = set()
        if isinstance(description, str):
            self.validateDescription(description)
            self._description.add(description)
        elif isinstance(description, (set, frozenset)):
            for d in description:
                self.validateDescription(d)
                self._description.add(d)
        else:
            raise ValueError(f'{type(description)} is not a supported type.')
        self._counter = counter
@property
def description(self: 'PreRelease')-> MutableSet:
return self._description
@property
def counter(self: 'PreRelease')-> Counter:
return self._counter
    def __gt__(self: 'PreRelease',
               comparand: 'PreRelease') -> bool:
        # The description terms are stored in a set, so compare them in a
        # stable (sorted) order.
        mine = sorted(self.description)
        theirs = sorted(comparand.description)
        for a, b in zip(mine, theirs):
            if a > b:
                return True
            if a < b:
                return False
        if len(mine) > len(theirs):
            return True
        if self.counter is None and comparand.counter is not None:
            return False
        if self.counter is not None and comparand.counter is None:
            return True
        return False
    def __eq__(self: 'PreRelease',
               comparand: 'PreRelease') -> bool:
        if not isinstance(comparand, PreRelease):
            return NotImplemented
        return (self.description == comparand.description
                and self.counter == comparand.counter)
def __ne__(self: 'PreRelease',
comparand: 'PreRelease'):
return not self. __eq__(comparand)
def __le__(self: 'PreRelease',
comparand: 'PreRelease'):
return not self.__gt__(comparand)
def __lt__(self: 'PreRelease',
comparand: 'PreRelease'):
return not self.__gt__(comparand) and self.__ne__(comparand)
def __ge__(self: 'PreRelease',
comparand: 'PreRelease'):
return self.__gt__(comparand) or self.__eq__(comparand)
    def __hash__(self: 'PreRelease') -> int:
        return hash((frozenset(self.description), self.counter))
    def validateDescription(self: 'PreRelease',
                            desc: Union[str, Set[str]]) -> bool:
        """
        Checks that every description term is one of the recognised
        pre-release keywords in PreRelease._choices. This method never
        actually returns False; it raises a ValueError instead when an
        unknown term is encountered. If another vocabulary is needed,
        extend the choices with augmentChoices() or override this method
        in a derived class.
        """
        err = '{} is not a recognised pre-release description term.'
        if isinstance(desc, str):
            if desc in PreRelease._choices:
                return True
            raise ValueError(err.format(desc))
        for d in desc:
            if d not in PreRelease._choices:
                raise ValueError(err.format(d))
        return True
    def __str__(self: 'PreRelease') -> str:
        # Join the description terms with "." and append the counter, if any,
        # as a final dot-separated identifier.
        string = '.'.join(sorted(self.description))
        if self.counter is not None:
            string += f'.{self.counter}'
        return string
class Build(object):
"""
Implements the build identification component of a semantic based version.
"""
    def __init__(self: 'Build',
                 id_: Union[str, Set[str]]) -> None:
        self._id: MutableSet[str] = set()
        if isinstance(id_, str):
            self._id.add(id_)
        elif isinstance(id_, (set, frozenset)):
            for i in id_:
                self._id.add(i)
        else:
            raise ValueError(f'The build id - {id_}'
                             f' has an invalid type - {type(id_)}')
@property
def id(self: 'Build')-> Set[str]:
return self._id
def __str__(self: 'Build')-> str:
string = ''
for i, id_ in enumerate(self._id):
string += '' if i == 0 else '.'
string += id_
return string
    def __gt__(self: 'Build',
               comparand: 'Build') -> bool:
        if len(self.id) > len(comparand.id):
            return True
        if len(self.id) < len(comparand.id):
            return False
        # The identifiers are stored in a set, so compare them in a stable
        # (sorted) order.
        for a, b in zip(sorted(self.id), sorted(comparand.id)):
            if a > b:
                return True
            if a < b:
                return False
        return False
def __le__(self: 'Build',
comparand: 'Build') -> bool:
return not self.__gt__(comparand)
    def __eq__(self: 'Build',
               comparand: 'Build') -> bool:
        if not isinstance(comparand, Build):
            return NotImplemented
        return self.id == comparand.id
def __ne__(self: 'Build',
comparand: 'Build') -> bool:
return not self.__eq__(comparand)
def __lt__(self: 'Build',
comparand: 'Build') -> bool:
return not self.__gt__(comparand) and self.__ne__(comparand)
def __ge__(self: 'Build',
comparand: 'Build') -> bool:
return self.__gt__(comparand) or self.__eq__(comparand)
    def __hash__(self: 'Build') -> int:
        return hash(frozenset(self.id))
class SemanticVersion(object):
"""
This is the internal representation of a semantic version. It can create
semantic version objects from external strings. i.e it has a semantic
version factory, and can provide semantic version strings as an external
string representation.
"""
# This string contains the regular expression that defines the external
# structure of a semantic version object.
    _pattern: Optional[re.Pattern] = None

    def __init__(self: 'SemanticVersion',
                 major: Counter = 0,
                 minor: Optional[Counter] = 0,
                 micro: Optional[Counter] = 0,
                 prerelease: Optional[PreRelease] = None,
                 build: Optional[Build] = None):
self._major = major
self._minor = minor
self._micro = micro
self._prerelease = PreRelease(prerelease) if prerelease else None
self._build = Build(build) if build else None
@property
def major(self: 'SemanticVersion')-> Counter:
return self._major
@property
def minor(self: 'SemanticVersion') -> Optional[Counter]:
return self._minor
@property
def micro(self: 'SemanticVersion') -> Optional[Counter]:
return self._micro
@property
def prerelease(self: 'SemanticVersion') -> Optional[PreRelease]:
return self._prerelease
@property
def build(self: 'SemanticVersion') -> Optional[Build]:
return self._build
def __str__(self: 'SemanticVersion')->str:
"""
Provides a semantic versioning valid string representation of the
version. It extends strict semantic versioning to allow versions with
only major and minor components or with major components only as this
represents the way semantic versioning is used in practice.
:return: String representing the version
:rtype: str
"""
string = f'{self.major}'
if self.minor is not None:
string += f'.{self.minor}'
if self.micro is not None:
            string += f'.{self.micro}'
if self.prerelease is not None:
string += f'-{str(self.prerelease)}'
if self.build is not None:
string += f'+{self.build}'
return string
def __gt__(self: 'SemanticVersion',
comparand: 'SemanticVersion') -> bool:
# Test major version
if self.major > comparand.major:
return True
if self.major < comparand.major:
return False
        # Test minor version
        if self.minor is None:
            if comparand.minor is not None:
                return True
        elif comparand.minor is None:
            return False
        elif self.minor != comparand.minor:
            return self.minor > comparand.minor
        # Test micro version
        if self.micro is None:
            if comparand.micro is not None:
                return True
        elif comparand.micro is None:
            return False
        elif self.micro != comparand.micro:
            return self.micro > comparand.micro
        # Test prerelease: a version without a pre-release identifier has
        # higher precedence than the same version with one.
        if self.prerelease is None:
            return comparand.prerelease is not None
        if comparand.prerelease is None:
            return False
        return self.prerelease > comparand.prerelease
    def __le__(self: 'SemanticVersion',
               comparand: 'SemanticVersion') -> bool:
        return not self.__gt__(comparand)
def __eq__(self: 'SemanticVersion',
comparand: 'SemanticVersion') -> bool:
return True if (self.major == comparand.major and\
self.minor == comparand.minor and\
self.micro == comparand.micro and\
self.prerelease == comparand.prerelease and\
self.build == comparand.build) else False
def __ne__(self: 'SemanticVersion',
comparand: 'SemanticVersion') -> bool:
return not self.__eq__(comparand)
def __ge__(self,
               comparand: 'SemanticVersion'):
return self.__gt__(comparand) or self.__eq__(comparand)
def __lt__(self: 'SemanticVersion',
comparand: 'SemanticVersion') -> bool:
return not self.__ge__(comparand)
def __hash__(self: 'SemanticVersion') -> int:
return hash((self.major, self.minor, self.micro,
self.prerelease, self.build))
@staticmethod
def SemanticFactory(external: str) -> 'SemanticVersion':
"""
Generates a SemanticVersion object from a string
"""
if SemanticVersion._pattern is None:
SemanticVersion._pattern = re.compile(r"""
# This regular expression pattern is based on the pattern suggested in the
# semantic version formal specification. It has been modified in the following
# ways:
# * Fix bugs.
# * Take advantage of the Python implementation of regular expressions.
# * Make more readable by taking advantage of the VERBOSE flag in Python to
# put the expression on multiple lines and to include comments.
^ # Start of string
(?P<major> # The major group is mandatory.
0| # It can be zero
[1-9][0-9]* # or a non-zero number with no leading zero.
) # End of the named group
(?:\.(?P<minor> # The minor named group is optional. If it is not present, the
# micro group should not be present. This cannot be enforced
# directly by regular expressions but can be checked in Python
# code that examines the result of the regular expression
# match.
0| # It can be zero
[1-9][0-9]* # or a non-zero number with no leading zero
) # End of the named group
)? # End of the optional group
(?:\.(?P<micro> # The micro named group is optional
0| # It can be zero
[1-9][0-9]* # or a non-zero number with no leading zero
) # End of the named group
)? # End of the optional group
(?:-(?P<prerelease> # The prerelease named group is optional.
# If it is present, it must contain at least one sub-group.
# The recognition of sub-groups is partially performed in
# Python code working on the results of the regular
# expression match.
# All sub-groups except the first are optional. Optional
# sub-groups are separated by a ".".
0| # The mandatory sub-group contents can
# be zero or can be
[1-9][0-9]* # a non-zero number with no leading
# zero
| [0-9a-zA-Z]+ # or an alphanumeric character string.
(?:\. # The start of an optional sub-group
# with a "." used as a sub-group
# separator.
(?:0 # The sub-group that may be zero
| [1-9][0-9]* # or a non-zero number with no leading
# zero.
| [0-9a-zA-Z]+ # or an alphanumeric character string.
) # End of the sub-group alternatives.
)* # This sub-group may appear an
# indefinite number of times.
) # End of the named group - prerelease.
)? # End of the optional group.
(?:\+(?P<buildmetadata> # The buildmetadata named group is optional.
# If it is present, it must contain at least one
# sub-group. The "." character is used as a sub-group
# separator.
[0-9a-zA-Z]+ # This is the mandatory
# sub-group. It must
# contain an indefinite
# number of alphanumeric
# characters.
(?:\. # The start of an
# optional sub-group
[0-9a-zA-Z]+ # If present, it must
# contain an indefinite
# number of alphanumeric
# characters.
) # End of the buildmeta
# optional sub-group.
* # Zero to an indefinite
# number of optional
# sub-groups may be
# present.
) # End of the named group
# - buildmeta.
)? # End of the optional
# group.
$ # End of version string
""", # End of pattern
re.VERBOSE)
match = re.fullmatch(SemanticVersion._pattern,
external)
if match is None:
raise ValueError(f'String {external} does not describe'
' a valid semantic version'
' - it has an incorrect structure.')
components = match.groupdict()
# groupdict() always contains every named group; unmatched groups map to None.
if components['minor'] is None and components['micro'] is not None:
raise ValueError('When the minor version is omitted, the micro'
' version must also be omitted')
mj = components['major']
mi = components['minor']
mc = components['micro']
pr = components['prerelease']
bd = components['buildmetadata']
return SemanticVersion(mj,
mi,
mc,
pr,
bd)
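# A minimal usage sketch (an assumption, not part of the original module: the
# Counter, PreRelease and Build helpers defined elsewhere accept the raw string
# components that SemanticFactory extracts). Uncomment to try it out:
# v1 = SemanticVersion.SemanticFactory('1.2.3-alpha+build.5')
# v2 = SemanticVersion.SemanticFactory('1.2.3')
# print(str(v1), str(v2), v2 > v1)   # a plain release outranks its prerelease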
|
<reponame>GMCunha/Trabalhos-UFSM<filename>ED/Prova 2/dicionario.h
// the dictionary itself.
typedef struct dicionario Dicionario;
// an entry of the dictionary.
typedef struct entrada {
char * str;
int freq;
}Entrada;
// Dictionary functions
Dicionario* criaDicionario();
void removeDicionario(Dicionario* dic);
void adicionarEntrada(Dicionario* dic, char* str);
Entrada** exportaEntrada(Dicionario* dic);
// Linked list function
int lista_tamanho(Dicionario* dic); |
def adaptive_max_pool1d(x, output_size, return_mask=False, name=None):
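"""Adaptive max pooling over the last dimension of a 3-D tensor of shape
(N, C, L): the length dimension is reduced to `output_size`; when
`return_mask` is True the indices of the maxima are returned as well."""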
pool_type = 'max'
check_variable_and_dtype(x, 'x', ['float32', 'float64'],
'adaptive_max_pool1d')
_check_input(x, 3)
check_type(output_size, 'pool_size', int, 'adaptive_max_pool1d')
check_type(return_mask, 'return_mask', bool, 'adaptive_max_pool1d')
pool_size = [1] + utils.convert_to_list(output_size, 1, 'pool_size')
l_type = 'max_pool2d_with_index'
x = unsqueeze(x, [2])
if in_dygraph_mode():
pool_out = core.ops.max_pool2d_with_index(
x, 'pooling_type', pool_type, 'ksize', pool_size, 'adaptive', True)
return (squeeze(pool_out[0], [2]), squeeze(
pool_out[1], [2])) if return_mask else squeeze(pool_out[0], [2])
helper = LayerHelper(l_type, **locals())
dtype = helper.input_dtype(input_param_name='x')
pool_out = helper.create_variable_for_type_inference(dtype)
mask = helper.create_variable_for_type_inference(dtype)
outputs = {"Out": pool_out, "Mask": mask}
helper.append_op(
type=l_type,
inputs={"X": x},
outputs=outputs,
attrs={
"pooling_type": pool_type,
"ksize": pool_size,
"adaptive": True,
})
return (squeeze(pool_out, [2]),
squeeze(mask, [2])) if return_mask else squeeze(pool_out, [2])
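# A small usage sketch, assuming a PaddlePaddle installation that exposes this
# function as paddle.nn.functional.adaptive_max_pool1d; uncomment to run:
# import paddle
# import paddle.nn.functional as F
# x = paddle.rand([2, 3, 32])                     # (batch, channels, length)
# out = F.adaptive_max_pool1d(x, output_size=16)  # -> shape [2, 3, 16]
# out, mask = F.adaptive_max_pool1d(x, output_size=16, return_mask=True)
|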
<reponame>WinterCore/icicle
import "source-map-support/register";
import databaseConnect from "./database/connect";
import initializeServer from "./routes/init";
import logger from "./logger";
databaseConnect()
.then(initializeServer)
.catch(err => logger.error(err));
|
/**
* Save image to file<br>
* supported formats are PNG and JPEG<br>
*
* @param bufImage image to save
* @param imageType target image format (PNG or JPEG)
* @param os output stream the encoded image is written to
* @param jpegQualityPercentage JPEG quality in percent (ignored for PNG)
* @throws IOException if writing the image fails
*/
private static void save(BufferedImage bufImage, ImageFormat imageType, OutputStream os, int jpegQualityPercentage) throws IOException {
if (ImageFormat.PNG == imageType) {
ImageIO.write(bufImage, "PNG", os);
}
else if (ImageFormat.JPEG == imageType) {
saveJpeg(bufImage, os, (float) jpegQualityPercentage / 100f);
}
else {
throw new RuntimeException("Error occurred while saving the image. Unsupported format: only PNG (.png) and JPEG (.jpg) are supported.");
}
} |
// Copyright (c) 2020 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package service
import (
"context"
"strconv"
"time"
azurev1alpha1 "github.com/gardener/remedy-controller/pkg/apis/azure/v1alpha1"
"github.com/gardener/remedy-controller/pkg/controller"
"github.com/gardener/remedy-controller/pkg/controller/azure"
"github.com/gardener/remedy-controller/pkg/utils"
"github.com/go-logr/logr"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/util/retry"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)
type actuator struct {
client client.Client
namespace string
syncPeriod time.Duration
logger logr.Logger
}
// NewActuator creates a new Actuator.
func NewActuator(client client.Client, namespace string, syncPeriod time.Duration, logger logr.Logger) controller.Actuator {
logger.Info("Creating actuator", "namespace", namespace, "syncPeriod", syncPeriod)
return &actuator{
client: client,
namespace: namespace,
syncPeriod: syncPeriod,
logger: logger,
}
}
// CreateOrUpdate reconciles object creation or update.
func (a *actuator) CreateOrUpdate(ctx context.Context, obj client.Object) (requeueAfter time.Duration, err error) {
// Cast object to Service
var svc *corev1.Service
var ok bool
if svc, ok = obj.(*corev1.Service); !ok {
return 0, errors.New("reconciled object is not a service")
}
// Initialize labels
pubipLabels := map[string]string{
azure.ServiceLabel: ObjectLabeler.GetLabelValue(svc),
}
// Get LoadBalancer IPs
ips := getServiceLoadBalancerIPs(svc)
shouldIgnore := shouldIgnoreService(svc)
// Create or update PublicIPAddress objects for existing LoadBalancer IPs
if !shouldIgnore {
for ip := range ips {
pubip := &azurev1alpha1.PublicIPAddress{
ObjectMeta: metav1.ObjectMeta{
Name: generatePublicIPAddressName(svc.Namespace, svc.Name, ip),
Namespace: a.namespace,
},
}
a.logger.Info("Creating or updating publicipaddress", "name", pubip.Name, "namespace", pubip.Namespace)
if err := retry.RetryOnConflict(retry.DefaultBackoff, func() error {
_, err := controllerutil.CreateOrUpdate(ctx, a.client, pubip, func() error {
pubip.Labels = pubipLabels
delete(pubip.Annotations, azure.DoNotCleanAnnotation)
pubip.Spec.IPAddress = ip
return nil
})
return err
}); err != nil {
return 0, errors.Wrap(err, "could not create or update publicipaddress")
}
}
}
// Delete PublicIPAddress objects for non-existing LoadBalancer IPs
pubipList := &azurev1alpha1.PublicIPAddressList{}
if err := a.client.List(ctx, pubipList, client.InNamespace(a.namespace), client.MatchingLabels(pubipLabels)); err != nil {
return 0, errors.Wrap(err, "could not list publicipaddresses")
}
for _, pubip := range pubipList.Items {
if _, ok := ips[pubip.Spec.IPAddress]; !ok || shouldIgnore {
if shouldIgnore {
a.logger.Info("Adding do-not-clean annotation on publicipaddress", "name", pubip.Name, "namespace", pubip.Namespace)
if err := retry.RetryOnConflict(retry.DefaultBackoff, func() error {
pubip.Annotations = utils.Add(pubip.Annotations, azure.DoNotCleanAnnotation, strconv.FormatBool(true))
return a.client.Update(ctx, &pubip)
}); err != nil {
return 0, errors.Wrap(err, "could not add do-not-clean annotation on publicipaddress")
}
}
a.logger.Info("Deleting publicipaddress", "name", pubip.Name, "namespace", pubip.Namespace)
if err := client.IgnoreNotFound(a.client.Delete(ctx, &pubip)); err != nil {
return 0, errors.Wrap(err, "could not delete publicipaddress")
}
}
}
return a.syncPeriod, nil
}
// Delete reconciles object deletion.
func (a *actuator) Delete(ctx context.Context, obj client.Object) (requeueAfter time.Duration, err error) {
// Cast object to Service
var svc *corev1.Service
var ok bool
if svc, ok = obj.(*corev1.Service); !ok {
return 0, errors.New("reconciled object is not a service")
}
// Initialize labels
pubipLabels := map[string]string{
azure.ServiceLabel: ObjectLabeler.GetLabelValue(svc),
}
// Get LoadBalancer IPs
ips := getServiceLoadBalancerIPs(svc)
// Delete PublicIPAddress objects for existing LoadBalancer IPs
for ip := range ips {
pubip := &azurev1alpha1.PublicIPAddress{
ObjectMeta: metav1.ObjectMeta{
Name: generatePublicIPAddressName(svc.Namespace, svc.Name, ip),
Namespace: a.namespace,
},
}
a.logger.Info("Deleting publicipaddress", "name", pubip.Name, "namespace", pubip.Namespace)
if err := client.IgnoreNotFound(a.client.Delete(ctx, pubip)); err != nil {
return 0, errors.Wrap(err, "could not delete publicipaddress")
}
}
// Delete PublicIPAddress objects for non-existing LoadBalancer IPs
pubipList := &azurev1alpha1.PublicIPAddressList{}
if err := a.client.List(ctx, pubipList, client.InNamespace(a.namespace), client.MatchingLabels(pubipLabels)); err != nil {
return 0, errors.Wrap(err, "could not list publicipaddresses")
}
for _, pubip := range pubipList.Items {
if _, ok := ips[pubip.Spec.IPAddress]; !ok {
a.logger.Info("Deleting publicipaddress", "name", pubip.Name, "namespace", pubip.Namespace)
if err := client.IgnoreNotFound(a.client.Delete(ctx, &pubip)); err != nil {
return 0, errors.Wrap(err, "could not delete publicipaddress")
}
}
}
return 0, nil
}
// ShouldFinalize returns true if the object should be finalized.
func (a *actuator) ShouldFinalize(_ context.Context, obj client.Object) (bool, error) {
// Cast object to Service
var svc *corev1.Service
var ok bool
if svc, ok = obj.(*corev1.Service); !ok {
return false, errors.New("reconciled object is not a service")
}
// Return true if there are LoadBalancer IPs and the service should not be ignored
return len(getServiceLoadBalancerIPs(svc)) > 0 && !shouldIgnoreService(svc), nil
}
func getServiceLoadBalancerIPs(svc *corev1.Service) map[string]bool {
ips := make(map[string]bool)
for _, ingress := range svc.Status.LoadBalancer.Ingress {
if ingress.IP != "" {
ips[ingress.IP] = true
}
}
return ips
}
func shouldIgnoreService(svc *corev1.Service) bool {
return svc.Annotations[azure.IgnoreAnnotation] == strconv.FormatBool(true)
}
func generatePublicIPAddressName(serviceNamespace, serviceName, ip string) string {
return serviceNamespace + "-" + serviceName + "-" + ip
}
|
Retinal projection display system based on MEMS scanning projector and conicoid curved semi-reflective mirror
Retinal projection display (RPD) is a research hotspot in the field of near-eye display (NED): because it relies on the Maxwellian view principle, it offers a long depth of field and can overcome the vergence-accommodation conflict (VAC). However, existing RPD systems suffer from a small field of view (FOV), large volume, and heavy weight, which restrict their application and development. In this paper, an RPD system based on a micro-electro-mechanical system (MEMS) scanning projector and a conicoid curved semi-reflective mirror is proposed to realize a large FOV in a compact, lightweight structure. The MEMS device is a biaxial scanning mirror that works in coordination with RGB laser diodes to scan a two-dimensional image in a specific direction with high illuminance and high resolution. The conicoid curved semi-reflective mirror projects the image onto the retina, forming a large FOV at a suitable eye relief distance (ERF) while keeping external objects visible. The combination of the MEMS mirror, RGB laser diodes, and conicoid curved semi-reflective mirror keeps the form factor compact enough for wearable use. The performance of the proposed RPD is quantitatively analyzed: the FOV is 70° horizontal (H) × 40° vertical (V), the ERF is 30 mm, and the MTFs are discussed as well. The proposed RPD system achieves a large FOV, a long depth cue, and high optical performance in a compact package, and can be applied to next-generation optical see-through NEDs.
import pcraster as pcr
import pcraster.framework as pcrfw
import scipy.stats
import numpy
import os
import operator
import glob
import subprocess
import math
import string
import rpy2
##
import rpy2.robjects.numpy2ri
rpy2.robjects.numpy2ri.activate()  # needed with newer rpy2 versions
import rpy2.robjects as robjects
##
from collections import deque
#from PCRaster.NumPy import *
import random
# from osgeo import gdal  # required by the gdal-based helpers below (writeNumpyArrayAsScalarPCRasterMap, openFileAsNumpyArray)
# time in hours
###########################################
# retrieving data from maps, writing maps #
###########################################
# at first time step and sample, create self.a=numpy.empty(1000*100*2*1).reshape(1000,100,2,1)
# or as an object: self.rainAsNumpy=class(filenameToBeWritten,locs,nrsamples,nrtimesteps,rows,cols)
# and then in the dynamic section: self.rainAsNumpy.report(Rain,currentSample,currentTimestep)
# or in the premcloop
# c is a 2 value array
# >>> b[999,99,:,0]=c
# as soon as all timesteps and samples have been done, write (or at last time step and last sample)
# self.a=writeToNumpy(self.a,locs,variable,nameString,currentSample,currentTimestep,nrOfSamples,nrOfTimesteps)
def getCellValue(Map, Row, Column):
Value, Valid = pcr.cellvalue(Map, Row, Column)
if Valid:
return Value
else:
print('missing value in input of getCellValue')
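# A minimal sketch of how these cell-value helpers are typically used (it
# assumes PCRaster is installed and no clone has been set yet); uncomment to run:
# pcr.setclone(4, 4, 1.0, 0.0, 0.0)                          # 4 x 4 clone, cell size 1
# print(getCellValue(pcr.spatial(pcr.scalar(3.0)), 1, 1))    # -> 3.0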
def printErrorMessageIfACellContainsTrue(booleanMap, errorMessage):
scalarMap = pcr.cover(pcr.scalar(booleanMap), 0)
cellContainsTrueMap = pcr.boolean(pcr.mapmaximum(scalarMap))
cellContainsTrue = getCellValue(cellContainsTrueMap, 1, 1)
if cellContainsTrue > 0.5:
print(errorMessage)
def getCellValueAtBooleanLocation(location, map):
# map can be any type, return value always float
valueMap = pcr.mapmaximum(pcr.ifthen(location, pcr.scalar(map)))
# to get rid of bug that gives very low value in case of missing value on location
valueMap = pcr.ifthen(pcr.pcrgt(valueMap, -1e10), valueMap)
value = getCellValue(valueMap, 1, 1)
return value
def getCellValueAtBooleanLocationReturnMVOrNot(location, map):
# map can be any type, return value always float
valueMap = pcr.mapmaximum(pcr.ifthen(location, pcr.scalar(map)))
# to get rid of bug that gives very low value in case of missing value on location
valueMap = pcr.ifthen(pcr.pcrgt(valueMap, -1e10), valueMap)
value, valid = pcr.cellvalue(valueMap, 1, 1)
return value, valid
def printCellValue(self, mapVariable, variableNameToPrint, unit, row, column):
cellValue = getCellValue(mapVariable, row, column)
print(variableNameToPrint + ' (' + unit + ') at row ' + str(row) + ', column: ' + str(column) + ' is: ' + str(cellValue))
def onePeriod(self, startTime, endTime, timeStepDuration, currentTimeStep):
# this could be separated in two functions, one converting hours to
# time steps, one creating the period
time = float(currentTimeStep) * float(timeStepDuration)
period = (time > startTime) & (time < endTime)
return period
def returnCellValuesAtLocationsAsNumpyArray(locations, values):
# output gives values at locations as numpy array
# ordered by id's of locations, so lowest location number
# is first in out
locsAsNumpy = pcr.pcr2numpy(locations, 9999)
valuesAsNumpy = pcr.pcr2numpy(values, 9999)
locsOnly = locsAsNumpy[locsAsNumpy > 0.5]
valuesOnly = valuesAsNumpy[locsAsNumpy > 0.5]
all = numpy.dstack((locsOnly, valuesOnly))
allZero = all[0]
final = allZero[allZero[:, 0].argsort(), ]
out = final[:, 1]
return out
def writeNumpyArrayAsScalarPCRasterMap(numpyArray, fileName):
cols = len(numpyArray)
rows = 2
dst_ds = None
driver = gdal.GetDriverByName('RST')
#dst_ds = driver.Create('piet.rst', cols, rows, 1, gdal.GDT_Float32 )
tmpFile = fileName + 'piet.rst'
dst_ds = driver.Create(tmpFile, cols, rows, 1, gdal.GDT_Float32 )
numpyArrayTwoRows = numpy.array((numpyArray, numpyArray))
dst_ds.GetRasterBand(1).WriteArray(numpyArrayTwoRows)
for filename in glob.glob('piet.*'):
os.remove(filename)
driver = gdal.GetDriverByName('PCRaster')
print(fileName)
dst_ds_pcr = driver.CreateCopy( fileName, dst_ds, 1, ['Type=Float32'] )
print('create copy done ' + fileName)
del dst_ds
del dst_ds_pcr
def writeNumpyArrayAsScalarPCRasterMapAsc2Map(numpyArray, fileName):
numpy.savetxt('tmp.asc', numpyArray)
length = len(numpyArray)
command = 'mapattr -S -R ' + str(length) + ' -s -C 1 tmp.clone'
p = subprocess.call(command, shell=True)
command = 'asc2map tmp.asc ' + fileName + ' -S --clone tmp.clone'
p = subprocess.call(command, shell=True)
os.remove('tmp.clone')
os.remove('tmp.asc')
def reportLocations(locations, values, basename, sampleNumber, timeStep):
# let op locations should have no mv's
fileName = pcrfw.generateNameST(basename, sampleNumber, timeStep)
numpyArray = returnCellValuesAtLocationsAsNumpyArray(locations, values)
writeNumpyArrayAsScalarPCRasterMapAsc2Map(numpyArray, fileName)
def reportLocationsAsNumpyArray(locations, values, basename, sampleNumber, timeStep):
# reports one file per realization and per time step
fileName = pcrfw.generateNameST(basename, sampleNumber, timeStep)
numpyArray = returnCellValuesAtLocationsAsNumpyArray(locations, pcr.spatial(values))
numpyArrayAsMapWithOneRow = numpyArray.reshape(1, len(numpyArray))
numpy.savetxt(fileName + '.numpy.txt', numpyArrayAsMapWithOneRow)
def reportLocationsAsNumpyArrayOneFilePerRealization(locations, values,
basename, sampleNumber, timeStep, endTimeStep):
fileName = pcrfw.generateNameS(basename, sampleNumber) + '.numpy.txt'
if timeStep == 1:
theFile = open(fileName, 'w')
else:
theFile = open(fileName, 'a')
numpyArray = returnCellValuesAtLocationsAsNumpyArray(locations, pcr.spatial(values))
numpyArrayAsMapWithOneRow = numpyArray.reshape(1, len(numpyArray))
numpy.savetxt(theFile, numpyArrayAsMapWithOneRow)
theFile.close()
def reportAsNumpyArray(values, basename, sampleNumber, timeStep):
fileName = pcrfw.generateNameST(basename, sampleNumber, timeStep)
valuesAsNumpy = pcr.pcr2numpy(values, 9999)
numpy.savetxt(fileName + '.numpy.txt', valuesAsNumpy)
def openFileAsNumpyArray(name):
src_ds = gdal.Open(name)
cols = src_ds.RasterXSize
rows = src_ds.RasterYSize
rasterBand = src_ds.GetRasterBand(1)
numpyArray = numpy.array(rasterBand.ReadAsArray())
return numpyArray
def openAsNumpyArray(basename, sampleNumber, timeStep):
fileName = pcrfw.generateNameST(basename, sampleNumber, timeStep)
numpyArray = openFileAsNumpyArray(fileName)
return numpyArray
def openSamplesAndTimestepsAsNumpyArray(basename, samples, timesteps):
t = 1
output = []
for timestep in timesteps:
print('timestep ' + str(timestep) + ' done,')
allSamples = []
for sample in samples:
array = openAsNumpyArray(basename, sample, timestep)
allSamples.append(array)
output.append(allSamples)
outputAsArray = numpy.array(output)
return outputAsArray
def openSamplesAndTimestepsAsNumpyArraysAsNumpyArrayTimeThenSamples(basename, samples, timesteps):
# this is the same (older) as openSamplesAndTimestepsAsNumpyArraysAsNumpyArray
# but it loops for each time step over all samples, which appears to be slower
t = 1
output = []
print('doing basename ', basename)
for timestep in timesteps:
allSamples = []
for sample in samples:
#fileName = pcrfw.generateNameST(basename,sample,timestep) + '.npy'
# array=numpy.load(fileName)
fileName = pcrfw.generateNameST(basename, sample, timestep) + '.numpy.txt'
array = numpy.atleast_2d(numpy.loadtxt(fileName))
allSamples.append(array)
output.append(allSamples)
outputAsArray = numpy.array(output)
return outputAsArray
def convertTimeseriesOfMapFilesToNumpyArray(basename, samples, timesteps):
t = 1
output = []
for timestep in timesteps:
allSamples = []
for sample in samples:
fileName = pcrfw.generateNameST(basename, sample, timestep)
valuesAsNumpy = pcr.pcr2numpy(pcr.spatial(fileName), 9999)
# array=numpy.atleast_2d(numpy.loadtxt(fileName))
allSamples.append(valuesAsNumpy)
output.append(allSamples)
outputAsArray = numpy.array(output)
return outputAsArray
def openSamplesAndTimestepsAsNumpyArraysAsNumpyArray(basename, samples, timesteps):
print('doing basename ', basename)
done = 0
firstFileName = pcrfw.generateNameST(basename, samples[0], timesteps[0]) + '.numpy.txt'
array = numpy.atleast_2d(numpy.loadtxt(firstFileName))
a = numpy.shape(array)
b = (len(timesteps), len(samples))
outputAsArray = numpy.ones(b + a)
sampleIndex = 0
for sample in samples:
timestepIndex = 0
for timestep in timesteps:
fileName = pcrfw.generateNameST(basename, sample, timestep) + '.numpy.txt'
array = numpy.atleast_2d(numpy.loadtxt(fileName))
outputAsArray[timestepIndex, sampleIndex, ] = array
done = done + 1
timestepIndex += 1
sampleIndex += 1
return outputAsArray
def openSamplesAsNumpyArrays(basename, samples, timesteps):
# opens for each realization a timeseries (stored with generalfunctions.reportLocationsAsNumpyArrayOneFilePerRealization)
# and stores it in a multi dimensional numpy array and writes to disk
print('doing basename ', basename)
done = 0
#firstFileName = pcrfw.generateNameST(basename,samples[0],timesteps[0]) + '.numpy.txt'
firstFileName = pcrfw.generateNameS(basename, samples[0]) + '.numpy.txt'
array = numpy.atleast_2d(numpy.loadtxt(firstFileName)[0])
a = numpy.shape(array)
b = (len(timesteps), len(samples))
outputAsArray = numpy.ones(b + a)
sampleIndex = 0
for sample in samples:
fileName = pcrfw.generateNameS(basename, sample) + '.numpy.txt'
timeSeries = numpy.atleast_2d(numpy.loadtxt(fileName))
timestepIndex = 0
for timestep in timesteps:
array = timeSeries[timestepIndex]
outputAsArray[timestepIndex, sampleIndex, ] = array
done = done + 1
timestepIndex += 1
print(sample, done)
sampleIndex += 1
print('new', outputAsArray)
return outputAsArray
def createList(samples, timesteps):
wholeList = []
allSamples = samples[:]
for timestep in timesteps:
wholeList.append(allSamples)
return wholeList
def test(basename, samples, timesteps):
t = 1
output = []
print('doing basename ', basename)
for timestep in timesteps:
allSamples = []
for sample in samples:
print(timestep, sample)
fileName = pcrfw.generateNameST(basename, sample, timestep) + '.numpy.txt'
array = numpy.atleast_2d(numpy.loadtxt(fileName))
allSamples.append(array)
output.append(allSamples)
outputAsArray = numpy.array(output)
return outputAsArray
def testTwo(basename, samples, timesteps):
print('doing basename ', basename)
result = createList(samples, timesteps)
t = 0
for timestep in timesteps:
s = 0
for sample in samples:
print(timestep, sample)
fileName = pcrfw.generateNameST(basename, sample, timestep) + '.numpy.txt'
theArray = numpy.atleast_2d(numpy.loadtxt(fileName))
result[t][s] = theArray
s = s + 1
t = t + 1
outputAsArray = numpy.array(result)
return outputAsArray
###########################
# MAP ALGEBRA #
###########################
def selectACell(Map, XInNrCells, YInNrCells):
xMap = pcr.nominal(pcr.xcoordinate(pcr.boolean(Map)) / pcr.celllength())
yMap = pcr.nominal(pcr.ycoordinate(pcr.boolean(Map)) / pcr.celllength())
location = pcr.pcrand((xMap == XInNrCells), (yMap == YInNrCells))
return location
def mapeq(mapOne, mapTwo):
mapOneScalar = pcr.scalar(mapOne)
mapTwoScalar = pcr.scalar(mapTwo)
difference = mapOneScalar - mapTwoScalar
cellEqual = pcr.pcreq(difference, pcr.scalar(0))
mapEqual = pcr.pcrgt(pcr.mapminimum(pcr.scalar(cellEqual)), pcr.scalar(0.5))
return getCellValue(mapEqual, 1, 1)
def slopeToDownstreamNeighbour(dem, ldd):
slopeToDownstreamNeighbour = (dem - pcr.downstream(ldd, dem)) / pcr.downstreamdist(ldd)
return slopeToDownstreamNeighbour
def slopeToDownstreamNeighbourNotFlat(dem, ldd, minSlope):
slopeToDownstreamNeighbourMap = slopeToDownstreamNeighbour(dem, ldd)
lddArea = pcr.defined(ldd)
minSlopeCover = pcr.ifthen(lddArea, pcr.scalar(minSlope))
slopeToDownstreamNeighbourNotFlat = pcr.cover(pcr.max(minSlopeCover, slopeToDownstreamNeighbourMap), minSlopeCover)
return slopeToDownstreamNeighbourNotFlat
def distancetodownstreamcell(Ldd):
distanceToDownstreamCell = pcr.max(pcr.downstreamdist(Ldd), pcr.celllength())
return distanceToDownstreamCell
def normalcorrelated(normalX, normalY, correlation):
# returns realizations of two normal variables with
# mean zero and var 1 having correlation of correlation
# based on:
# x=normal()
# y=ax+b*normal()
# correlation = a / pcr.sqrt( pcr.sqr(a) + pcr.sqr(b) )
x = pcr.scalar(normalX)
y = (x + pcr.sqrt((1 / pcr.sqr(correlation)) - 1) * pcr.scalar(normalY)) * pcr.scalar(correlation)
return x, y
def swapValuesOfTwoRegions(regions, values, doIt):
# assigns the highest value found in region False to all cells
# in region True, and vice versa
# regions, a boolean map with two regions
# values, a map of scalar data type
if doIt:
valueInRegionFalse = pcr.mapmaximum(pcr.ifthen(pcr.pcrnot(regions), values))
valueInRegionTrue = pcr.mapmaximum(pcr.ifthen(regions, values))
swapped = pcr.ifthenelse(regions, valueInRegionFalse, valueInRegionTrue)
return swapped
else:
return values
##############################
# converting to numpy stuff #
##############################
def createTimeSeriesList(timeSeriesFile):
file = open(timeSeriesFile, 'r')
piet = file.readlines()
newList = []
for line in piet:
lineList = line.split()
newList.append(lineList)
file.close()
return newList
def timeInputSparse(fileName):
return os.path.exists(fileName)
def mapToColAsArray(name):
"""Selects values at row, col from raster name in Monte Carlo samples.
name -- Name of raster.
row -- Row index of cell to read.
col -- Col index of cell to read.
The returned array does not contain missing values so the size is maximimal
the number of cells. It contains three columns, x, y, name
x,y are given as xcoordinate and ycoordinate values
Returned array has elements of type numpy.float32"""
nrRows = pcr.clone().nrRows()
nrCols = pcr.clone().nrCols()
nrCells = nrRows * nrCols
mask = numpy.zeros(nrCells).astype(numpy.bool_)
arrayX = numpy.zeros(nrCells).astype(numpy.float32)
arrayY = numpy.zeros(nrCells).astype(numpy.float32)
arrayName = numpy.zeros(nrCells).astype(numpy.float32)
xMap = pcr.xcoordinate(pcr.defined(name))
yMap = pcr.ycoordinate(pcr.defined(name))
# For each cell.
c = 0
while c < nrCells:
arrayName[c], mask[c] = pcr.cellvalue(name, c + 1)
arrayX[c], dummy = pcr.cellvalue(xMap, c + 1)
arrayY[c], dummy = pcr.cellvalue(yMap, c + 1)
c += 1
arrayName = numpy.compress(mask, arrayName)
arrayX = numpy.compress(mask, arrayX)
arrayY = numpy.compress(mask, arrayY)
mapAsColArray = numpy.column_stack((arrayX, arrayY, arrayName))
return mapAsColArray
def addTimeColumnToMapAsColArray(mapAsColArray, time):
b = numpy.insert(mapAsColArray, 2, float(time), axis=1)
return b
def stackOfMapsToColAsArray(stackOfMapsAsList, currentTime):
timeOfFirstMap = currentTime - len(stackOfMapsAsList) + 1
t = timeOfFirstMap
stackOfMapsAsColsList = []
for map in stackOfMapsAsList:
mapArray = mapToColAsArray(map)
mapAsColArrayWithTime = addTimeColumnToMapAsColArray(mapArray, t)
stackOfMapsAsColsList.append(mapAsColArrayWithTime)
t = t + 1
array = numpy.concatenate(stackOfMapsAsColsList, axis=0)
return array
def stackOfMapsToRDataFrame(stackOfMapsAsList, currentTime):
colAsArray = stackOfMapsToColAsArray(stackOfMapsAsList, currentTime)
dataFrame = convertStackOfMapsToRDataFrame(colAsArray)
return dataFrame
def loadTimeseries(timeSeriesFileName):
"""Reads a PCRaster timeseries that should not contain a header
timeSeriesFileName -- Name of timeseries.
The returned numpy array has two dimensions
First dimension: timesteps; second dimension: values, where the first item is the
timestep, the second item is the first value column, the third item is the second value column, etc."""
a = numpy.loadtxt(timeSeriesFileName)
return a
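# A small hypothetical example: write a tiny two-column timeseries and read it
# back; uncomment to run:
# with open('ts.txt', 'w') as f:
#     f.write('1 0.5\n2 0.7\n3 0.6\n')
# print(loadTimeseries('ts.txt').shape)   # -> (3, 2)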
##################################
# links to R #
##################################
def convertStackOfMapsToRDataFrame(mapAsColArray):
# note z is time and v is variable
robjects.r('''
convertToDataFrame <- function(x) {
frame=as.data.frame(x)
colnames(frame)[1] <- "x"
colnames(frame)[2] <- "y"
colnames(frame)[3] <- "z"
colnames(frame)[4] <- "v"
frame
}
''')
convertToDataFrame = robjects.r['convertToDataFrame']
a = convertToDataFrame(mapAsColArray)
return a
def experimentalVariogramValues(stackOfMapsAsList, boundariesVector, space, savePlot, fileName, maxVarPlot):
# returns distances and semivariances for steps defined by boundaries vector
# note that length of returned vector equals number of intervals that is
# available, thus, len(returnedList) can be smaller than len(boundariesVector)!
# space (TRUE) -> spatial correlation
# space (FALSE) -> temporal correlation
stackOfMapsAsRDataFrame = stackOfMapsToRDataFrame(stackOfMapsAsList, 10)
if space:
robjects.r('''
experimentalVariogramValues <- function(dataFrame,boundariesVector) {
require("gstat")
require("automap")
#gstatDataFrame <- gstat(id = "v",formula = v~1, locations = ~x+y+z, data = dataFrame)
# spatial only
a = variogram(v ~ 1, ~ x + y + z, dataFrame, beta=0,tol.ver=0.1, boundaries=boundariesVector)
#plotting seems not to work
#data(dataFrame)
#coordinates(dataFrame)=~x+y+z
#variogram=autofitVariogram(v~1, dataFrame,model="Exp")
#b=fit.variogram(a, vgm(1,"Exp",3))
#pdf("test.pdf")
#plot(a$dist,a$gamma)
#dev.off()
a
}
''')
else:
robjects.r('''
experimentalVariogramValues <- function(dataFrame,boundariesVector) {
require("gstat")
require("automap")
#gstatDataFrame <- gstat(id = "v",formula = v~1, locations = ~x+y+z, data = dataFrame)
# temporal only
colnames(dataFrame)[1] <- "z"
colnames(dataFrame)[3] <- "x"
#gstatDataFrame <- gstat(id = "v",formula = v~1, locations = ~x+y+z, data = dataFrame)
# normal
#a = variogram(v ~ 1, ~ x + y + z, dataFrame, beta=0,tol.ver=0.1, alpha=90,tol.hor=0.1, boundaries=boundariesVector)
# remove trend (universal kriging variogram)
a = variogram(v ~ x, ~ x + y + z, dataFrame, beta=0,tol.ver=0.1, alpha=90,tol.hor=0.1, boundaries=boundariesVector)
a
}
''')
experimentalVariogramValues = robjects.r['experimentalVariogramValues']
boundariesVectorR = robjects.FloatVector(boundariesVector)
expVariogram = experimentalVariogramValues(stackOfMapsAsRDataFrame, boundariesVectorR)
if savePlot:
robjects.r('''
saveExperimentalVariogram <- function(experimentalVariogram,fileName,maxVarPlot) {
require("gstat")
png(fileName)
plot(experimentalVariogram$dist,experimentalVariogram$gamma,ylim=c(0,maxVarPlot))
dev.off()
}
''')
saveExperimentalVariogram = robjects.r['saveExperimentalVariogram']
saveExperimentalVariogram(expVariogram, fileName, maxVarPlot)
# return expVariogram.r['dist'][0], expVariogram.r['gamma'][0]
return expVariogram[1], expVariogram[2]
def semvar(firstMap, secondMap):
nrPairs = getCellValue( pcr.cover(pcr.maptotal(pcr.scalar(pcr.pcrand(pcr.defined(firstMap), pcr.defined(secondMap)))), 0), 1, 1)
sumOfSquaredDiff = getCellValue( pcr.cover(pcr.maptotal(pcr.sqr(firstMap - secondMap) / 2.0), 0), 1, 1)
return nrPairs, sumOfSquaredDiff
def experimentalVariogramValuesInTime(stackOfMapsAsList, bounds):
nrPairsOfLags = [0.0] * len(bounds)
print(nrPairsOfLags)
sumOfSquaredDiffOfLags = [0.0] * len(bounds)
sumOfDists = [0.0] * len(bounds)
nMaps = len(stackOfMapsAsList)
for i in range(0, nMaps):
for j in range(i + 1, nMaps):
dist = math.fabs(i - j)
nrPairs, sumOfSquaredDiff = semvar(stackOfMapsAsList[i], stackOfMapsAsList[j])
k = 0
used = 0
for bound in bounds:
if (dist < bound) and (used == 0):
nrPairsOfLags[k] = nrPairsOfLags[k] + nrPairs
sumOfSquaredDiffOfLags[k] = sumOfSquaredDiffOfLags[k] + sumOfSquaredDiff
sumOfDists[k] = sumOfDists[k] + nrPairs * float(dist)
used = 1
k = k + 1
semvarList = map(operator.truediv, sumOfSquaredDiffOfLags, nrPairsOfLags)
distList = map(operator.truediv, sumOfDists, nrPairsOfLags)
return numpy.array(list(distList)), numpy.array(list(semvarList))
# jan=pcr.ifthen(pcr.pcrle(pcr.uniqueid(pcr.defined('jet00000.001')),100),pcr.scalar('jet00000.001'))
#test=[pcr.scalar('jet00000.001'),jan, pcr.scalar('jet00000.003'),pcr.scalar('jet00000.004')]
# a=experimentalVariogramValuesInTime(test,[1.5,7.0])
def semvarOfStackOfMapsInSpace(stackOfMapsAsList, lagX, lagY):
# lagX is shift of cells to right (positive)
# lagY is shift of cells up (positive)
nrPairsTot = 0.0
sumOfSquaredDiffTot = 0.0
for theMap in stackOfMapsAsList:
shiftedMap = pcr.shift(theMap, lagY, 0 - lagX)
nrPairs, sumOfSquaredDiff = semvar(theMap, shiftedMap)
nrPairsTot = nrPairsTot + nrPairs
sumOfSquaredDiffTot = sumOfSquaredDiffTot + sumOfSquaredDiff
return nrPairsTot, sumOfSquaredDiffTot
def createLagXAndLagYForBounds(bounds):
lagXlagYDist = []
maxLag = int(round(bounds[-1]))
possibleLags = range(0, maxLag, 1)
for i in possibleLags:
for j in possibleLags:
dist = math.sqrt((float(i)**2) + (float(j)**2))
if (dist < bounds[-1]) and not ((i == 0) and (j == 0)):
a = [ i, j, dist]
lagXlagYDist.append(a)
return lagXlagYDist
# b=createLagXAndLagYForBounds([0.2,4.0])
# print b
def experimentalVariogramValuesInSpace(stackOfMapsAsList, bounds):
nrPairsOfLags = [0.0] * len(bounds)
sumOfSquaredDiffOfLags = [0.0] * len(bounds)
sumOfDists = [0.0] * len(bounds)
nMaps = len(stackOfMapsAsList)
lagXAndLagY = createLagXAndLagYForBounds(bounds)
for i in lagXAndLagY:
dist = i[2]
nrPairs, sumOfSquaredDiff = semvarOfStackOfMapsInSpace(stackOfMapsAsList, i[0], i[1])
k = 0
used = 0
for bound in bounds:
if (dist < bound) and (used == 0):
nrPairsOfLags[k] = nrPairsOfLags[k] + nrPairs
sumOfSquaredDiffOfLags[k] = sumOfSquaredDiffOfLags[k] + sumOfSquaredDiff
sumOfDists[k] = sumOfDists[k] + nrPairs * float(dist)
used = 1
k = k + 1
semvarList = list(map(operator.truediv, sumOfSquaredDiffOfLags, nrPairsOfLags))
distList = list(map(operator.truediv, sumOfDists, nrPairsOfLags))
return distList, semvarList
# jan=pcr.ifthen(pcr.pcrle(pcr.uniqueid(pcr.defined('jet00000.001')),100),pcr.scalar('jet00000.001'))
##test=[pcr.scalar('jet00000.001'),jan, pcr.scalar('jet00000.003'),pcr.scalar('jet00000.004')]
#test=[pcr.scalar('jet00000.001'),pcr.scalar('jet00000.002'), pcr.scalar('jet00000.003'),pcr.scalar('jet00000.004')]
# a,b=experimentalVariogramValuesInSpace(test,[2.3,15,17.8])
# print a, b
def descriptiveStatistics(stackOfMapsAsRDataFrame):
robjects.r('''
descriptiveStatistics <- function(dataFrame) {
mean <- mean(dataFrame$v)
variance <- var(dataFrame$v)
c(mean,variance)
}
''')
descriptiveStatistics = robjects.r['descriptiveStatistics']
var = descriptiveStatistics(stackOfMapsAsRDataFrame)
return var
###########################
# some data management #
###########################
def cornerMap(cloneMap):
x = pcr.xcoordinate(cloneMap)
y = pcr.ycoordinate(cloneMap)
corner = pcr.pcrand(pcr.pcreq(x, pcr.mapminimum(x)), pcr.pcreq(y, pcr.mapmaximum(y)))
return corner
def convertListOfValuesToListOfNonSpatialMaps(listOfValues, cloneMap):
# non spatial, i.e. value is put in the corner with the largest x and y
# return values is always scalar
corner = cornerMap(cloneMap)
listOfMaps = []
for value in listOfValues:
map = pcr.ifthen(corner, pcr.scalar(value))
listOfMaps.append(map)
return listOfMaps
def convertListOfNonSpatialMapsToListOfValues(listOfMaps):
listOfValues = []
clone = pcr.ifthenelse(pcr.defined(listOfMaps[0]), pcr.boolean(1), 1)
x = pcr.xcoordinate(clone)
y = pcr.ycoordinate(clone)
corner = pcr.pcrand(pcr.pcreq(x, pcr.mapminimum(x)), pcr.pcreq(y, pcr.mapmaximum(y)))
for map in listOfMaps:
value = getCellValueAtBooleanLocation(corner, map)
listOfValues.append(value)
return listOfValues
# tests
# setclone("cloneSmall.map")
#a = pcr.scalar("norSmall.map")
#b = pcr.scalar("norSmall.map")*2
#c = pcr.scalar("norSmall.map")*3
#d = pcr.scalar("norSmall.map")*4
#e = pcr.scalar("norSmall.map")*5
# stackOfMapsAsList=[a,b,c,d,e]
# pcr.report(d,"testje")
#test = convertListOfNonSpatialMapsToListOfValues(stackOfMapsAsList)
#c = stackOfMapsToColAsArray(stackOfMapsAsList, 10)
#d = convertStackOfMapsToRDataFrame(c)
# boundVector=(1.5,2.5,30.5)
# dist,gamma=experimentalVariogramValues(d,boundVector,1,1,'pietje.pdf')
# print dist
# print gamma
# aListOfMaps=convertListOfValuesToListOfNonSpatialMaps(gamma,"cloneSmall.map")
# pcr.report(aListOfMaps[0],"testje")
#
#descrStats = descriptiveStatistics(d)
# bListOfMaps=convertListOfValuesToListOfNonSpatialMaps(descrStats,"cloneSmall.map")
# pcr.report(bListOfMaps[0],"mean")
#
#import time
# time.sleep(100)
def keepHistoryOfMaps(currentHistoryOfMaps, mapOfCurrentTimeStep, numberOfTimeStepsToKeep):
# uses deque objects instead of lists, thus, conversion is required to get a list simply
# by list(currentHistoryOfMaps)
currentHistoryOfMaps.append(mapOfCurrentTimeStep)
if len(currentHistoryOfMaps) > numberOfTimeStepsToKeep:
currentHistoryOfMaps.popleft()
if len(currentHistoryOfMaps) > numberOfTimeStepsToKeep:
print('warning: length of keepHistoryOfMaps is greater than number of timesteps to keep')
return currentHistoryOfMaps
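# A small sketch of the intended use; strings stand in for maps here (a
# hypothetical simplification). Uncomment to run:
# history = deque()
# for t in range(1, 6):
#     history = keepHistoryOfMaps(history, 'map_at_step_%d' % t, 3)
# print(list(history))   # -> ['map_at_step_3', 'map_at_step_4', 'map_at_step_5']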
##############################
# map algebra #
##############################
def nrCols(map):
x = pcr.xcoordinate(pcr.boolean(map))
xMax = pcr.mapmaximum(x)
xMin = pcr.mapminimum(x)
nrCols = ((xMax - xMin) / pcr.celllength()) + 1
return nrCols
def nrRows(map):
y = pcr.ycoordinate(pcr.boolean(map))
yMax = pcr.mapmaximum(y)
yMin = pcr.mapminimum(y)
nrCols = ((yMax - yMin) / pcr.celllength()) + 1
return nrCols
def nrCells(map):
return nrCols(map) * nrRows(map)
def corners(map):
left = edge(map, 4, 0)
right = edge(map, 6, 0)
top = edge(map, 8, 0)
bottom = edge(map, 2, 0)
return pcr.pcrgt(pcr.scalar(left) + pcr.scalar(right) + pcr.scalar(top) + pcr.scalar(bottom), 1.5)
def edges(map):
left = edge(map, 4, 0)
right = edge(map, 6, 0)
top = edge(map, 8, 0)
bottom = edge(map, 2, 0)
return pcr.pcror(pcr.pcror(left, right), pcr.pcror(top, bottom))
def edgeZone(map, nrCells):
# nrCells can be (should be) floating point
edgeMap = edges(map)
distToEdge = pcr.spread(edgeMap, 0, 1) / pcr.celllength()
edgeZoneMap = distToEdge < (nrCells - 1.0)
return edgeZoneMap
def booleanTrue(map):
# returns a map that is everywhere true
# removes mvs
noMVs = pcr.cover(map, 1)
return pcr.defined(noMVs)
def edge(map, side, distance):
# map should have y incr. bot to top
# returns boolean map with edge
# distance defines distance to map boundary of edge,
# e.g. distance 0 returns the real edge, distance
# distance is an integer or nominal or ordinal
# 1 returns one to the left/right/top/bottom
# side is ldd dirs, e.g. 4 returns left side
# side is an integer
# works only with whole coordinates (as cells are selected using equals on coors as floating points)
realDist = pcr.celllength() * pcr.scalar(distance)
if ((side == 4) or (side == 6)):
x = pcr.xcoordinate(booleanTrue(map))
if side == 4:
sideMap = pcr.pcreq(x, pcr.mapminimum(x) + realDist)
if side == 6:
sideMap = pcr.pcreq(x, pcr.mapmaximum(x) - realDist)
if ((side == 8) or (side == 2)):
y = pcr.ycoordinate(booleanTrue(map))
if side == 2:
sideMap = pcr.pcreq(y, pcr.mapminimum(y) + realDist)
if side == 8:
sideMap = pcr.pcreq(y, pcr.mapmaximum(y) - realDist)
return sideMap
def bottom(map):
'''
returns the bottom line of cells
works only with y increases bottom to top
any cell size (unlike edge)
'''
yCoordinate = pcr.ycoordinate(pcr.defined(map))
bottom = (yCoordinate == pcr.mapminimum(yCoordinate))
return bottom
def neighbourIsMissingComponent(nbShift, noMVOnMap):
NBIsMissingAll = pcr.ifthenelse(pcr.defined(nbShift), pcr.boolean(0), pcr.boolean(1))
NBIsMissing = pcr.pcrand(NBIsMissingAll, noMVOnMap)
return NBIsMissing
def neighbourIsMissingValueOrEdgeAndCellItselfIsDefined(map):
noMVOnMap = pcr.defined(map)
rightNBIsMissing = neighbourIsMissingComponent(pcr.shift(map, pcr.scalar(0), pcr.scalar(1)), noMVOnMap)
leftNBIsMissing = neighbourIsMissingComponent(pcr.shift(map, pcr.scalar(0), pcr.scalar(-1)), noMVOnMap)
upperNBIsMissing = neighbourIsMissingComponent(pcr.shift(map, pcr.scalar(-1), pcr.scalar(0)), noMVOnMap)
lowerNBIsMissing = neighbourIsMissingComponent(pcr.shift(map, pcr.scalar(1), pcr.scalar(0)), noMVOnMap)
return upperNBIsMissing, rightNBIsMissing, lowerNBIsMissing, leftNBIsMissing
# aMap=pcr.scalar("idOth.map")
#aMap=pcr.ifthen(pcr.uniform(1) < 0.9,pcr.scalar(2))
# one,two,three,four=neighbourIsMissingValueOrEdgeAndCellItselfIsDefined(aMap)
# pcr.report(one,'one.map')
# pcr.report(two,'two.map')
# pcr.report(three,'three.map')
# pcr.report(four,'four.map')
# pcr.report(aMap,'amap.map')
def moveRowsOrColumnsForPeriodicBoundaryCondition(map, direction):
# moves row/column next to edge cells to other edge
# direction is ldd dirs (e.g. 8 is bottom to top)
# direction is an integer
if (direction == 6):
cellsToShift = edge(map, 4, 1)
distanceToShift = pcr.scalar(nrCols(map) - 2)
shiftedMap = pcr.ifthen(edge(map, 6, 0), pcr.shift(map, 0, 0 - distanceToShift))
if (direction == 4):
cellsToShift = edge(map, 6, 1)
distanceToShift = pcr.scalar(nrCols(map) - 2)
shiftedMap = pcr.ifthen(edge(map, 4, 0), pcr.shift(map, 0, distanceToShift))
if (direction == 2):
cellsToShift = edge(map, 8, 1)
distanceToShift = pcr.scalar(nrRows(map) - 2)
shiftedMap = pcr.ifthen(edge(map, 2, 0), pcr.shift(map, 0 - distanceToShift, 0))
if (direction == 8):
cellsToShift = edge(map, 2, 1)
distanceToShift = pcr.scalar(nrRows(map) - 2)
shiftedMap = pcr.ifthen(edge(map, 8, 0), pcr.shift(map, distanceToShift, 0))
return shiftedMap
def periodicBoundaryCondition(map):
# note positive shifts are to left or to top..
# first value is vertical shift, second value is horizontal shift
left = moveRowsOrColumnsForPeriodicBoundaryCondition(map, 4)
right = moveRowsOrColumnsForPeriodicBoundaryCondition(map, 6)
top = moveRowsOrColumnsForPeriodicBoundaryCondition(map, 8)
bottom = moveRowsOrColumnsForPeriodicBoundaryCondition(map, 2)
tmp = pcr.cover(left, right, top, bottom, map)
newMap = pcr.ifthen(pcr.pcrnot(corners(map)), tmp)
return newMap
# test=pcr.scalar("idOth.map")
# nrCols=nrCols(test)
# pcr.report(nrCols,"test")
# testje=edge(test,8,1)
# pcr.report(testje,"testje")
#
# test2=moveRowsOrColumnsForPeriodicBoundaryCondition(test,8)
# pcr.report(test2,"test2")
# test3=periodicBoundaryCondition(test)
# pcr.report(test3,"test3")
# test4=edges(test)
# pcr.report(test4,"test4")
def periodicBoundaryConditionNumpy(map):
a = pcr.pcr2numpy(map, 1)
# second left edge
secondLeft = a[:, 1]
# second right edge
secondRight = a[:, a.shape[1] - 2]
# remove left and right edge
new = numpy.delete(a, (0, a.shape[1] - 1), 1)
# add new left edge
newLeft = numpy.insert(new, 0, secondRight, 1)
# add new right edge
b = numpy.insert(newLeft, newLeft.shape[1], secondLeft, 1)
# second upper edge
secondUpper = b[1, :]
# second right edge
secondLower = b[a.shape[0] - 2, :]
# remove upper and lower edge
bNew = numpy.delete(b, (0, a.shape[0] - 1), 0)
# add upper edge
newTop = numpy.insert(bNew, 0, secondLower, 0)
# add new lower edge
c = numpy.insert(newTop, newTop.shape[0], secondUpper, 0)
outMap = pcr.numpy2pcr(pcr.Scalar, c, 0)
return outMap
# test=pcr.scalar("idOth.map")
# test2=periodicBoundaryConditionNumpy(test)
# pcr.report(test2,"test2")
# test3=periodicBoundaryCondition(test)
# pcr.report(test3,"test3")
def createToCellsPeriodicBoundaryCondition(clone):
list = []
list.append(edge(clone, 6, 0))
list.append(edge(clone, 8, 0))
list.append(edge(clone, 4, 0))
list.append(edge(clone, 2, 0))
return list
def createFromCellsPeriodicBoundaryCondition(clone):
list = []
list.append(edge(clone, 4, 1))
list.append(edge(clone, 2, 1))
list.append(edge(clone, 6, 1))
list.append(edge(clone, 8, 1))
return list
def colNumber(clone):
return pcr.ordinal((pcr.roundoff((pcr.xcoordinate(clone) / pcr.celllength()))))
def rowNumber(clone):
return pcr.ordinal((pcr.roundoff((pcr.ycoordinate(clone) / pcr.celllength()))))
def periodicBoundaryConditionAreatotal(map, fromCells, toCells, cols, rows):
right = pcr.ifthen(toCells[0], pcr.areatotal(pcr.ifthen(fromCells[0], map), rows))
bottom = pcr.ifthen(toCells[1], pcr.areatotal(pcr.ifthen(fromCells[1], map), cols))
left = pcr.ifthen(toCells[2], pcr.areatotal(pcr.ifthen(fromCells[2], map), rows))
top = pcr.ifthen(toCells[3], pcr.areatotal(pcr.ifthen(fromCells[3], map), cols))
return pcr.cover(right, bottom, left, top, map)
# test=pcr.scalar("idOth.map")
# clone=pcr.defined(test)
#
# fromCells=createFromCellsPeriodicBoundaryCondition(clone)
# toCells=createToCellsPeriodicBoundaryCondition(clone)
# cols=colNumber(clone)
# rows=rowNumber(clone)
# test2=periodicBoundaryConditionAreatotal(test,fromCells,toCells,cols,rows)
# pcr.report(test,"test")
# pcr.report(test2,"test2")
# test3=periodicBoundaryCondition(test)
# pcr.report(test2-test3,"diff")
##########################
# sampling schemes #
##########################
def samplingScheme(clone, nrSamples, fractionShortDistance, separationDistance, nrCellsToRight, nrCellsToTop):
# get numb. of samples
nrSamplesGrid = pcr.rounddown((1.0 - fractionShortDistance) * nrSamples)
nrSamplesShortDistance = nrSamples - nrSamplesGrid
# get possible locs
colnumber = pcr.roundoff((pcr.xcoordinate(clone) / pcr.celllength()))
tmp = pcr.roundoff((pcr.ycoordinate(clone) / pcr.celllength()))
rownumber = tmp - pcr.mapminimum(tmp) + 1
possibleCols = pcr.pcreq(pcr.pcrmod(colnumber, separationDistance), 0)
possibleRows = pcr.pcreq(pcr.pcrmod(rownumber, separationDistance), 0)
possibleLocations = pcr.pcrand(possibleCols, possibleRows)
# get grid area
nrRowsCols = pcr.roundup(pcr.sqrt(nrSamplesGrid))
locColSel = pcr.ifthenelse(pcr.pcrle(colnumber, (separationDistance * nrRowsCols)), possibleLocations, 0)
locRowSel = pcr.ifthenelse(pcr.pcrle(rownumber, (separationDistance * nrRowsCols)), possibleLocations, 0)
locsArea = pcr.pcrand(locColSel, locRowSel)
# get samples
samplesAtGrid = pcr.cover(pcr.pcrle(pcr.order(pcr.ifthen(locsArea, pcr.uniqueid(clone))), nrSamplesGrid), 0)
# get samples with extra samples
randomSampleNumbers = pcr.ordinal(pcr.ifthen(samplesAtGrid, pcr.order(pcr.uniform(pcr.ifthen(samplesAtGrid, pcr.boolean(1))))))
alreadyCovered = pcr.defined(randomSampleNumbers)
for sample in range(1, int(getCellValue(pcr.mapmaximum(randomSampleNumbers), 1, 1)) + 1):
theSample = pcr.cover(pcr.pcreq(sample, randomSampleNumbers), 0)
nbSample = pcr.pcrand(pcr.pcrgt(pcr.window4total(pcr.scalar(theSample)), 0.5), pcr.pcrnot( alreadyCovered))
extraSample = pcr.cover(pcr.pcrlt(pcr.order(pcr.ifthen(nbSample, pcr.uniform(1))), 1.5), 0)
alreadyCovered = pcr.pcror(alreadyCovered, extraSample)
totalSamples = pcr.maptotal(pcr.scalar(alreadyCovered))
# print getCellValue(totalSamples,1,1)
if getCellValue(totalSamples, 1, 1) > (nrSamples - 1.0 + 0.1):
break
sampleNumbers = pcr.ordinal(pcr.ifthen(alreadyCovered, pcr.order(pcr.uniqueid(pcr.ifthen(alreadyCovered, pcr.boolean(1))))))
# shift the cells to centre
centreCol = pcr.maptotal(colnumber) / nrCells(clone)
centreRow = pcr.maptotal(rownumber) / nrCells(clone)
centreColSamples = pcr.maptotal(pcr.ifthen(alreadyCovered, colnumber)) / pcr.maptotal(pcr.ifthen(alreadyCovered, pcr.scalar(1)))
centreRowSamples = pcr.maptotal(pcr.ifthen(alreadyCovered, rownumber)) / pcr.maptotal(pcr.ifthen(alreadyCovered, pcr.scalar(1)))
sampleNumbersShifted = pcr.shift(sampleNumbers, centreRow - centreRowSamples - nrCellsToTop, centreColSamples - centreCol - nrCellsToRight)
return pcr.cover(sampleNumbersShifted, 0)
def samplingSchemeRandomShift(clone, nrSamples, fractionShortDistance, separationDistance, maxNrCellsToRight, maxNrCellsToTop):
uniformRight = pcr.mapuniform()
shiftRight = pcr.roundoff((uniformRight - 0.5) * maxNrCellsToRight)
uniformTop = pcr.mapuniform()
shiftTop = pcr.roundoff((uniformTop - 0.5) * maxNrCellsToTop)
sampleNumbersShifted = samplingScheme(clone, nrSamples, fractionShortDistance, separationDistance, shiftRight, shiftTop)
return sampleNumbersShifted
def samplingSchemeSubset(clone, nrSamples, fractionShortDistance, separationDistance, nrCellsToRight, nrCellsToTop, nrSamplesRemove, uniformMap,
realNrSamples ):
# nrSamples, nr of samples as if it were a normal scheme (regular, all grid positions used)
# nrSamplesRemove, nr of samples removed again from normal scheme, random positions
# realNrSamples real number of samples (to represent removal)
# get numb. of samples
nrSamplesGrid = pcr.rounddown((1.0 - fractionShortDistance) * nrSamples)
nrSamplesShortDistance = nrSamples - nrSamplesGrid
# get possible locs
colnumber = pcr.roundoff((pcr.xcoordinate(clone) / pcr.celllength()))
tmp = pcr.roundoff((pcr.ycoordinate(clone) / pcr.celllength()))
rownumber = tmp - pcr.mapminimum(tmp) + 1
possibleCols = pcr.pcreq(pcr.pcrmod(colnumber, separationDistance), 0)
possibleRows = pcr.pcreq(pcr.pcrmod(rownumber, separationDistance), 0)
possibleLocations = pcr.pcrand(possibleCols, possibleRows)
# get grid area
nrRowsCols = pcr.roundup(pcr.sqrt(nrSamplesGrid))
locColSel = pcr.ifthenelse(pcr.pcrle(colnumber, (separationDistance * nrRowsCols)), possibleLocations, 0)
locRowSel = pcr.ifthenelse(pcr.pcrle(rownumber, (separationDistance * nrRowsCols)), possibleLocations, 0)
locsArea = pcr.pcrand(locColSel, locRowSel)
# get samples
samplesAtGrid = pcr.cover(pcr.pcrle(pcr.order(pcr.ifthen(locsArea, pcr.uniqueid(clone))), nrSamplesGrid), 0)
# randomly remove samples
samplesAtGridRandomSampleNumbers = pcr.order(pcr.ifthen(pcr.pcrne(samplesAtGrid, 0), uniformMap) )
pcr.report(samplesAtGridRandomSampleNumbers, 'rem.map')
remove = pcr.cover(samplesAtGridRandomSampleNumbers < nrSamplesRemove, 0)
newSamples = pcr.cover(pcr.ifthenelse(pcr.pcrnot(remove), samplesAtGrid, 0), 0)
samplesAtGrid = newSamples
pcr.report(samplesAtGrid, 'testje.map')
# get samples with extra samples
randomSampleNumbers = pcr.ordinal(pcr.ifthen(samplesAtGrid, pcr.order(pcr.uniform(pcr.ifthen(samplesAtGrid, pcr.boolean(1))))))
alreadyCovered = pcr.defined(randomSampleNumbers)
for sample in range(1, int(getCellValue(pcr.mapmaximum(randomSampleNumbers), 1, 1)) + 1):
theSample = pcr.cover(pcr.pcreq(sample, randomSampleNumbers), 0)
nbSample = pcr.pcrand(pcr.pcrgt(pcr.window4total(pcr.scalar(theSample)), 0.5), pcr.pcrnot( alreadyCovered))
extraSample = pcr.cover(pcr.pcrlt(pcr.order(pcr.ifthen(nbSample, pcr.uniform(1))), 1.5), 0)
alreadyCovered = pcr.pcror(alreadyCovered, extraSample)
totalSamples = pcr.maptotal(pcr.scalar(alreadyCovered))
# print getCellValue(totalSamples,1,1)
if getCellValue(totalSamples, 1, 1) > (realNrSamples - 1.0 + 0.1):
break
sampleNumbers = pcr.ordinal(pcr.ifthen(alreadyCovered, pcr.order(pcr.uniqueid(pcr.ifthen(alreadyCovered, pcr.boolean(1))))))
# shift the cells to centre
centreCol = pcr.maptotal(colnumber) / nrCells(clone)
centreRow = pcr.maptotal(rownumber) / nrCells(clone)
centreColSamples = pcr.maptotal(pcr.ifthen(alreadyCovered, colnumber)) / pcr.maptotal(pcr.ifthen(alreadyCovered, pcr.scalar(1)))
centreRowSamples = pcr.maptotal(pcr.ifthen(alreadyCovered, rownumber)) / pcr.maptotal(pcr.ifthen(alreadyCovered, pcr.scalar(1)))
sampleNumbersShifted = pcr.shift(sampleNumbers, centreRow - centreRowSamples - nrCellsToTop, centreColSamples - centreCol - nrCellsToRight)
return pcr.cover(sampleNumbersShifted, 0)
def samplingSchemeRandomShiftSubset(clone, nrSamples, fractionShortDistance, separationDistance, maxNrCellsToRight, maxNrCellsToTop, nrSamplesRemove,
uniformMap, realNrSamples):
uniformRight = pcr.mapuniform()
shiftRight = pcr.roundoff((uniformRight - 0.5) * maxNrCellsToRight)
uniformTop = pcr.mapuniform()
shiftTop = pcr.roundoff((uniformTop - 0.5) * maxNrCellsToTop)
sampleNumbersShifted = samplingSchemeSubset(clone, nrSamples, fractionShortDistance, separationDistance, shiftRight, shiftTop, nrSamplesRemove, uniformMap,
realNrSamples)
return sampleNumbersShifted
# clone=pcr.defined("clone.map")
# test=samplingSchemeRandomShift(clone,100,0.3,3,4,40)
# test2=samplingScheme(clone,100,0.3,3,0,0)
# pcr.report(test,"pietje")
# pcr.report(test2,"pietje2")
def createGifAnimation(name, samples):
# postmcloop function to convert individual png or gif
# files to animated gif
# use gifview (from gifsicle package) to animate, man gifview for
# options
# or gimp, filters -> animation -> playback
for sample in samples:
baseName = pcrfw.generateNameS(name, sample)
command = "convert " + baseName + "* " + baseName + "_ani.gif"
os.system(command)
# alternatief: for i in ete*; do convert $i ${i%%png}gif; done
# en dan gifsicle to make the animation (maybe faster)
def mapaverage(aMap):
total = pcr.maptotal(aMap)
nrCellsNoMV = pcr.maptotal(pcr.scalar(pcr.defined(aMap)))
return total / nrCellsNoMV
########################################
### FUNCTIONS FOR PARTICLE FILTERING ###
########################################
def createFileNameToReportAVariableForSuspend(classInstance, variableName, currentTimeStep, currentSampleNumber):
className = classInstance.__class__.__name__
classNameDir = str(currentSampleNumber) + '/stateVar/' + className
if not os.path.exists(classNameDir):
os.mkdir(classNameDir)
return pcrfw.generateNameST('stateVar/' + className + '/' + variableName, currentSampleNumber, currentTimeStep)
# print className + "/" variableName
def letterSequence():
alphabeth = string.ascii_letters[0:26]
letters = []
for firstLetter in alphabeth:
for secondLetter in alphabeth:
letters.append(firstLetter + secondLetter)
return letters
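# e.g. letterSequence()[:4] -> ['aa', 'ab', 'ac', 'ad'] (676 two-letter codes in total)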
def reportMemberVariablesOfAClassForSuspend(classInstance, currentTimeStep, currentSampleNumber):
b = vars(classInstance)
names = letterSequence()
i = 0
for name in sorted(b):
if isinstance(b[name], pcr._pcraster.Field):
fileName = createFileNameToReportAVariableForSuspend(classInstance, names[i], currentTimeStep, currentSampleNumber)
pcr.report(b[name], fileName)
# switch on for testing
#fileName = createFileNameToReportAVariableForSuspend(classInstance,names[i],currentTimeStep,currentSampleNumber)
# print 'report', classInstance, name, fileName
i = i + 1
print('In reporting, the number of member vars in ', classInstance, ' is ', i)
def readMemberVariablesOfAClassForResume(classInstance, currentTimeStep, currentSampleNumber):
b = vars(classInstance)
names = letterSequence()
i = 0
for name in sorted(b):
if isinstance(b[name], pcr._pcraster.Field):
fileName = createFileNameToReportAVariableForSuspend(classInstance, names[i], currentTimeStep, currentSampleNumber)
mapToResume = pcr.readmap(fileName)
vars(classInstance)[name] = mapToResume
# switch on for testing
#fileName = createFileNameToReportAVariableForSuspend(classInstance,names[i],currentTimeStep,currentSampleNumber)
# print 'read', classInstance, name, fileName
i = i + 1
print('In reading, the number of member vars in ', classInstance, ' is ', i)
def printMemberVariables(classInstance):
a = sorted(vars(classInstance))
print(classInstance)
print('number of member variables ', len(a))
print(a)
print()
def removePeriodsFromAListOfTimesteps(filterTimesteps, periodsToExclude):
newFilterTimesteps = filterTimesteps[:]
for period in periodsToExclude:
lower = period[0]
upper = period[1]
oldFilterTimesteps = newFilterTimesteps[:]
newFilterTimesteps = []
for timestep in oldFilterTimesteps:
if (timestep < lower) or (timestep > upper):
newFilterTimesteps.append(timestep)
return newFilterTimesteps
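# A small hypothetical example; uncomment to run:
# print(removePeriodsFromAListOfTimesteps(list(range(1, 13)), [(3, 5), (9, 10)]))
# -> [1, 2, 6, 7, 8, 11, 12]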
##########################
### OBJECTIVE FUNCTIONS ##
##########################
def hydrographOF(observedDischarge, modelledDischarge):
"""Returns nash sutcliffe coefficient and mean square error
observedDischarge -- numpy array
modelledDischarge -- numpy array
"""
squares = numpy.square(observedDischarge - modelledDischarge)
squaresO = numpy.square(observedDischarge - numpy.mean(observedDischarge))
NSE = 1.0 - (numpy.sum(squares) / numpy.sum(squaresO))
MSE = numpy.sum(squares) / len(squares)
a = numpy.corrcoef(observedDischarge, modelledDischarge)
sumObs = numpy.sum(observedDischarge)
sumMod = numpy.sum(modelledDischarge)
slope, intercept, r_value, p_value, std_err = scipy.stats.linregress(observedDischarge, modelledDischarge)
print('sumObs is', sumObs)
print('sumMod is', sumMod)
print('lin. regression; slope, intercept, r2, p value are', slope, intercept, r_value**2.0, p_value)
return NSE, MSE
def hydrographOFProb(observedDischarge, modelledDischarge):
"""Returns nash sutcliffe coefficient and mean square error, probabilistic, i.e. modelled is a MC sample
observedDischarge -- numpy array
modelledDischarge -- numpy array
"""
squares = numpy.square(observedDischarge - modelledDischarge.T)
squaresO = numpy.square(observedDischarge - numpy.mean(observedDischarge))
NSE = 1.0 - (numpy.sum(squares, axis=1) / numpy.sum(squaresO))
# MSE=numpy.sum(squares)/len(squares)
# sumObs=numpy.sum(observedDischarge)
# sumMod=numpy.sum(modelledDischarge)
# print 'sumObs is', sumObs
# print 'sumMod is', sumMod
return NSE
##############################################
### FUNCTIONS FOR GENERATING RANDOM FIELDS ###
##############################################
def mapuniformBounds(min, max, fixedValue='aString..', useRealization=True):
"""Assigns a random value taken from a uniform distribution
min -- lower bound, python floating point or pcraster type (no default)
max -- upper bound of interval, python or pcraster type (no default)
fixedValue -- value assigned when useRealization is False (python floating point or pcraster type)
useRealization -- switch to assign fixedValue (default is True)
fixedValue should be a PCRaster type when output is used in PCRaster functions
"""
if useRealization:
a = pcr.mapuniform()
range = pcr.scalar(max) - pcr.scalar(min)
field = (a * range) + min
return field
else:
return fixedValue
def areauniformBounds(min, max, areaMap, fixedValue='aString..', useRealization=True):
"""Assigns a random value taken from a uniform distribution
min -- lower bound, python or pcraster type
max -- upper bound of interval, python or pcraster type
areamap -- classified map
fixedValue -- value assigned when useRealization is False
useRealization -- switch to assign fixedValue
fixedValue should be a PCRaster type when output is used in PCRaster functions"""
if useRealization:
a = pcr.areauniform(pcr.spatial(areaMap))
range = pcr.scalar(max) - pcr.scalar(min)
field = (a * range) + min
return field
else:
return fixedValue
def mapNormalRelativeError(input, standardDeviationForInputOfOne):
a = pcr.mapnormal()
error = a * input * standardDeviationForInputOfOne
realization = input + error
return realization
def mapgamma(shapeParameter):
'''Returns a realization from the gamma distribution with a mean of one
shapeParameter is a Python floating point
return value is a Python floating point
'''
scaleParameter = 1.0 / shapeParameter
realization = random.gammavariate(shapeParameter, scaleParameter)
return realization
def booleanMapWithOnlyMissingValues(map):
nonMV = pcr.defined(map)
result = pcr.ifthen(pcr.pcrnot(nonMV), pcr.boolean(map))
return result
def lookupColumnsInTableScalar(table, classifiedMap):
'''Reads 2nd to nth column in table and returns for each of these
columns a map. Maps are returned as a list of maps. First column is
key column linked to the classified input map classifiedMap. Returns
scalar maps.
table -- input table giving for each class on classifiedMap a value
classifiedMap -- classes
Example of a table (ascii file):
1 0.2 0.5 0.3
2 0.9 0.05 0.05
3 0.0 0.01 0.99
First column should give all codes on classifiedMap, second up to n columns
give values assigned to output, second column value of code 1, third
value of code 2, etc.
'''
# read the table
tableNP = numpy.loadtxt(table)
# create the output, empty
emptyMap = pcr.scalar(booleanMapWithOnlyMissingValues(classifiedMap))
nrOfOutputMaps = len(tableNP[0]) - 1
resultMaps = []
i = 0
while i < nrOfOutputMaps:
resultMaps.append(emptyMap)
i += 1
for row in tableNP:
key = row[0]
outputs = row[1:]
areaToBeAssigned = pcr.pcreq(pcr.nominal(key), classifiedMap)
outputNumber = 0
for value in outputs:
resultMap = pcr.ifthenelse(areaToBeAssigned, value, resultMaps[outputNumber])
resultMaps[outputNumber] = resultMap
outputNumber += 1
return resultMaps
def discreteProbabilityDistributionPerArea(table, classifiedMap):
'''Draws a realization from a discrete probability distribution given
for each class. All cells in a class get the same realization.
table -- input table giving for each class on classifiedMap the distribution
classifiedMap -- classes
Returns a nominal map with values between 1 and the number of columns in table - 1.
Example of a table (ascii file):
1 0.2 0.5 0.3
2 0.9 0.05 0.05
3 0.0 0.01 0.99
First column should give all codes on classifiedMap, second up to n columns
give probability distribution, second column probability of code 1, third
probability of code 2, etc. Result may contain values between 1 and n-1.
'''
randomValue = pcr.areauniform(classifiedMap)
result = discreteProbabilityDistribution(table, classifiedMap, randomValue)
return result
def discreteProbabilityDistributionPerCell(table, classifiedMap):
'''Same as discreteProbabilityDistributionPerArea, but each cell in output
is an independent realization
'''
randomValue = pcr.uniform(pcr.boolean(1))
result = discreteProbabilityDistribution(table, classifiedMap, randomValue)
return result
def discreteProbabilityDistribution(table, classifiedMap, uniformMap):
'''Draws a realization from a discrete probability distribution given
for each class. All cells in a class get the same realization
table -- input table giving for each class on classifiedMap the distribution
classifiedMap -- classes
uniformMap -- random values between zero and one used to create the realizations
Returns a nominal map with values between 1 and the number of columns in table - 1.
Example of a table (ascii file):
1 0.2 0.5 0.3
2 0.9 0.05 0.05
3 0.0 0.01 0.99
First column should give all codes on classifiedMap, second up to n columns
give probability distribution, second column probability of code 1, third
probability of code 2, etc. Result may contain values between 1 and n-1.
'''
# create series of maps with the probabilities
probabilities = lookupColumnsInTableScalar(table, classifiedMap)
# get number of output classes
tableNP = numpy.loadtxt(table)
nrOfOutputClasses = len(tableNP[0]) - 1
# calculate cumulative probabilities
cumulativeProbabilities = []
cumulativeProbability = pcr.scalar(0)
for probability in probabilities:
cumulativeProbability = probability + cumulativeProbability
cumulativeProbabilities.append(cumulativeProbability)
randomValue = uniformMap
# create empty result map
result = pcr.nominal(booleanMapWithOnlyMissingValues(classifiedMap))
value = nrOfOutputClasses
reversedCumProbs = reversed(cumulativeProbabilities)
for cumProb in reversedCumProbs:
print(value)
result = pcr.ifthenelse(pcr.pcrlt(randomValue, cumProb), value, result)
value = value - 1
return result
# a=0.0
# for i in range(1,100000):
# test = mapgamma(6.0)
# a=a+test
# print 'gamma real :', test
# print 'gamma mean :', a/100000.0
###############################
### FUNCTIONS FOR REPORTING ###
###############################
def reportAListOfMaps(listOfMaps, baseName, timeStep, sample):
alphabeth = string.ascii_letters[0:26]
i = 0
for map in listOfMaps:
totBaseName = baseName + str(alphabeth[i])
reportName = pcrfw.generateNameST(totBaseName, sample, timeStep)
pcr.report(map, reportName)
i = i + 1
def reportAVariableAtTheLastTimeStep(currentTimeStep, currentSampleNumber, nrTimeSteps, variable, name):
if currentTimeStep == nrTimeSteps:
pcr.report(variable, pcrfw.generateNameS(name + '.map', currentSampleNumber))
#####################################
### FUNCTIONS FOR FRAGSTATS STUFF ###
#####################################
def proportionOfClassInAreas(areas, classMap, selectedClass):
'''
Calculates the fractional area of a class on classMap
for each area on areas map. The class on classMap
is given by selectedClass.
'''
areaOfAreas = pcr.areatotal(pcr.spatial(pcr.scalar(1)), areas)
selectedClassMap = pcr.scalar((classMap == selectedClass))
proportionOfClassInAreasMap = pcr.areatotal(selectedClassMap, areas) / areaOfAreas
return proportionOfClassInAreasMap
def convertValuesInAreasToSeparateMaps(areas, numberOfAreas, valuesInAreas):
'''
Takes the maximum value of valuesInAreas in each area on areas and
stores it in a map. These maps are collected and returned as a list
of maps.
'''
separateMaps = []
for i in range(1, numberOfAreas + 1):
separateMap = pcr.mapmaximum(pcr.ifthen(areas == i, valuesInAreas))
separateMaps.append(separateMap)
return separateMaps
def selectFromEachAreaOneCell(areas):
'''
Selects from each class on areas one cell and returns
the class value from area at that cell. Remaining cells
have zero value at result.
'''
uniqueId = pcr.uniqueid(pcr.defined(areas))
oneCellPerArea = pcr.ifthenelse(pcr.areamaximum(uniqueId, areas) == uniqueId, areas, 0)
return oneCellPerArea
def patchSize(areas):
areasClumped = pcr.clump(areas)
nrCellsPerPatch = pcr.areatotal(pcr.spatial(pcr.scalar(1)), areasClumped)
singleCellPerArea = selectFromEachAreaOneCell(areasClumped) != 0
nrCellsPerPatchStoredOnOneCellPerPatch = pcr.ifthen(singleCellPerArea, nrCellsPerPatch)
meanPatchSize = pcr.areaaverage(nrCellsPerPatchStoredOnOneCellPerPatch, areas)
return meanPatchSize
# areas=(pcr.nominal('cl000000.014'))
# pcr.report(areas,'areas.map')
# test=selectFromEachAreaOneCell(areas)
# pcr.report(test,'test.map')
# patchsize=patchSize(areas)
# pcr.report(patchsize,'patchsize.map')
A/N: Hi everyone, my name is Kieran Wespell
This is the first fanfic I have ever written, and I really want to make a good first impression. So I made sure I put a lot of effort into this first chapter (or episode, as I like to call them).
So yeah, the premise of this fic is that it's a season of Star vs The Forces of Evil, and every chapter is an episode. Each chapter will have its own premise and its own conclusion. Of course, I'll make sure to develop the characters with each chapter, and I'll even have a major story arc that ties this entire story together.
I'll always start these chapters with a premise as such:
Episode 1: Marco's Blue Belt Test Part 1
Description: When an accident occurs during Marco's all-important Blue Belt Test, it's up to Star to race against the clock to help her friend get his Blue belt.
Just like that.
If you want you can leave a review at the end, as those will be extremely helpful. Anyways, enjoy!
Credit to kprovido from DeviantArt for the cover art
Disclaimer: I do not own Star vs the Forces of Evil; that belongs to its creator, Daron Nefcy. Although if I did own it, I would add some more background music; it can get really quiet at times.
(Edit 8/6/2015: Split Episodes in 2 parts for less reading load for one chapter. I know this will screw up the current reviews for each episode, but I feel like this is necessary. To any followers, I'm sorry for the spam of story updates this splitting will cause.)
As they were walking down the sidewalk together, Janna could sense something bothering her best friend Jackie. Perhaps it was the fact that Jackie had barely made a dent in her cup of frozen yogurt. Or perhaps it was the way she limped along on her skateboard with a seemingly lifeless stride, as opposed to her usual confident vigor. Or maybe it was the blank expression she wore as they cruised the sidewalk.
"You alright Jackie?" Janna asked. "You haven't been saying much since I came to pick you up."
"What?" Jackie replied innocuously. "I'm fine Janna, just kinda tired if anything." Jackie attempted to put on a bleak smile for reassurance. Janna wasn't buying it.
"Come on Jackie, you know this is so not like you at all. I got some time left to kill, so how about we go to the half-pipe and find some other skater bros there."
Jackie gave a long, drawn-out expression of doubt, as if she were mulling over the idea.
"Rather not," Jackie replied. "I just got out of a relationship, remember? Not really in the mood to hunker down with another guy at the moment."
Janna's eyes lit up in realization. "Oh, this is about Justin isn't it? No wonder you've been so mopey all day."
"What, no, Janna this isn't about him. And besides it was an amicable split, no hard feelings whatsoever. Or at least, I don't have any."
"Eh, whatever."
They passed by the windowsill of the local strip mall dojo when they heard the yelps of a familiar boy from inside the building. The girls looked through the window to find a Latino boy in his karate uniform bowing down to another man, who could be assumed to be his superior judging by the man's muscular build and black belt. The man was holding a thick stack of wooden board halves in each hand. The crowd in the bleachers gave a round of applause for the boy.
"Yeah, you go Marco!" Jackie said, as Janna cheered him on, assuming he'd done something impressive.
The girls saw a blonde girl in a sleeveless, turquoise dress with rainbow stripes on the chest jump up from her seat and wail in excitement and cheer. She held a purple wand in her right hand. In the midst of her excitement, a blast of purple light shot off from the wand, shattering part of the roof above her. Some of the crowd turned their heads towards her. The girl sat back down meekly.
Jackie watched Marco ready himself as the instructor called down another pupil from the row of other students in the front. The instructor picked up several wooden boards from a mountainous stack off to the side. The instructor held onto one side with both hands, while the volunteer held on to the other.
Marco shifted to the side, ready to strike. In an instant, he jumped high in the air and kicked the boards with the side of his foot, shattering the stack of wood. Another round of applause for Marco, with Jackie and Janna joining in from the outside.
"Impressive," Jackie said.
"You said it," Janna replied.
This time, the instructor brought out a concrete board instead of a wooden one. Marco's eyes flared in excitement as he began to get acquainted with the board. He motioned his legs in a roundhouse kick onto the concrete.
Marco prepared for his kick by readying his stance once more. His body swiveled to the side, as he forced his legs up. For a split-second Jackie's eyes caught onto Marco's. The wrong part of Marco's foot made impact with the solid concrete.
Even through the glass, Marco's muffled screams of agony could be heard outside. Jackie watched as he writhed on his back, cradling his battered foot closer to his body. Janna reeled back in horror. The blonde girl from the bleachers jumped down to her friend in an instant. She looked at his foot and had to cover her mouth to keep from vomiting.
Jackie and Janna drifted by in a sort of shell-shocked manner. Jackie was the first to break the sudden silence.
"He's not gonna be okay," she said. "Maybe we should go back in and help."
"Nah, it'll be fine," Janna replied. "Star is with him, and I bet she can whip something up to fix his foot. Let's not interfere."
Jackie nodded. A faint growl trembled in her stomach, and she clenched an arm over it. Janna picked up on it with her dolphin-like hearing.
"Jackie, finish up your yogurt. I'm taking you to that Thai place you like. No buts, missy."
Janna knew Jackie wasn't going to argue over free food.
Marco cradled his broken foot next to his body. From the corner of his eye, he could see the horrific condition his foot was in. When he lifted his leg up, the foot dangled in front of him. The redness and swelling of the foot contrasted with the rest of his body. Marco whimpered in pain underneath his breath.
"Marco!" Star said, as she jumped down off the bleachers, shoving away a few spectators on the way down. She rushed to Marco's side and crouched down next to him, putting a hand on his shoulder in sympathy. Marco stared into Star's watering, concerned eyes.
"It's not so bad, Star? Right?"
Star glanced at Marco's foot. She could see partial bone fragments trying to poke out of his foot. His toes bent in awkward, unnatural positions, one in particular crookedly bent back all the way to the top of his foot. Marco could see Star place her hands on her mouth, holding back the urge to vomit. This made Marco shrink back in shame.
"Relax, Marco, it's not as bad as it looks, I swear." Star tried to reassure Marco, but even he could see the obvious horrific appearance of his foot.
Jeremy, a boy no older than seven or eight in a black belt, walked up to Marco with the biggest, patronizing grin on his face.
"She's right, it only a scratch Marco." Marco stared at the little boy with contempt. Jeremy's condescending comments were the last thing he needed in this situation. "So, come on Marco, if it's not that bad, just get up and break the board already. Master Keith doesn't have all day."
A man with a bulky, muscular build in a black uniform with the arms sleeves torn took a knee next to Marco. Marco, tried to turn the other way in embarrassment, not wanting to look at his master in the eye.
"I can get someone to take you to the hospital, and we can reschedule this test for another day, maybe as soon as your leg heals," Master Keith said. "No extra charge this time."
"That's a shame, Marco," Jeremy said in a presumptuous manner. "I guess you aren't getting a blue belt today. It's too bad, I was really rooting for you to get it today. At this rate, with your luck, you might just be stuck a green belt by the time I get a 2nd degree black belt."
Marco felt a nerve struck, and he could only grimace at the doting, arrogant child. He pulled Star in desperation, holding back the tears forming underneath his eyelids. "Star, please, I know you know a spell to fix this. Please Star, I need to get this blue belt." Marco choked in his sobs.
"Okay, okay Marco. It's going to be alright." Star tried to reassure her best friend. Marco watched as she bit on her wand, eyes of to the side, hoping she was rifling through her mind for some easy fix for his leg. "I got it Marco." She lifted her wand on top of his battered leg, the top glowing in a bright, luminous light.
"Legolas Defixio!"
A shroud of purple light emanated from the wand, concealing the bruised and horrifically battered foot. The pain from the foot began dissipating underneath the purple glow. A feeling of euphoria and pleasure travelled from Marco's battered right leg throughout his entire body. Marco closed his eyes and blushed from the exhilarating bliss.
When he opened his eyes again, Marco could only find a rounded stump where his foot used to be.
Both Star and Marco shrieked in horror. Star began hyperventilating before raising her wand once more.
"Animus de Cyclo!"
A beam of light was directed onto Marco's leg, transforming it into an oversized chicken leg. Marco's eyes could only open wide in terror.
"—de Cyclo!" Star said, trying again
Marco's leg was replaced with a gazelle's leg.
"—de Cyclo!" Star said in desperation.
A frog leg replaced the gazelle leg. Marco watched as Star slammed her head in frustration.
Star growled in her grievance, "Returnius Arma Normalius," before a beam of light was fired directly onto Marco's leg one more time. Marco pondered where he had heard that spell in the past. He looked down, noticing a full-length arm, similar in size to his other leg, where his foot should have been.
"Star, why did you just use the arm spell on me?" Marco asked, horrified.
"I don't know, Marco, I don't remember the names of spells off the top of my head," Star said. "Azarath Metrio—"
"Star, stop!" Marco interjected. "Casting random spells isn't gonna help anybody at this point."
Star muttered under her breath, "Where is the spell book when things like this happen?" Star's eyes twinkled in realization. "Marco, where is the spell book? I remember asking you to bring it back."
"Yeah, I picked it up from Janna's place a couple days ago," Marco recalled, specifically because of a particularly awkward conversation with Jackie on the way there. "I think I dropped it off at your room."
"Well, stay here Marco, you are passing your test and getting a Blue belt today." Star ran out the door, leaving Marco essentially crippled on one of the most important days of his life.
Star startled Mrs. Diaz when she pounced through the door in her rush. She still hadn't noticed Mrs. Diaz as she muttered to herself that she needed to remember that teleportation spell. Mrs. Diaz cleared her throat loudly to capture Star's attention.
"Oh sorry, Mrs. Diaz," Star said as she just noticed Marco's mom next to her, as well as the broken door frame.
"Oh no need to apologize," Mrs. Diaz replied. "You seem to be in a rush."
"Yeah you can say that."
"How did Marco's karate test go?" Mrs. Diaz asked. "I wish my husband and I were there, but apparently we would embarrass him on his big day. Well, I guess it's understandable. I used to think the same way about my parents."
"Marcos still doing it, he just asked me to pick up something from the house." Star ran up the stairs, passed Mrs. Diaz. "It's an emergency."
Mrs. Diaz just shrugged as the princess ran past her.
Inside her room, Star dashed through every nook and cranny, hoping to find the book there. She shoved away everything in sight; every cabinet, every shelf, and even her Mirror phone, yet there was nothing. Star fell onto both knees, clasped her face with her hands and screamed in frustration. Her bangs fell over her face as tears formed in her eyes. In the corner of her eye, she saw her bed. Hopeful, she slid underneath, to find a piece of paper instead of her precious spell book.
"I O U – Ferguson," it said. Star scowled in pure frustration.
Star ran out of the house, the note in one hand and the wand in the other, fixing her bangs as she went. Mrs. Diaz threw her a bottle of an exotic-looking, rainbow-coloured drink. Star clasped it with both hands.
"Give it to Marco when you see him, it always cheers him up."
When Mrs. Diaz wasn't looking, Star sneaked a sip before placing it in her backpack. It tasted like every piece of candy, yet light to the touch.
Master Keith held out the concrete board for Marco once more. Marco positioned himself for a roundhouse kick, standing on his mutated hand foot, ignoring the awkward feeling of the mutated leg. Marco rotated his normal leg around for the kick. He slipped on his arm foot and fell on his back. Marco sighed.
Jeremy was on his back, laughing at Marco's pain and misery. The rest of the students just looked on in pity.
"Poor guy," one of the male students said.
"You can't fault the man's determination." Everyone else agreed with that sentiment.
Marco pushed himself back up, and positioned his body for another roundhouse kick.
"You know, we can still reschedule for as early as tomorrow even, if your friend can fix your foot soon." Master Keith held out the board for Marco, as Marco attempted another kick.
"I don't need to," Marco replied, slipping on his hand foot for the umpteenth time.
Star found Ferguson exiting the convenience store with a decently large blue slushy in his right hand, his pants pockets filled to the brim with an assortment of chocolate bars. He was sipping on his slushy when Star pounced on top of him from out of nowhere, making him drop his slushy as well as all the chocolate in his pockets. Ferguson managed a tiny yelp before Star began throttling him.
"Where is it?" Star asked anxiously.
"I don't know what you are the talking about," Ferguson could muster between shakes.
"The book, where is it?"
"You mean that gigantic book of mumbo jumbo? I thought Marco had it."
Star armed her wand, shoving it extremely close to Ferguson's abdomen. The star insignia glowed dimly.
"Have you ever wondered what it's like to be Narwhal, Ferguson?" He whimpered and shook his head. "Now I've seen your IOU note underneath my bed Fergusson, I know you have it somewhere. Tell me where it is, or the only thing you'll be eating for the next few days is fish and whatever else Narwhals eat."
"Okay, I gave it Janna," he whimpered. Relief washed over him as the wand stopped glowing.
"Janna?"
"Yeah, she said she'd make out with me if I gave it to her," Ferguson said. Star mouth instinctively opened wide in shock. "I don't know what she wanted with it, but I wasn't going to miss out my only chance at kissing a girl."
Just as Star was about to run off to find Janna, she glanced at the sorry sight that was Ferguson. His entire body was covered in slushy, as well as half-melted chocolate. He was choking on his sobs and he laid there in fear. Star knew she couldn't leave him like this.
"Thanks, Star," Ferguson said, before taking a sip of a brand new blue slushy.
Star ran off without saying goodbye. Her feet grew sore with every step.
Don't worry Marco, I'll get you that Blue belt one way or another, she muttered underneath her breath.
Star considered herself to be a relatively heavy sleeper. From earthquakes to her overbearing mother shouting in her ear, nothing could seemingly wake the princess. She had once even boasted to her friend Pony Head about being able to sleep through anything, as if it were an achievement to be proud of. So it was truly impressive when the sound of wood breaking and yelps roused the slumbering princess from her bed. At nearly 5:30 in the morning.
Growing impatient and unable to fall back asleep, Star rose from her bed grumpily, setting her sleep mask off to the side on her bed stand. Star marched towards the source of all the noise this early in the morning: Marco's room. She opened the door to find Marco shattering a stack of wooden boards lying on top of a cinder block with a well-placed Karate chop. Marco turned around to see Star standing there, bloodshot, with bags under her eyes.
"Marco Diaz, it's five o'clock, it's way too early to be doing Karate or whatever…" Star closed the door behind her and slumped back against it. "Besides, what are you do – doing practicing—" Those were the only words she could muster in her confused, drowsy state before dozing off, mouth wide open.
"Hang on, Star," Marco said. "I got just the thing to perk you up in the morning."
Marco pulled her to the side to get out through the door. When he came back, he had a glass of multi-coloured liquid on hand. Star identified the relatively pungent smell of fruit punch with a hint of coffee grinds.
"Open wide, Star." Star didn't resist as Marco poured the drink into Star's mouth. The instant burst of candy-flavoured sweetness made Star's eyes open wide in excitement. Star can feel the bags under eyes recede as her mind grew more alert and aware. Even the bloodshot eyes were fading away. Star swiped the glass from Marco's hand. She downed the rest of the contents into her body, the drowsiness disappearing after every gulp. "Better?" Marco asked.
"Whoa, that was really good." Star handed the glass back to Marco. "What's in that stuff?"
"I'll be honest, I have no clue," Marco replied. "I've seen my mom pour crushed coffee beans in, but I have no idea how she makes the rainbow colours. Apparently, it's a 'Diaz family secret' that I would only be allowed to know when I'm eighteen."
Star moaned in delight, sticking her rainbow coloured tongue out like a dog.
"Anyways, you were saying," Marco said.
"Oh yeah, right. What are you doing at five in the morning practicing karate? It's way too early to do anything except sleep."
"Today is the day of my blue belt test." Star stared at Marco, puzzled. "You know, the belt test. Every martial art has a belt test, where they test you on everything you've learnt. If you pass, you get a new, better belt colour. If you don't, well, you're stuck on the belt you are at, 'til the next opportunity for a test comes up."
Star nodded in understanding.
"You know how long I've been waiting for this, Star?" Star shook her head. "Six months Star. I was supposed to get my blue belt six months ago."
Star leaned in and paid close attention to Marco.
"For the last six months, every time test day came, something always came up and I had to keep pushing the day back," he said. "It's always a family emergency, or something got broken, and even Ludo has been interfering at the worst possible times."
Star put a hand on Marco's shoulder in sympathy.
"I'm making sure I get my blue belt today, no matter what it takes."
"Anything you need me to do, Marco?"
"Yeah, actually," Marco said, as he walked across the room to pick up a pile of wooden boards. "I need to practice my roundhouse, and practicing kicks is easier when another person holds the board."
Star smiled and complied; she grabbed a board and positioned herself for Marco.
Roundhouse kicks weren't the only thing they practiced. Marco and Star methodically went through every kind of kick that Marco could name off the top of his head. Flying side-kicks, butterfly kicks and even a 540 spinning hook kick were done as properly and efficiently as possible. Marco refused a foot massage, even when his legs were getting tired.
Next, Marco performed a variety of Karate patterns, in-sync with Star counting out loud. This exercise confused the Mewnian Princess.
Star observed Marco sparring with a hanging punching bag. She was enamored by the lightning-fast kicks and punches Marco doled out onto the bag. It crumbled under the pressure of a powerful roundhouse kick, slumping to the ground. Star, impressed, gave Marco a round of applause.
With a flick of her wand, Star levitated several wooden boards around Marco. In a seemingly abrupt blink of an eye, Marco smashed the boards with ease. He wiped the sweat underneath his brow before going on.
Finally, Marco had to get his dad to help him with the next exercise. Mr. Diaz brought up a concrete block with him. Star even knocked on it, concerned Marco wouldn't be able to break something so solid. Star and Mr. Diaz positioned themselves properly, both holding a different side of the board. Marco prepared himself in the proper stance for the kick. Marco rotated his body, jumped while facing the side and in explosive fashion, kicked the concrete straight at the center, shattering it in half. Star looked at the half broken piece of concrete in shock, as Mr. Diaz congratulated his son with a pat on the back.
Marco passed out on the bed in exhaustion. Star took a seat next to him, setting the concrete off to the side.
"Marco, I knew you did Karate, but I never knew you were this good." Star took another glance at the shattered concrete, as well as the slumped over punching bag.
"Meh, it all comes from a lot of practice and frustration," Marco said. "Mostly frustration."
"If it were up to me, I would give you a blue belt on the spot."
"Yeah, but too bad the world doesn't work like that," Marco replied. Marco yawned and stretched his arms out before placing them behind his head.
"Seems like you're really tired, I'll get out of your hair and let you take a nap." Star got up and walked towards the door.
"Nah, Star, stay. I could use a little company, if you don't mind." Star shifted Marco off to the side a bit, lying herself next to Marco. For a moment, both just stared up the ceiling, seemingly content to just lie there in silence. "Star, there's one thing I wanna ask.
"Has there ever been a time in your life where you were just so close to something, but life, fate, destiny or whatever, just takes it away from you like a cruel bait-and-switch joke?"
Star pondered on the question momentarily. She replied, "I lived as a princess of a huge kingdom. Anytime I wanted something, I just asked and then my parents or one of the servants would go out and get it. So, I guess not."
"Well, I guess you're lucky," Marco said. "Sometimes, I think the only reason I exist is to be a punchline to some higher power's joke."
"Come on Marco. Don't be ridiculous, you're a great guy, you don't need a blue belt to tell you that."
"It's not about the blue belt, Star," he replied. "It feels like everything good gets pulled away the moment I try to pounce for it."
Star realized he was referring to Jackie Lynn Thomas choosing Justin, the football star, over him. She remembered all the complex, convoluted schemes they had set up together to win Jackie's heart that day, only to be utterly ignored for some blonde-haired jock.
"That's why I want this blue belt. This is the one thing right now that I know I won't let out of my grasp. I don't care if I lose a limb or two for it, I'll keep working for this blue belt." Marco stood up from the bed and walked out the door. "I'm going to take a shower. Once I get out, prep the boards again. I want to perfect the roundhouse kick before we get to the dojo."
Star watched the Karate boy walk out of the room. Even if she hadn't been through what he had, she could empathize with him, understanding the pain and suffering Marco had been going through.
So when Star rushed to find Janna in Echo Creek, she remembered that conversation from the morning. She remembered the first day she'd arrived on Earth, when Marco was compassionate enough to let her stay at his house, despite the fact she had ruined his room with a black hole.
She remembered why she had to keep running, no matter how sore her legs were.
def call_and_report(self, item, when, log=True, **kwds):
call = runner.call_runtest_hook(item, when, **kwds)
self._call_infos[item][when] = call
hook = item.ihook
report = hook.pytest_runtest_makereport(item=item, call=call)
if report.when in self._PYTEST_WHENS:
if report.outcome == self._PYTEST_OUTCOME_PASSED:
if self._should_handle_test_success(item):
log = False
elif report.outcome == self._PYTEST_OUTCOME_FAILED:
err, name = self._get_test_name_and_err(item, when)
if self._will_handle_test_error_or_failure(item, name, err):
log = False
if log:
hook.pytest_runtest_logreport(report=report)
if self.runner.check_interactive_exception(call, report):
hook.pytest_exception_interact(node=item, call=call, report=report)
return report
Bill Cosby, Cosby: His Life and Times (inset)
Mark Whitaker, whose extensive biography on Bill Cosby was released earlier this year, has apologized for completely ignoring the numerous rape allegations against the comedian.
His admission came after The New York Times' David Carr owned up to being one of Cosby's "media enablers" and called out others whom he felt had done the same. This included writers for The New Yorker, The Atlantic and Whitaker, "who did not find room in his almost-500-page biography... to address the accusations that Mr. Cosby had assaulted numerous women, at least four of whom had spoken on the record and by name in the past about what they say Mr. Cosby did to them."
On Monday night, Whitaker tweeted Carr and admitted that the allegations against Cosby should have been included in Cosby: His Life and Times: "David you are right. I was wrong to not deal with the sexual assault charges against Cosby and pursue them more aggressively. I am following new developments and will address them at the appropriate time. If true the stories are shocking and horrible."
In an article published by The Daily Beast last week, Whitaker had attempted to defend his decision to leave out the allegations, saying he didn't want to inaccurately prejudice readers against Cosby with things he could not confirm.
"I wasn't going to reprint the allegations. I had a couple of reasons for that," Whitaker told The Daily Beast. "You can do that and say here's an allegation, and here's a denial, but given the nature of the allegations, the allegations would stick. As a biographer, you're really trying to say 'I'm painting a scene for you. Here you are in the room. This is what happened.' And if you do enough reporting, you can actually do that. And if you can't do that, you don't do that. When you're writing a book, you want to make sure it's really accurate, that you can stand behind it, because once it's out it's not like a piece in a newspaper or even a news magazine that you can correct quickly. That was just the standard I used."
However, Whitaker also acknowledged "the story has changed" and that he plans on addressing "that in future editions of the book, if not sooner." "If it happened, and it was a pattern, it's terrible and really creepy. ... I was just having a discussion with my son about this, and psychologically, if it happened... it's sort of compartmentalization," Whitaker said.
Nearly 20 women have come forward so far with stories of sexual assault dating back decades, but Whitaker said he believed Cosby has already "paid a big price" for the alleged assaults, which were first reported in 2005. "The show [a planned NBC sitcom] has been yanked. The reruns of The Cosby Show have been taken off the air. He's routinely called a rapist everywhere. That's a big price," Whitaker said, noting that he thinks Cosby's public image might still be saved.
"There might be an Oprah interview or something like that. There are things you can kind of imagine. ... Maybe he could suck it up and make amends by giving a whole bunch of money to anti-sexual abuse causes or something," Whitaker said. "He still has a fan base. I think people will still turn out for him. ... If he can't continue to perform, that will be the hardest thing for him. But if he can still go into arenas, and people will come and laugh at his stories, then he'll survive. That's what he's always cared about the most.
"I certainly would not have anticipated the degree to which this has become a huge issue again," Whitaker continued. "What you eventually learn about everything related to these allegations, and how you think that should figure in your ultimate judgment of Bill Cosby has to be weighed — and should be weighed — in the balance with a lot of the stuff I reported in the book more thoroughly than anybody else."
Meanwhile, the fallout continues for Cosby. The New York Post's Page Six reported Tuesday that in 1989, Cosby leaked his daughter Erinn's drug problem to The National Enquirer in exchange for the tabloid killing a planned story about him "swinging with Sammy Davis Jr. and some showgirls in Las Vegas." "My editor told me that daddy Cosby was the source. He ratted out his flesh and blood," an Enquirer source told Page Six.
#include <bits/stdc++.h>
using namespace std;
string a;
int p;
int main() {
cin>>p>>a;
if (p==8) cout<<"vaporeon"<<endl;
else if (p==6) cout<<"espeon"<<endl;
else {
if ((a[0]=='j'||a[0]=='.')&&(a[1]=='o'||a[1]=='.')&&(a[2]=='l'||a[2]=='.')&&(a[3]=='t'||a[3]=='.')) {
cout<<"jolteon"<<endl;
}
if ((a[0]=='f'||a[0]=='.')&&(a[1]=='l'||a[1]=='.')&&(a[2]=='a'||a[2]=='.')&&(a[3]=='r'||a[3]=='.')) {
cout<<"flareon"<<endl;
}
if ((a[0]=='u'||a[0]=='.')&&(a[1]=='m'||a[1]=='.')&&(a[2]=='b'||a[2]=='.')&&(a[3]=='r'||a[3]=='.')) {
cout<<"umbreon"<<endl;
}
if ((a[0]=='l'||a[0]=='.')&&(a[1]=='e'||a[1]=='.')&&(a[2]=='a'||a[2]=='.')&&(a[3]=='f'||a[3]=='.')) {
cout<<"leafeon"<<endl;
}
if ((a[0]=='g'||a[0]=='.')&&(a[1]=='l'||a[1]=='.')&&(a[2]=='a'||a[2]=='.')&&(a[3]=='c'||a[3]=='.')) {
cout<<"glaceon"<<endl;
}
if ((a[0]=='s'||a[0]=='.')&&(a[1]=='y'||a[1]=='.')&&(a[2]=='l'||a[2]=='.')&&(a[3]=='v'||a[3]=='.')) {
cout<<"sylveon"<<endl;
}
}
}
// Checks that ComputeFrameCropRegion covers required regions when their union
// is within target size.
TEST(FrameCropRegionComputerTest, CoversRequiredWithinTargetSize) {
const auto options = MakeKeyFrameCropOptions(kTargetWidth, kTargetHeight);
FrameCropRegionComputer computer(options);
KeyFrameInfo key_frame_info;
AddDetection(MakeRect(100, 100, 100, 200), true, &key_frame_info);
AddDetection(MakeRect(200, 400, 300, 500), true, &key_frame_info);
KeyFrameCropResult crop_result;
MP_EXPECT_OK(computer.ComputeFrameCropRegion(key_frame_info, &crop_result));
CheckRequiredRegionsAreCovered(key_frame_info, crop_result);
EXPECT_TRUE(CheckRectsEqual(MakeRect(100, 100, 400, 800),
crop_result.required_region()));
EXPECT_TRUE(
CheckRectsEqual(crop_result.region(), crop_result.required_region()));
EXPECT_TRUE(crop_result.are_required_regions_covered_in_target_size());
}
An afternoon subway ride turned violent when a passenger was assaulted after telling a fellow straphanger he stepped on his foot, police say.
The attack occurred on a southbound R in Brooklyn around 3:45 p.m. on Friday, October 20th. The NYPD says the victim, a 30-year-old male, and the male suspect were on the train when the suspect stepped on the victim's foot. According to police, "When the victim confronted the individual, the individual did punch the victim in the face. As a result of being punched in the face, the victim fell to the ground, while on the ground the individual did kick and punch the victim in the head and face."
By the time the train got to the 4th Avenue and 36th Street station, the suspect had fled and taken a southbound D train.
The victim was taken to Lutheran Hospital in critical but stable condition.
Police released an image of the suspect, describing him as 17-19 years old, 5'10" and 160 pounds, and last seen wearing a white t-shirt, black jeans, white sneakers, and a black book bag.
Anyone with information regarding this incident is asked to call the NYPD's Crime Stoppers Hotline at 800-577-TIPS or, for Spanish, 1-888-57-PISTA (74782).
The public can also submit their tips by logging onto the Crime Stoppers website at WWW.NYPDCRIMESTOPPERS.COM or texting their tips to 274637 (CRIMES), then entering TIP577.
// Loads a saved game from disk
public void loadGame() {
saveFile = new File(SAVE_PATH);
JSONParser parser = new JSONParser();
JSONObject obj;
try {
obj = (JSONObject) parser.parse(new FileReader(saveFile));
} catch (FileNotFoundException e) {
Out.error("Attempted to load from file: " + saveFile.getAbsolutePath());
e.printStackTrace();
return;
} catch (IOException | ParseException e) {
e.printStackTrace();
return;
}
Importers.importAll(obj);
}
// DatabaseVersion returns the version of the `arangod` binary that is being
// used by this starter.
func (s *Service) DatabaseVersion(ctx context.Context) (driver.Version, bool, error) {
for i := 0; i < 25; i++ {
d, enterprise, err := s.databaseVersion(ctx)
if err != nil {
s.log.Warn().Err(err).Msg("Error while getting version")
time.Sleep(time.Second)
continue
}
return d, enterprise, nil
}
return "", false, fmt.Errorf("Unable to get version")
}
#include<bits/stdc++.h>
using namespace std;
long long N=1e5;
int main()
{
ios_base::sync_with_stdio(0);cin.tie(NULL);cout.tie(NULL);
long long t;
cin>>t;
/* 1.the first action allows it to move from (xc, yc) to (xc−1, yc); left
2.the second action allows it to move from (xc, yc) to (xc, yc+1);up
3.the third action allows it to move from (xc, yc) to (xc+1, yc);right
4.the fourth action allows it to move from (xc, yc) to (xc, yc−1).down */
while(t--)
{
long long n;
long long x,y,act1,act2,act3,act4;
cin>>n;
long long minx1=1e5,miny1=1e5,maxx1=-1e5,maxy1=-1e5;
for(int i=0;i<n;i++)
{
cin>>x>>y>>act1>>act2>>act3>>act4;
if(!act1) maxx1=max(maxx1,x);
if(!act2) miny1=min(miny1,y);
if(!act3) minx1=min(minx1,x);
if(!act4) maxy1=max(maxy1,y);
}
if(maxx1<=minx1 && maxy1<=miny1)
cout<<1<<" "<<maxx1<<" "<<maxy1<<endl;
else
cout<<0<<endl;
}
return 0;
}
def _chunk_over_days(self, days):
x = len(self.exercises)
d = x % days
n = x // days
sliced_at = (days - d) * n
pt1 = self.exercises[:sliced_at]
pt2 = self.exercises[sliced_at:]
return list(grouped(pt1, n)) + list(grouped(pt2, n + 1))
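The _chunk_over_days method above relies on a grouped helper that is not included in this snippet. Below is a minimal sketch of what such a helper could look like, assuming it simply yields consecutive chunks of a fixed size; the name and exact behaviour are an assumption, not the project's actual implementation.

# Hypothetical stand-in for the `grouped` helper used by _chunk_over_days above.
# Assumes it yields consecutive chunks of exactly `size` items.
def grouped(items, size):
    if size <= 0:
        return  # nothing sensible to yield for a non-positive chunk size
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Example: 7 exercises over 3 days -> sliced_at = 4, giving chunks of 2, 2 and 3.
exercises = ["a", "b", "c", "d", "e", "f", "g"]
print(list(grouped(exercises[:4], 2)) + list(grouped(exercises[4:], 3)))
# [['a', 'b'], ['c', 'd'], ['e', 'f', 'g']]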
/**
* Here's the action handler for the Edit Player form page. If there are
* any errors in the form inputs, this sends an error back to Edit Player.
* Otherwise, it will either save an existing Player or create a new one.
*/
public class EditPlayerAction extends MadnessServlet
{
/**
* Initialize this servlet as needed.
*
* @param servletConfig an object for configuring this servlet
*/
public void init(ServletConfig servletConfig) throws ServletException
{
super.init(servletConfig);
m_securityBits = SECURITY_ADMIN;
}
/**
* Service a request to this servlet using a MadnessWriter for output.
*
* @param request a servlet request
* @param response a servlet response
* @param session the user's session
* @param out a MadnessWriter for output
*/
protected void doBoth(HttpServletRequest request, HttpServletResponse response,
HttpSession session, MadnessWriter out) throws ServletException, IOException
{
PlayerManager playerMan = PlayerManager.GetInstance();
Player player = null;
if (request.getParameter(P_PLAYER_ID) != null)
player = playerMan.select(request.getParameter(P_PLAYER_ID), false);
else
player = new Player();
player.setActive("true".equals(request.getParameter(P_ACTIVE)));
player.setUsername(scrubInput(request.getParameter(P_USERNAME)));
player.setPassword(scrubInput(request.getParameter(P_PASSWORD)));
player.setFirstName(scrubInput(request.getParameter(P_FIRST_NAME)));
player.setLastName(scrubInput(request.getParameter(P_LAST_NAME)));
player.setNickname(scrubInput(request.getParameter(P_NICKNAME)));
player.setEmail(scrubInput(request.getParameter(P_EMAIL)));
player.setAdmin("true".equals(request.getParameter(P_ADMIN)));
StringBuffer errors = new StringBuffer();
if (player.getFirstName() == null)
errors.append("A first name is required." + BR);
if (player.getLastName() == null)
errors.append("A last name is required." + BR);
if (player.getUsername() == null)
errors.append("A login name is required." + BR);
else if ((player.getID() == -1) && playerMan.getUsernameInUse(player.getUsername()))
errors.append("The login name you entered is already in use.");
if (player.getPassword() == null)
errors.append("You must enter a password." + BR);
if (errors.length() > 0)
{
request.setAttribute(P_ERRORS, errors.toString());
getServletContext().getRequestDispatcher(URL_EDIT_PLAYER).forward(request, response);
}
else
{
if (player.getID() == -1)
playerMan.insert(player);
else
playerMan.update(player);
getServletContext().getRequestDispatcher(URL_SCOREBOARD).forward(request, response);
}
}
}
import classes.Transaction;
import com.github.jtendermint.jabci.api.*;
import com.github.jtendermint.jabci.socket.ConnectionListener;
import com.github.jtendermint.jabci.socket.TSocket;
import com.github.jtendermint.jabci.types.*;
import com.google.protobuf.ByteString;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;
import java.util.logging.Level;
import java.util.logging.Logger;
import com.google.gson.Gson;
import crypto.RipeMD160;
import data.AppState;
import javax.xml.bind.DatatypeConverter;
public class FileABCIApp implements IBeginBlock, ICheckTx, ICommit, IInfo, IDeliverTx, IQuery, IEndBlock {
private static final Logger LOG = Logger.getLogger(FileABCIApp.class.getName());
private ByteString appState;
private long currentHeight;
FileABCIApp(ConnectionListener listener) throws InterruptedException{
appState = ByteString.copyFrom(new byte[0]);
currentHeight = 0L;
if(AppState.exists()){
appState = (ByteString) AppState.loadAppState().get(AppState.KEYAPP);
currentHeight = (Long) AppState.loadAppState().get(AppState.KEYHEIGHT);
System.out.println(appState.toStringUtf8());
} else AppState.saveAppState(appState, currentHeight);
TSocket socket = new TSocket((exception, event) -> {}, listener, (name, remaining) -> {});
socket.registerListener(this);
try{
Thread t = new Thread(socket::start);
t.setName("FileABCI Socket Thread");
t.setDaemon(true);
t.start();
LOG.info("Socket on: " + t.getState());
Thread.sleep(1000L);
} catch (IllegalStateException e) {
LOG.log(Level.INFO, "Error in ABCI App Socket Thread: " + e.getMessage());
}
}
@Override
public ResponseBeginBlock requestBeginBlock(RequestBeginBlock requestBeginBlock) {
LOG.log(Level.INFO, "Begin Block: " + requestBeginBlock.getHash());
currentHeight = requestBeginBlock.getHeader().getHeight();
if(requestBeginBlock.getHeader().getNumTxs() != 0){
try {
appState = RipeMD160.getHashFromBytes(appState);
} catch (NoSuchAlgorithmException e) {
e.printStackTrace();
}
}
return ResponseBeginBlock.newBuilder().build();
}
@Override
public ResponseCheckTx requestCheckTx(RequestCheckTx requestCheckTx) {
LOG.log(Level.INFO, "Check tx: " + requestCheckTx.getTx());
return ResponseCheckTx.newBuilder().setCode(CodeType.OK).build();
}
@Override
public ResponseDeliverTx receivedDeliverTx(RequestDeliverTx requestDeliverTx) {
LOG.log(Level.INFO, "Deliver tx: " + requestDeliverTx.getTx());
ResponseDeliverTx.Builder builder = ResponseDeliverTx.newBuilder();
byte[] base64Decoded = DatatypeConverter.parseBase64Binary(requestDeliverTx.getTx().toStringUtf8());
Gson gson = new Gson();
Transaction trans = gson.fromJson(new String(base64Decoded) , Transaction.class);
if(trans.getOwner() != null) {
KVPair.Builder kvbuilder = KVPair.newBuilder();
kvbuilder.setKey(ByteString.copyFromUtf8("account.owner"));
kvbuilder.setValue(ByteString.copyFromUtf8(trans.getOwner()));
builder.addTags(kvbuilder.build());
}
return builder.setCode(CodeType.OK).build();
}
@Override
public ResponseEndBlock requestEndBlock(RequestEndBlock requestEndBlock) {
LOG.log(Level.INFO, "End Block: " + requestEndBlock.getHeight());
return ResponseEndBlock.newBuilder().build();
}
@Override
public ResponseCommit requestCommit(RequestCommit requestCommit) {
LOG.log(Level.INFO, "Commit");
ResponseCommit.Builder builder = ResponseCommit.newBuilder();
try{
builder.setData(appState);
AppState.saveAppState(appState, currentHeight);
} catch (Exception e) {
LOG.log(Level.INFO, "Bad Commit: " + e.getMessage());
}
return builder.build();
}
@Override
public ResponseInfo requestInfo(RequestInfo requestInfo) {
LOG.log(Level.INFO, "Info: " + requestInfo.getVersion());
ResponseInfo.Builder response = ResponseInfo.newBuilder();
response.setLastBlockAppHash(appState);
response.setLastBlockHeight(currentHeight);
return response.build();
}
@Override
public ResponseQuery requestQuery(RequestQuery requestQuery) {
LOG.log(Level.INFO, "Query: " + requestQuery.getData());
return ResponseQuery.newBuilder().build();
}
}
from KEYWORDS import *
import os
class SCANNER:
def scan(i, line):
global lex
lex = []
for char in line:
lex.append(char)
def check_for_semicolon(i):
if lex[-2] != ';':
print(f'{b.FAIL}KEYWORD ERROR:{b.ENDC} missing ";" on line {i + 1} in test.odl: {b.WARNING}{line}{b.ENDC}')
os._exit(0)
def lexeme(i, line):
lexeme = []
for char in lex:
if char != ' ':
lexeme.append(char)
if char == ' ':
key = ''.join(lexeme)
if key in KEYWORDS:
if key == 'WRITE':
WRITE.write(i, line)
elif key == 'VAR':
VARIABLES.var(i, line)
elif key == 'UVAR':
VARIABLES.uvar(i, line)
elif key == 'CVAR':
VARIABLES.cvar(i, line)
elif key == 'ADD':
MATH.add(line)
elif key == 'SUBTRACT':
MATH.subtract(line)
elif key == 'ILEN':
LENGTH.len_items(i, line)
elif key == 'LEN':
LENGTH.length(i, line)
elif key == 'STACK':
STACK.stack(i, line)
elif key == 'POPSTACK':
STACK.pop(i, line)
elif key == 'PUSHSTACK':
STACK.push(i, line)
elif key == 'KEYWORDS':
SYNTAX.keyword(i, line)
elif key == 'QUEUE':
QUEUE.queue(i, line)
elif key == 'ENQUEUE':
QUEUE.enqueue(i, line)
elif key == 'DEQUE':
QUEUE.deque(i, line)
elif key == 'FORE':
FUNCTIONS.fore(i, line)
# ! TEST CODE
if __name__ == "__main__":
class b:
HEADER = '\033[95m'
OKBLUE = '\033[94m'
OKCYAN = '\033[96m'
OKGREEN = '\033[92m'
WARNING = '\033[93m'
FAIL = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
SCANNER = SCANNER
with open('/Users/drewskikatana/ODL/test.odl', 'r') as program:
text = program.readlines()
for i, line in enumerate(text):
SCANNER.scan(i, line)
SCANNER.check_for_semicolon(i)
SCANNER.lexeme(i, line)
print(f'{b.HEADER}VARIABLES:{b.ENDC} ', VAR_STACK)
print(f'{b.HEADER}QUEUE VALUES: {b.ENDC}', VAR_STACK['queue1'])
print(f'{b.HEADER}STACK VALUES: {b.ENDC}', VAR_STACK['stack1'])
package gov.nist.drmf.interpreter.pom.eval.constraints;
import gov.nist.drmf.interpreter.pom.common.meta.AssumeMLPAvailability;
import gov.nist.drmf.interpreter.pom.MLPWrapper;
import gov.nist.drmf.interpreter.pom.SemanticMLPWrapper;
import mlp.ParseException;
import mlp.PomTaggedExpression;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import java.io.IOException;
import static org.junit.jupiter.api.Assertions.*;
/**
* @author <NAME>
*/
@AssumeMLPAvailability
public class ConstraintsMatcherTests {
private static MLPWrapper wrapper;
@BeforeAll
static void init() throws IOException {
wrapper = SemanticMLPWrapper.getStandardInstance();
}
private static void generalCheck(String blueprint, String constraint, String[] vars, String[] vals) throws ParseException {
MLPBlueprintTree bt = new MLPBlueprintTree(vals);
bt.setBlueprint(blueprint);
MLPBlueprintNode constraintTree = MLPBlueprintTree.parseTree(constraint);
assertTrue(bt.matches(constraintTree));
String[][] v = bt.getConstraintVariablesAndValues();
assertArrayEquals(v[0], vars);
assertArrayEquals(v[1], vals);
}
@Test
public void parseTest() throws ParseException {
PomTaggedExpression pte = wrapper.parse("a+b");
assertNotNull(pte);
assertEquals(pte.getComponents().size(), 3);
}
@Test
public void createBlueprintTest() throws ParseException {
String blueprintConstraint = "var = 1,2";
MLPBlueprintNode bt = MLPBlueprintTree.parseTree(blueprintConstraint);
assertNotNull(bt);
}
@Test
public void singleVariableMatchTest() throws ParseException {
String blueprintConstraint = "var = 1,2";
String actualConstraint = "n = 1,2";
generalCheck(blueprintConstraint, actualConstraint, new String[] {"n"}, new String[] {"1"});
}
@Test
public void multipleVariableMatchTest() throws ParseException {
String blueprintConstraint = "var1,var2,var3 > 0";
String actualConstraint = "a, b, c > 0";
generalCheck(blueprintConstraint, actualConstraint, new String[] {"a", "b", "c"}, new String[] {"1", "1", "1"});
}
@Test
public void differentValuesMatchTest() throws ParseException {
String blueprintConstraint = "var1-var2 even";
String actualConstraint = "v - w even";
generalCheck(blueprintConstraint, actualConstraint, new String[] {"v", "w"}, new String[] {"2", "0"});
}
@Test
public void complexMatchTest() throws ParseException {
String blueprintConstraint = "var \\in \\Complex \\setminus [0,\\infty)";
String actualConstraint = "z \\in \\Complex \\setminus [0, \\infty)";
generalCheck(blueprintConstraint, actualConstraint, new String[] {"z"}, new String[] {"-1"});
}
@Test
public void greekMatchTest() throws ParseException {
String blueprint = "2 var \\neq -1,-2,-3, \\dotsc";
String constraint = "2\\nu\\neq -1, -2, -3, \\dotsc";
generalCheck(blueprint, constraint, new String[] {"\\nu"}, new String[] {"1/4"});
}
@Test
public void uniformMatchTest() throws ParseException {
String blueprint = "\\realpart{var} < \\frac{1}{2}, \\frac{3}{2}, \\dots";
String constraint = "\\realpart{m} < \\ifrac{1}{2}, \\tfrac{3}{2}, \\ldots";
generalCheck(blueprint, constraint, new String[] {"m"}, new String[] {"3/2"});
}
@Test
public void nonMatchTest() throws ParseException {
String blueprint = "var \\neq 0,1";
String constraint = "x\\neq 0";
MLPBlueprintTree bt = new MLPBlueprintTree(new String[]{"3/2"});
bt.setBlueprint(blueprint);
MLPBlueprintNode constraintTree = MLPBlueprintTree.parseTree(constraint);
assertFalse(bt.matches(constraintTree));
}
@Test
public void noMatchTest() throws ParseException {
String blueprint = "\\realpart{var} > 1";
String constraint = "n = 1,2";
MLPBlueprintTree bt = new MLPBlueprintTree(new String[] {});
bt.setBlueprint(blueprint);
MLPBlueprintNode constraintTree = MLPBlueprintTree.parseTree(constraint);
assertFalse(bt.matches(constraintTree));
String[][] v = bt.getConstraintVariablesAndValues();
assertEquals(v[0].length, 0);
assertEquals(v[1].length, 0);
}
}
To follow up on the last entry, what I am using Django for is (re)writing our account request handling system. The simple version of how people get accounts here is a three-step process. The would-be user tells us at least their desired Unix login, their name, their email address, and which professor is sponsoring them. We ask the professor if they actually want to sponsor the person's account; if the professor says yes, we create the account and email the requester with details.
(There are a few non-professor account sponsors for things like staff accounts. Professors can sponsor accounts for whoever they want, including people not otherwise associated with the university.)
What I want to automate is the process of submitting requests and having them approved (which seems like a great fit for a simple application built on a modern web framework); we'll continue to do the actual account creation by hand, using a set of scripts we have for it. So far, this has a pretty straightforward two-table and two-form application design; one table for submitted account requests, one table for sponsors, one form for submitting an account request, and a second form for sponsors to approve or reject accounts. Of course, now we get to the complications.
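Before getting to those complications, here is a minimal sketch of what that two-table, two-form design could look like as Django models; every field and model name here is an illustrative guess, not the real schema:

from django.db import models

class Sponsor(models.Model):
    # The sponsor name requesters pick ("Prof. X", "Graduate Chair", ...).
    display_name = models.CharField(max_length=100)
    # Who actually gets asked to approve requests made under this sponsor entry.
    approver_email = models.EmailField()
    # Where sponsored accounts' home directories go, and their Unix group.
    homedir_base = models.CharField(max_length=200)
    unix_group = models.CharField(max_length=32)

class AccountRequest(models.Model):
    login = models.CharField(max_length=32, unique=True)
    full_name = models.CharField(max_length=100)
    email = models.EmailField()
    sponsor = models.ForeignKey(Sponsor, on_delete=models.PROTECT)
    approved = models.BooleanField(default=False)
    submitted = models.DateTimeField(auto_now_add=True)

With something along these lines, the request form could be a ModelForm over AccountRequest, and the approval form would only need to flip approved on the requests tied to a sponsor's entries.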
The big complication is that the current 'sponsors' information bundles three or four separate things together: what name people ask to sponsor their account, who actually approves the account, and what home directory new users should be put into (and what Unix group they should be assigned to). The name is usually a professor's, but it can also be a generic thing like 'Professional Masters Student' or 'Graduate Chair'; this means that the same person may have several sponsor entries that they approve accounts for. Home directories are complicated because some professors (and special sponsors) have their own home directories for sponsored accounts, but others put new accounts in the general home directories for their research group.
(DRY suggests that it would be a bad idea to manually replicate a research group's home directory information into the sponsor entries for each of its professors. The OO way out of this is different from the SQL way out.)
Then there are the workflow complications:
Points of Contact can approve accounts in place of one of their professors. I don't know how to cleanly represent this in a schema at all if I want to reuse the same form that sponsors use. (Besides, I already have the case that one person can approve requests for multiple 'sponsors' entries.)
the mass intake of new graduate students is handled differently. The Graduate Office prepares a list of new students and who is theoretically supervising them, then the supervisors approve their new students, and finally we email all of the approved people to ask them to basically come pick their login. This creates a couple of schema complications. First, an account request's approval status is different from whether or not it is 'complete' (has enough information to be created). New grad student accounts start out both unapproved and incomplete (since we don't know what login the new grad student wants), become approved but incomplete, and are finally completed when the new grad student picks a login. Second, there needs to be some way for new grad students to access their approved but incomplete account request so that they can fill in their desired login name, and some sort of authentication for this access. (I just realized that this implies that the login cannot be the primary key on the 'requests' table, although it still has to be unique.) (A rough sketch of what such a request record could look like follows this list.)
sometimes sponsors just outright make new accounts for people, including picking their login (this is most common with new administrative staff). Making them first fill in the request form then immediately approve it is kind of silly; they should be able to fill in a preapproved request.
oh yeah, we need an audit trail for when various things happened and who did them. Should this audit trail simply be text messages, or should I try to give it more structure?
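For concreteness, here is a minimal sketch of what such a request record could look like as a Django model; every name in it (AccountRequest, access_token, and so on) is an illustrative assumption, not the real schema:

    import secrets
    from django.db import models

    class AccountRequest(models.Model):
        # The login is unique but deliberately not the primary key, since
        # grad student requests start out without one.
        login = models.CharField(max_length=32, unique=True, null=True, blank=True)
        name = models.CharField(max_length=100)
        email = models.EmailField()
        sponsor = models.ForeignKey('Sponsor', on_delete=models.PROTECT)
        # Approval and completeness are tracked separately.
        approved = models.BooleanField(default=False)
        completed = models.BooleanField(default=False)
        # Random token so a new grad student can come back to an approved
        # but incomplete request and fill in their desired login.
        access_token = models.CharField(max_length=64, default=secrets.token_urlsafe, editable=False)
        created_at = models.DateTimeField(auto_now_add=True)

An audit trail could then hang off this model as a separate table with a foreign key, a timestamp, an actor, and either a free-form message or more structured fields.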
(So far I am assuming that core staff will use the general Django administrative interface to do things like add new sponsors.)
All of these complications leave me looking at a scheme where either the tables are multiplying and cross-connecting, or things are mutating into objects that look less and less like anything with a good SQL representation.
(Talking to the duck here has already been useful in making me realize a few things about the problem.)
Sidebar: the OO way versus the SQL way of handling home directories
The OO way is that the 'sponsors' object has both a 'group' field and a 'homedirs' field that can be empty. The 'group' field points to an object for the research group, which has a 'homedirs' field of its own. If sponsors.homedirs is non-empty, we use that; otherwise, we use sponsors.group.homedirs (which must be non-empty).
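To make the OO way concrete, here is a minimal sketch in Django model terms, with made-up names (ResearchGroup, effective_homedirs) rather than anything from the real system:

    from django.db import models

    class ResearchGroup(models.Model):
        name = models.CharField(max_length=100, unique=True)
        homedirs = models.CharField(max_length=200)

    class Sponsor(models.Model):
        name = models.CharField(max_length=100, unique=True)
        group = models.ForeignKey(ResearchGroup, on_delete=models.PROTECT)
        # An empty value means "fall back to the group's home directories".
        homedirs = models.CharField(max_length=200, blank=True)

        def effective_homedirs(self):
            return self.homedirs or self.group.homedirs

The or-fallback keeps each research group's home directory information in exactly one place, which is the DRY property mentioned above.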
The SQL way is probably to have a separate mapping table that translates entities to homedirs. Both groups and sponsors have entries in this table (we require that their names be non-overlapping, which is not a problem in practice). Rows in the 'sponsors' table have a foreign key that points to an entry in the mapping table, either the sponsor's group's entry or the sponsor's individual entry.
(The SQL mapping table approach is roughly how the current system handles this.) |
import Argument from '../src/Argument';
test('노드 인자 받아오기 모듈', () => {
const mapKeyToExpected = new Map([
['-u', 'https://dummy.com/'],
['-q', '.query > img'],
['-r', '2, 100'],
['-s', ''],
]);
process.argv = [
'execute node path',
'execute js file',
...mapKeyToExpected.entries(),
].flat();
const argument = new Argument();
expect(argument.URL).toEqual(mapKeyToExpected.get('-u'));
expect(argument.QUERY).toEqual(mapKeyToExpected.get('-q'));
expect(argument.SINGLE_DIRECTORY).toEqual(true);
expect(argument.RANGE).toEqual(mapKeyToExpected.get('-r'));
});
|
Dr. Carl E. Taylor, an architect of a 134-nation agreement that established primary health care as a universal right, died on Feb. 4 in Baltimore. He was 93.
The cause was prostate cancer, said the Johns Hopkins Bloomberg School of Public Health, where he was a professor emeritus and faculty member for 48 years.
Dr. Taylor conducted research in more than 70 countries and helped establish international health as a distinct academic field in the United States. His field trials in India 50 years ago were among the first to demonstrate the value of recruiting and training villagers to deliver basic health care in poor communities.
With two others, he wrote a pivotal 1959 study connecting malnutrition and infectious disease.
He was the primary consultant to the World Health Organization on the international Alma-Ata Declaration, adopted at a 1978 conference in Alma-Ata, now Almaty, Kazakhstan. The document’s advocacy of community participation in health care, a position influenced in part by Dr. Taylor’s research, remains a guiding tenet of public health.
“He is the greatest public health expert I have come across,” said Dr. Halfdan T. Mahler, who, as W.H.O. director general from 1973 to 1988, was responsible for the agreement. |
import { Range } from '../range';
interface RangeFactory {
/**
* Creates a closed range [0, end]
*/
(end: number): Range;
/**
* Creates a closed range [start, end]
*
* ### Example (es module)
* ```js
* import { range } from '@ouracademy/range'
* console.log(range(2, 10))
* ```
*
* @param start the left (start) endpoint.
* @param end the right (end) endpoint.
* @returns [start, end].
* @anotherNote See range(end), upTo(), startingOn().
*/
// tslint:disable-next-line:unified-signatures
(start: number, end: number): Range;
/**
* Creates a range starting from -infinity
* [-infinity, end]
* @param end
*/
upTo(end: number): Range;
/**
* Creates a range ending at infinity
* [start, infinity]
* @param end
*/
startingOn(start: number): Range;
}
const createRangeFactory = (): RangeFactory => {
const result = (arg1: number, arg2?: number) =>
arg2 !== undefined ? new Range(arg1, arg2) : new Range(0, arg1);
result.upTo = (end: number) => new Range(-Infinity, end);
result.startingOn = (start: number) => new Range(start, Infinity);
return result as RangeFactory;
};
export const range = createRangeFactory();
export const interval = range;
/**
* Use it as a special case pattern.
*/
export const emptyRange = range(4, 1);
|
/**
* Created by vincent on 17-3-15.
* Copyright @ 2013-2017 Platon AI. All rights reserved
*/
public class MetadataWritable implements Writable {
private MultiMetadata metadata;
public MetadataWritable() {
this.metadata = new MultiMetadata();
}
public MetadataWritable(MultiMetadata metadata) {
this.metadata = metadata;
}
public MultiMetadata get() {
return metadata;
}
public final void write(DataOutput out) throws IOException {
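// Wire format: name count (int), then for each name: the name (Text), its value count (int), and each value (Text).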
Collection<String> names = metadata.names();
out.writeInt(names.size());
for (String name : names) {
Collection<String> values = metadata.getNonNullValues(name);
Text.writeString(out, name);
out.writeInt(values.size());
for (String value : values) {
Text.writeString(out, value);
}
}
}
public final void readFields(DataInput in) throws IOException {
int nameCount = in.readInt();
for (int i = 0; i < nameCount; i++) {
String name = Text.readString(in);
int valueCount = in.readInt();
for (int j = 0; j < valueCount; j++) {
metadata.put(name, Text.readString(in));
}
}
}
} |
package com.mauriciotogneri.botcoin.provider;
import org.jetbrains.annotations.NotNull;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
public class FilePriceProvider implements DataProvider<Price>
{
private int index = 0;
private final Price[] prices;
public FilePriceProvider(String path)
{
this.prices = load(path);
}
@NotNull
private Price[] load(String path)
{
try
{
List<Price> list = new ArrayList<>();
File file = new File(path);
FileReader fileReader = new FileReader(file);
BufferedReader bufferedReader = new BufferedReader(fileReader);
String line;
while ((line = bufferedReader.readLine()) != null)
{
String[] parts = line.split(";");
if (parts.length > 1)
{
long timestamp = Long.parseLong(parts[0]);
BigDecimal price = new BigDecimal(parts[1]);
list.add(new Price(timestamp, price));
}
}
fileReader.close();
Price[] result = new Price[list.size()];
for (int i = 0; i < list.size(); i++)
{
result[i] = list.get(i);
}
return result;
}
catch (Exception e)
{
throw new RuntimeException(e);
}
}
public void reset()
{
index = 0;
}
public Price[] prices()
{
return prices;
}
@Override
public boolean hasData()
{
return (index < prices.length);
}
@Override
public Price data()
{
return prices[index++];
}
} |
import redisClient from "../model/redis";
import jwt from "jsonwebtoken";
import * as User from "../repositories/user";
import { HTTP_CODES, HTTP_MESSAGES } from "../helpers/constants";
import { Response, Request, NextFunction } from "express";
const { OK } = HTTP_CODES;
const { SUCCESS } = HTTP_MESSAGES;
const GET_ACCESS_TOKEN = async (req: Request, res: Response, next: NextFunction) => {
try {
const { id } = req.user;
const payload = { id };
const usedToken = req.headers.authorization!.split(" ")[1];
redisClient.set("Blacklist_" + id, usedToken);
const token = jwt.sign(payload, process.env.JWT_ACCESS_SECRET!, {
expiresIn: process.env.JWT_ACCESS_TIME,
});
const refreshToken = GENERATE_REFRESH_TOKEN(id);
await User.updateToken(id, token);
return res.json({ status: OK, message: SUCCESS, payload: { token, refreshToken } });
} catch (err) {
next(err);
}
};
const GENERATE_REFRESH_TOKEN = function (id: string) {
const payload = { id };
const refreshToken = jwt.sign(payload, process.env.JWT_REFRESH_SECRET!, {
expiresIn: process.env.JWT_REFRESH_TIME,
});
redisClient.set(id, JSON.stringify({ token: refreshToken }));
return refreshToken;
};
export { GET_ACCESS_TOKEN, GENERATE_REFRESH_TOKEN };
|
/**
* The class implements a thread-safe LinkedHashMap with a maximal size. The keys are stored in insertion order.
* In contrast to other concurrent map implementations, the size operation is fast (constant in the map size),
* but might be imprecise. The size is updated at fixed intervals.
* <p/>
* Unfortunately the thread-safeness is difficult to test.
* <p/>
* The skipList is currently not implemented.
*
* @author jaschar
*/
public class ConcurrentSparkList implements Serializable{
private static final Logger log = Logger.getLogger(ConcurrentSparkList.class);
static {log.setLevel(org.apache.log4j.Level.DEBUG);}
private final Long2LongOpenHashMap data;
private JavaPairRDD<Long, Long> item2ReadCount;
private JavaPairRDD<Long, Long> item2timeStampData;
private AtomicInteger atomicInteger;
private Function2<Long, Long, Long> replaceValues = ((Function2<Long, Long, Long> & Serializable) (x, y) -> {
if (x > y) {
return x;
} else {
return y;
}
});
private int maxSize = 10;
private int numPartitions;
/**
* Constructor
*
* @param _maxSize set the maximal size
*/
public ConcurrentSparkList(final int _maxSize) {
// start spark node
SharedService.getInstance();
// check the parameter
if (_maxSize <= 2) {
throw new IllegalArgumentException("maxSize must not be <= 2");
}
atomicInteger = new AtomicInteger();
atomicInteger.set(0);
// set the parameters
this.maxSize = _maxSize;
data = new Long2LongOpenHashMap(maxSize);
item2ReadCount = SharedService.parallelizePairs(data);
item2timeStampData = SharedService.parallelizePairs(data);
numPartitions = item2ReadCount.context().defaultParallelism();
}
public boolean isEmpty() {
return item2ReadCount.isEmpty();
}
private void put(Long key, Long value) {
JavaPairRDD<Long, Long> filteredPairRDD = item2ReadCount.filter(t -> t._1().equals(key));
if(filteredPairRDD.isEmpty()) {
atomicInteger.getAndIncrement();
}
JavaPairRDD<Long, Long> newPair = SharedService.parallelizePairs(new Tuple2(key, value));
JavaPairRDD<Long, Long> timeStamp = SharedService.parallelizePairs(new Tuple2(key, System.currentTimeMillis()));
if (size() < maxSize) {
addNewElement(newPair, timeStamp);
} else {
// remove a old key
removeOldKey();
// add new pair
addNewElement(newPair, timeStamp);
}
}
private void removeOldKey() {
remove(getOldest()._1());
}
private void addNewElement(JavaPairRDD newPair, JavaPairRDD timeStamp) {
item2ReadCount = item2ReadCount
.union(newPair)
.coalesce(numPartitions, false)
.reduceByKey((v1, v2) -> (Long) v1 + (Long) v2, numPartitions)
.mapToPair((PairFunction<Tuple2<Long, Long>, Long, Long>) Tuple2::swap)
.sortByKey(false, numPartitions)
.mapToPair((PairFunction<Tuple2<Long, Long>, Long, Long>) Tuple2::swap);
item2timeStampData = item2timeStampData
.union(timeStamp)
.coalesce(numPartitions, false)
.reduceByKey(replaceValues)
.mapToPair((PairFunction<Tuple2<Long, Long>, Long, Long>) Tuple2::swap)
.sortByKey(true, numPartitions)
.mapToPair((PairFunction<Tuple2<Long, Long>, Long, Long>) Tuple2::swap);
}
public Long remove(Object key) {
Long oldValue = item2ReadCount
.collectAsMap()
.get(key);
if (oldValue != null) {
item2ReadCount = item2ReadCount.filter((Function<Tuple2<Long, Long>, Boolean>) t -> !t._1().equals(key));
item2timeStampData = item2timeStampData.filter((Function<Tuple2<Long, Long>, Boolean>) t -> !t._1().equals(key));
atomicInteger.decrementAndGet();
}
return oldValue;
}
public int size() {
return atomicInteger.get();
}
public void incrementKeyFrequency(final Long key) {
incrementKeyFrequency(key, 1);
}
/**
* @param key
*/
private void incrementKeyFrequency(final Long key, final long count) {
put(key, count);
}
public List<Long> deliver() {
return item2ReadCount
.map(t -> t._1())
.collect();
}
Tuple2<Long, Long> getOldest() {
return item2timeStampData.first();
}
Tuple2<Long, Long> getNewest() {
System.out.println("getNewest is inefficient, do not use it!!! it is just for debug proposes.");
return item2timeStampData.collect().get((int) item2timeStampData.count() - 1);
}
/**
*
*/
public Iterator<Tuple2<Long, Long>> getIterator() {
return item2ReadCount.collect().iterator();
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append("[ ");
for (Tuple2<Long, Long> entry : this.item2ReadCount.collect()) {
sb.append("(" + entry._1() + "|" + entry._2() + ")");
}
sb.append("] FIFO=>");
for (Long entry : this.item2ReadCount.keys().collect()) {
sb.append(entry + ", ");
}
return sb.toString();
}
public static void main(String[] args) {
ConcurrentSparkList me = new ConcurrentSparkList(3);
me.incrementKeyFrequency(6L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L, 2L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(2L, 2L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(2L, 2L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(2L, 2L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L, 2L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(3L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(3L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(3L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(4L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(1L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(5L, 1L);
System.out.println(me);
System.out.println(me.size());
me.incrementKeyFrequency(5L, 1L);
System.out.println(me);
System.out.println(me.size());
System.out.println("start the iterator (sort by value descending)");
for (Iterator<Tuple2<Long, Long>> iterator = me.getIterator(); iterator.hasNext(); ) {
Tuple2<Long, Long> entry = iterator.next();
System.out.println(entry._1() + "||" + entry._2());
}
System.out.println("get oldest tuple:");
System.out.println(me.getOldest());
System.out.println("get newest tuple:");
System.out.println(me.getNewest());
System.out.println("deliver ranking:");
System.out.println(me.deliver());
}
} |
// +build !linux,!darwin,!freebsd
package loadavg
import (
"fmt"
"runtime"
)
func loadAvg() ([3]float64, error) {
return [...]float64{-1, -1, -1}, fmt.Errorf("loadavg: unsupported platform %q", runtime.GOOS)
}
|
/*
© 2022–present <NAME> <<EMAIL>> (https://haraldrudell.github.io/haraldrudell/)
All rights reserved.
*/
package sqliter
import (
"database/sql"
"github.com/haraldrudell/parl"
"github.com/haraldrudell/parl/perrors"
_ "modernc.org/sqlite"
)
const (
sqLiteDriverName = "sqlite"
)
type DataSource struct {
*sql.DB
}
// NewDataSource gets a DataSource object that represents the databases in a directory
func NewDataSource(dataSourceName string) (dataSource parl.DataSource, err error) {
d := DataSource{}
if d.DB, err = sql.Open(sqLiteDriverName, dataSourceName); err != nil {
err = perrors.Errorf("sql.Open(%s %s): %w", sqLiteDriverName, dataSourceName, err)
return
}
dataSource = &d
return
}
|
// Upload uploads a list of transfers to storage, in parallel.
//
// Transfer events (started, failed, finished, etc) are communicated
// via the Transfer interface.
func Upload(ctx context.Context, store Storage, transfers []Transfer, parallelLimit int) {
wp := workerpool.New(parallelLimit)
for _, x := range transfers {
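// Copy the loop variable so each submitted closure captures its own transfer.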
x := x
wp.Submit(func() {
x.Started()
obj, err := store.Put(ctx, x.URL(), x.Path())
if err != nil {
x.Failed(err)
} else {
x.Finished(obj)
}
})
}
wp.StopWait()
} |
/**
* Registers a future in the in-flight futures collection. When it completes (either normally or exceptionally),
* it will be removed from the collection.
*
* @param future the future to register
*/
public void registerFuture(CompletableFuture<?> future) {
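// Add the future before registering the completion callback, so that a future which completes immediately is still removed.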
inFlightFutures.add(future);
future.whenComplete((result, ex) -> inFlightFutures.remove(future));
} |
The conventional view of emigration holds that it represents a loss of resources from a country and that the only possible policy response is to discourage new emigration while promoting return of those who have left. A new policy is needed based on a fuller understanding of the potential benefits of emigration for the country of origin. The cost of emigration is usually counted as the loss of educational investment, the loss of labor force, and the loss of the contributions to development that would have been made by talented emigrants. But such views usually do not include a serious treatment of the economic problems of labor supply and demand in general or of skilled labor in particular. Underemployment or unemployment of highly educated persons and overproduction of educated persons are problems throughout Latin America and much of the developing world. A truer evaluation of the costs of education which considered decreasing marginal costs rather than average costs per student, nominally variable costs that actually behave as fixed costs, and an adequate assignment of costs for students leaving school before graduating would lead to much lower estimates of average cost per university student in Latin America. Significant emigration may actually result indirectly in an increase in national income by reducing pressure on the labor market and allowing wages to rise for remaining workers. Remittances for emigrants and repatriation of savings may contribute significantly to national income and balance of payments, and may compensate for or even exceed the economic losses of emigration. National policy for emigrants should aim at maximizing the economic benefits of emigration by providing incentives for the accumulation of capital obtained abroad and its transfer to the country of origin. The 1st major goal of emigration policy should be to maintain affective and social ties between the emigrant and the country of origin as a necessary condition for channeling benefits to the country. Such factors as inclusive citizenship policies for spouses and children born abroad, provisions for absentee voting, communication and information programs, and recognition of education and professional title conferred abroad would help motivate a continuing interest in the country of origin. The 2nd policy goal should be to create concrete channels for different types of emigrant activities that would benefit the country of origin. This operational side of emigration policy would provide channels for the return to the country of capital and goods accumulated by the emigrant and would provide for cooperation in scientific endeavors, business and investment, and for social and humanitarian projects.
|
Memory is the process in which information is encoded, stored, and retrieved, and most people with a brain would have little problem retrieving information from the past six years unless their brain is dead. It is fair to say that when people think back on the past six years, they will likely remember that Republicans were responsible for damaging the world’s financial system and America’s economy twice, and are right on schedule to do it again. Americans should be used to regularly-scheduled financial crises due to Republicans playing hostage politics, and quite likely they have learned to accept it as the consequence of electing fascists to serve in Congress.
The current debt ceiling crisis marks the third time in six years Republicans were instrumental in creating havoc with the world, and America’s, economy through economic malfeasance and manufactured financial emergencies. Now that they are creating another crisis, they are maneuvering to plan the next emergency to take another shot at thwarting the Affordable Care Act before the end of the 2013 calendar year.
Each of the GOP’s economic disasters has cost the nation dearly, including the 2007-08 crash that killed tens-of-millions of jobs and decimated the world’s economy, and in 2011 they cost America a million jobs, nearly $19 billion in interest fees, and a sequester slated to kill over a million jobs within the next year. There is no telling what the current crisis will cost the nation or how devastating the next planned crisis will be to the economy, but if Republicans prevail there is little doubt the cost will be painful because the GOP is highly motivated to keep the economy in dire straits. While Americans may be getting accustomed to Republican-created financial crises, world leaders remember the level of damage they are capable of and lashed out at Congress for shutting down the government and playing politics with the debt ceiling.
While neo-confederates Rafael Cruz and Sarah Palin were minimizing the Republican shutdown in Washington D.C. over the weekend, world economic leaders were pleading, warning, and pressing America to “raise its debt ceiling and reopen its government or risk massive disruption the world over” according to Christine Lagarde, the International Monetary Fund’s (IMF) managing director. The World Bank, IMF, and world leaders were in Washington to talk about the international economic recovery until “they found out that the debt ceiling was the issue” and that Republicans shut down the government and made sure “there was no remedy in sight.” As if Treasury Secretary and Fed Chairman Ben Bernanke were unaware of the threat to the world’s economy, the leaders lectured them and predicted that even a near-default would lead to higher borrowing costs and a slowdown of the global economy; just what Republicans want.
The chairman of the French bank, BNP Paribas, said “This cannot happen, and this shall not happen. The consequences of this would be absolutely disastrous,” but he was preaching to the choir and likely unaware that Republicans could not care less about the consequences of their actions. Secretary Lew readily acknowledged that “Our work begins at home, and we recognize that the United States is the anchor of the international financial system, and the United States cannot take this hard-earned reputation for granted.” Based on the parade of economic experts decrying America’s flirtation with an economic catastrophe, the nation’s reputation as an economic leader is rapidly vanishing and likely damaged irreparably.
Many of the world’s high-ranking economic officials made open appeals to Congress replete with warnings from America’s allies and creditors alike. World Bank leader Jim Yong Kim said the “world is days away from a very dangerous moment” and that “the closer we get to the deadline the greater the impact will be for the world.” The German finance minister issued a stern directive to Congress that “the fiscal standoff has to be resolved without delay,” and IMF’s Lagarde said “that the lack of certainty, that lack of trust in the U.S. signature” will disrupt the world economy.” Jamie Dimon, chief executive of JPMorgan Chase painted a bleak picture of the days ahead if there is no resolution and warned that “As you get closer to it, the panic will set in and something will happen” and that “JPMorgan has spent huge amounts of time and money and effort to be prepared” because they remember the damage Republicans wreaked in 2008 and 2011 and understand their willingness to repeat their past performances.
It is a sad day indeed when the Secretary of the Treasury and Federal Reserve chairman of the world’s financial leader have to be chastised and dressed-down by foreign leaders because a group of fascist Republicans threaten the world’s economy and financial system. However, it is becoming a regular occurrence for leaders of foreign nations to question the legitimacy of America as a global leader as evidenced by criticism that “nobody knows if the country will still be solvent in three weeks. What is clear, though, is that America is already politically bankrupt,” or that “a rump in Congress is holding the whole place to ransom that doesn’t really jibe with the notion of the United States as a global leader.” The insults prompted Secretary of State John Kerry to tell world leaders that “When we get this moment of political silliness behind us, we will get back on a track the world will respect and want to be part of.”
After three different financial crises in six short years, it is highly unlikely that any nation will ever respect America again and they certainly will think twice about being any part of this country’s “track;” at least as long as Republicans remain a toxic political movement. The world’s problem now is that toxic Republicans are as unfazed at the prospect of creating a worldwide financial catastrophe as they are destroying America’s economy, and they appear legitimately pleased at the ruin their hostage politics are already wreaking on American citizens and financial markets around the world.
America had gone from being a respected nation under the last Democratic president to a hated warmonger and world economy destroyer under the Bush administration, and just when another Democrat saved this nation’s economy and brought stability to the world’s financial markets, Republicans deliberately caused a financial crisis within 8 months of taking control of the House of Representatives in 2011. Within a year-and-a-half they threatened to send the economy over the fiscal cliff in December 2012, and ten months later are two days away from sending the world’s economy back into the recession they created in 2007-2008.
The state of this nation under assault by Republican fascists is beyond embarrassing; it is distressing to say the very least. For their part, Republicans are undaunted at the level of damage they have wreaked on the nation thus far, and are seemingly delighted to create an economic catastrophe that will affect the entire world. It is a sad commentary indeed that a country that was once a respected and exceptional world leader has fallen so far from the world’s graces due to one political party that still cannot accept the result of two presidential elections, and their willingness to decimate the world’s economy and commit political suicide to punish the people for electing an African American man as President puts them in the same league as any extremist terrorist.
|
import sys
import requests
from datetime import datetime
with open(sys.argv[1], 'rb') as f:
    content = f.read()
content = content.splitlines()
data = []
for row in content:
    r = requests.get('http://oag.cottagelabs.com/lookup/' + row)
    if r.status_code != 200:
        print datetime.now(), 'OAG returned', r.status_code, '; First 100 resp. chars:', r.text[:100]
|
use std::os::raw::{c_int, c_float};
#[link(name="SpidarMouse")]
extern "C" {
fn OpenSpidarMouse() -> c_int;
fn SetForce(Force_XScale: c_float, Force_YScale: c_float, duration: c_int);
fn CloseSpidarMouse() -> c_int;
fn SetMinForceDuty(MinForceDuty: c_float);
fn SetDutyOnCh(
duty1: c_float,
duty2: c_float,
duty3: c_float,
duty4: c_float,
duration: c_int
);
}
pub fn open_spidar_mouse() -> i32 {
unsafe{ OpenSpidarMouse() }
}
pub fn set_force(force_x_scale: f32, force_y_scale: f32, duration: i32) {
unsafe{ SetForce(force_x_scale, force_y_scale, duration); }
}
pub fn close_spidar_mouse() -> bool {
unsafe{
match CloseSpidarMouse(){
0 => false,
_ => true
}
}
}
pub fn set_min_force_duty(min_force_duty: f32) {
unsafe{ SetMinForceDuty(min_force_duty); }
}
pub fn set_duty_on_ch(
duty1: f32,
duty2: f32,
duty3: f32,
duty4: f32,
duration: i32
){
unsafe{
SetDutyOnCh(
duty1,
duty2,
duty3,
duty4,
duration
);
}
} |
#pragma once
#include <string>
enum token_t {
TUNKNOWN = 0,
TEOF,
NL,
TAB,
SPACE,
TINVALID,
TINDENT,
TDEDENT,
TLINE,
TBACKSLASH,
TOP,
TID,
TCHAR,
TSTRING,
TFLOAT,
TDINT,
TXINT,
TRINT,
TLPAREN,
TRPAREN,
TLBRACKET,
TRBRACKET,
TLBRACE,
TRBRACE,
};
#define IS_INDENT(c) ((c) == SPACE || (c) == TAB)
#define IS_LGROUPER(c) \
((c) == TLPAREN || (c) == TLBRACE || (c) == TLBRACKET)
#define IS_RGROUPER(c) \
((c) == TRPAREN || (c) == TRBRACE || (c) == TRBRACKET || (c) == TEOF)
#define OPPOSITE_GROUPER(c) \
((c) == TLPAREN ? TRPAREN \
:(c) == TLBRACKET ? TRBRACKET \
:(c) == TLBRACE ? TRBRACE \
: TUNKNOWN)
extern const char* toknames[];
struct Token {
token_t token;
int line;
int column;
std::string lexeme;
Token() : token(TINVALID), line(-1), column(-1), lexeme("") {}
Token(int t, int l, int c, const char *s) : token((token_t)t), line(l), column(c), lexeme(s ? s : "") {
//std::cout << "made token " << toknames[token] << " = {" << lexeme <<"}"<< std::endl;
}
};
void initLexer(std::string &filename);
void closeLexer();
Token nextToken();
|
Oh Kate, you’re going to put us through hell!
Friday morning sees an event taking place in the UK that music fans thought would never happen when tickets go on sale for a series of Kate Bush concerts. Yes, the star most unlikely to step out on a stage by herself again has taken everyone by surprise with the announcement that she plans to do just that.
What happens next? Surely a mad, tearful scramble by fans for tickets, because it’s more than buying tickets to see a live performance by a much-loved British artist, it’s a chance to take part in a world event, a chance to say ‘yes I was there, I saw Kate Bush sing live before I died’.
It’s an opportunity not to be missed, but what do you have to go through to get there?
1. Dread
The night before you go to bed full of worry, what if you oversleep? What if your internet connection is lost over night? What if you don’t get tickets? The swirling dread you feel before a massive, scary event weighs heavy on you and you fall into a fitful sleep.
2. Optimism
You awake full of excitement, sure in the knowledge you are bound to be successful and pick up tickets to this great event with ease, there’s no way you can lose. The early bird catches the worm right? So you are up and getting ready early which leads on to…
3. Planning
An hour or two before kick off you:
Log on to all available computers and iPads in your home and set to ticket sites
Get out credit cards, phone, pens and paper and place neatly next to your computers
Make a note of seating areas, available dates and text friends and family to try and co-ordinate who should go for which dates
You’re sorted, you start drumming your fingers on the table in anticipation as you wait, which becomes…
4. Nervous excitement
9.29am. It’s time! It’s now or never! A flurry of nervous activity fuelled by adrenaline. You’re in, you start ticking boxes and entering your details. It’s slow, it’s busy, it’s not working for you. The site has crashed. Open up another tab and have another go. Your finger starts hitting refresh, refresh, refresh over and over again, you can’t stop. Now we’re in:
5. Panic mode
Someone on Twitter is crowing, they already have their tickets. You don’t. You hate them. You’re still ticking boxes and refreshing and trying to enter your credit card details again. Crash. Cry. Re-enter for the fifth time. Crash. Cry. You’re now just randomly pressing buttons.
This can only go one of two ways:
6a. Elation
‘Thank you for your purchase, enjoy the show.’ The message flashes up on the screen, you’ve only gone and done it! The tight ball of feeling you had in your tummy releases, you can breathe and immediately take to Twitter, letting all and sundry know of your success. Secure in the knowledge you have prevailed you can be magnanimous with your sympathy for those less fortunate. Take your shoes off and throw them in the lake, your work here is done.
Or:
6b. Horror
‘Sorry, we’re unable to fulfill your request.’ The window of opportunity is closing and you have been left out in the cold. Without tickets. Every night now looks to be sold out. You’ve been sitting there for less than five minutes, it feels like a lifetime. A lifetime in which you will never get to see Kate Bush live. Your head falls into your hands. It’s over. Or is it?
6c. Desperation
Turn horror into desperation by quickly ringing everyone you know to ask if they have a spare seat. Log onto eBay and set up a search alert for tickets. Declare you will turn up on the night and throw yourself on the mercy of the touts. Basically debase yourself. Put on ‘Running Up That Hill’ and promise that you too would do a ‘deal with god’: just let me get my hands on a ticket.
Fingers crossed that you do. (And me too!) |
15 Practical Grep Command Examples In Linux / UNIX
You should get a grip on the Linux grep command.
This is part of the on-going 15 Examples series, where 15 detailed examples will be provided for a specific command or functionality. Earlier we discussed 15 practical examples for Linux find command, Linux command line history and mysqladmin command.
In this article let us review 15 practical examples of Linux grep command that will be very useful to both newbies and experts.
First create the following demo_file that will be used in the examples below to demonstrate grep command.
$ cat demo_file THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE. this line is the 1st lower case line in this file. This Line Has All Its First Character Of The Word With Upper Case. Two lines above this line is empty. And this is the last line.
1. Search for the given string in a single file
The basic usage of grep command is to search for a specific string in the specified file as shown below.
Syntax: grep "literal_string" filename
$ grep "this" demo_file this line is the 1st lower case line in this file. Two lines above this line is empty. And this is the last line.
2. Checking for the given string in multiple files.
Syntax: grep "string" FILE_PATTERN
This is also a basic usage of grep command. For this example, let us copy the demo_file to demo_file1. The grep output will also include the file name in front of the line that matched the specific pattern as shown below. When the Linux shell sees the meta character, it does the expansion and gives all the files as input to grep.
$ cp demo_file demo_file1 $ grep "this" demo_* demo_file:this line is the 1st lower case line in this file. demo_file:Two lines above this line is empty. demo_file:And this is the last line. demo_file1:this line is the 1st lower case line in this file. demo_file1:Two lines above this line is empty. demo_file1:And this is the last line.
3. Case insensitive search using grep -i
Syntax: grep -i "string" FILE
This is also a basic usage of the grep. This searches for the given string/pattern case insensitively. So it matches all the words such as “the”, “THE” and “The” case insensitively as shown below.
$ grep -i "the" demo_file THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE. this line is the 1st lower case line in this file. This Line Has All Its First Character Of The Word With Upper Case. And this is the last line.
4. Match regular expression in files
Syntax: grep "REGEX" filename
This is a very powerful feature, if you can use regular expressions effectively. The following example searches for any pattern that starts with “lines” and ends with “empty”, with anything in between, i.e. it searches for “lines[anything in-between]empty” in the demo_file.
$ grep "lines.*empty" demo_file Two lines above this line is empty.
From documentation of grep: A regular expression may be followed by one of several repetition operators:
? The preceding item is optional and matched at most once.
* The preceding item will be matched zero or more times.
+ The preceding item will be matched one or more times.
{n} The preceding item is matched exactly n times.
{n,} The preceding item is matched n or more times.
{,m} The preceding item is matched at most m times.
{n,m} The preceding item is matched at least n times, but not more than m times.
5. Checking for full words, not for sub-strings using grep -w
If you want to search for a whole word and avoid matching it as a substring, use the -w option. A normal search shows every line that contains the pattern anywhere.
The following example is a regular grep searching for “is”. Without any option, it matches “is”, “his”, “this” and everything else that contains the substring “is”.
$ grep -i "is" demo_file THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE. this line is the 1st lower case line in this file. This Line Has All Its First Character Of The Word With Upper Case. Two lines above this line is empty. And this is the last line.
The following example is a word grep that searches only for the word “is”. Note that the output does not contain the line “This Line Has All Its First Character Of The Word With Upper Case”, even though “is” appears inside “This”, because grep is now looking only for the word “is” and not for “this”.
$ grep -iw "is" demo_file THIS LINE IS THE 1ST UPPER CASE LINE IN THIS FILE. this line is the 1st lower case line in this file. Two lines above this line is empty. And this is the last line.
6. Displaying lines before/after/around the match using grep -A, -B and -C
When grepping a huge file, it may be useful to see some context around each match. It is handy when grep can show you not only the matching lines but also the lines after/before/around the match.
Please create the following demo_text file for this example.
$ cat demo_text 4. Vim Word Navigation You may want to do several navigation in relation to the words, such as: * e - go to the end of the current word. * E - go to the end of the current WORD. * b - go to the previous (before) word. * B - go to the previous (before) WORD. * w - go to the next word. * W - go to the next WORD. WORD - WORD consists of a sequence of non-blank characters, separated with white space. word - word consists of a sequence of letters, digits and underscores. Example to show the difference between WORD and word * 192.168.1.1 - single WORD * 192.168.1.1 - seven words.
6.1 Display N lines after match
-A is the option which prints the specified N lines after the match as shown below.
Syntax: grep -A <N> "string" FILENAME
The following example prints the matched line, along with the 3 lines after it.
$ grep -A 3 -i "example" demo_text Example to show the difference between WORD and word * 192.168.1.1 - single WORD * 192.168.1.1 - seven words.
6.2 Display N lines before match
-B is the option which prints the specified N lines before the match.
Syntax: grep -B <N> "string" FILENAME
Just as -A shows the N lines after the match, -B shows the N lines before it.
$ grep -B 2 "single WORD" demo_text Example to show the difference between WORD and word * 192.168.1.1 - single WORD
6.3 Display N lines around match
-C is the option which prints the specified N lines around the match. On some occasions you might want the match to appear with lines from both sides. This option shows N lines on both sides (before and after) of the match.
$ grep -C 2 "Example" demo_text word - word consists of a sequence of letters, digits and underscores. Example to show the difference between WORD and word * 192.168.1.1 - single WORD
7. Highlighting the search using GREP_OPTIONS
Since grep prints the whole line that matches the given pattern / string, it can be hard to see which part of the line actually matched; to highlight it, proceed as follows.
When you do the following export you will get highlighting of the matched text. In the following example, every occurrence of “this” is highlighted once the GREP_OPTIONS environment variable is set as shown below.
$ export GREP_OPTIONS='--color=auto' GREP_COLOR='100;8' $ grep this demo_file this line is the 1st lower case line in this file. Two lines above this line is empty. And this is the last line.
8. Searching in all files recursively using grep -r
When you want to search in all the files under the current directory and its subdirectories, use the -r option. The following example looks for the string “ramesh” in all the files in the current directory and all its subdirectories.
$ grep -r "ramesh" *
9. Invert match using grep -v
You have seen options to show the matched lines, to show the lines before and after a match, and to highlight the match. Naturally, you also want the option -v to invert the match.
When you want to display the lines which do not match the given string/pattern, use the option -v as shown below. This example displays all the lines that do not contain the word “go”.
$ grep -v "go" demo_text 4. Vim Word Navigation You may want to do several navigation in relation to the words, such as: WORD - WORD consists of a sequence of non-blank characters, separated with white space. word - word consists of a sequence of letters, digits and underscores. Example to show the difference between WORD and word * 192.168.1.1 - single WORD * 192.168.1.1 - seven words.
10. Display the lines which do not match any of the given patterns.
Syntax: grep -v -e "pattern" -e "pattern"
$ cat test-file.txt a b c d $ grep -v -e "a" -e "b" -e "c" test-file.txt d
11. Counting the number of matches using grep -c
When you want to count how many lines match the given pattern/string, use the option -c.
Syntax: grep -c "pattern" filename
$ grep -c "go" demo_text 6
When you want to find out how many lines match the pattern
$ grep -c this demo_file 3
When you want to find out how many lines do not match the pattern
$ grep -v -c this demo_file 4
12. Display only the file names which matches the given pattern using grep -l
If you want grep to show only the names of the files that match the given pattern, use the -l (lower-case L) option.
When you give multiple files to grep as input, it displays the names of the files which contain text matching the pattern. This is very handy when you are trying to find some notes somewhere in your whole directory structure.
$ grep -l this demo_* demo_file demo_file1
13. Show only the matched string
By default grep shows the whole line which matches the given pattern/string, but if you want grep to show only the part of the line that matched, use the -o option.
It is not that useful when you search for a plain literal string, but it becomes very useful when you give a regex pattern and want to see exactly what it matched, as in:
$ grep -o "is.*line" demo_file is line is the 1st lower case line is line is is the last line
14. Show the position of match in the line
When you want grep to show the position where the pattern matches in the file, use the -o and -b options together, as follows:
Syntax: grep -o -b "pattern" file
$ cat temp-file.txt 12345 12345 $ grep -o -b "3" temp-file.txt 2:3 8:3
Note: The position printed by the grep command above is not the position within the line, it is the byte offset from the beginning of the whole file.
15. Show line number while displaying the output using grep -n
Use the -n option to show the line number along with each matched line. grep uses 1-based line numbering for each file.
$ grep -n "go" demo_text 5: * e - go to the end of the current word. 6: * E - go to the end of the current WORD. 7: * b - go to the previous (before) word. 8: * B - go to the previous (before) WORD. 9: * w - go to the next word. 10: * W - go to the next WORD.
|
Scheduling Parallel Tasks onto Opportunistically Available Cloud Resources
We consider the problem of opportunistically scheduling low-priority tasks onto underutilized computation resources in the cloud left by high-priority tasks. To avoid conflicts with high-priority tasks, the scheduler must suspend the low-priority tasks (causing waiting), or move them to other underutilized servers (causing migration), if the high-priority tasks resume. The goal of opportunistic scheduling is to schedule the low-priority tasks onto intermittently available server resources while minimizing the combined cost of waiting and migration. Moreover, we aim to support multiple parallel low-priority tasks with synchronization constraints. Under the assumption that servers' availability to low-priority tasks can be modeled as ON/OFF Markov chains, we have shown that the optimal solution requires solving a Markov Decision Process (MDP) that has exponential complexity, and efficient solutions are known only in the case of homogeneously behaving servers. In this paper, we propose an efficient heuristic scheduling policy by formulating the problem as restless Multi-Armed Bandits (MAB) under relaxed synchronization. We prove the indexability of the problem and provide closed-form formulas to compute the indices. Our evaluation using real data center traces shows that the performance result closely matches the prediction by the Markov chain model, and the proposed index policy achieves consistently good performance under various server dynamics compared with the existing policies. |
import java.io.*;
class Main {
public static void main(String[] args) throws Exception {
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
while(true) {
int width = Integer.parseInt(br.readLine());
if(width == 0) return;
int maxSize = 0;
boolean[][] map = new boolean[width][width];
int[] xy = new int[2];
for(int y = 0; y < width; y++) {
String str = br.readLine();
for(int x = 0; x < width; x++) {
if(str.charAt(x) == '*')
map[y][x] = true;
else {
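// '.' cell: grow the largest square whose bottom-right corner is (x, y), checking one more top row and left column per step.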
int nowSize = 1;
sqCheck: for(int cWidth = 1; x - cWidth >= 0 && y - cWidth >= 0; cWidth++) {
for(int cWidth2 = 0; cWidth2 <= cWidth; cWidth2++)
if(map[y - cWidth + cWidth2][x - cWidth] || map[y - cWidth][x - cWidth + cWidth2])
break sqCheck;
nowSize++;
}
if(maxSize < nowSize) maxSize = nowSize;
}
}
}
System.out.println(maxSize);
}
}
} |
#include "PseudoRandom.h"
pcg32_random_t random;
void InitSeedRNG()
{
pcg32_srandom_r(&random, time(NULL), (intptr_t)&random);
}
void InitSeedEntropy()
{
uint64_t seeds[2];
entropy_getbytes((void*)seeds, sizeof(seeds));
pcg32_srandom_r(&random, seeds[0], seeds[1]);
}
UID GenerateUUID()
{
return pcg32_random_r(&random);
}
int GetRandomBetween(int min, int max)
{
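/* Draw uniformly from [0, max - min] with a bounded PCG call, then shift into the inclusive range [min, max]. */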
return pcg32_boundedrand_r(&random, (max - min) + 1) + min;
}
|
import sys
from io import StringIO
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
if __name__ == "__main__":
datasets = sys.stdin.read().split("\n\n")
df = pd.DataFrame()
for dataset in datasets[:-1]:
algo, *table = dataset.split("\n")
tmp = pd.read_fwf(StringIO("\n".join(table)))
tmp["algorithm"] = algo
df = df.append(tmp)
df = df.melt(
id_vars=["algorithm", "i"],
value_vars=[x for x in df.columns if x.endswith("sort")],
var_name="arrangement",
value_name="time",
)
with plt.xkcd():
# fiddle with some of the xkcd defaults
# see https://matplotlib.org/3.1.1/_modules/matplotlib/pyplot.html#xkcd
mpl.rcParams.update(
{
"path.effects": [
mpl.patheffects.withStroke(linewidth=2.5, foreground="w")
],
"lines.linewidth": 2.25,
}
)
g = sns.relplot(
data=df,
x="i",
y="time",
hue="algorithm",
col="arrangement",
kind="line",
height=2.5,
aspect=1.5,
col_wrap=3,
)
g.set_axis_labels("Array size (2^N)", "Time (s)")
g.set_titles("{col_name}")
output = StringIO()
g.savefig(output, format="svg")
print(output.getvalue())
|
// Returns the maximum decrypted length of an "aesgcm" ciphertext.
static size_t
ece_aesgcm_max_decrypted_length(uint32_t rs, size_t ciphertextLen) {
ECE_UNUSED(rs);
return ciphertextLen;
} |
#include "cvd/image.h"
#include "cvd/utility.h"
#include "cvd/colourspace.h"
#include "cvd/colourspaces.h"
#include "cvd/image_convert.h"
#include <iostream>
namespace CVD {
namespace{
unsigned char saturate(int i)
{
if(i<0)
return 0;
else if(i>255)
return 255;
else
return i;
}
struct yuv422_ind{
static const int y1 = 0;
static const int uu = 1;
static const int y2 = 2;
static const int vv = 3;
};
struct vuy422_ind{
static const int y1=1;
static const int uu=0;
static const int vv=2;
static const int y2=3;
};
template<class C, class Ind>
void convert_422(const BasicImage<C>& from, BasicImage<Rgb<byte> >& to)
{
int yy, uu, vv, ug_plus_vg, ub, vr;
int r,g,b;
size_t bytes_per_row = from.size().x * 2;
for(int y=0; y < from.size().y; y++)
{
const unsigned char* yuv = reinterpret_cast<const unsigned char*>(from.data()) + bytes_per_row*y;
for(int x=0; x < from.size().x; x+=2, yuv+=4)
{
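// Fixed-point BT.601-style YUV -> RGB: coefficients are scaled by 256 and results shifted right by 8 (e.g. 359/256 is roughly the 1.402 weight of V in red).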
uu = yuv[Ind::uu] - 128;
vv = yuv[Ind::vv] - 128;
ug_plus_vg = uu * 88 + vv * 183;
ub = uu * 454;
vr = vv * 359;
yy = yuv[Ind::y1] << 8;
r = (yy + vr) >> 8;
g = (yy - ug_plus_vg) >> 8;
b = (yy + ub) >> 8;
to[y][x+0].red = saturate(r);
to[y][x+0].green = saturate(g);
to[y][x+0].blue = saturate(b);
yy = yuv[Ind::y2] << 8;
r = (yy + vr) >> 8;
g = (yy - ug_plus_vg) >> 8;
b = (yy + ub) >> 8;
to[y][x+1].red = saturate(r);
to[y][x+1].green = saturate(g);
to[y][x+1].blue = saturate(b);
}
}
}
template<class C, class Ind> void convert_422_grey(const BasicImage<C>& from, BasicImage<byte>& to)
{
//yuv422 / vuy422 is along the lines of yuyv
//which is 4 bytes for 2 pixels, i.e. 2 bytes per pixel
size_t bytes_per_row = from.size().x * 2;
for(int y=0; y < from.size().y; y++)
{
const unsigned char* yuv = reinterpret_cast<const unsigned char*>(from.data()) + bytes_per_row*y;
for(int x=0; x < from.size().x; x+=2, yuv+=4)
{
to[y][x+0] = yuv[Ind::y1];
to[y][x+1] = yuv[Ind::y2];
}
}
}
}
template<> void convert_image(const BasicImage<yuv422>& from, BasicImage<Rgb<byte> >& to)
{
convert_422<yuv422, yuv422_ind>(from, to);
}
template<> void convert_image(const BasicImage<yuv422>& from, BasicImage<byte>& to)
{
convert_422_grey<yuv422, yuv422_ind>(from, to);
}
template<> void convert_image(const BasicImage<vuy422>& from, BasicImage<Rgb<byte> >& to)
{
convert_422<vuy422, vuy422_ind>(from, to);
}
template<> void convert_image(const BasicImage<vuy422>& from, BasicImage<byte>& to)
{
convert_422_grey<vuy422, vuy422_ind>(from, to);
}
}
|
// NOTE: the package declaration and imports are omitted in this snippet; the referenced types
// (MethodCandidate, MethodMatchContext, QueryModel, TypeUtils, the interceptor classes, etc.)
// come from the surrounding Micronaut Data processor module.
/**
 * A save method for saving a single entity.
 *
 * @author graemerocher
 * @since 1.0.0
 */
public class SaveEntityMethod extends AbstractPatternBasedMethod implements MethodCandidate {

    public static final Pattern METHOD_PATTERN = Pattern.compile("^((save|persist|store|insert)(\\S*?))$");

    /**
     * The default constructor.
     */
    public SaveEntityMethod() {
        super(METHOD_PATTERN);
    }

    @Override
    public boolean isMethodMatch(MethodElement methodElement, MatchContext matchContext) {
        ParameterElement[] parameters = matchContext.getParameters();
        return parameters.length == 1 &&
                super.isMethodMatch(methodElement, matchContext) &&
                isValidSaveReturnType(matchContext, false);
    }

    @Nullable
    @Override
    public MethodMatchInfo buildMatchInfo(@NonNull MethodMatchContext matchContext) {
        VisitorContext visitorContext = matchContext.getVisitorContext();
        ParameterElement[] parameters = matchContext.getParameters();
        if (ArrayUtils.isNotEmpty(parameters)) {
            if (Arrays.stream(parameters).anyMatch(p -> p.getGenericType().hasAnnotation(MappedEntity.class))) {
                ClassElement returnType = matchContext.getReturnType();
                Class<? extends DataInterceptor> interceptor = pickSaveInterceptor(returnType);
                if (TypeUtils.isReactiveOrFuture(returnType)) {
                    returnType = returnType.getGenericType().getFirstTypeArgument().orElse(returnType);
                }
                if (matchContext.supportsImplicitQueries()) {
                    return new MethodMatchInfo(returnType, null, interceptor, MethodMatchInfo.OperationType.INSERT);
                } else {
                    return new MethodMatchInfo(
                            returnType,
                            QueryModel.from(matchContext.getRootEntity()),
                            interceptor,
                            MethodMatchInfo.OperationType.INSERT
                    );
                }
            }
        }
        visitorContext.fail(
                "Cannot implement save method for specified arguments and return type",
                matchContext.getMethodElement()
        );
        return null;
    }

    /**
     * Is the return type valid for saving an entity?
     * @param matchContext The match context
     * @param entityArgumentNotRequired Whether an entity argument is not required
     * @return True if the return type is valid
     */
    static boolean isValidSaveReturnType(@NonNull MatchContext matchContext, boolean entityArgumentNotRequired) {
        ClassElement returnType = matchContext.getReturnType();
        if (TypeUtils.isReactiveOrFuture(returnType)) {
            returnType = returnType.getFirstTypeArgument().orElse(null);
        }
        return returnType != null &&
                returnType.hasAnnotation(MappedEntity.class) &&
                (entityArgumentNotRequired ||
                        returnType.getName().equals(matchContext.getParameters()[0].getGenericType().getName()));
    }

    /**
     * Pick a runtime interceptor to use based on the return type.
     * @param returnType The return type
     * @return The interceptor
     */
    private static @NonNull Class<? extends DataInterceptor> pickSaveInterceptor(@NonNull ClassElement returnType) {
        Class<? extends DataInterceptor> interceptor;
        if (TypeUtils.isFutureType(returnType)) {
            interceptor = SaveEntityAsyncInterceptor.class;
        } else if (TypeUtils.isReactiveOrFuture(returnType)) {
            interceptor = SaveEntityReactiveInterceptor.class;
        } else {
            interceptor = SaveEntityInterceptor.class;
        }
        return interceptor;
    }
} |
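To make METHOD_PATTERN concrete, here is a hedged sketch of the kind of repository interface this candidate is meant to match. The Book entity, the BookRepository name and the repository annotation are assumptions for illustration, not taken from the snippet.

// Hypothetical Micronaut Data repository; Book is assumed to be annotated with @MappedEntity,
// and Mono is Project Reactor's reactive type.
@Repository
interface BookRepository {

    // Matched: "save" prefix, a single entity argument, and an entity return type.
    Book save(Book book);

    // Also matched: reactive/future return types are unwrapped to their first type argument.
    Mono<Book> persist(Book book);

    // Not matched by SaveEntityMethod: two parameters fail the parameters.length == 1 check.
    Book saveWithAudit(Book book, String auditUser);
}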
#!/bin/python3
import logging
import sys
from os import path

baseDir = path.dirname(path.abspath(__file__))
sys.path.append(baseDir + "/testing-lib")

mosquittoBinary = "/usr/sbin/mosquitto"
mosquittoPort = 6699

gatewayBinary = baseDir + "/../build_x86/src/beeeon-gateway"
gatewayStartup = baseDir + "/../conf/gateway-testing.ini"
gwsPort = 8850

logging.basicConfig(level=logging.DEBUG)
|
package com.readlearncode.lesson2.section2.subsection1;

/**
 * Source code github.com/readlearncode
 *
 * @author <NAME> www.readlearncode.com
 * @version 1.0
 */
public class WhileLoop {

    public static void main(String... args) {
        int count = 0;
        while (count < 100) {
            System.out.println(count++);
        }

        // Without braces, only the single following statement belongs to the loop.
        while (count < 100)
            System.out.println(count++);
        System.out.println("step");

        while (isAlive()) System.out.println("Johnny 5 is Alive");

        while (isAlive())
            System.out.println("Johnny 5 is Alive");

        while (isAlive())
            // Johnny forever
            System.out.println("Johnny 5 is Alive");

        while (isAlive())
            System.out.println("Johnny 5 is Alive");

        while (isAlive()) {
            System.out.println("Johnny 5 is Alive");
        }

        boolean heartbeat = true;
        System.out.println("Johnny 5 is Alive");
        while (heartbeat && count > 10) {
            count--;
        }
    }

    public static boolean isAlive() {
        return true;
    }
} |
/**
 * Triggers cleanup of {@link de.codecentric.boot.admin.server.domain.entities.Instance}
 * specific data in the {@link PerInstanceCookieStore} on receiving an
 * {@link InstanceDeregisteredEvent}.
 */
public class CookieStoreCleanupTrigger extends AbstractEventHandler<InstanceDeregisteredEvent> {

    private final PerInstanceCookieStore cookieStore;

    /**
     * Creates a trigger to clean up the cookie store on deregistration of an
     * {@link de.codecentric.boot.admin.server.domain.entities.Instance}.
     * @param publisher publisher of {@link InstanceEvent}s
     * @param cookieStore the store to inform about deregistration of an
     * {@link de.codecentric.boot.admin.server.domain.entities.Instance}
     */
    public CookieStoreCleanupTrigger(final Publisher<InstanceEvent> publisher,
            final PerInstanceCookieStore cookieStore) {
        super(publisher, InstanceDeregisteredEvent.class);
        this.cookieStore = cookieStore;
    }

    @Override
    protected Publisher<Void> handle(final Flux<InstanceDeregisteredEvent> publisher) {
        return publisher.flatMap((event) -> {
            cleanupCookieStore(event);
            return Mono.empty();
        });
    }

    private void cleanupCookieStore(final InstanceDeregisteredEvent event) {
        cookieStore.cleanupInstance(event.getInstance());
    }
} |
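A hedged wiring sketch: in Spring Boot Admin, event-handler triggers like this are usually registered as beans whose start()/stop() lifecycle methods subscribe to and unsubscribe from the event publisher; the configuration class and bean names below are assumptions, not part of the snippet.

// Hypothetical Spring configuration; assumes AbstractEventHandler exposes start()/stop()
// lifecycle methods, as the other Spring Boot Admin triggers do.
@Configuration
class CookieStoreCleanupConfiguration {

    @Bean(initMethod = "start", destroyMethod = "stop")
    CookieStoreCleanupTrigger cookieStoreCleanupTrigger(Publisher<InstanceEvent> events,
            PerInstanceCookieStore cookieStore) {
        return new CookieStoreCleanupTrigger(events, cookieStore);
    }
}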
// Strings creates a required-argument flag that accepts string values and allows
// more than one value to be specified.
//
// Parameters:
//   names []string  The names accepted on the command line for this flag, e.g. -v --verbose
//   def   string    The argument name of the strings that are appended (e.g. the val in --opt=val)
//   help  string    The help text (automatically Expand()ed) to display for this flag
//
// Returns:
//   *[]string  Points to a []string whose value will contain the strings passed as flags.
func Strings(names []string, def string, help string) *[]string {
	s := make([]string, 0, 1)
	f := func(ss string) error {
		// Collect each occurrence of the flag into the backing slice.
		s = append(s, ss)
		return nil
	}
	ReqArg(names, def, help, f)
	return &s
} |
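A hedged usage sketch of the helper above; ReqArg and the parsing entry point belong to this (unnamed) options package, so the Parse() call below is an assumption and only Strings() itself comes from the snippet.

// Hypothetical usage: collect repeated --include values into a slice.
includes := Strings([]string{"-I", "--include"}, "DIR", "Directories to search (may be repeated)")

Parse() // assumed parse call, e.g. for: tool -I /usr/include --include=/opt/include

for _, dir := range *includes {
	fmt.Println("search path:", dir)
}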
def _triweighted_histogram_kernel(x, sig, lo, hi):
    # Weight assigned to the [lo, hi) bin by a triweight kernel of width `sig` centred on `x`,
    # computed as the difference of the cumulative kernel at the two bin edges
    # (assuming _tw_cuml_kern(x, m, sig) is the CDF-like cumulative triweight kernel).
    a = _tw_cuml_kern(x, lo, sig)
    b = _tw_cuml_kern(x, hi, sig)
    return a - b |
/**
 * Creates a grid geometry with the given extent and scale factors for testing purposes.
 * An arbitrary translation of (200, 500) is added to the "grid to CRS" conversion.
 */
private static GridGeometry grid(int xmin, int ymin, int xmax, int ymax, int xScale, int yScale) throws TransformException {
    GridExtent extent = new GridExtent(null, new long[] {xmin, ymin}, new long[] {xmax, ymax}, true);
    Matrix3 gridToCRS = new Matrix3();
    gridToCRS.m00 = xScale;
    gridToCRS.m11 = yScale;
    gridToCRS.m02 = 200;    // arbitrary translation in x
    gridToCRS.m12 = 500;    // arbitrary translation in y
    return new GridGeometry(extent, PixelInCell.CELL_CORNER, MathTransforms.linear(gridToCRS), null);
} |
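A short usage note for the helper above; the call below is only an illustration of what the resulting geometry looks like.

// Hypothetical call: grid cells (0..9, 0..9), cell size 2 x 3, plus the fixed translation.
GridGeometry geometry = grid(0, 0, 9, 9, 2, 3);
// The "grid to CRS" conversion then maps a cell corner (gx, gy) to (2 * gx + 200, 3 * gy + 500).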
The literature data and the results of the authors' own investigations are presented on the microarchitectonics and development of the molecular layer (layer I) of the mammalian neocortex. It originates from the marginal zone of the primordial plexiform layer, shared with the primitive neopallial primordium of reptiles and amphibians, which maintains its initial organization during the phylo- and ontogenesis of vertebrates. During the initial stages of corticogenesis, all migrating neurons establish contacts with Cajal-Retzius cells, which coordinate the location and exact spatial stratification of neuroblasts in the growing cortical plate. A detailed analysis of the fundamental mechanisms controlling the embryogenesis of the neocortex is presented, including a) the histogenesis of pyramidal and non-pyramidal neurons; b) the unifying theory of cytoarchitectonic differentiation of the neocortex proposed by M. Marin-Padilla; and c) the factors of cytoarchitectonic differentiation of cortical areas and specialization of the brain hemispheres in mammalian ontogenesis and evolution. The thesis that the morphofunctional maturation of both pyramidal and non-pyramidal neurons begins as a result of their contacts with the system of thalamocortical afferent fibers is substantiated. These fibers grow in from the subcortical white matter, initiate layer-by-layer ascending cortical maturation and, ultimately, divide it into discrete functional territories. |
// Next iterates over all inserted tables once,
// returning a single TableInsertErrors every call.
// Calling Next() multiple times will consequently return more tables,
// until all have been returned.
//
// The function returns true if a non-nil value was fetched.
// Once the iterator has been exhausted, (nil, false) will be returned
// on every subsequent call.
func (insert *InsertErrors) Next() (*TableInsertErrors, bool) {
	if len(insert.Tables) == 0 {
		return nil, false
	}
	var table *TableInsertErrors
	table, insert.Tables = insert.Tables[len(insert.Tables)-1], insert.Tables[:len(insert.Tables)-1]
	return table, true
} |
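A hedged sketch of the draining loop the comment describes; the surrounding insertErrs value and the printing are illustrative only, since the fields of TableInsertErrors are not shown in this snippet.

// Hypothetical consumption loop; only Next() is taken from the snippet above.
for {
	tableErrs, ok := insertErrs.Next()
	if !ok {
		break // exhausted: every further call keeps returning (nil, false)
	}
	fmt.Printf("insert errors for one table: %+v\n", tableErrs)
}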
/**
 * Adds a combo box selector for an attribute that holds a language
 * ("lang") value to the upper attribute panel.
 */
protected void addLanguageAttribute()
{
    String lang = elem.getAttribute("lang");
    String tag = getClass().getName().substring(
            getClass().getName().lastIndexOf('_') + 1) + "_lang";
    JLabel label = new JLabel(BUNDLE.getString(tag));
    JComboBox combo = new JComboBox(tewin.getLanguages());
    combo.setSelectedItem(lang);
    combo.setToolTipText(BUNDLE.getString(tag + "_tip"));
    combo.setActionCommand("lang");
    combo.addActionListener(this);
    addAttribute("lang", label, combo);
} |
def topological_sort_from_leaves(leaf_nmtensors: List[NmTensor], cached_training_state: 'TrainingState' = None):
    """Topologically sort the DAG of modules that produced the given leaf NmTensors.

    Returns a list of (module, producer_args, output_tensor_map) tuples ordered so that
    every module appears after the producers of its inputs. If cached_training_state is
    given, tensors already cached there are treated as satisfied inputs.
    """

    def create_node(producer, producer_args):
        # A node is identified by the producer module plus a hashable view of its input tensors.
        if producer_args is None:
            return tuple((producer, ()))
        return tuple((producer, tuple([(k, v) for k, v in producer_args.items()]),))

    def is_in_degree_zero(node, processed_nodes, cached_training_state):
        # A node is ready when it has no inputs, or every input's producer is processed or cached.
        if node[1] == ():
            return True
        for _, nmtensor in node[1]:
            node = create_node(nmtensor.producer, nmtensor.producer_args)
            if node not in processed_nodes:
                if cached_training_state and cached_training_state.check_tensor_cached(nmtensor.unique_name):
                    continue
                return False
        return True

    hooks = leaf_nmtensors if isinstance(leaf_nmtensors, list) else [leaf_nmtensors]

    # Remove duplicate leaf tensors while preserving order.
    processed_nmtensors = set()
    indices_to_remove = []
    for i, nmtensor in enumerate(hooks):
        if nmtensor in processed_nmtensors:
            indices_to_remove.append(i)
        else:
            processed_nmtensors.add(nmtensor)
    for i in reversed(indices_to_remove):
        hooks.pop(i)

    _top_sorted_modules = []
    all_nodes = {}

    # Walk the graph backwards from the leaves, collecting every producer node
    # together with the tensors observed on its output ports.
    hooks_lst = list(hooks)
    while len(hooks_lst) > 0:
        nmtensor = hooks_lst.pop()
        producer_args = nmtensor.producer_args
        node = create_node(nmtensor.producer, producer_args)
        if node not in all_nodes:
            all_nodes[node] = {k: None for k in nmtensor.producer.output_ports}
        all_nodes[node][nmtensor.output_port_name] = nmtensor
        processed_nmtensors.add(nmtensor)

        new_tensors = set()
        if producer_args is not None and producer_args != {}:
            for _, new_nmtensor in producer_args.items():
                if new_nmtensor not in processed_nmtensors:
                    new_tensors.add(new_nmtensor)
            if cached_training_state:
                for _, input_nmtensor in producer_args.items():
                    if cached_training_state.check_tensor_cached(input_nmtensor.unique_name):
                        new_tensors.remove(input_nmtensor)

        new_tensors = sorted(list(new_tensors), key=lambda x: str(x))
        for new_nmtensor in new_tensors:
            hooks_lst.insert(0, new_nmtensor)

    all_node_with_output = []
    for node in all_nodes:
        all_node_with_output.append(tuple((node[0], node[1], all_nodes[node])))

    # Kahn-style sort: repeatedly emit nodes whose inputs are all processed (or cached).
    processed_nodes = []
    while len(all_node_with_output) > 0:
        for node in all_node_with_output.copy():
            if is_in_degree_zero(node, processed_nodes, cached_training_state):
                _top_sorted_modules.append(node)
                processed_nodes.append((node[0], node[1]))
                all_node_with_output.remove(node)

    top_sorted_modules = []
    for i, mod in enumerate(_top_sorted_modules):
        top_sorted_modules.append((mod[0], dict(mod[1]), mod[2]))
        if i > 0 and mod[0].type == ModuleType.datalayer:
            raise ValueError("There were more than one DataLayer NeuralModule inside your DAG.")
        if cached_training_state and mod[0].type == ModuleType.datalayer:
            raise ValueError("Could not compute tensor from current cached training state.")

    return top_sorted_modules |