text
stringlengths
8
5.74M
label
stringclasses
3 values
educational_prob
listlengths
3
3
Canning Parish, New Brunswick Canning is a Canadian parish in Queens County, New Brunswick. History Canning Parish set off from Waterborough Parish in 1827, and was named for the Right Honourable George Canning, (1770–1827) who was prime minister of the United Kingdom. It included part of Chipman Parish until 1835. Delineation The Parish is defined in the Territorial Division Act as being bounded: ''Northeast by Chipman Parish; northwest by the County line; southwest by the Saint John River, and southeast by Cambridge and Waterborough Parishes. Communities Parish population total does not include incorporated municipalities (in bold). Back Road Canning Clarks Corners Douglas Harbour Flowers Cove Lake Road Maquapit Lake Newcastle Center Newcastle Centre Newcastle Creek Princess Park Scotchtown Sunnyside Beach Sypher Cove Upper Gagetown Wuhr's Beach Road Bodies of water & Islands Rivers, lakes, streams, creeks, marshes and Islands that are at least partially in this parish include: Grand Lake Maquapit Lake Pickerel Pond Nature Preserve Grand Lake Meadow Lower Timber Lake The Keyhole Demographics Population Population trend Language Mother tongue (2016) Access Routes Highways and numbered routes that run through the parish, including external routes that start or finish at the parish limits: Highways Principal Routes Secondary Routes: External Routes: None See also List of parishes in New Brunswick References Category:Parishes of Queens County, New Brunswick Category:Local service districts of Queens County, New Brunswick
Mid
[ 0.649038461538461, 33.75, 18.25 ]
Quebec Premier Philippe Couillard speaks during a news conference in Montreal, Thusday, February 8, 2018, where he announced details of new automated light rail system for the Montreal region. A nascent federal agency designed to find new ways to finance construction of transit systems is making its first investment in a multi-billion-dollar electric rail system in Montreal. THE CANADIAN PRESS/Graham Hughes Montreal gets $1.2B federal loan for electric rail Money comes from financing agency created last year as an infrastructure bank for major projects A nascent federal agency designed to find new ways to finance construction of transit systems is making its first investment in a multibillion-dollar electric rail system in Montreal. The Canada Infrastructure Bank will provide a $1.28-billion loan to help build the $6.3-billion system largely managed and funded by Quebec’s pension regime, with interest rates rising from one per cent to three per cent over the 15-year term. The loan frees up previously pledged federal money for the project, which can now be put towards other Quebec infrastructure plans. The transit project, best known by its French acronym REM, had been singled out by the Trudeau Liberals as a potential early win for the financing agency that was created last year to hand out $35 billion in federal financing in the hopes of prying much more than that from private backers to fund construction work. About $15 billion may not be recovered, while the remaining $20 billion is in loans the government expects to recoup. The federal finance minister has to sign off on any financing requests. The agency’s president said the first financing agreement puts substance to the concept of the infrastructure bank. Pierre Lavallee said the agency will customize its contribution to future projects based on specific financing needs, just as the loan used for REM was tailored to the project. “We have the capability and the capacity to invest in important projects and to do so at scale,” he said in an interview. “Hopefully this will send a good signal to future potential partners, both public and private, that we’re here to help.” The agency has yet to publish a list of projects it believes are ripe for private backing, but government documents show officials planned to work with provinces, territories and cities to form the list that would provide a five-year time horizon. Lavallee couldn’t say when the list will be published. The infrastructure bank’s first announcement has been many months in the works, even as critics have complained that the agency — and the government’s infrastructure funding more generally — have been slow to get off the ground. “What we said from the beginning is that the infrastructure bank would allow (us) to do more for Canadians and this is what we’re doing,” Infrastructure Minister Francois-Philippe Champagne told reporters at a cabinet retreat in Nanaimo, B.C. “That’s a flagship project where we think we can attract foreign investors as well and this is our first (project), so we need to talk about it.” Wednesday’s announcement from the infrastructure agency arrived on the eve of Quebec’s provincial election campaign scheduled to launch Thursday. The Quebec Liberals face a steep climb to remain in office after the Oct. 1 vote. Meanwhile, the federal Liberals’ infrastructure program received another critical review from a parliamentary watchdog questioning spending progress and economic impacts. 
A report Wednesday from Parliament’s budget officer estimated the first tranche of money in the Liberals’ infrastructure program boosted the economy last year by at most 0.16 per cent and created no more than 11,600 jobs. Budget officer Jean-Denis Frechette’s report said economic effects from the $14.4 billion budgeted in the first phase could eventually be wiped out if the Bank of Canada continues to raise interest rates. Frechette also noted that provinces have cut back their planned infrastructure spending — which the Liberals had hoped to avoid — further eroding economic impacts. Federal infrastructure funding can’t flow to provinces and cities until they submit receipts, which often creates a lag between when work happens and when the federal government pays out. Frechette’s report said $6.5 billion has been spent to date from the $14.4 billion, with large shortfalls under the “green infrastructure” banner supposed to address climate change and public transit, where $282 million of $3.4 billion has been spent. The report said $1.4 billion in transit money is scheduled to leave the federal treasury next year — just in time for the October 2019 federal election.
High
[ 0.674884437596302, 27.375, 13.1875 ]
The anterior cricoid split explored via the canine model, preliminary studies. Since Cotton and Seid introduced a new surgical procedure, the anterior cricoid split in 1980, the treatment of the difficult-to-extubate infant or child has changed dramatically. The mechanics of how the procedure works are poorly understood. This study was undertaken to investigate the effects of the anterior cricoid split on the cricoid cartilage. The technique was modified so as to allow placement and maintenance of an endotracheal tube but still allow normal activity in the canine subjects. Australian Shepherd puppies were divided into 3 groups. Group 1 underwent the anterior cricoid split procedure with placement of an endotracheal tube stent, Group 2 underwent the anterior cricoid split procedure without the use of a stent, and Group 3 served as controls. All animals were sacrificed at 12 weeks of age. The results show that there was an actual gap in the cricoid cartilage in all animals that underwent the anterior cricoid split procedure. Stenting with an endotracheal tube significantly increased this gap. These results suggest that in the canine model the anterior cricoid split may be used to actually increase the size of the subglottic space.
High
[ 0.657963446475195, 31.5, 16.375 ]
Mr. Food: Tasty Fool April Fool's Day takes on a whole new meaning as far as Mr. Food is concerned. Today the gourmet guru is serving up a tasty dessert. In fact, you'd have to be a "fool" not to try it. With today being April Fool’s Day, there are already enough pranks to go around, so why not stand out from the crowd and give 'em a really tasty surprise that's known as a "fool,” which fits perfectly with the holiday? But it's no joke. You see, a fool is simply a dessert made from fruit and cream. How it got its funny name is a mystery, but there's no mystery about why everybody loves 'em, 'cause they're so easy to make, we don't even need a recipe! In a saucepan we combine 4 cups cut-up fruit, maybe strawberries, peaches, apples or pears, whatever we've got on hand, along with about 1/2 cup sugar, and about 2 to 3 tablespoons of either water, lemon juice, or even liqueur, like banana or peach liqueur! Mix and match the combos, it's your choice. Lightly cook it until the fruit is tender, and when it cools, some fool fans like to layer it in tall dessert glasses along with whipped cream, while others say a fool has to be mixed with the whipped cream, like this, you know, almost like a mousse! But either way, simply chill it and it's ready to serve. I mean, it's fruit, sugar and liquid mixed with whipped cream. Can't be easier, smoother, or tastier! Look, here's a fresh strawberry fool with the perkiness of lemon juice and some lemon zest for extra zip! And here's a brandied peach fool. Talk about a fancy fool! No matter how we make it, layered or mixed in, it's always a "fool-proof" treat! Anyone here in the studio feel like a fool? Going to be a lot of "OOH IT'S SO GOOD!!®" Online Public Information File Viewers with disabilities can get assistance accessing this station's FCC Public Inspection File by contacting the station with the information listed below. Questions or concerns relating to the accessibility of the FCC's online public file system should be directed to the FCC at 888-225-5322, 888-835-5322 (TTY), or [email protected].
Mid
[ 0.545090180360721, 34, 28.375 ]
Thursday marks the 50th day of a near-total regime blockade of the northern Outer Damascus town of al-Tal, reportedly home to one million internally displaced Syrians. A large number of the displaced in al-Tal have fled intense regime bombardment in the city of Douma, eight kilometers to the southeast, in order to take advantage of the security in al-Tal, which was under an unofficial truce with the regime for two years. Originally home to 100,000 people, there are now more than one million internally displaced Syrians living inside the city. The regime blockade of al-Tal, 11km north of Damascus, began after rebels killed a regime soldier who entered the city in July, and coincided time-wise with similar blockades in a number of other Outer Damascus towns. In two of those towns, al-Hameh and Qudsiya, a “reconciliation agreement” was reached more than a week ago promising relief for the trapped residents in return for significant rebel concessions, which they made. Despite ongoing attempts to lift the siege on al-Tal through mediation with the regime, the city remains under a stiff blockade, Ahmad Bayanouni, spokesman for the al-Tal Local Coordination Council tells Ammar Hamou. Charitable organizations in the city have announced “they can no longer provide for the displaced,” Bayanouni says, and although residents are dying of treatable illnesses, reconciliation seems out of reach. “The regime’s demands are impossible.” Q: Why did the siege end in Qudsiya and al-Hameh but not in al-Tal? Essentially, the siege on Qudsiya and al-Hameh was broken after they accepted the regime’s conditions, including the normalization with the rebels and their inclusion into the [pro-regime] National Defense Forces. In al-Tal, rebels have not accepted these conditions. [Editor’s note: for more on the terms of the agreement, see here] Q: How are local leaders working to lift the siege? How is the regime responding? The city’s leaders are exerting a large effort to lift the blockade of the city. They met with a regime delegation a few days ago, which promised them to open the road and bring necessities into the city, but the promises have not been carried out. Q: Can you give us a glimpse of the medical situation in the city? This has impacted the city the most. Most medicine is gone from the pharmacies, especially medicine for those with chronic illnesses such as heart conditions, high blood pressure and diabetes. The only hospital in the city is on its way to shutting down because of the lack of medicine and its inability to carry out emergency operations and dialysis. Everything related to surgery or needing electrical devices has been impacted by the blockade and the lack of fuel to work the generators. A number of deaths have been recorded from chronic illnesses because of the dire situation in the city and the prevention of the sick from leaving to be treated. Q: And the food situation? The city’s residents have exhausted the food, and there is no food in the stores, 80 percent of which are closed. People have also used up their reserves of food in their homes and are rationing in order to survive. The Red Crescent and charitable organizations that give assistance to the displaced families in the city have announced that their food reserves have dried up, and they can no longer provide for the displaced. Q: Do you expect the blockade will be lifted soon? 
The regime’s demands are impossible for rebels to carry out, and with the initiatives of city leaders being refused by the regime, I do not see the blockade being lifted soon. Q: Do regime soldiers at checkpoints surrounding the city abide by the blockade, or do they look the other way in an attempt to indirectly help the people [of al-Tal]? Unfortunately, they abide by the siege completely. They have not even allowed employees [coming in from outside the city] to bring in small bags of vegetables, which they destroy. One exception was when two cars brought vegetables into the city, and they sold within minutes for exorbitant prices.
Low
[ 0.511156186612576, 31.5, 30.125 ]
No Joke: I Invested In an Improv Comedy Theater www.justthefunny.com Nearly three years ago, I decided to give improvisational comedy classes a shot. A year later I was not only a graduate of the five levels taught at Just the Funny in Miami but also a member of the performing cast and one of the owners. Yes, one of its owners. It's been an interesting ride. A great improv scene begins in the middle of the action, but I'll betray that tenet to provide a little color first. Laughing Matter One of the perks -- or burdens -- of financial reporting is that you sometimes attract the attention of news broadcasters. I've been on CNBC (CMCSA) more than a half-dozen times, and I've also had a few stints on Fox Business (FOX), PBS, CNBC Asia and CNN en Espanol. I always left the sets feeling as if I could have done a better job, so I explored my options. I narrowed down my choices to Toastmasters -- the educational organization that specializes in improving public speaking -- and improv. I had long been a big fan of improvisational comedy, whereby comedic actors create scenes based on audience suggestions. I made it a point to check out the now sadly defunct Comedy Warehouse shows at Disney's (DIS) Pleasure Island whenever I was in Orlando. I had also made it out to a couple of shows at Just the Funny closer to home in Miami. I wasn't sure what I would get out of the seven-week introductory course, but like most of my classmates, I became hooked. Most of us wound up sticking together through 35 Monday nights in 2012 to complete all five class levels. Along the way, I successfully auditioned to join the cast. Yes, and ... By the end of 2012 the theater wasn't on very firm financial footing. The two owners -- one a founding troupe member and the other who had joined shortly after its inception -- decided to open up ownership opportunities. They made a financial presentation before nearly a dozen of its most active members, offering to sell half of the company. The price was fair and the opportunity was juicy, but just three of us stepped up to invest. When the legal eagles finalized all of the documents, I had a 22.5 percent stake in the place that had become like a second home over the year prior. As in a good improv scene, we new owners were dropped right into the action. The theater had just $976 in the bank and our monthly $3,200 rent payment was due in a few days. The money that we had just invested had gone to the two original owners, so we had to scramble to make sure that we had enough improv students and well-attended weekend shows to bring in the money we'd need to stay afloat. We opened up the management ranks, assigning owners and some active cast members to nine leadership positions. As VP of finance, I was tasked with keeping the books and paying the bills, but everyone was encouraged to pitch in where and when they could. It worked. A couple of favorable tailwinds materialized in 2013. "Whose Line Is It Anyway?" returned to the airwaves, giving mainstream TV-viewing audiences a fresh taste of improv that resulted in a spike in our student registrations. Coconut Grove's Miami Improv -- an iconic stand-up venue -- shut down in November, sending comedy-starved patrons to our shows for a different spin on comedic entertainment. The Show Must Go On Our pre-tax operating profit in 2013 was nearly as much as the entire company had been valued at a year earlier. 
Having a larger ownership group helped us tackle more tasks as we pursued incremental outlets for monetization and promotional activity. Unfortunately, it wasn't always harmonious. The two original owners were clashing all along the way, and one of the three subsequent owners -- the one with the smallest financial stake but tasked with the most laborious VP of operations role -- had had enough. %VIRTUAL-pullquote-It was challenging as a friend, improviser and business owner.%Instead of toasting to our success in early 2014, we were busy cashing out two owners. There was drama. There was uncertainty. It would have been programming gold as a reality television show (remind me to pitch studios or documentary filmmakers next time), but it was challenging as a friend, improviser and business owner. Just the Funny is strong again. We expanded our management team, and ownership meetings are more productive. Show attendance has been solid, even during the seasonally sleepy summer months. Our culture has improved to the point where we have to split rehearsals into different rooms as a result of our growing active cast. We had the largest summer class of improv students in our history. We're bringing back the Miami Improv Festival in three months, an annual event that had been discontinued for years. We're dreaming again, aiming even higher for 2015. Running an improv theater is no joke, but for now this side business that started as an accidental hobby for many of us is allowing us to get the last laugh. Motley Fool contributor Rick Munarriz owns shares of Walt Disney, and he naturally also owns a piece of Just The Funny. The Motley Fool recommends and owns shares of Walt Disney. Try any of our Foolish newsletter services free for 30 days.
Mid
[ 0.6252873563218391, 34, 20.375 ]
Raw data can be found from the original USDA data sources. We have also provided the compiled data used in our analysis in the supplementary materials. Introduction {#sec001} ============ Certified organic agricultural production area in the United States has increased steadily since the inception of the 1990 Organic Foods Production Act. Advantages of organic agriculture include economic benefits for producers \[[@pone.0161673.ref001]\] and increased provision of ecosystem services such as biological pest control and biodiversity conservation \[[@pone.0161673.ref002]--[@pone.0161673.ref004]\]. Sociocultural benefits such as quality of life for farming communities have been theorized, but research in this area of organic agriculture is limited \[[@pone.0161673.ref005]\]. One of the main criticisms of organic agriculture has consistently been lower crop yield compared to non-organic systems. Meta-analyses comparing yields of organic and conventionally grown crops have repeatedly demonstrated a yield gap between the two systems. Recently published meta-analyses report mean estimates across all crops varying from 19% to 25% lower yields in organic systems \[[@pone.0161673.ref006]--[@pone.0161673.ref008]\]. Critics of organic agriculture argue that society cannot justify being less efficient with arable land in the face of a rapidly growing human population. With respect to conservation interests, if more-efficient conventional farmers can match organic yields with 70% of the land, remaining land could be set aside for conservation and other environmental benefits \[[@pone.0161673.ref009]--[@pone.0161673.ref012]\]. However, yield gains have not been clearly linked with increased land set aside for conservation at the global or regional scale, thus the yield/conservation tradeoff is likely a false dichotomy not representative of the socioecological complexity of agricultural systems, with management decisions tied to markets and policy \[[@pone.0161673.ref013]\]. Yield differences between organic and conventional production vary with crop type and management practices. In their analysis of organic studies conducted world-wide, Seufert et al. \[[@pone.0161673.ref008]\] reported smaller yield gaps for organic fruit (3% lower than conventional) and oilseed crops (11% lower than conventional) and large gaps for organic cereals and vegetables (26% and 33% respectively). When studies were partitioned by plant type, organic legumes and perennials had more competitive yields than non-legumes and annuals, likely a result of more efficient nitrogen use by plants \[[@pone.0161673.ref008]\]. Meta-analyses of the published literature do not necessarily reflect the full range of innovation or practical limitations that are part of real-world commercial agriculture. Agricultural research, by necessity, often takes a reductionist approach in order to best isolate and quantify the effect of interest \[[@pone.0161673.ref014]\]. Additionally, equipment, labor availability, and scale of production is typically much different between research and commercial production. Although these differences may not necessarily bias yield differences between systems in any systematic way, there is always value in comparing estimates from controlled research with commercial production data. The analysis we present offers a new perspective, based on organic and conventional yield data reported to the United States Department of Agriculture (USDA) as part of their 2014 organic and agricultural producer surveys. 
The USDA data is a window into the range of farming operations and the best available measure of how the different production systems perform in a practical sense. USDA has made area and yield of organic and conventional crops, summarized at the state level, available to the public. Although this data set provides only a snapshot of agricultural production in the United States from one growing season, it represents actual commercial production rather than estimates from research studies. Data from field research stations and commercial farms are complementary, each with their own strengths and weaknesses \[[@pone.0161673.ref014]\]. The USDA survey data provides an opportunity to compare the findings of factorial research experiments with reported production yields. This rich data set offers yield comparisons from a diversity of crops and states, representing the breadth of organic and conventional agricultural production in the United States. Methods {#sec002} ======= We used state-level crop yield data from 2014 USDA surveys to estimate yield differences between organic and conventional production methods. The 2014 USDA Organic Survey had a target population of 16,992 organic farms in the United States, and achieved a response rate of 63% \[[@pone.0161673.ref015]\] using mail survey plus computer and phone follow-up interviews. Summarized survey data is publicly available \[[@pone.0161673.ref015]\]. Conventional yield data was obtained from the 2014 USDA-NASS December Agricultural Survey with a target population of over 83,000 farm operators, which is publicly available using Quick Stats \[[@pone.0161673.ref016]\]. From these two data sources, we assembled data pairs; each data pair consisted of the organic yield and conventional yield estimate for one crop from one state. This approach was used to control for different yield potential from different geographic regions. We acknowledge this approach may not be perfect, as organic area and conventional area are not necessarily in similar regions within a state. However, we assumed that the differences in yield potential due to geography within a state would be randomly distributed among states and crops, and thus, would not systematically bias our results in favor of one production system or the other. Another shortcoming of our approach is that the breadth of organic production environments is not fully represented for some crops. For various reasons, USDA-NASS does not publish conventional production data for all crops in all states. Although conventional yield data were available for nearly all field and forage crops included in the organic survey, we could only assemble a limited number of data pairs for some fruit and vegetable crops. For example, organic spinach (*Spinacia oleracea*) yield data was available from 37 different states in 2014 ([S1 Fig](#pone.0161673.s005){ref-type="supplementary-material"}), but conventional spinach production was only reported for three of those states, resulting in only three data pairs. This lack of conventional yield data had the potential to bias our analysis. For nearly every crop where data pairs could not be assembled, average organic yield for states without conventional yield data was less than average organic yield where data pairs were available ([S1](#pone.0161673.s005){ref-type="supplementary-material"} and [S2](#pone.0161673.s006){ref-type="supplementary-material"} Figs). Therefore, our analysis was more likely to include states reporting above average organic crop yield. 
This bias is at least partially balanced by the fact that states with high production area (and, presumably, higher yield) for a particular crop are typically those included in USDA-NASS surveys for those crops. To reduce the potential for bias in our results, we excluded all crops with less than seven data pairs from our statistical analysis. Yield data from the 2014 USDA-NASS survey was used as a proxy for conventional yield, even though it is possible that yield data from organic fields were included in the total yield estimates for some crops in some states. If organic acres are included in total yield estimates, our approach would slightly reduce the difference between the two systems. For this reason, our yield ratio estimates should be considered slightly conservative, since our results would be biased in favor of *less* difference between production systems. However, the difference between total yield reported in the USDA survey data and actual conventional yield will be negligible unless organic area is a large percentage of the total. Of the 519 data pairs in our data set, 477 had organic acres less than 10% of the total acres reported by USDA ([S3 Fig](#pone.0161673.s007){ref-type="supplementary-material"}). Only four data pairs had organic area over 50% of the total area in the survey ([Table 1](#pone.0161673.t001){ref-type="table"}). Those 4 observations included dry edible bean (*Phaseolus vulgaris*) in Arizona, squash (*Cucurbita spp*) and sweet corn (*Zea mays*) in Oregon, and spring wheat (*Triticum aestevum*) in Colorado. Organic squash area actually exceeded the total squash area reported from Oregon in 2014, and sweet corn was nearly the same. 10.1371/journal.pone.0161673.t001 ###### Comparisons where organic area was greater than 50% of total area reported. ![](pone.0161673.t001){#pone.0161673.t001g} Crop Location Organic hectares Total hectares Organic (% of total) ----------------- ---------- ------------------ ---------------- ---------------------- Dry edible bean Arizona 2967 4413 67.2 Wheat (spring) Colorado 1763 2834 62.2 Maize (sweet) Oregon 1832 1984 92.3 Squash Oregon 841 607 138.5 For 40 out of the 65 crops in our full data set, six or fewer states reported both conventional and organic yield data, so reliable confidence intervals of the yield ratio could not be calculated. Yield ratios for all crops, even those with fewer than seven data pairs, are provided in [S4](#pone.0161673.s008){ref-type="supplementary-material"} through [S7](#pone.0161673.s011){ref-type="supplementary-material"} Figs. We statistically analyzed yield data for 25 different crops that had at least seven data pairs. One data pair was removed from the analysis. Apple (*Malus domestica*) production in Vermont reported by the USDA Organic Survey indicated an average yield of over 342,000 kg/ha, which was nearly an order of magnitude greater than any other apple yield observed for either conventional or organic production. We concluded this must have been a miscalculation. The number of organic farms included in each data pair was used as a weighting factor in the analysis. In this way, yield estimates were given more weight in the analysis if they represented more farmers, since we had more confidence that those estimates were an accurate reflection of overall organic yield in that state. Data pairs were given less weight where the number of organic farmers contributing to the yield estimate was smaller. 
After removal of crops with less than seven data pairs, our analysis included yield estimates from 773,000 organic hectares (1.91 million organic acres) from 2014. This represents a much larger land area in our analysis compared to previous meta-analyses of published literature. For each crop, we calculated the natural logarithm of the crop yield ratio (organic/conventional) from each state (data pair), weighted by the number of organic farms reporting from that state. The natural logarithm of the ratio was used to standardize the ratio to the same scale, regardless of whether conventional or organic production systems yielded more. Raw ratios would result in values always between 0 and 1 if organic yield was less than conventional, but values could range from 1 to infinity when organic yielded more. Taking the natural logarithm of the ratio re-scales the values around 0, and equalizes the magnitude of the distance from 0. For each crop, 95% confidence intervals around the natural logarithm of the yield ratio were calculated. Weighted means and confidence intervals were calculated by fitting a weighted least squares intercept-only linear model to the natural logarithm of the yield ratios for each crop. This was done using the *lm()*function in the statistical language R \[[@pone.0161673.ref017]\]. To simplify comparing our results with previously published meta-analyses \[[@pone.0161673.ref007], [@pone.0161673.ref008]\], we are presenting the data as organic to conventional crop yield ratios (i.e. back-transformed from log response ratios). Ratios less than 1.0 indicate organic crop yield was less than conventional crop yield, whereas ratios greater than 1.0 indicate organic crop yield was greater than conventional crop yield; organic and conventional crop yields were considered significantly different if the 95% confidence bars do not include 1.0. Median crop yield ratios (the ratio at which 50% of data pairs were greater and 50% were less than) have also been provided. In some cases, the mean and median crop yield ratio differed considerably within a crop. We have discussed these results in more detail in [S1 Supplementary Information](#pone.0161673.s013){ref-type="supplementary-material"}. Results and Discussion {#sec003} ====================== Organic yields were lower than conventional yields for most crops. However, several crops had no significant difference in yields between organic and conventional production, and in a few examples, organic yields surpassed conventional yields. Across all crops and all states, organic yield averaged 80% of conventional yield. However, the yield ratio varied widely among crops, and in some cases, among states within a crop. Without more detail about the farms reporting yield data, it is impossible to conclude definitively the cause of the organic yield gap in any particular crop. The biggest production challenges organic farmers face relative to conventional farmers are with respect to fertility (especially nitrogen) due to a lack of synthetic fertilizers, and pest management (weeds, insects, and pathogens) due to a lack of synthetic pesticides \[[@pone.0161673.ref006]\]. These production challenges are likely responsible for the organic yield gap in most of the crops we analyzed, though the relative contribution of each may differ. Because yield data is reported and analyzed at the state level, any discussion on the specific cause of yield differences between organic and conventional production of a particular crop would be speculation. 
For this reason, we have refrained from delving too deeply into any specific crop in our analysis, and instead focus on broader trends, though some more detailed discussion and data can be found in [S1 Supplementary Information](#pone.0161673.s013){ref-type="supplementary-material"}. Organic crop yields were significantly less than conventional yields for 9 of 13 field and forage crops ([Fig 1](#pone.0161673.g001){ref-type="fig"}). Organic wheat yield was significantly less than conventional wheat for both spring and winter types. Combined over types, organic wheat yielded 66% of conventional yield. Organic soybean yielded 68% of conventional. The organic cereal crops maize and barley yielded 65% and 76% of conventional yield, respectively. The organic oat (*Avena sativa*) yield gap was less, but organic still only produced 80% of conventional oat yield. ![Field and forage crop yield ratio of organic to conventional yield from states reporting both organic and conventional yield data in 2014 USDA surveys.\ Circles represent weighted ratio mean estimates, error bars represent 95% confidence limits for the weighted ratio; triangles represent the median crop yield ratio for all states included in the analysis.](pone.0161673.g001){#pone.0161673.g001} Lower organic crop yields in the field crops in our analysis are likely associated with the challenges of balancing soil quality and weed management in organic grain production \[[@pone.0161673.ref018], [@pone.0161673.ref019]\]. Organic farmers have long reported major challenges with weed management \[[@pone.0161673.ref020]\], with recent reports specifying problematic perennials such as field bindweed (*Convolvulus arvensis*) and Canada thistle (*Cirsium arvense*) \[[@pone.0161673.ref021]\]. Organic agriculture has been criticized for use of tillage and associated negative environmental impacts such as soil erosion \[[@pone.0161673.ref012]\]. Of the organic farmers surveyed in the 2014 Census (and whose yield data are included here), 40% reported use of no-till or minimum till practices \[[@pone.0161673.ref015]\]. Reduced tillage in organic grain systems often results in improved soil quality \[[@pone.0161673.ref022]\] but with the trade-off of more perennial weeds \[[@pone.0161673.ref023]\] or inadequate nitrogen for non-legume grain crops \[[@pone.0161673.ref024]\]. In regional surveys of organic farmers (of all types, not just grain producers), both annual and perennial weeds continue to be mentioned as most problematic \[[@pone.0161673.ref025]--[@pone.0161673.ref027]\] although organic farmer knowledge has been associated with lower proportions of problematic annuals \[[@pone.0161673.ref027]\]. It is unclear why such a difference between states was observed with organic to conventional yield ratio in dry edible bean and soybean. Since they are legumes, nitrogen deficiency should play a minimal role in contrast to many other organic crops, as long as the seed is inoculated with the appropriate rhizobium species. For dry bean production, Idaho and Colorado represent relatively similar growing environments with respect to dry edible bean production, and conventional yields were similar between these two states ([S1 Supplementary Information](#pone.0161673.s013){ref-type="supplementary-material"}). Even though conventional yields were similar, organic to conventional yield ratios of 1.11 and 0.45, were observed in Idaho and Colorado, respectively, because organic dry bean yield was much lower in Colorado. 
As a group, organic hay crops yielded similarly or significantly greater than conventional hay crops ([Fig 1](#pone.0161673.g001){ref-type="fig"}), though this was not true for the annual crop maize harvested for silage. Seufert et al. \[[@pone.0161673.ref008]\] suggested in their meta-analysis that perennial crops and legumes tended to produce organic crop yields more similar to conventional crop yields compared to other organic crops, which is supported by the superior performance of the organic perennial hay crops compared to the annual silage crop in our analysis. Most crops grown for hay are perennial, and alfalfa (*Medicago sativa*) is both a perennial and a legume. These traits should give organic hay a relative advantage compared to many other organic crops. In 2010, the National Organic Program specified new regulations about ruminant production, stating that at least 30% of dry matter intake must be provided from grazing pasture or from "residual forage" cut and laying in pasture during the grazing season \[[@pone.0161673.ref028]\]. Thus, there is high demand and motivation to provide high-quality organic forage for organic dairy and meat production which may drive producers to increase management intensity in these systems. Hay and forage crops also present an opportunity to incorporate species diversity into the cropping system with relative ease through species mixtures. Increased species diversity has been linked to greater fodder productivity \[[@pone.0161673.ref029]\], and supporting biodiversity is encouraged by the National Organic Program \[[@pone.0161673.ref030]--[@pone.0161673.ref032]\]. Previous work \[[@pone.0161673.ref007], [@pone.0161673.ref008]\] has suggested that organic vegetables tend to perform worse relative to conventional practices compared to other crop types. In our analysis, organic vegetable crop yields ranged from 38% (potato) to 77% (sweet maize) of conventional yields ([Fig 2](#pone.0161673.g002){ref-type="fig"}). Organic squash, snap bean (*Phaseolus vulgaris*), sweet maize, and peach (*Prunus persica*) yields were not statistically different from conventional, while average yield of all other organic vegetables (tomato (*Solanum lycopersicum*), potato, bell pepper (*Capsicum anuum*), and onion (*Allium cepa*)) and fruits (watermelon (*Citrullus lanatus*), grape (*Vitis vinifera*), blueberry (*Vaccinium myrtillus*), and apple) were less than conventional. ![Fruit and vegetable yield ratio of organic to total yield from states reporting organic yields in the 2014 USDA survey.\ Circles represent weighted ratio mean estimates, error bars represent 95% confidence limits for the weighted ratio; triangles represent the median crop yield ratio for all states included in the analysis.](pone.0161673.g002){#pone.0161673.g002} Organic fruit and vegetable production is often associated with direct marketing to consumers through either farmers markets or community-supported agriculture (CSA) operations. Of the respondents to the 2014 organic survey, 6,382 (37.6%) reported marketing to consumers directly, in contrast to 6.9% of all United States farms reporting direct-to-consumer sales \[[@pone.0161673.ref033], [@pone.0161673.ref034]\]. Pest management, especially insect and fungal pathogens, can be particularly problematic for organic producers selling into fresh markets, as there are far fewer approved pesticides available for use in organic agriculture. 
Insect and disease damaged fruits and vegetables can quickly become unmarketable, and this might explain the relatively low organic yields of fruit and vegetable crops compared to their conventional counterparts. Comparison with previous analyses {#sec004} --------------------------------- As part of a large meta-analysis of organic yield studies, Seufert et al. \[[@pone.0161673.ref008]\] presented wheat, tomato, soybean, maize, and barley yield ratios. Ponisio et al. \[[@pone.0161673.ref007]\] then re-analyzed much of the same data used by Seufert et al. We have re-created their previous yield ratio estimates and 95% confidence intervals here for direct comparison with our estimates based on 2014 USDA yield data ([Fig 3](#pone.0161673.g003){ref-type="fig"}). The Seufert et al. and Ponisio et al. analyses used comparisons from previously published experiments and surveys, and therefore, may not represent actual practice as well as the USDA survey data in our analysis. In addition, Seufert et al. and Ponisio et al. included research from around the world, including developing countries, while USDA estimates are exclusive to the United States. ![Relative yield of organic maize, barley, wheat, tomato, and soybean.\ Green triangles adapted from meta-analysis results presented by Ponisio et al. (2015); blue squares adapted from meta-analysis results presented by Seufert et al. (2012); black circles represent our analysis of USDA data from 2014. Points are the ratio of organic:conventional yields, bars represent 95% confidence intervals around those estimates.](pone.0161673.g003){#pone.0161673.g003} The main limitations of the USDA data are the potential for responder bias and the absence of relevant information that could help explain yield variation. We do not know which producers responded to the survey due to confidentiality, nor how representative they are of the producers in their state. We cannot determine whether the 63% of organic producers who responded to the survey are more or less productive, growing low or high diversity of crops, or on different soil types. Experimental yield comparisons, such as those included in Seufert \[[@pone.0161673.ref008]\] and Ponisio \[[@pone.0161673.ref007]\], are better able to control for sources of variation such as soil type, climate, and surrounding landscape. Because the data used by Seufert et al. and Ponisio et al. are independent of our own data, comparing yield gaps from yields reported by United States producers to those presented through previous meta-analyses allows us to evaluate the generality of our findings. Organic crop yield for all five crops in [Fig 3](#pone.0161673.g003){ref-type="fig"} were significantly less than conventional crop yield in our analysis based on USDA estimates, which is similar to results presented by Seufert et al. \[[@pone.0161673.ref008]\], and with the exception of tomato, also similar to Ponisio's \[[@pone.0161673.ref007]\] meta-analysis. For maize, soybean, and tomato, our analysis of UDSA data shows an organic yield gap that is substantially greater than previous estimates; that is, commercial organic yields for these crops are further behind conventional yields than previous analyses suggest. There are, our analysis indicates, still improvements to be made in commercial organic production of maize, tomato, and soybean for these crops to meet the results obtained mostly under experimental conditions. 
For wheat and barley, USDA yield estimates from 2014 suggest yield ratios similar to the estimates from Seufert et al. \[[@pone.0161673.ref008]\] and Ponisio \[[@pone.0161673.ref007]\]. Although our data agree with previous work showing lower yields in organic production systems in general, our data suggest that commercial hay crops produced significantly greater yield when produced in an organically managed system. This is contrary to Seufert et al. \[[@pone.0161673.ref008]\] and Ponisio et al. \[[@pone.0161673.ref007]\] who did not find evidence for greater yield under organic management. Seufert suggested that the organic yield gap was less for legume and perennial crops compared to non-legume and annual crops, respectively. In contrast, Ponisio et al. concluded there were not major differences between annual vs perennial crops, nor with legume vs non-legume crops with respect to the organic yield gap. Our analysis agrees more closely with Seufert et al., showing that annuals and non-legumes fared worse under organic management compared to perennials and legumes, since hay crops tend to be primarily perennial and also include legumes ([Fig 1](#pone.0161673.g001){ref-type="fig"}). It is important to note, however, that broad categories (like annual vs perennial) will be greatly influenced by which crops are included in the analysis. These comparisons are, therefore, fairly dubious. For example, grapes and haylage are both perennial crops, but the organic yield ratios for these crops varied dramatically (50% and 164% of conventional yields, respectively). So to generalize that perennials fare better than annuals under organic management would be misleading without greater context. Our analysis of USDA data provides estimates for annual, perennial, and non-legume crops that are quite different from Ponisio et al. \[[@pone.0161673.ref007]\], but this difference may be largely due to the crops that were included in each analysis ([S8 Fig](#pone.0161673.s012){ref-type="supplementary-material"}). Previous work by dePonti et al. \[[@pone.0161673.ref006]\] hypothesized that the difference between organic and conventional yields would increase as conventional yield for the crop increased. Their hypothesis stemmed from the idea that organic systems are more limited by fertility and pest management options relative to conventional systems; so as conventional yields approach their water-limited yield potential, organic systems would lag further behind. They found weak evidence to support their hypothesis, as the organic to conventional yield ratio decreased as conventional yield increased, though the relationship was only statistically significant for two crops (soybean and wheat). We conducted a similar analysis to de Ponti et al. \[[@pone.0161673.ref006]\] for the 25 crops with at least seven data pairs, using a weighted regression to determine whether the organic to conventional yield ratio was related to conventional yield. Out of the 25 crops we analyzed, eight showed a significant relationship between organic to conventional crop yield ratio and conventional crop yield, including soybean and wheat, the two crops that were significant in the de Ponti analysis ([Fig 4](#pone.0161673.g004){ref-type="fig"}). Of those eight crops, six showed a decreasing trend, similar to that observed by de Ponti et al. 
However, contrary to de Ponti's hypothesis, soybean and potato showed an increasing trend in our analysis, suggesting that in locations with greater conventional yields, the organic yield gap was lowest. If the statistical significance is ignored and only the direction of the slope (increasing or decreasing) is considered, 15 out of 25 crops had negative slopes compared to 10 with positive slopes ([Table 2](#pone.0161673.t002){ref-type="table"}). The relationship between the organic yield gap and conventional yield potential does not appear to generalize well across different crops, and in fact, can be completely different depending on the crop of interest. ![Relationship between organic to conventional crop yield ratio and conventional crop yield for eight crops.\ Circles each represent one state reporting both organic and conventional crop yield data to the USDA in 2014; size of the circles is proportional to the number of organic farmers reporting crop yield data from that state. Black horizontal line at zero represents no yield difference between organic and conventional crop yield. Blue line is the weighted least squares regression line, using the number of organic farms reporting in each state as the weighting factor; gray shaded area is the 95% confidence interval around the weighted regression line. Slope estimates, p-values, and R^2^ values can be found in [Table 2](#pone.0161673.t002){ref-type="table"}.](pone.0161673.g004){#pone.0161673.g004} 10.1371/journal.pone.0161673.t002 ###### Weighted least squares regression slope, standard error (S.E.), p-value, and R^2^ for 25 crops investigating the relationship between ln(organic:conventional crop yield) as the dependent variable and conventional crop yield (ton/ha) as the independent variable using 2014 USDA survey data. ![](pone.0161673.t002){#pone.0161673.t002g} Crop Slope S.E. P-value R^2^ ------------------- -------- ------- --------- ------- Apple 0.007 0.009 0.468 0.038 Barley -0.166 0.052 0.005 0.393 Blueberry -0.031 0.033 0.373 0.114 Dry edible bean -0.508 0.396 0.231 0.155 Grapes 0.001 0.061 0.982 0.000 Hay & alfalfa mix -0.065 0.016 0.000 0.393 Hay (all) -0.083 0.016 0.000 0.425 Haylage 0.002 0.023 0.921 0.001 Hay (other) -0.203 0.035 0.000 0.530 Maize (grain) -0.038 0.025 0.136 0.090 Maize (silage) 0.004 0.004 0.393 0.035 Maize (sweet) 0.004 0.033 0.894 0.001 Oat -0.028 0.117 0.816 0.003 Onion 0.020 0.017 0.309 0.204 Peach -0.029 0.028 0.351 0.175 Pepper, bell 0.025 0.022 0.310 0.203 Potato 0.030 0.009 0.003 0.389 Snap bean 0.023 0.060 0.709 0.016 Soybean 0.173 0.045 0.001 0.459 Squash -0.054 0.033 0.132 0.234 Tomato -0.010 0.011 0.402 0.055 Watermelon -0.003 0.014 0.848 0.006 Wheat (all) -0.117 0.041 0.009 0.229 Wheat (spring) -0.165 0.143 0.300 0.211 Wheat (winter) -0.135 0.054 0.021 0.212 A majority of organically-produced crops in our analysis produced significantly lower yield compared to conventional systems. But agricultural systems should not be judged on yield alone. A primary goal for agriculture of the future should be to produce enough food to feed a growing population, and to do so while minimizing the negative impacts of that production. Organic agriculture has demonstrable benefits to the environment on a per unit area basis, however, those benefits are often negated or reversed on a per unit production basis because organic systems tend to yield less per area \[[@pone.0161673.ref005]\]. In England, Hodgson et al. 
\[[@pone.0161673.ref035]\] estimated that organic yields must be at least 87% of conventional yields to make organic production better for butterfly abundance (a proxy for ecosystem health), as long as the land spared by conventional production was used for nature reserves. Detractors of organic production often cite "land sparing" as a primary benefit due to the improved yields observed in conventional agriculture. But land sparing (increasing production to set aside land for nature) only works if land is actually spared due to increased production. In the US, while yield of major staple crops like maize, wheat, and soybean have continued to increase using conventional production practices, land devoted to conservation reserves has decreased significantly since 2007 \[[@pone.0161673.ref036]\]. If large areas of land are not set aside, then a land sharing approach may be warranted instead. Hodgson et al. \[[@pone.0161673.ref035]\] estimated that without large conservation areas, optimal land use would favor organic as long as organic yields were at least 35% of conventional (land sharing). This is because organic production practices in some cropping systems tend to favor pollinators and other beneficial species compared to conventionally managed fields \[[@pone.0161673.ref035], [@pone.0161673.ref037]\]. Kremen \[[@pone.0161673.ref013]\] recently argued for a "both-and" framework, rather than choosing between land sparing and land sharing. She proposed that scientists focus research on evaluating whether specific management practices can increase biodiversity without compromising yield. This future research aim is applicable to both organic and conventional agriculture, as a spectrum of management practices exist in farms of each classification. The reasons for food insecurity around the world are varied and complex, and go far beyond just yield. Even so, a dramatic, sustained reduction in crop yield could be devastating to food security, even in developed countries, making a rapid and complete switch to organic agriculture unwise. Unless other inefficiencies in our food systems are corrected (like food waste, food distribution, and meat-intensive diets), we are likely to need continued yield increases into the future to feed a growing population. Based on our estimates, if all US wheat production were grown organically, an additional 12.4 million hectares (30.6 million acres) would be needed to match 2014 production levels in the U.S., unless the organic yield gap can be narrowed. Current annual production of some crops (like wheat, corn, and soybean) are greater than annual domestic consumption in the U.S., allowing for export. Given world population projections and diet trends, maintaining current production levels in developed countries (while continuing to increase production in developing countries) will likely be the minimum required for a food-secure world. There are a wide variety of behaviors and experience levels within both organic and conventional production. Where a farmer fits into that spectrum will drive their productivity and sustainability in economic, environmental, and social dimensions. Although the long-term sustainability of organic production is debated nearly as often as conventional practices, many consumers buy organic food because of the perceived environmental benefits \[[@pone.0161673.ref038]\]. 
Other sustainability marketing efforts that go beyond organic production have been proposed (like Whole Foods "Responsibly Grown" and the Field to Market "Fieldprint" programs). However, these programs have not gained wide acceptance or recognition. Farmer adoption of organic agriculture is likely linked to geography. "Hotspot" areas of organic adoption have been documented in England, associated with physical characteristics such as soil type and altitude as well as socioeconomic characteristics like population size or distance from urban centers \[[@pone.0161673.ref039]\]. These hotspots were not associated with higher organic yields, but rather occurred in lower yielding regions for both conventional and organic production \[[@pone.0161673.ref039], [@pone.0161673.ref040]\]. Geographic clustering occurs in the United States as well. Of the organic farms surveyed in 2014, California, Wisconsin, New York, Washington, and Pennsylvania had the highest number of operators reporting, respectively. Operators from these five states represented 13,423 (45%) of the organic producers surveyed. States vary according to climate and growing conditions but may also vary according to available regional markets, outreach and education on the topic of organic agriculture, and farmer associations. In addition to geographic drivers of variation between states, organic farm location within a state may also be geographically clustered, with clusters potentially in distinct landscapes or soil types that could alter productivity. For example, California was the largest grape producer for both organic and conventional in our analysis. Organic wine grapes are often produced in low yielding coastal areas, while conventional grapes are also grown in the higher yielding Central Valley of California. This could potentially bias our analysis in favor of conventional production in that instance. However, we were unable to access information about the specific locations within states of the respondents, a limitation of this data set, thus we cannot test for this potential source of bias explicitly. Prior research from England suggests there are complex drivers and impacts of spatial clustering of organic farms that may or may not relate to organic crop yield gaps \[[@pone.0161673.ref039], [@pone.0161673.ref040]\]. More research on the geography of organic agriculture in the United States is needed to determine whether clustering could drive the yield trends in our study. USDA data from the 2014 surveys illustrates the breadth and diversity of organic production in the United States. To efficiently produce not just organic crops, but all crops, scientists, farmers, and Extension professionals would benefit from cross-regional comparisons and collaborations. Many unanswered questions remain regarding multifunctional agriculture of both organic and conventional systems, and future research should explore not only yield outcomes but also environmental impacts of management decisions \[[@pone.0161673.ref013]\]. In particular, most crops consistently illustrate large organic yield gaps and merit more organic-focused research to support these producers. In particular, efforts to improve available varieties for use in organic production may result in yield improvement via improved nutrient acquisition, pest resistance, competitive traits, or other gene by environment interactions \[[@pone.0161673.ref041]\]. 
Furthermore, examination of commonalities and differences between organic and conventional production practices in states with the best and worst yield ratios could be informative. Detailed knowledge of these specific production systems is necessary to investigate these comparisons, presenting an important opportunity for cross-commodity collaboration as well. Our findings support the importance of research funding at the federal level to facilitate such collaborations which may be otherwise difficult to execute but which are crucial to improving the sustainability of US agriculture. Supporting Information {#sec005} ====================== ###### Organic and conventional yield data compiled from 2014 USDA surveys for analysis. (CSV) ###### Click here for additional data file. ###### Yield data from 2014 USDA surveys compiled for analysis, including organic yield data without corresponding data pairs. This data was used to create [S1](#pone.0161673.s005){ref-type="supplementary-material"} and [S2](#pone.0161673.s006){ref-type="supplementary-material"} Figs. (CSV) ###### Click here for additional data file. ###### Data from Seufert et al (2012) and Ponisio et al (2014) for five crops used to create [Fig 3](#pone.0161673.g003){ref-type="fig"}. (CSV) ###### Click here for additional data file. ###### Data from Seufert et al. (2012) and Ponisio et al. (2014) for different crop groups used to create [S8 Fig](#pone.0161673.s012){ref-type="supplementary-material"}. (CSV) ###### Click here for additional data file. ###### Organic vegetable yields from the 2014 USDA Organic Survey. **Triangles represent states reporting organic yield where no conventional yield data were available.** Circles represent states with both organic and conventional data available (data pair). Panel A: each point represents the crop mean from states with and without data pairs. Panel B: each point represents the organic yield reported from each individual state. (EPS) ###### Click here for additional data file. ###### Organic fruit and tree crop yields from the 2014 USDA Organic Survey. **Triangles represent states reporting organic yield where no conventional yield data were available.** Circles represent states with both organic and conventional data available (data pair). Panel A: each point represents the crop mean from states with and without data pairs. Panel B: each point represents the organic yield reported from each individual state. (EPS) ###### Click here for additional data file. ###### Distribution of organic acres as percentage of total acres for each crop. Histogram excludes four data pairs with organic acres over 60% of conventional acres, which are included in [Table 1](#pone.0161673.t001){ref-type="table"}. (EPS) ###### Click here for additional data file. ###### Distribution of the natural logarithm of the organic to conventional yield ratio for all field crops. (EPS) ###### Click here for additional data file. ###### Distribution of the natural logarithm of the organic to conventional yield ratio for all forage crops. (EPS) ###### Click here for additional data file. ###### Distribution of the natural logarithm of the organic to conventional yield ratio for all fruit and tree crops. (EPS) ###### Click here for additional data file. ###### Distribution of the natural logarithm of the organic to conventional yield ratio for all vegetable crops. (EPS) ###### Click here for additional data file. ###### Influence of nitrogen fixation potential and crop longevity on organic:conventional yield ratio. 
Green triangles adapted from Ponisio (2014); blue squares adapted from Seufert (2012); black circles represent analysis of USDA yield data (2014). Points are the ratio of organic:conventional yield, error bars represent 95% confidence intervals around those estimates. (EPS) ###### Click here for additional data file. ###### Tabular estimates for figures, and summarized data for crops not included in the statistical analysis. (HTML) ###### Click here for additional data file. [^1]: **Competing Interests:**The authors have read the journal\'s policy and the authors of this manuscript declare the following potentially competing interests: ARK: ARK grew up on a conventional farm. Funding has been provided to the University of Wyoming from the following organizations in support of ARK\'s research and education program, either through unrestricted gifts, research contracts, or grants: Arysta LifeScience, BASF, Bayer CropScience, Dow AgroSciences, DuPont, FMC, Hatch Act Funds -- USDA, Loveland Industries, Monsanto, NovaSource, Repar Corporation, StateLine Bean Cooperative, Syngenta, USDA National Institute for Food and Agriculture, University of Wyoming Department of Plant Sciences, University of Wyoming School of Energy Resources, Valent, Western Sugar Cooperative, Winfield Solutions, Wyoming Agricultural Experiment Station, Wyoming Crop Improvement Association, Wyoming Department of Agriculture, and Wyoming Seed Certification. ARK currently serves on the Board of Directors for the Weed Science Society of America. ARK currently serves on the Farming Systems Trial Advisory Panel for the Rodale Institute. RJ: Funding has been provided in support of RJ\'s research and education program in the form of grants from USDA National Institute of Food and Agriculture, USDA Western IPM Center, Western SARE, Wyoming Agricultural Experiment Station, and the Wyoming Open Spaces Initiative. RJ currently serves on the leadership team for the eOrganic community of practice. RJ is a member of the Entomological Society of America, the Ecological Society of America, and the Sustainable Agriculture Education Association. SDS: SDS has worked in the past for DuPont Company, for the biocontrol company Mycogen, and since 1996 as an independent consultant working for a wide variety of clients in the field of agricultural technology either directly or through other consulting firms. That work has included large players involved in synthetic chemicals, seeds and traits (e.g. Dow, BASF, Bayer, Syngenta, Monsanto) as well as smaller companies involved in biological controls and natural products (e.g. Agraquest, Novozymes and others under current nondisclosure agreements). None of this consulting has concerned a comparison of organic and conventional yields. SDS has been a paid speaker for many different grower organizations in the US and Canada, and has been an invited speaker by the North Carolina Biotechnology Association, the Ag Innovation Showcase, and CropLife America. He has been paid to spend two weeks in Hawaii addressing public meetings sponsored by the Hawaii Crop Improvement Association. His role as a contributor for Forbes is not compensated. As of April 2016 (after preparation and initial submission of this manuscript) SDS has been employed part-time by CropLife Foundation, a 501.3c nonprofit in a role communicating the benefits of crop production materials including those used by both organic and conventional growers. This does not alter the authors\' adherence to PLOS ONE policies on sharing data and materials. 
[^2]: **Conceptualization:** ARK SDS.**Data curation:** SDS ARK.**Formal analysis:** ARK.**Funding acquisition:** ARK RJ.**Investigation:** ARK SDS RJ.**Methodology:** ARK SDS.**Project administration:** ARK.**Resources:** ARK SDS.**Software:** ARK.**Supervision:** ARK.**Validation:** ARK.**Visualization:** ARK.**Writing -- original draft:** RJ ARK.**Writing -- review & editing:** RJ ARK SDS.
Mid
[ 0.600896860986547, 33.5, 22.25 ]
Coexistent rare hepatic artery variants as a pitfall during embolization: dorsal pancreatic artery mistaken for gastroduodenal artery. We present a 39-year-old patient with massive duodenal bleeding ulcer. The patient had multiple variants in his hepatic arterial anatomy that led us to erroneously embolize the dorsal pancreatic artery presuming it to be the gastroduodenal artery. Due to this erroneous presumption, our patient continued to have upper gastrointestinal bleeding. Repeat angiogram was performed, during which the actual gastroduodenal artery was recognized and embolized. To our knowledge, this rare combination of anatomic variants in the hepatic artery as a pitfall during gastroduodenal artery embolization leading to inadvertent embolization of the dorsal pancreatic artery has not been described in the literature.
High
[ 0.668341708542713, 33.25, 16.5 ]
Q: click second radio button with java selenium

I want to click on the second radio button with Java/Selenium. The ids are dynamic, and I don't know why my XPath doesn't work. It would be really helpful if you guys can show me how this works.

HTML

<div class="form-radiobutton-group group-horizontal" id="id29">
  <div class="form-radiobutton-element">
    <span class="form-radiobutton-wrapper">
      <input class="salutation_f feedback-panel-trigger wicket-id29" id="id4" name="personaldataPanel:salutation:choices" value="radio9" type="radio">
      <label for="id4" class=""></label>
    </span>
    <label for="id4"> Frau </label>
  </div>
  <div class="form-radiobutton-element">
    <span class="form-radiobutton-wrapper">
      <input class="salutation_m feedback-panel-trigger wicket-id29" id="id3" name="personaldataPanel:salutation:choices" value="radio11" type="radio">
      <label for="id3" class=""></label>
    </span>
    <label for="id3"> Herr </label>
  </div>
</div>

Code right now

WebElement m = driver.findElement(By.xpath("//div[2]/span/input"));
m.click();

A: You can locate the radio buttons using By.xpath with their label text, as below:

To click the radio button with the label text Frau:

driver.findElement(By.xpath("//input[../following-sibling::label[contains(.,'Frau')]]")).click();

To click the radio button with the label text Herr:

driver.findElement(By.xpath("//input[../following-sibling::label[contains(.,'Herr')]]")).click();

Edited: If you are getting an exception that the click would be received by another element, you need to implement WebDriverWait to wait until the element is visible in the DOM, as below:

WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement el = wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//input[../following-sibling::label[contains(.,'Herr')]]")));
el.click();

If you are still facing the same issue, then try to click using JavascriptExecutor, as below:

((JavascriptExecutor) driver).executeScript("arguments[0].click()", el);
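Alternatively (a sketch based only on the HTML shown above, and assuming these two inputs are the only elements with that name attribute on the page), you can fetch all the radio buttons that share the name attribute and click the second one by index, which avoids depending on the dynamic ids:

import java.util.List;

List<WebElement> choices = driver.findElements(By.name("personaldataPanel:salutation:choices"));
// index 1 is the second radio button in document order ("Herr" in the HTML above)
choices.get(1).click();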
Mid
[ 0.618510158013544, 34.25, 21.125 ]
News Blog Build-A-Bear Workshop or Sweatshop? Local Company Cited for Child Labor Violations

Earlier this week, the U.S. Department of Labor cited St. Louis-based company Build-A-Bear Workshop for child labor violations in five states, and the makers of the cuddly little stuffed animals were slapped with more than $25,000 in fines. But while the phrase "child labor" conjures images of 8-year-old Filipino girls sewing soles on $120 sneakers for 2 cents an hour, that's not exactly the case in this instance. The violations stemmed from 16- and 17-year-old employees who were allowed to operate a trash compactor and "ride in a freight elevator that did not have an assigned operator." The Department of Labor said in its news release about the violations that Build-A-Bear, whose first store opened in the Galleria, "cooperated fully throughout the investigation and has taken the necessary steps to assure future compliance." So there, now you can sleep easy knowing a little girl didn't lose her finger stitching together that over-priced furry mess of yours. How's that for guilt-free?
Low
[ 0.465882352941176, 24.75, 28.375 ]
Act on Climate is a team of citizens who are concerned about climate change and want to take action on it at a grassroots level. We are a diverse bunch from all walks of life: from the inner city to rural areas, young and old, and from many professions, who already see the impacts of climate change in our lives in different ways.

The Federal Coalition’s last-ditch attempt to hit the reset button on climate change has been a flop, with several dozen community members appearing at a snap rally at the Prime Minister’s press conference in Melbourne this morning. “The Coalition has failed to review the science and adopt measures that are commensurate with the scale and urgency of the climate crisis,” said Leigh Ewbank, FoE climate spokesperson. “An Emissions Reduction Fund that fails to keep coal, oil, and gas in the ground is a hoax.” Friends of the Earth says the Coalition’s record of undermining action on climate change is not easily forgotten by the community. “If the Coalition wants to be taken seriously on climate change it would apologise to the Australian people for actively undermining action since it formed government in 2013,” said Leigh Ewbank.

The Act on Climate collective is rested, recharged, and ready for another big year. Before we fill you in on our priorities in 2019, it's worth noting our impact during the state election year. Our sustained campaign to hold the Liberal party to account for a head-in-the-sand approach to climate change had a huge impact. The absence of a climate policy is considered a key reason why the Coalition haemorrhaged votes in the November election. In his first press conference, new opposition leader Michael O'Brien noted the need to engage with climate policy and has appointed the Coalition's first Shadow Minister for Climate Change. In an acknowledgment that our call for investment in climate action is being heard by Labor, Minister for Climate Change Lily D'Ambrosio recently announced a $1 million grant scheme for regions to investigate climate change impacts. Yet a greater level of investment is needed. Now that the dust has settled from the state election, we're returning our focus to securing bold and ambitious climate action in Victoria.

The Andrews Labor government has announced a $1 million Community Climate Change Adaptation Grants program for regional Victoria, yet Friends of the Earth says the allocation "falls short of community demand." "Every dollar the state government spends to help communities respond to climate change is a wise investment," said Leigh Ewbank, Friends of the Earth climate spokesperson. "Yet Victorian communities will need much more than $1 million to cope with the impacts of climate change." The Federal Coalition government's failure on climate change has seen the country's emissions increase for three consecutive years. This failure leaves Victorian communities exposed to intensifying droughts, heatwaves, bushfires, rising seas, and extreme weather. Regional Victoria is already experiencing climate impacts. For example, Cape Conran saw a winter bushfire last year and community members have sounded the alarm over the impact of rising sea levels in Apollo Bay and Inverloch. "With the Federal Coalition failing to act on climate change, we need to see greater leadership from Premier Daniel Andrews and the Labor government." 
Environment group Friends of the Earth Australia reject indications that the Morrison government will seek to use public funds to underwrite new coal and gas projects, and say they should be 100% focused on landmark renewable energy projects like the proposed Star of the South offshore wind farm instead. “Prime Minister Scott Morrison and the federal Coalition have learned nothing from the Liberal party's drubbing in the recent Victorian state election,” said Friends of the Earth climate spokesperson Leigh Ewbank. The Liberal party's support for coal and gas was resoundingly rejected by voters at the Wentworth by-election and at the November state election in Victoria. “When community support for action on climate change is on the rise, the Coalition government's support for polluting fossil fuels will go down like a lead balloon,” added Ewbank. While time is running out to act on climate change, the Coalition's obsession with coal and gas only imperils Australia's future.

The United Nations' annual Conference of the Parties (COP) climate change meeting is now underway in Poland. A seasoned climate activist and citizen journalist has the following report on Australia's performance at Katowice. Originally published here. It just wouldn't be a United Nations Climate Change conference (COP) in recent years without Australia at the Fossil of the Day Awards. And this year does not disappoint.

Friends of the Earth have welcomed the reappointment of Lily D’Ambrosio as Minister for Energy, Environment, and Climate Change, saying a steady hand will guide policy in Victoria while policy chaos continues at the Federal level. “The reappointment of Lily D’Ambrosio as the minister for climate change and energy is good news for efforts to tackle climate change,” said Leigh Ewbank, FoE climate change spokesperson. “With climate and energy policy chaos continuing at the Federal level under the Coalition, a steady hand is needed in Victoria to rein in emissions and help Australia meet its international commitments."

Election day is finally nigh, and it's time for a final coordinator update from Leigh, wrapping up the campaign so far. But before that, we have on-the-street interviews with Sam Hibbins, Greens MP for Prahran, and the Labor and AJP candidates for Prahran. Please note, the opinions of Mark and Climactic are not representative of Act on Climate or Friends of the Earth. In conversation with a Liberal supporter, Mark expressed a willingness to have the argument about nuclear power. This is not representative of the opinion of FoE, and he does not speak as a representative for the group.

Friends of the Earth and the local Ararat Greenhouse Action Group have criticised Liberal MP for Ripon Louise Staley's contradictory position on climate change. When Ms Staley was asked at the Victorian Farmers Federation's candidates forum in late October whether she accepts climate change science and her party's policy, she stated "Yes I do" accept the science of climate change, before adding, "I do not support a Victorian renewable energy target. We will abolish that if we come into government." Friends of the Earth's climate change spokesperson Leigh Ewbank said Louise Staley's acceptance of climate science was welcome, yet said she must be judged on her track record and the Liberal party's platform. "Saying you accept the science of climate change is one thing, yet the true test of Louise Staley's commitment is her voting record and party's platform," said Leigh Ewbank. 
"This week we get a lot more of the collective members voices in the show, as we all pitch in to discuss how the Head in the Sand action went. As this episode goes up, AoC is about to embark on a week full of actions across Victoria, and you can get involved! Check out the links to get in touch and stay in the loop." Matthew Guy and the Victorian Coalition announced their intention to build a new ‘baseload’ 500 MW power station if elected at the November state election. Friends of the Earth say the Liberal party's decision to release an energy policy open to new gas and coal power fails the climate change test when renewable energy is the best option to cut emissions and deliver cheaper power for Victorians. “We are pleased that the Coalition has finally released its full energy policy. But they have let the Victorian people down by proposing a policy which could have come from the 1950s,” said Cam Walker of Friends of the Earth. “The energy of the future is renewable. It is extraordinary that the Coalition still intends to overturn the VRET – the state renewable energy target which is driving the uptake of renewables while creating thousands of jobs and billions of dollars of investment.
Mid
[ 0.572890025575447, 28, 20.875 ]
/* * Copyright (C) 2004-2010 Geometer Plus <[email protected]> * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * 02110-1301, USA. */ #ifndef __OPTIONSPAGE_H__ #define __OPTIONSPAGE_H__ #include <map> #include <ZLOptionEntry.h> class ZLDialogContent; class OptionsPage; class ComboOptionEntry : public ZLComboOptionEntry { public: ComboOptionEntry(OptionsPage &page, const std::string &initialValue); const std::string &initialValue() const; const std::vector<std::string> &values() const; void onAccept(const std::string&); void onValueSelected(int index); void addValue(const std::string &value); protected: OptionsPage &myPage; std::vector<std::string> myValues; std::string myInitialValue; }; class OptionsPage { public: virtual ~OptionsPage(); protected: OptionsPage(); void registerEntry(ZLDialogContent &tab, const ZLResourceKey &entryKey, ZLOptionEntry *entry, const std::string &name); void registerEntries(ZLDialogContent &tab, const ZLResourceKey &entry0Key, ZLOptionEntry *entry0, const ZLResourceKey &entry1Key, ZLOptionEntry *entry1, const std::string &name); protected: ComboOptionEntry *myComboEntry; private: std::map<ZLOptionEntry*,std::string> myEntries; friend class ComboOptionEntry; }; inline ComboOptionEntry::ComboOptionEntry(OptionsPage &page, const std::string &initialValue) : myPage(page), myInitialValue(initialValue) {} inline const std::string &ComboOptionEntry::initialValue() const { return myInitialValue; } inline const std::vector<std::string> &ComboOptionEntry::values() const { return myValues; } inline void ComboOptionEntry::onAccept(const std::string&) {} inline void ComboOptionEntry::addValue(const std::string &value) { myValues.push_back(value); } inline OptionsPage::OptionsPage() {} inline OptionsPage::~OptionsPage() {} #endif /* __OPTIONSPAGE_H__ */
Mid
[ 0.587121212121212, 38.75, 27.25 ]
Comparative analysis of mean retinal thickness measured using SD-OCT in normal young or old age and glaucomatous eyes. To evaluate changes in macular thickness, ganglion cell layer/inner plexiform layer (GCL/IPL) thickness, and retinal nerve fiber layer (RNFL) thickness in normal eyes and glaucomatous eyes using spectral domain optical coherence tomography (SD-OCT). We enrolled 89 eyes (all left eyes), including 45 (of 45 patients) eyes with glaucoma and 44 (of 44 patients) normal eyes. The data from macular measurements using spectral domain optical coherence tomography were analyzed according to groups divided by age and glaucoma status. The macular thickness analysis, GCL/IPL thickness, and RNFL thickness values determined by SD-OCT scans were compared among the groups. Mean macular thickness decreased significantly with age or glaucoma. Mean GCL/IPL thickness decreased significantly in glaucomatous eyes in all sectors but did not decrease with age. Mean RNFL thickness, which was divided into four quadrants (superior, nasal, inferior, and temporal), decreased significantly in glaucomatous eyes at all quadrants and decreased in the temporal quadrant with age in non-glaucomatous eyes. No significant differences were detected between eyes with normal tension glaucoma (NTG) and primary open angle glaucoma (POAG) in all sectors of mean GCL/IPL thickness, RNFL thickness, and macular thickness. No significant difference in mean thickness was detected between eyes with NTG and POAG. Some of the sectors of RNFL thickness decreased with age or glaucoma. GCL/IPL thickness, however, decreased in glaucomatous eyes but not with age. Therefore, GCL/IPL thickness is less influenced by age when monitoring patients with glaucoma or suspect glaucoma.
High
[ 0.663484486873508, 34.75, 17.625 ]
/** * Copyright 2014 Ryszard Wiśniewski <[email protected]> * Copyright 2016 sim sun <[email protected]> * <p> * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * <p> * http://www.apache.org/licenses/LICENSE-2.0 * <p> * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.tencent.mm.androlib.res.decoder; import com.mindprod.ledatastream.LEDataInputStream; import com.mindprod.ledatastream.LEDataOutputStream; import com.tencent.mm.androlib.AndrolibException; import com.tencent.mm.androlib.ApkDecoder; import com.tencent.mm.androlib.res.data.ResPackage; import com.tencent.mm.androlib.res.data.ResType; import com.tencent.mm.androlib.res.util.StringUtil; import com.tencent.mm.resourceproguard.Configuration; import com.tencent.mm.util.ExtDataInput; import com.tencent.mm.util.ExtDataOutput; import com.tencent.mm.util.FileOperation; import com.tencent.mm.util.Md5Util; import com.tencent.mm.util.TypedValue; import com.tencent.mm.util.Utils; import java.io.BufferedWriter; import java.io.EOFException; import java.io.File; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.FileWriter; import java.io.IOException; import java.io.InputStream; import java.io.Writer; import java.math.BigInteger; import java.text.DecimalFormat; import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.List; import java.util.Map; import java.util.Set; import java.util.logging.Logger; import java.util.regex.Pattern; public class ARSCDecoder { private final static boolean DEBUG = false; private final static short ENTRY_FLAG_COMPLEX = 0x0001; private static final Logger LOGGER = Logger.getLogger(ARSCDecoder.class.getName()); private static final int KNOWN_CONFIG_BYTES = 56; public static Map<Integer, String> mTableStringsResguard = new LinkedHashMap<>(); public static int mMergeDuplicatedResCount = 0; private final Map<String, String> mOldFileName; private final Map<String, Integer> mCurSpecNameToPos; private final HashSet<String> mShouldResguardTypeSet; private final ApkDecoder mApkDecoder; private ExtDataInput mIn; private ExtDataOutput mOut; private Header mHeader; private StringBlock mTableStrings; private StringBlock mTypeNames; private StringBlock mSpecNames; private ResPackage mPkg; private ResType mType; private ResPackage[] mPkgs; private int[] mPkgsLenghtChange; private int mTableLenghtChange = 0; private int mResId; private int mCurrTypeID = -1; private int mCurEntryID = -1; private int mCurPackageID = -1; private long mMergeDuplicatedResTotalSize = 0L; private ResguardStringBuilder mResguardBuilder; private boolean mShouldResguardForType = false; private Writer mMappingWriter; private Writer mMergeDuplicatedResMappingWriter; private Map<Long,List<MergeDuplicatedResInfo>> mMergeDuplicatedResInfoData = new HashMap<>(); private ARSCDecoder(InputStream arscStream, ApkDecoder decoder) throws AndrolibException, IOException { mOldFileName = new LinkedHashMap<>(); mCurSpecNameToPos = new LinkedHashMap<>(); mShouldResguardTypeSet = new 
HashSet<>(); mIn = new ExtDataInput(new LEDataInputStream(arscStream)); mApkDecoder = decoder; proguardFileName(); } private ARSCDecoder(InputStream arscStream, ApkDecoder decoder, ResPackage[] pkgs) throws FileNotFoundException { mOldFileName = new LinkedHashMap<>(); mCurSpecNameToPos = new LinkedHashMap<>(); mShouldResguardTypeSet = new HashSet<>(); mApkDecoder = decoder; mIn = new ExtDataInput(new LEDataInputStream(arscStream)); mOut = new ExtDataOutput(new LEDataOutputStream(new FileOutputStream(mApkDecoder.getOutTempARSCFile(), false))); mPkgs = pkgs; mPkgsLenghtChange = new int[pkgs.length]; } public static ResPackage[] decode(InputStream arscStream, ApkDecoder apkDecoder) throws AndrolibException { try { ARSCDecoder decoder = new ARSCDecoder(arscStream, apkDecoder); ResPackage[] pkgs = decoder.readTable(); return pkgs; } catch (IOException ex) { throw new AndrolibException("Could not decode arsc file", ex); } } public static void write(InputStream arscStream, ApkDecoder decoder, ResPackage[] pkgs) throws AndrolibException { try { ARSCDecoder writer = new ARSCDecoder(arscStream, decoder, pkgs); writer.writeTable(); } catch (IOException ex) { throw new AndrolibException("Could not decode arsc file", ex); } } private void proguardFileName() throws IOException, AndrolibException { mMappingWriter = new BufferedWriter(new FileWriter(mApkDecoder.getResMappingFile(), false)); mMergeDuplicatedResMappingWriter = new BufferedWriter(new FileWriter(mApkDecoder.getMergeDuplicatedResMappingFile(), false)); mMergeDuplicatedResMappingWriter.write("res filter path mapping:\n"); mMergeDuplicatedResMappingWriter.flush(); mResguardBuilder = new ResguardStringBuilder(); mResguardBuilder.reset(null); final Configuration config = mApkDecoder.getConfig(); File rawResFile = mApkDecoder.getRawResFile(); File[] resFiles = rawResFile.listFiles(); // 需要看看哪些类型是要混淆文件路径的 for (File resFile : resFiles) { String raw = resFile.getName(); if (raw.contains("-")) { raw = raw.substring(0, raw.indexOf("-")); } mShouldResguardTypeSet.add(raw); } if (!config.mKeepRoot) { // 需要保持之前的命名方式 if (config.mUseKeepMapping) { HashMap<String, String> fileMapping = config.mOldFileMapping; List<String> keepFileNames = new ArrayList<>(); // 这里面为了兼容以前,也需要用以前的文件名前缀,即res混淆成什么 String resRoot = TypedValue.RES_FILE_PATH; for (String name : fileMapping.values()) { int dot = name.indexOf("/"); if (dot == -1) { throw new IOException(String.format("the old mapping res file path should be like r/a, yours %s\n", name)); } resRoot = name.substring(0, dot); keepFileNames.add(name.substring(dot + 1)); } // 去掉所有之前保留的命名,为了简单操作,mapping里面有的都去掉 mResguardBuilder.removeStrings(keepFileNames); for (File resFile : resFiles) { String raw = "res" + "/" + resFile.getName(); if (fileMapping.containsKey(raw)) { mOldFileName.put(raw, fileMapping.get(raw)); } else { mOldFileName.put(raw, resRoot + "/" + mResguardBuilder.getReplaceString()); } } } else { for (int i = 0; i < resFiles.length; i++) { // 这里也要用linux的分隔符,如果普通的话,就是r mOldFileName.put("res" + "/" + resFiles[i].getName(), TypedValue.RES_FILE_PATH + "/" + mResguardBuilder.getReplaceString() ); } } generalFileResMapping(); } Utils.cleanDir(mApkDecoder.getOutResFile()); } private ResPackage[] readTable() throws IOException, AndrolibException { nextChunkCheckType(Header.TYPE_TABLE); int packageCount = mIn.readInt(); mTableStrings = StringBlock.read(mIn); ResPackage[] packages = new ResPackage[packageCount]; nextChunk(); for (int i = 0; i < packageCount; i++) { packages[i] = readPackage(); } mMappingWriter.close(); 
System.out.printf("resources mapping file %s done\n", mApkDecoder.getResMappingFile().getAbsolutePath()); generalFilterEnd(mMergeDuplicatedResCount, mMergeDuplicatedResTotalSize); mMergeDuplicatedResMappingWriter.close(); System.out.printf("resources filter mapping file %s done\n", mApkDecoder.getMergeDuplicatedResMappingFile().getAbsolutePath()); return packages; } private void writeTable() throws IOException, AndrolibException { System.out.printf("writing new resources.arsc \n"); mTableLenghtChange = 0; writeNextChunkCheck(Header.TYPE_TABLE, 0); int packageCount = mIn.readInt(); mOut.writeInt(packageCount); mTableLenghtChange += StringBlock.writeTableNameStringBlock(mIn, mOut, mTableStringsResguard); writeNextChunk(0); if (packageCount != mPkgs.length) { throw new AndrolibException(String.format("writeTable package count is different before %d, now %d", mPkgs.length, packageCount )); } for (int i = 0; i < packageCount; i++) { mCurPackageID = i; writePackage(); } // 最后需要把整个的size重写回去 reWriteTable(); } private void generalFileResMapping() throws IOException { mMappingWriter.write("res path mapping:\n"); for (String raw : mOldFileName.keySet()) { mMappingWriter.write(" " + raw + " -> " + mOldFileName.get(raw)); mMappingWriter.write("\n"); } mMappingWriter.write("\n\n"); mMappingWriter.write("res id mapping:\n"); mMappingWriter.flush(); } private void generalResIDMapping( String packageName, String typename, String specName, String replace) throws IOException { mMappingWriter.write(" " + packageName + ".R." + typename + "." + specName + " -> " + packageName + ".R." + typename + "." + replace); mMappingWriter.write("\n"); mMappingWriter.flush(); } private void generalFilterResIDMapping( String originalFile, String original, String replaceFile, String replace, long fileLen) throws IOException { mMergeDuplicatedResMappingWriter.write(" " + originalFile + " : " + original + " -> " + replaceFile + " : " + replace + " (size:" + getNetFileSizeDescription(fileLen) + ")"); mMergeDuplicatedResMappingWriter.write("\n"); mMergeDuplicatedResMappingWriter.flush(); } private void generalFilterEnd(int count, long totalSize) throws IOException { mMergeDuplicatedResMappingWriter.write( "removed: count(" + count + "), totalSize(" + getNetFileSizeDescription(totalSize) + ")"); mMergeDuplicatedResMappingWriter.flush(); } private static String getNetFileSizeDescription(long size) { StringBuilder bytes = new StringBuilder(); DecimalFormat format = new DecimalFormat("###.0"); if (size >= 1024 * 1024 * 1024) { double i = (size / (1024.0 * 1024.0 * 1024.0)); bytes.append(format.format(i)).append("GB"); } else if (size >= 1024 * 1024) { double i = (size / (1024.0 * 1024.0)); bytes.append(format.format(i)).append("MB"); } else if (size >= 1024) { double i = (size / (1024.0)); bytes.append(format.format(i)).append("KB"); } else { if (size <= 0) { bytes.append("0B"); } else { bytes.append((int) size).append("B"); } } return bytes.toString(); } private void reWriteTable() throws AndrolibException, IOException { mIn = new ExtDataInput(new LEDataInputStream(new FileInputStream(mApkDecoder.getOutTempARSCFile()))); mOut = new ExtDataOutput(new LEDataOutputStream(new FileOutputStream(mApkDecoder.getOutARSCFile(), false))); writeNextChunkCheck(Header.TYPE_TABLE, mTableLenghtChange); int packageCount = mIn.readInt(); mOut.writeInt(packageCount); StringBlock.writeAll(mIn, mOut); for (int i = 0; i < packageCount; i++) { mCurPackageID = i; writeNextChunk(mPkgsLenghtChange[mCurPackageID]); mOut.writeBytes(mIn, mHeader.chunkSize - 8); 
} mApkDecoder.getOutTempARSCFile().delete(); } private ResPackage readPackage() throws IOException, AndrolibException { checkChunkType(Header.TYPE_PACKAGE); int id = (byte) mIn.readInt(); String name = mIn.readNullEndedString(128, true); System.out.printf("reading packagename %s\n", name); /* typeNameStrings */ mIn.skipInt(); /* typeNameCount */ mIn.skipInt(); /* specNameStrings */ mIn.skipInt(); /* specNameCount */ mIn.skipInt(); mCurrTypeID = -1; mTypeNames = StringBlock.read(mIn); mSpecNames = StringBlock.read(mIn); mResId = id << 24; mPkg = new ResPackage(id, name); // 系统包名不混淆 if (mPkg.getName().equals("android")) { mPkg.setCanResguard(false); } else { mPkg.setCanResguard(true); } nextChunk(); while (mHeader.type == Header.TYPE_LIBRARY) { readLibraryType(); } while (mHeader.type == Header.TYPE_SPEC_TYPE) { readTableTypeSpec(); } return mPkg; } private void writePackage() throws IOException, AndrolibException { checkChunkType(Header.TYPE_PACKAGE); int id = (byte) mIn.readInt(); mOut.writeInt(id); mResId = id << 24; //char_16的,一共256byte mOut.writeBytes(mIn, 256); /* typeNameStrings */ mOut.writeInt(mIn.readInt()); /* typeNameCount */ mOut.writeInt(mIn.readInt()); /* specNameStrings */ mOut.writeInt(mIn.readInt()); /* specNameCount */ mOut.writeInt(mIn.readInt()); StringBlock.writeAll(mIn, mOut); if (mPkgs[mCurPackageID].isCanResguard()) { int specSizeChange = StringBlock.writeSpecNameStringBlock(mIn, mOut, mPkgs[mCurPackageID].getSpecNamesBlock(), mCurSpecNameToPos ); mPkgsLenghtChange[mCurPackageID] += specSizeChange; mTableLenghtChange += specSizeChange; } else { StringBlock.writeAll(mIn, mOut); } writeNextChunk(0); while (mHeader.type == Header.TYPE_LIBRARY) { writeLibraryType(); } while (mHeader.type == Header.TYPE_SPEC_TYPE) { writeTableTypeSpec(); } } /** * 如果是保持mapping的话,需要去掉某部分已经用过的mapping */ private void reduceFromOldMappingFile() { if (mPkg.isCanResguard()) { if (mApkDecoder.getConfig().mUseKeepMapping) { // 判断是否走keepmapping HashMap<String, HashMap<String, HashMap<String, String>>> resMapping = mApkDecoder.getConfig().mOldResMapping; String packName = mPkg.getName(); if (resMapping.containsKey(packName)) { HashMap<String, HashMap<String, String>> typeMaps = resMapping.get(packName); String typeName = mType.getName(); if (typeMaps.containsKey(typeName)) { HashMap<String, String> proguard = typeMaps.get(typeName); // 去掉所有之前保留的命名,为了简单操作,mapping里面有的都去掉 mResguardBuilder.removeStrings(proguard.values()); } } } } } private HashSet<Pattern> getWhiteList(String resType) { final String packName = mPkg.getName(); if (mApkDecoder.getConfig().mWhiteList.containsKey(packName)) { if (mApkDecoder.getConfig().mUseWhiteList) { HashMap<String, HashSet<Pattern>> typeMaps = mApkDecoder.getConfig().mWhiteList.get(packName); return typeMaps.get(resType); } } return null; } private void readLibraryType() throws AndrolibException, IOException { checkChunkType(Header.TYPE_LIBRARY); int libraryCount = mIn.readInt(); int packageId; String packageName; for (int i = 0; i < libraryCount; i++) { packageId = mIn.readInt(); packageName = mIn.readNullEndedString(128, true); System.out.printf("Decoding Shared Library (%s), pkgId: %d\n", packageName, packageId); } while (nextChunk().type == Header.TYPE_TYPE) { readTableTypeSpec(); } } private void readTableTypeSpec() throws AndrolibException, IOException { checkChunkType(Header.TYPE_SPEC_TYPE); byte id = mIn.readByte(); mIn.skipBytes(3); int entryCount = mIn.readInt(); mType = new ResType(mTypeNames.getString(id - 1), mPkg); if (DEBUG) { 
System.out.printf("[ReadTableType] type (%s) id: (%d) curr (%d)\n", mType, id, mCurrTypeID); } // first meet a type of resource if (mCurrTypeID != id) { mCurrTypeID = id; initResGuardBuild(mCurrTypeID); } // 是否混淆文件路径 mShouldResguardForType = isToResguardFile(mTypeNames.getString(id - 1)); // 对,这里是用来描述差异性的!!! mIn.skipBytes(entryCount * 4); mResId = (0xff000000 & mResId) | id << 16; while (nextChunk().type == Header.TYPE_TYPE) { readConfig(); } } private void initResGuardBuild(int resTypeId) { // we need remove string from resguard candidate list if it exists in white list HashSet<Pattern> whiteListPatterns = getWhiteList(mType.getName()); // init resguard builder mResguardBuilder.reset(whiteListPatterns); mResguardBuilder.removeStrings(RawARSCDecoder.getExistTypeSpecNameStrings(resTypeId)); // 如果是保持mapping的话,需要去掉某部分已经用过的mapping reduceFromOldMappingFile(); } private void writeLibraryType() throws AndrolibException, IOException { checkChunkType(Header.TYPE_LIBRARY); int libraryCount = mIn.readInt(); mOut.writeInt(libraryCount); for (int i = 0; i < libraryCount; i++) { mOut.writeInt(mIn.readInt());/*packageId*/ mOut.writeBytes(mIn, 256); /*packageName*/ } writeNextChunk(0); while (mHeader.type == Header.TYPE_TYPE) { writeTableTypeSpec(); } } private void writeTableTypeSpec() throws AndrolibException, IOException { checkChunkType(Header.TYPE_SPEC_TYPE); byte id = mIn.readByte(); mOut.writeByte(id); mResId = (0xff000000 & mResId) | id << 16; mOut.writeBytes(mIn, 3); int entryCount = mIn.readInt(); mOut.writeInt(entryCount); // 对,这里是用来描述差异性的!!! ///* flags */mIn.skipBytes(entryCount * 4); int[] entryOffsets = mIn.readIntArray(entryCount); mOut.writeIntArray(entryOffsets); while (writeNextChunk(0).type == Header.TYPE_TYPE) { writeConfig(); } } private void readConfig() throws IOException, AndrolibException { checkChunkType(Header.TYPE_TYPE); /* typeId */ mIn.skipInt(); int entryCount = mIn.readInt(); int entriesStart = mIn.readInt(); readConfigFlags(); int[] entryOffsets = mIn.readIntArray(entryCount); for (int i = 0; i < entryOffsets.length; i++) { mCurEntryID = i; if (entryOffsets[i] != -1) { mResId = (mResId & 0xffff0000) | i; readEntry(); } } } private void writeConfig() throws IOException, AndrolibException { checkChunkType(Header.TYPE_TYPE); /* typeId */ mOut.writeInt(mIn.readInt()); /* entryCount */ int entryCount = mIn.readInt(); mOut.writeInt(entryCount); /* entriesStart */ mOut.writeInt(mIn.readInt()); writeConfigFlags(); int[] entryOffsets = mIn.readIntArray(entryCount); mOut.writeIntArray(entryOffsets); for (int i = 0; i < entryOffsets.length; i++) { if (entryOffsets[i] != -1) { mResId = (mResId & 0xffff0000) | i; writeEntry(); } } } private void readEntry() throws IOException, AndrolibException { mIn.skipBytes(2); short flags = mIn.readShort(); int specNamesId = mIn.readInt(); if (mPkg.isCanResguard()) { // 混淆过或者已经添加到白名单的都不需要再处理了 if (!mResguardBuilder.isReplaced(mCurEntryID) && !mResguardBuilder.isInWhiteList(mCurEntryID)) { Configuration config = mApkDecoder.getConfig(); boolean isWhiteList = false; if (config.mUseWhiteList) { isWhiteList = dealWithWhiteList(specNamesId, config); } if (!isWhiteList) { dealWithNonWhiteList(specNamesId, config); } } } if ((flags & ENTRY_FLAG_COMPLEX) == 0) { readValue(true, specNamesId); } else { readComplexEntry(false, specNamesId); } } /** * deal with whitelist * * @param specNamesId resource spec name id * @param config {@Configuration} AndResGuard configuration * @return isWhiteList whether this resource is processed by whitelist */ private boolean 
dealWithWhiteList(int specNamesId, Configuration config) throws AndrolibException { String packName = mPkg.getName(); if (config.mWhiteList.containsKey(packName)) { HashMap<String, HashSet<Pattern>> typeMaps = config.mWhiteList.get(packName); String typeName = mType.getName(); if (typeMaps.containsKey(typeName)) { String specName = mSpecNames.get(specNamesId).toString(); HashSet<Pattern> patterns = typeMaps.get(typeName); for (Iterator<Pattern> it = patterns.iterator(); it.hasNext(); ) { Pattern p = it.next(); if (p.matcher(specName).matches()) { if (DEBUG) { System.out.printf("[match] matcher %s ,typeName %s, specName :%s\n", p.pattern(), typeName, specName); } mPkg.putSpecNamesReplace(mResId, specName); mPkg.putSpecNamesblock(specName, specName); mResguardBuilder.setInWhiteList(mCurEntryID); mType.putSpecResguardName(specName); return true; } } } } return false; } private void dealWithNonWhiteList(int specNamesId, Configuration config) throws AndrolibException, IOException { String replaceString = null; boolean keepMapping = false; if (config.mUseKeepMapping) { String packName = mPkg.getName(); if (config.mOldResMapping.containsKey(packName)) { HashMap<String, HashMap<String, String>> typeMaps = config.mOldResMapping.get(packName); String typeName = mType.getName(); if (typeMaps.containsKey(typeName)) { HashMap<String, String> nameMap = typeMaps.get(typeName); String specName = mSpecNames.get(specNamesId).toString(); if (nameMap.containsKey(specName)) { keepMapping = true; replaceString = nameMap.get(specName); } } } } if (!keepMapping) { replaceString = mResguardBuilder.getReplaceString(); } mResguardBuilder.setInReplaceList(mCurEntryID); if (replaceString == null) { throw new AndrolibException("readEntry replaceString == null"); } generalResIDMapping(mPkg.getName(), mType.getName(), mSpecNames.get(specNamesId).toString(), replaceString); mPkg.putSpecNamesReplace(mResId, replaceString); // arsc name列混淆成固定名字, 减少string pool大小 boolean useFixedName = config.mFixedResName != null && config.mFixedResName.length() > 0; String fixedName = useFixedName ? 
config.mFixedResName : replaceString; mPkg.putSpecNamesblock(fixedName, replaceString); mType.putSpecResguardName(replaceString); } private void writeEntry() throws IOException, AndrolibException { /* size */ mOut.writeBytes(mIn, 2); short flags = mIn.readShort(); mOut.writeShort(flags); int specNamesId = mIn.readInt(); ResPackage pkg = mPkgs[mCurPackageID]; if (pkg.isCanResguard()) { specNamesId = mCurSpecNameToPos.get(pkg.getSpecRepplace(mResId)); if (specNamesId < 0) { throw new AndrolibException(String.format("writeEntry new specNamesId < 0 %d", specNamesId)); } } mOut.writeInt(specNamesId); if ((flags & ENTRY_FLAG_COMPLEX) == 0) { writeValue(); } else { writeComplexEntry(); } } /** * @param flags whether read direct */ private void readComplexEntry(boolean flags, int specNamesId) throws IOException, AndrolibException { int parent = mIn.readInt(); int count = mIn.readInt(); for (int i = 0; i < count; i++) { mIn.readInt(); readValue(flags, specNamesId); } } private void writeComplexEntry() throws IOException, AndrolibException { mOut.writeInt(mIn.readInt()); int count = mIn.readInt(); mOut.writeInt(count); for (int i = 0; i < count; i++) { mOut.writeInt(mIn.readInt()); writeValue(); } } /** * @param flags whether read direct */ private void readValue(boolean flags, int specNamesId) throws IOException, AndrolibException { /* size */ mIn.skipCheckShort((short) 8); /* zero */ mIn.skipCheckByte((byte) 0); byte type = mIn.readByte(); int data = mIn.readInt(); //这里面有几个限制,一对于string ,id, array我们是知道肯定不用改的,第二看要那个type是否对应有文件路径 if (mPkg.isCanResguard() && flags && type == TypedValue.TYPE_STRING && mShouldResguardForType && mShouldResguardTypeSet.contains(mType.getName())) { if (mTableStringsResguard.get(data) == null) { String raw = mTableStrings.get(data).toString(); if (StringUtil.isBlank(raw) || raw.equalsIgnoreCase("null")) return; String proguard = mPkg.getSpecRepplace(mResId); //这个要写死这个,因为resources.arsc里面就是用这个 int secondSlash = raw.lastIndexOf("/"); if (secondSlash == -1) { throw new AndrolibException(String.format("can not find \\ or raw string in res path = %s", raw)); } String newFilePath = raw.substring(0, secondSlash); if (!mApkDecoder.getConfig().mKeepRoot) { newFilePath = mOldFileName.get(raw.substring(0, secondSlash)); } if (newFilePath == null) { System.err.printf("can not found new res path, raw=%s\n", raw); return; } //同理这里不能用File.separator,因为resources.arsc里面就是用这个 String result = newFilePath + "/" + proguard; int firstDot = raw.indexOf("."); if (firstDot != -1) { result += raw.substring(firstDot); } String compatibaleraw = new String(raw); String compatibaleresult = new String(result); //为了适配window要做一次转换 if (!File.separator.contains("/")) { compatibaleresult = compatibaleresult.replace("/", File.separator); compatibaleraw = compatibaleraw.replace("/", File.separator); } File resRawFile = new File(mApkDecoder.getOutTempDir().getAbsolutePath() + File.separator + compatibaleraw); File resDestFile = new File(mApkDecoder.getOutDir().getAbsolutePath() + File.separator + compatibaleresult); MergeDuplicatedResInfo filterInfo = null; boolean mergeDuplicatedRes = mApkDecoder.getConfig().mMergeDuplicatedRes; if (mergeDuplicatedRes) { filterInfo = mergeDuplicated(resRawFile, resDestFile, compatibaleraw, result); if (filterInfo != null) { resDestFile = new File(filterInfo.filePath); result = filterInfo.fileName; } } //这里用的是linux的分隔符 HashMap<String, Integer> compressData = mApkDecoder.getCompressData(); if (compressData.containsKey(raw)) { compressData.put(result, compressData.get(raw)); } else 
{ System.err.printf("can not find the compress dataresFile=%s\n", raw); } if (!resRawFile.exists()) { System.err.printf("can not find res file, you delete it? path: resFile=%s\n", resRawFile.getAbsolutePath()); } else { if (!mergeDuplicatedRes && resDestFile.exists()) { throw new AndrolibException(String.format("res dest file is already found: destFile=%s", resDestFile.getAbsolutePath() )); } if (filterInfo == null) { FileOperation.copyFileUsingStream(resRawFile, resDestFile); } //already copied mApkDecoder.removeCopiedResFile(resRawFile.toPath()); mTableStringsResguard.put(data, result); } } } } /** * resource filtering, filtering duplicate resources, reducing the volume of apk */ private MergeDuplicatedResInfo mergeDuplicated(File resRawFile, File resDestFile, String compatibaleraw, String result) throws IOException { MergeDuplicatedResInfo filterInfo = null; List<MergeDuplicatedResInfo> mergeDuplicatedResInfoList = mMergeDuplicatedResInfoData.get(resRawFile.length()); if (mergeDuplicatedResInfoList != null) { for (MergeDuplicatedResInfo mergeDuplicatedResInfo : mergeDuplicatedResInfoList) { if (mergeDuplicatedResInfo.md5 == null) { mergeDuplicatedResInfo.md5 = Md5Util.getMD5Str(new File(mergeDuplicatedResInfo.filePath)); } String resRawFileMd5 = Md5Util.getMD5Str(resRawFile); if (!resRawFileMd5.isEmpty() && resRawFileMd5.equals(mergeDuplicatedResInfo.md5)) { filterInfo = mergeDuplicatedResInfo; filterInfo.md5 = resRawFileMd5; break; } } } if (filterInfo != null) { generalFilterResIDMapping(compatibaleraw, result, filterInfo.originalName, filterInfo.fileName, resRawFile.length()); mMergeDuplicatedResCount++; mMergeDuplicatedResTotalSize += resRawFile.length(); } else { MergeDuplicatedResInfo info = new MergeDuplicatedResInfo.Builder() .setFileName(result) .setFilePath(resDestFile.getAbsolutePath()) .setOriginalName(compatibaleraw) .create(); info.fileName = result; info.filePath = resDestFile.getAbsolutePath(); info.originalName = compatibaleraw; if (mergeDuplicatedResInfoList == null) { mergeDuplicatedResInfoList = new ArrayList<>(); mMergeDuplicatedResInfoData.put(resRawFile.length(), mergeDuplicatedResInfoList); } mergeDuplicatedResInfoList.add(info); } return filterInfo; } private void writeValue() throws IOException, AndrolibException { /* size */ mOut.writeCheckShort(mIn.readShort(), (short) 8); /* zero */ mOut.writeCheckByte(mIn.readByte(), (byte) 0); byte type = mIn.readByte(); mOut.writeByte(type); int data = mIn.readInt(); mOut.writeInt(data); } private void readConfigFlags() throws IOException, AndrolibException { int size = mIn.readInt(); int read = 28; if (size < 28) { throw new AndrolibException("Config size < 28"); } boolean isInvalid = false; short mcc = mIn.readShort(); short mnc = mIn.readShort(); char[] language = new char[] { (char) mIn.readByte(), (char) mIn.readByte() }; char[] country = new char[] { (char) mIn.readByte(), (char) mIn.readByte() }; byte orientation = mIn.readByte(); byte touchscreen = mIn.readByte(); int density = mIn.readUnsignedShort(); byte keyboard = mIn.readByte(); byte navigation = mIn.readByte(); byte inputFlags = mIn.readByte(); /* inputPad0 */ mIn.skipBytes(1); short screenWidth = mIn.readShort(); short screenHeight = mIn.readShort(); short sdkVersion = mIn.readShort(); /* minorVersion, now must always be 0 */ mIn.skipBytes(2); byte screenLayout = 0; byte uiMode = 0; short smallestScreenWidthDp = 0; if (size >= 32) { screenLayout = mIn.readByte(); uiMode = mIn.readByte(); smallestScreenWidthDp = mIn.readShort(); read = 32; } short 
screenWidthDp = 0; short screenHeightDp = 0; if (size >= 36) { screenWidthDp = mIn.readShort(); screenHeightDp = mIn.readShort(); read = 36; } char[] localeScript = null; char[] localeVariant = null; if (size >= 48) { localeScript = readScriptOrVariantChar(4).toCharArray(); localeVariant = readScriptOrVariantChar(8).toCharArray(); read = 48; } byte screenLayout2 = 0; if (size >= 52) { screenLayout2 = mIn.readByte(); mIn.skipBytes(3); // reserved padding read = 52; } if (size >= 56) { mIn.skipBytes(4); read = 56; } int exceedingSize = size - KNOWN_CONFIG_BYTES; if (exceedingSize > 0) { byte[] buf = new byte[exceedingSize]; read += exceedingSize; mIn.readFully(buf); BigInteger exceedingBI = new BigInteger(1, buf); if (exceedingBI.equals(BigInteger.ZERO)) { LOGGER.fine(String.format("Config flags size > %d, but exceeding bytes are all zero, so it should be ok.", KNOWN_CONFIG_BYTES )); } else { LOGGER.warning(String.format("Config flags size > %d. Exceeding bytes: 0x%X.", KNOWN_CONFIG_BYTES, exceedingBI )); isInvalid = true; } } } private String readScriptOrVariantChar(int length) throws AndrolibException, IOException { StringBuilder string = new StringBuilder(16); while (length-- != 0) { short ch = mIn.readByte(); if (ch == 0) { break; } string.append((char) ch); } mIn.skipBytes(length); return string.toString(); } private void writeConfigFlags() throws IOException, AndrolibException { //总的有多大 int size = mIn.readInt(); if (size < 28) { throw new AndrolibException("Config size < 28"); } mOut.writeInt(size); mOut.writeBytes(mIn, size - 4); } private Header nextChunk() throws IOException { return mHeader = Header.read(mIn); } private void checkChunkType(int expectedType) throws AndrolibException { if (mHeader.type != expectedType) { throw new AndrolibException(String.format("Invalid chunk type: expected=0x%08x, got=0x%08x", expectedType, mHeader.type )); } } private void nextChunkCheckType(int expectedType) throws IOException, AndrolibException { nextChunk(); checkChunkType(expectedType); } private Header writeNextChunk(int diffSize) throws IOException, AndrolibException { mHeader = Header.readAndWriteHeader(mIn, mOut, diffSize); return mHeader; } private Header writeNextChunkCheck(int expectedType, int diffSize) throws IOException, AndrolibException { mHeader = Header.readAndWriteHeader(mIn, mOut, diffSize); if (mHeader.type != expectedType) { throw new AndrolibException(String.format("Invalid chunk type: expected=%d, got=%d", expectedType, mHeader.type)); } return mHeader; } /** * 为了加速,不需要处理string,id,array,这几个是肯定不是的 */ private boolean isToResguardFile(String name) { return (!name.equals("string") && !name.equals("id") && !name.equals("array")); } public static class Header { public final static short TYPE_NONE = -1, TYPE_TABLE = 0x0002, TYPE_PACKAGE = 0x0200, TYPE_TYPE = 0x0201, TYPE_SPEC_TYPE = 0x0202, TYPE_LIBRARY = 0x0203; public final short type; public final int chunkSize; public Header(short type, int size) { this.type = type; this.chunkSize = size; } public static Header read(ExtDataInput in) throws IOException { short type; try { type = in.readShort(); short count = in.readShort(); int size = in.readInt(); return new Header(type, size); } catch (EOFException ex) { return new Header(TYPE_NONE, 0); } } public static Header readAndWriteHeader(ExtDataInput in, ExtDataOutput out, int diffSize) throws IOException, AndrolibException { short type; int size; try { type = in.readShort(); out.writeShort(type); short count = in.readShort(); out.writeShort(count); size = in.readInt(); size -= 
diffSize; if (size <= 0) { throw new AndrolibException(String.format("readAndWriteHeader size < 0: size=%d", size)); } out.writeInt(size); } catch (EOFException ex) { return new Header(TYPE_NONE, 0); } return new Header(type, size); } } public static class FlagsOffset { public final int offset; public final int count; public FlagsOffset(int offset, int count) { this.offset = offset; this.count = count; } } private static class MergeDuplicatedResInfo { private String fileName; private String filePath; private String originalName; private String md5; private MergeDuplicatedResInfo(String fileName, String filePath, String originalName, String md5) { this.fileName = fileName; this.filePath = filePath; this.originalName = originalName; this.md5 = md5; } static class Builder { private String fileName; private String filePath; private String originalName; private String md5; Builder setFileName(String fileName) { this.fileName = fileName; return this; } Builder setFilePath(String filePath) { this.filePath = filePath; return this; } public Builder setMd5(String md5) { this.md5 = md5; return this; } Builder setOriginalName(String originalName) { this.originalName = originalName; return this; } MergeDuplicatedResInfo create() { return new MergeDuplicatedResInfo(fileName, filePath, originalName, md5); } } } private class ResguardStringBuilder { private final List<String> mReplaceStringBuffer; private final Set<Integer> mIsReplaced; private final Set<Integer> mIsWhiteList; private String[] mAToZ = { "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z" }; private String[] mAToAll = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "_", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z" }; /** * 在window上面有些关键字是不能作为文件名的 * CON, PRN, AUX, CLOCK$, NUL * COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9 * LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. 
*/ private HashSet<String> mFileNameBlackList; public ResguardStringBuilder() { mFileNameBlackList = new HashSet<>(); mFileNameBlackList.add("con"); mFileNameBlackList.add("prn"); mFileNameBlackList.add("aux"); mFileNameBlackList.add("nul"); mReplaceStringBuffer = new ArrayList<>(); mIsReplaced = new HashSet<>(); mIsWhiteList = new HashSet<>(); } public void reset(HashSet<Pattern> blacklistPatterns) { mReplaceStringBuffer.clear(); mIsReplaced.clear(); mIsWhiteList.clear(); for (int i = 0; i < mAToZ.length; i++) { String str = mAToZ[i]; if (!Utils.match(str, blacklistPatterns)) { mReplaceStringBuffer.add(str); } } for (int i = 0; i < mAToZ.length; i++) { String first = mAToZ[i]; for (int j = 0; j < mAToAll.length; j++) { String str = first + mAToAll[j]; if (!Utils.match(str, blacklistPatterns)) { mReplaceStringBuffer.add(str); } } } for (int i = 0; i < mAToZ.length; i++) { String first = mAToZ[i]; for (int j = 0; j < mAToAll.length; j++) { String second = mAToAll[j]; for (int k = 0; k < mAToAll.length; k++) { String third = mAToAll[k]; String str = first + second + third; if (!mFileNameBlackList.contains(str) && !Utils.match(str, blacklistPatterns)) { mReplaceStringBuffer.add(str); } } } } } // 对于某种类型用过的mapping,全部不能再用了 public void removeStrings(Collection<String> collection) { if (collection == null) return; mReplaceStringBuffer.removeAll(collection); } public boolean isReplaced(int id) { return mIsReplaced.contains(id); } public boolean isInWhiteList(int id) { return mIsWhiteList.contains(id); } public void setInWhiteList(int id) { mIsWhiteList.add(id); } public void setInReplaceList(int id) { mIsReplaced.add(id); } public String getReplaceString() throws AndrolibException { if (mReplaceStringBuffer.isEmpty()) { throw new AndrolibException(String.format("now can only proguard less than 35594 in a single type\n")); } return mReplaceStringBuffer.remove(0); } } }
Mid
[ 0.5435684647302901, 32.75, 27.5 ]
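The constant 35594 in the Java record's final error message matches the arithmetic of the name generator above: 26 allowed first characters times 37 squared allowed later characters. A quick back-of-the-envelope check in Python (not part of the original record; it ignores any user-supplied blacklist patterns):

first_chars = 26                 # a-z, allowed as the first character
later_chars = 37                 # 0-9, '_', a-z, allowed in later positions

one_char = first_chars                                    # 26 single-letter names
two_char = first_chars * later_chars                      # 962 two-character names
reserved = 4                                              # "con", "prn", "aux", "nul" are skipped
three_char = first_chars * later_chars ** 2 - reserved    # 35590 three-character names

print(first_chars * later_chars ** 2)                     # 35594, the constant in the error message
print(one_char + two_char + three_char)                   # roughly 36578 usable names per resource type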
UPDATE: This story was updated April 3, 2013, to add the news of a charge laid against former Guelph Conservative campaign worker Michael Sona and to update some information below. In 2011, Elections Canada investigators began probing calls placed to voters in Guelph, Ont. in the final days of the 2011 federal election that wrongly claimed to be from Elections Canada. The calls redirected voters to a polling station they couldn't use. It's illegal both to interfere with a person's right to vote and to impersonate Elections Canada. Now, with word that former Guelph campaign worker Michael Sona faces a single charge in the affair that has come to be know as the robocalls scandal, here's a look at what we know about the case, according to court documents and information provided in interviews: 1. Elections Canada investigator Al Mathews started looking into complaints in Guelph on May 5, 2011, three days after the election that saw reports of illicit phone calls. The winning candidate in the riding, Liberal Frank Valeriote, compiled a list of almost 80 names of people complaining about the calls. News of the investigation didn't break until Feb. 22, 2012. 2. All political parties use automated robocalls and live calls to identify voter support and contact people during a campaign. The campaign of Guelph Conservative candidate Marty Burke used RackNine, a company that offers voice broadcasting services, to make legitimate robocalls to campaign supporters. The person who made the fraudulent robocalls also used RackNine. A demonstrator protests in Montreal against fraudulent election calls. (Graham Hughes/Canadian Press) 3. The person who made the calls used a disposable, or burner, cellphone, registered to a "Pierre Poutine." The RackNine charges were paid via PayPal using prepaid credit cards, purchased at two Shoppers Drug Mart stores in Guelph. Shoppers Drug Mart doesn't keep its security camera videos long enough to see who bought the cards more than a year ago. 4. Elections Canada traced the IP address used to access RackNine on election day and send the fraudulent message. Mathews got a court order for Rogers, the company that provided the internet service to that IP address, to provide the customer information that matches that address, on March 20, 2012. 5. Pierre Poutine and Burke campaign worker Andrew Prescott accessed their RackNine accounts using the same IP address. On election day, they accessed their RackNine accounts from the same IP address within four minutes of each other, Mathews says in documents filed in court. 6. A court document lists the billing account numbers for the customer information provided by Rogers to Mathews. Those accounts don't match the number found on the Burke campaign's Rogers invoices submitted to Elections Canada, suggesting RackNine wasn't accessed through a computer in the Burke campaign office. 7. Two Conservative staffers, accompanied by the party's lawyer, told Mathews they overheard Michael Sona, another Burke campaign worker, talking about "making a misleading poll moving call." Sona, who stepped down from a job in the office of Conservative MP Eve Adams when the story broke, has previously said he had nothing to do with the misleading calls. Mathews later corrected the record, adding a footnote to a subsequent court document that said campaign worker Matthew McBain actually described Sona suggesting an autodial call "that would not track back to the Burke campaign," not a misleading poll moving call. 8. 
Arthur Hamilton, the Conservative Party's lawyer, told Mathews the list of phone numbers uploaded to RackNine by Pierre Poutine appeared to be a list of identified non-Conservative supporters, with data on it that was updated in CIMS, the party's database, days before the election. The CBC's Terry Milewski had reported a similar pattern after sifting through complaints in 31 ridings. 9. News coverage led to 40,000 people contacting Elections Canada one way or another — whether to report a misdirecting call or by signing an online petition to express concern that it had happened — chief electoral officer Marc Mayrand told a parliamentary committee in April. There are now specific allegations in almost 200 ridings by 800 people. As of April 3, 2013, Elections Canada is investigating 1,399 complaints in 247 ridings.
Low
[ 0.524390243902439, 32.25, 29.25 ]
Floating thrombus in the aortic arch as an origin of simultaneous peripheral emboli. Few cases of a floating thrombus in a normal aorta have been reported without other underlying reasons for the thrombus formation and its systemic embolic complications. We report a case in which a floating thrombus in the proximal aortic arch was detected by echocardiography and computed tomography angiography as the origin of upper-extremity and ophthalmic embolism.
High
[ 0.697872340425531, 30.75, 13.3125 ]
This opinion will be unpublished and may not be cited except as provided by Minn. Stat. § 480A.08, subd. 3 (2014). STATE OF MINNESOTA IN COURT OF APPEALS A14-2018 Thomas James Mitchell, Appellant, vs. $6,429 of US Currency, et al., Respondents Filed August 3, 2015 Affirmed Worke, Judge Stearns County District Court File No. 73-CV-13-2842, 73-CR-13-1419 Charles L. Hawkins, Minneapolis, Minnesota (for appellant) Janelle P. Kendall, Stearns County Attorney, Lotte R. Hansen, Assistant County Attorney, St. Cloud, Minnesota (for respondent) Considered and decided by Worke, Presiding Judge; Hudson, Judge; and Chutich, Judge. UNPUBLISHED OPINION WORKE, Judge Appellant challenges the district court’s determination that he failed to properly serve a demand for judicial determination of a forfeiture. We affirm. FACTS On February 13, 2013, law enforcement obtained a warrant to search the residence of appellant Thomas James Mitchell for controlled substances. Officers seized marijuana, methamphetamine, drug paraphernalia, $6,429 in cash, $1,403 in collector bills, $52 in collector coins, 19 foreign collector coins, and 4.62 ounces of gold jewelry. The same day, the St. Cloud Police Department personally served upon Mitchell a Notice of Seizure and Intent to Forfeit Property for the cash, the collector bills and coins, and the gold jewelry. Mitchell had 60 days—until April 15—to file a demand for judicial determination of the forfeiture. On April 1, Mitchell filed, via U.S. mail, a demand for judicial determination upon the Stearns County Court Administrator and the Stearns County Attorney’s Office. The copy to the attorney’s office included only one Acknowledgement of Service and did not include a prepaid, self-addressed envelope. The Stearns County Attorney’s Office did not return the Acknowledgment of Service to Mitchell. Following the conclusion of criminal proceedings against Mitchell, Stearns County moved to dismiss Mitchell’s demand on the grounds that the district court lacked jurisdiction because service of the demand failed to meet the requirements of Minn. Stat. § 609.5314 (2012), which indicates that a demand must be filed in accordance with the Rules of Civil Procedure. The district court agreed, concluding that Mitchell’s filing by mail failed to comport with civil rules 4.05 and 4.06, which set out the requirements of service of a complaint by mail, and dismissed Mitchell’s demand. He now appeals. 2 DECISION “Whether service of process was effective, and personal jurisdiction therefore exists, is a question of law that we review de novo.” Shamrock Dev., Inc. v. Smith, 754 N.W.2d 377, 382 (Minn. 2008). Mitchell offers two arguments as to why his attempted service was valid. 2012 revision Mitchell first argues that a 2012 revision to Minn. Stat. § 609.5314 has injected ambiguity into the statute, and if that ambiguity is resolved in his favor it leads to the conclusion that service was valid. The following language was added in 2012: “The claimant may serve the complaint on the prosecuting authority by any means permitted by court rules.” Minn. Stat. § 609.5314, subd. 3(a); see 2012 Minn. Laws ch. 128, § 19, at 29. Mitchell focuses on the language that a complaint may be served “by any means” permitted by court rules. He then contends that because he served his complaint in accordance with Rules of Civil Procedure 5.01 and 5.02, his service was valid. We do not agree that the language is ambiguous. 
The 2012 revision is specific to complaints, and does not alter the fundamental difference between rule 4 and rule 5. Rule 4 governs proper service of a complaint. Rule 5 governs submission of pleadings and documents served after the complaint has been properly filed. Rule 5.01 is explicit that it applies to “every pleading subsequent to the original complaint.” Minn. R. Civ. P. 5.01 (emphasis added). The language of both rules highlights the distinction between them. 3 Much of Mitchell’s argument is devoted to advancing the idea that recent statutory revisions were intended to provide more leeway to those who wish to challenge an administrative forfeiture. E.g., 2012 Minn. Laws ch. 128, § 18, at 28-29. But review of the changes does not lead to Mitchell’s conclusion. A review of the changes indicates that the language was modified so as to be more understandable and the consequences of inaction more apparent to a layperson, not that a material change was intended. Next, Mitchell asserts that Stearns County, via the St. Cloud Police Department, initiated the action when it notified him of its intent to forfeit the property, and thus his filing qualifies as a “response,” and therefore he need only comply with rule 5, and not rule 4. But the district court has jurisdiction once the claimant has filed according to Minn. Stat. § 609.5314; if the claimant fails to serve and file a demand, a forfeiture proceeding is not initiated. Peterson v. 2004 Ford Crown Victoria, 792 N.W.2d 454, 458 (Minn. App. 2010). “This means that unless a plaintiff starts a lawsuit, there is no proceeding.” Id. Mitchell’s mailing failed to fulfill the requirements for serving a complaint. The text that Mitchell relies upon specifies that it applies to service of a complaint, and thus does not permit him to employ rules of service that explicitly apply to filings other than a complaint. And it was Mitchell’s responsibility to initiate an action to recover his property; the courts are not involved until he acts, so his filing was not a response to an action initiated by Stearns County. 4 Service not subject to civil rules Mitchell also argues that the rules of civil procedure do not apply at all to his filing, and thus service was proper. He asserts that the civil rules apply only after his demand is filed, not before or coincident with it. Mitchell’s argument fails because he cites only civil rules 5.01 and 5.02 to support his contention that his filing was proper. Mitchell cannot simultaneously argue that the civil rules do not apply, and then validate his attempted service using those same rules. Mitchell fails to identify which rules would be applicable if he is correct that the civil rules do not govern. He does not indicate how a district court is to review the acceptability of his filing if indeed the civil rules are inapplicable. As best can be discerned from Mitchell’s brief, it seems that a district court is only to look to section 609.5314, subdivision 3(a), but that statute expressly states that “[t]he demand must be in the form of a civil complaint,” and includes no details as to how that complaint must be served. The only statute at issue in this case is dispositive and clear: “The proceedings are governed by the Rules of Civil Procedure.” Id. Affirmed. 5
Low
[ 0.45642201834862306, 24.875, 29.625 ]
Q: Asymptotic behavior of $\sum_{k=1}^{n}\frac{p_{k+1}}{p_{k+1}-p_k}$ I refer to my previous question Asymptotic behavior of a certain sum of ratios of consecutives primes. We can split the sum $$\sum_{k=1}^{n}\frac{p_{k+1}+p_k}{p_{k+1}-p_k}$$ where $p_k$ stands for the prime of index $k$, into the following two $\sum_{k=1}^{n}\frac{p_{k+1}}{p_{k+1}-\,p_k}$ ~ $\frac{n\,(n+1)}{e}\,\log\log n$ $\sum_{k=1}^{n}\frac{p_{k}}{p_{k+1}-\,p_k}$ ~ $\frac{(n-1)\,n}{e}\,\log\log n$ Is there anybody who can confirm this asymptotic behavior and, if it is correct, give a sketch of a proof? A: My response to your earlier question applies almost verbatim. The heuristic reasoning there gives that \begin{align*} \sum_{k=1}^{n}\frac{p_k}{p_{k+1}-p_k}&\sim\frac{C}{2}\, n^2\log\log n,\\ \sum_{k=1}^{n}\frac{p_{k+1}}{p_{k+1}-p_k}&\sim\frac{C}{2}\, n^2\log\log n, \end{align*} where the constant $C>0$ is the same as in that post. As I wrote there, this constant is almost surely different from $2/e$. In fact, as Lucia kindly pointed out in a comment, $C=1$. The difficulty in estimating these sums lies in the erratic behaviour of the denominator $p_{k+1}-p_k$. The numerator $p_k$ (resp. $p_{k+1}$ or $p_k+p_{k+1}$) is easy to handle as it is asymptotically $k\log k$ (resp. $k\log k$ or $2k\log k$).
High
[ 0.656836461126005, 30.625, 16 ]
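A small numerical sanity check of the growth rate claimed in the answer above; this is only illustrative, not a proof, and it assumes the sympy package is available:

from math import log
from sympy import prime

n = 3000
p = [prime(k) for k in range(1, n + 2)]                   # p_1 .. p_{n+1}

s = sum(p[k] / (p[k + 1] - p[k]) for k in range(n))       # sum_{k=1..n} p_k / (p_{k+1} - p_k)
heuristic = 0.5 * n ** 2 * log(log(n))                    # (C/2) n^2 log log n with C = 1

print(s / heuristic)   # of order 1 if the quoted heuristic is right; convergence is very slow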
As disclosed by this inventor of the present invention in the U.S. Pat. No. 5,085,660, a vertebral fixation system comprises a double-threaded bone screw having a first threaded portion for fastening onto a bone or vertebra, and a second threaded portion engageable with a fixation plate. The second threaded portion and the fixation plate are fastened at a right angle at which the first threaded portion is fastened onto a bone or vertebra. As a result, the angle at which the first threaded portion is fastened onto a bone or vertebra cannot be changed in accordance with the surgical requirement when the fixation plate is chosen. Furthermore, the fixation plate is more rigid than the bone or vertebra onto which the double-threaded bone screw is fastened, thereby forcing the first threaded portion to fasten onto the bone or vertebra according to the advancing angle at which the second threaded portion is engaged with the fixation plate, if the angle at which the first threaded portion is fastened onto a bone or vertebra deviates from the advancing angle of the second threaded portion in the fixation plate. As a result, the threads formed by the first threaded portion in the bone or vertebra are vulnerable to damage and the first threaded portion of the double-threaded bone screw is therefore unable to hold the bone or vertebra securely.
Mid
[ 0.5575048732943471, 35.75, 28.375 ]
Teenage victims of sex traffickers will get money from sale of brothels The closed La Costeñita nightclub sits in the 8000 block of Clinton Drive. It is among properties of two convicted sex traffickers. The 10 dwellings, restaurants, lots and cantinas will be sold to benefit victims. Photo: Brett Coomer For seven years, the bars and shacks on the gritty east edge of Houston served as unsightly venues of serial sex crimes where a convicted human trafficker ruled with threats, and Mexican teens as young as 14 were battered, forced to live in sheds, and toil as cantina call girls after being smuggled to Houston. But U.S. District Judge Lynn Hughes ruled Tuesday that five of the youngest victims in one of the city's most visible human trafficking rings will benefit from the sale of ringleader Maria "Nancy" Rojas and her husband's bars and the rest of their ramshackle real estate empire, according to instructions delivered to prosecutors. It is the first time prosecutors have successfully pushed for forfeiture of assets for the benefit of sex trafficking victims in Houston - and among only a few cases nationally, said Edward Gallagher, a senior federal prosecutor who heads Houston's Human Trafficking Rescue Alliance (HTRA). Valued at $602,000 The convicted Houston couple's collection of 10 dwellings, restaurants, weedy vacant lots and cantinas, known as La Cueva and La Costeñita, carry an assessed value of about $602,000, and all proceeds will be divided equally among former teenage victims to pay for medical, psychological and educational expenses to "try to restore them to well-adjusted productivity," Hughes told prosecutors at a federal court hearing. Hughes excluded nine other women in their 20s and 30s from benefitting in his decision, though federal officials had argued they too had been beaten, threatened and used by the same criminal group. Hughes said he was unable to determine how much each woman had collected in cash as a prostitute - willing or not. "Some of them may have had horrendous experiences, incredible pain and economic deprivation - others may have profited substantially - I have no way of determining that," declared Hughes, who said he was troubled by evidence that some government "victims" married or had children by traffickers. David Adler, a government-appointed defense attorney for Rojas, supported restitution for the five teenage victims. But he said he believed prosecutors stretched the definition of "trafficking victim" in this case to include women who freely took taxis to cantina prostitution jobs as well as individuals who got deported from the United States and later illegally returned to their alleged captors.
Making them feel safe If nothing else, victim advocate Dottie Laster said the ruling means - at long last - some of Houston's boldest cantina-brothels will finally be closed and sold to help victims. "That's part of the justice victims need to see that we care about them and the law cares about them," she said. "It's important. Houston needs to do everything they can to make Houston unattractive and unprofitable for traffickers and the people that are helping them and make this place absolutely safe for victims." About eight years ago, Laster, who got her start as an employee of the Houston YMCA, helped a different teen, who fled from associates of the same trafficking group and risked her life to help investigators. That girl received no compensation for providing tips that resulted in the first round of busts among the Clinton Drive's sleazy collection of cantina owners and pimps. Other criminals later associated with Rojas' cantina-based trafficking operation were indicted in 2005 in what would be the first of many attempts to shut down a seemingly ever-regenerating operation based in brightly-painted cantinas equipped with hidden cameras, tight security, secret doors, seedy bedchambers and hidden living quarters. Prosecutors and federal agents first targeted long-time Mexico-based supplier Gerardo "El Gallo" Salazar, a so-called Romeo pimp who romanced young girls with false promises in small Mexican towns and later tattooed his victims with the mark of the Rooster. Fates of three Salazar remains jailed pending his extradition from Mexico, where he was arrested in 2010 after five years on the lam. This month Rojas, an illegal immigrant and convicted prostitute, was sentenced to 16 years in prison. Her partner and property co-owner, Javier Belamonte, will be sentenced in June. "The goal was not only to prosecute these people, but to dismantle this organization and take away all of their ill-gotten gains," said Ruben Perez, one of the lead prosecutors in the case.
Low
[ 0.525386313465783, 29.75, 26.875 ]
1. Field of the Invention The present invention relates, in general, to an alternating current negative ion and silver ion generator and, more particularly, to an alternating current negative ion and silver ion generator, which simultaneously generates pure negative ions, lacking ozone and nitrogen oxide, and nano-sized silver ions, thereby converting polluted indoor air into clean, fresh and refreshing air, and which sterilizes various airborne microbes to maintain a comfortable and fresh indoor environment, thus greatly improving the cleanliness of indoor air and preventing the bad influence of polluted air on various electronic products. 2. Description of the Related Art Generally, clean air in woods contains a lot of negative ions having negative charges, while waste gas exhausted from vehicles and smoke generated from factories contains a lot of positive ions having positive charges. Further, the fact that, as a human body breathes a lot of negative ions, oxidized physical function is deoxidized and normal physical function is activated, has already been researched and reported to the academic community. Further, recently, with rapid industrial development, air pollution is becoming serious. As a result, more and more positive ions are emitted, so that it is essential to generate negative ions and neutralize the positive ions. Accordingly, various negative ion generators for generating negative ions required to neutralize positive ions contained in indoor air have been developed and used indoors. However, most conventional negative ion generators are constructed to supply a high voltage through a negative ion generation tip, thus locally generating negative ions using corona discharge or plasma discharge. Accordingly, as the conventional negative ion generators locally generate negative ions when they are used for a long period of time, harmful substances, such as ozone or nitrogen oxide, are generated, so that a user has a headache and feels nauseated and oppressed by the unpleasant smell, thus the user's health is greatly damaged. Further, the conventional negative ion generators are constructed so that only a discharge electrode (a tip) exists, and are operated so that, if a negative (−) high voltage pulse is applied to the discharge electrode and electrons are generated in the air, the electrons cause oxygen itself to be negative while colliding with oxygen in the air. In particular, most conventional Direct Current (DC) negative ion generation modules are problematic in that, since a high voltage flows out from the modules, the modules continuously cause enormous damage to and bad influence on several electronic parts near the modules, thus increasing a risk of degrading electronic products. Among the negative ion generation modules, an AC negative ion generation module, in particular, is problematic in that it uses illegal phase control, thus badly influencing other electronic products.
Mid
[ 0.635359116022099, 28.75, 16.5 ]
Fashion Bedding Start Your Own Fashionable Bedding Style Girls, listen up! Get your own fashion bedding and be the envy of your closest girl friends. We don't mean the usual fashion bedding you see in your local stores because that can be bought by just about any girl. If you want to be different, you have to go with the best, and our custom bedding is the choice you should get because the sky's the limit with the design you can use for VisionBedding. You can choose from the many designs in our fashion bedding gallery. In our gallery, you can choose fashionable images ranging from stylish clothes to makeup designs. You can either use these images as is or have them labeled with your name or your favorite fashion motto to come up with personalized bedding. You also have the choice of coming up with your own custom fashion bedding design by using a design you've made on your own or a photo of your favorite dress ensemble. Only your imagination limits the possibilities of coming up with a great design.
Low
[ 0.5070140280561121, 31.625, 30.75 ]
Hepatic Fibrosis in a Long-term Murine Model of Sepsis. Chronic sequelae of sepsis represent a major, yet underappreciated clinical problem, contributing to long-term mortality and quality-of-life impairment. In chronic liver disease, inflammation perpetuates fibrogenesis, but development of fibrosis in the post-acute phase of systemic inflammation has not been studied. Therefore, a mouse model of post-acute sequelae of sepsis was established based on polymicrobial peritonitis under antibiotic protection. Survival decreased to approximately 40% within 7 days and remained constant until day 28 (post-acute phase). In survivors, clinical recovery was observed within 1 week, whereas white blood cell and platelet count, as well as markers of liver injury, remained elevated until day 28. Macroscopically, inflammation and abscess formation were detected in the peritoneal space and on/in the liver. Microscopically, acute-chronic inflammation with ductular proliferation, focal granuloma formation in the parenchyma, and substantial hepatic fibrosis were observed. Increased numbers of potentially pathogenetic macrophages and α-smooth muscle actin-positive cells, presumably activated hepatic stellate cells, were detected in the vicinity of fibrotic areas. Fibrosis was associated with the presence of elastin and an augmented production/deposition of collagen types I and III. Microarray analyses revealed early activation of canonical and noncanonical pathways of hepatic stellate cell transdifferentiation. Thus, chronic sequelae of experimental sepsis were characterized by abscess formation, persistent inflammation, and substantial liver injury and fibrosis, the latter associated with increased numbers of macrophages/α-smooth muscle actin-positive cells and deposition of collagen types I and III. This suggests persistent activation of stellate cells, with consecutive fibrosis-a hallmark of chronic liver disease-as a result of acute life-threatening infection.
High
[ 0.7201166180758011, 30.875, 12 ]
Q: frequency table from two filters and two summarise in dplyr How can I combine the following codes to into one: df %>% group_by(year) %>% filter(MIAPRFCD_J8==1 | MIAPRFCD_55==1) %>% summarise (Freq = n()) df %>% group_by(year) %>% filter(sum==1 | (MIAPRFCD_J8==1 & MIAPRFCD_55==1)) %>% summarise (reason_lv = n()) So output will be one table (or df) which is grouped by year and two columns of frequencies based on the above filters. Here is the sample data: df<- read.table(header=T, text='Act year MIAPRFCD_J8 MIAPRFCD_55 sum 1 2015 1 0 1 2 2016 1 0 1 3 2016 0 1 2 6 2016 1 1 3 7 2016 0 0 2 9 2015 1 0 1 11 2015 1 0 1 12 2015 0 1 2 15 2014 0 1 1 20 2014 1 0 1 60 2013 1 0 1') Output after combing the codes would be: year Freq reason_lv 2013 1 1 2014 2 2 2015 4 3 2016 3 2 Thanks in advanced! A: Now that you've included your data, this is easy enough to solve. Here are two possible options. Both options get you the output you want, it's mostly just a matter of style. Option 1, make 2 filtered data frames, then use an inner_join to join them together by year. (You could also just build those data frames inline in the arguments to inner_join, but that's a little less clear.) library(tidyverse) df<- read.table(header=T, text='Act year MIAPRFCD_J8 MIAPRFCD_55 sum 1 2015 1 0 1 2 2016 1 0 1 3 2016 0 1 2 6 2016 1 1 3 7 2016 0 0 2 9 2015 1 0 1 11 2015 1 0 1 12 2015 0 1 2 15 2014 0 1 1 20 2014 1 0 1 60 2013 1 0 1') # option 1: two dataframes, then join freq_df <- df %>% group_by(year) %>% filter(MIAPRFCD_J8 == 1 | MIAPRFCD_55 == 1) %>% summarise (Freq = n()) reason_df <- df %>% group_by(year) %>% filter(sum == 1 | (MIAPRFCD_J8 == 1 & MIAPRFCD_55 == 1)) %>% summarise (reason_lv = n()) inner_join(freq_df, reason_df, by = "year") #> # A tibble: 4 x 3 #> year Freq reason_lv #> <int> <int> <int> #> 1 2013 1 1 #> 2 2014 2 2 #> 3 2015 4 3 #> 4 2016 3 2 Option 2, add boolean variables for whether the observation needs to go into the Freq calculation, and whether it needs to go into the response calculation--dummy variables help with this since those two things aren't mutually exclusive. # option 2: binary variables df %>% mutate(getFreq = (MIAPRFCD_J8 == 1 | MIAPRFCD_55 == 1)) %>% mutate(getReason = (sum == 1 | (MIAPRFCD_J8 == 1 & MIAPRFCD_55 == 1))) %>% group_by(year) %>% summarise(Freq = sum(getFreq), reason_lv = sum(getReason)) #> # A tibble: 4 x 3 #> year Freq reason_lv #> <int> <int> <int> #> 1 2013 1 1 #> 2 2014 2 2 #> 3 2015 4 3 #> 4 2016 3 2 Created on 2018-04-23 by the reprex package (v0.2.0).
Mid
[ 0.561097256857855, 28.125, 22 ]
1. Field of the Invention The present invention relates to a vehicular input device including a single manual operating unit for operating concentratively various electronic devices mounted on a vehicle. Particularly, the invention is concerned with means for improving the versatility and operability of the input device. 2. Description of the Related Art Up-to-date automobiles are equipped with various electronic devices such as air conditioner, radio, television, CD player, and navigation system. If these electronic devices are each individually operated by operating controls which are provided in the electronic devices respectively, there may be an obstacle to driving the automobiles. For facilitating the selection of a desired function, e.g., ON-OFF switching, of a certain electronic device without obstructing safe driving, a vehicular input device has heretofore been proposed in which various operations of various electronic devices can be conducted by operating a single manual operating unit. A conventional technique associated with such a vehicular input device will be described below with reference to FIGS. 16 to 19, of which FIG. 16 is an interior diagram of an automobile, showing an example of a vehicular input device, FIG. 17 is a side view of a vehicular input device proposed heretofore, FIG. 18 is a plan view of a manual operating unit used in the vehicular input device shown in FIG. 17, and FIG. 19 is a plan view of a guide plate incorporated in the vehicular input device shown in FIG. 17. As shown in FIG. 16, the vehicular input device of this example, indicated at 100, is installed in a console box 200 which is disposed between the driver seat and the front occupant seat. The conventional vehicular input device 100 shown in FIG. 17 is mainly composed of a manual operating unit 110 (see FIG. 18) provided with two switches 111 and 112 for click as signal input means and three rotary variable resistors 113, 114, and 115; an XY table 120 which is operated in two directions orthogonal to each other (in the direction perpendicular to the paper surface in FIG. 17 and in the transverse direction in the same figure) by means of the manual operating unit 110; a stick controller 130 that functions as a position signal input means which inputs signals to an external device in accordance with an operating direction of the XY table 120 and the amount of operation of the same table; and a guide plate 140 (see FIG. 19) which is engaged with an engaging pin 160 projecting from a lower surface of the XY table 120. The manual operating unit 110 and the XY table 120 are rendered integral with each other through a connecting shaft 150. The XY table 120 and the guide plate 140 are engaged with each other by inserting a lower end portion of the engaging pin 160 movably into a guide groove 141 formed in the guide plate 140. The guide groove 141 can be set in a desired shape which permits the lower end portion of the engaging pin 160 to move in a specific direction. For example, as shown in FIG. 19, a guide groove 141 which is cross-shaped in plan may be formed in an upper surface of the guide plate 140 so that the lower end portion of the engaging pin 160 can be moved up to end portions of B, C, D, and E in two generally orthogonal directions from a center A. 
More specifically, by operating the manual operating unit 110 the engaging pin 160 can be moved along the guide groove 141 of the guide plate 140 through the XY table 120, and with the lower end portion of the engaging pin 160 positioned in any of the end points A, B, C, D, and E in the guide groove 141, information (a position signal) on that engaged position is outputted from the stick controller 130. Therefore, by utilizing such a position signal, a function (a function to be adjusted) of an electronic device mounted on a vehicle can be selected in an alternative manner. After a desired function of the electronic device has thus been selected, it is possible to make adjustment or switching of the selected function by suitably operating the three rotary variable resistors 113 to 115 provided in the manual operating unit 110. As shown in FIG. 16, the vehicular input device 100 thus constructed is combined with a switch unit 170 which selects a desired electronic device alternatively from among plural electronic devices mounted on the vehicle, a display 180 which displays the name of the electronic device selected by the switch unit 170 and the contents of operation performed by the vehicular input device 100, and further with a computer (not shown) which controls those devices. As a result, the plural electronic devices can be operated in a concentrative manner. The switch unit 170 is installed in a console box 200 and is provided with operating switches 171a to 171e which are disposed near the vehicular input device 100 and which are connected each independently to different electronic devices. For example, if the operating switches 171a to 171e are connected each independently to air conditioner, radio, television, CD player, and navigation system, which are mounted on the vehicle, ON-OFF switching of the air conditioner and designation of an air conditioner mode for the vehicular input device 100 can be done by operating the operating switch 171a, and ON-OFF switching of the radio and designation of a radio mode for the vehicular input device 100 can be done by operating the operating switch 171b. Likewise, by operating the other operating switches 171c to 171e it is possible to effect ON-OFF switching of the corresponding electronic devices and mode designation for the vehicular input device 100. The display 180, e.g., a liquid crystal display, is installed in a position easy to see from the driver seat, while the computer referred to above is installed within the console box 200. The selection and adjustment of a function of the electronic device selected by the switch unit 170 can be done by operating the vehicular input device 100, but the function capable of being selected and adjusted by operation of the vehicular input device 100 differs depending on the type of the selected electronic device. For example, when the air conditioner mode has been designated by operating the switch unit 170, if the engaging pin 160 is positioned in the end portion B of the guide groove 141 of the guide plate 140 by operating the manual operating unit 110 and if the clicking switch 111 is depressed for clicking, there is selected an "air volume adjust" function, while if the engaging pin 160 is positioned in the end portion C of the guide groove 141 and the switch 111 is clicked, there is selected an "air blow-off position adjust" function.
Likewise, if the switch 111 is clicked with the engaging pin 160 positioned in the end portions D and E of the guide groove 141, there are selected "air blow-off direction adjust" and "temperature adjust" functions. After the selection of functions, the functions can be adjusted by suitably operating the rotary variable resistors 113 to 115. For example, with the air conditioner mode designated by the switch unit 170 and "air volume adjust" selected by the manual operating unit 110, the air volume in the air conditioner can be adjusted by operating the rotary variable resistor 113. Likewise, when "air blow-off position adjust" is selected in the air conditioner mode, the air blow-off position from the air conditioner can be adjusted by operating the rotary variable resistors 114 and 115. When the radio mode is designated by the switch unit 170 and "volume adjust" selected by the manual operating unit 110, the volume of the radio can be adjusted by operating the rotary variable resistor 113. Further, if "tuning" is selected in the radio mode, tuning of the radio can be done by operating the rotary variable resistors 114 and 115. The conventional vehicular input device 100 is installed in a console box 200 which is provided between the driver seat and the front occupant seat in the automobile concerned. Once installed, the mounting posture of the input device for the console box 200 cannot be changed. Consequently, for some particular physical constitution and form of a vehicular driver or occupant, the operability of the input device may be poor and it may be impossible to make the most of the convenience of the input device. The present invention has been accomplished for solving the above-mentioned problem of the prior art and it is an object of the invention to provide a vehicular input device for which the mounting posture can be adjusted to conform with the physical constitution and form of an operator and which is superior in both versatility and operability. According to the present invention, for solving the above-mentioned problem, there is provided a vehicular input device comprising a housing, a manual operating unit mounted on an upper surface of the housing, a position sensor which outputs a position signal corresponding to the direction and amount that the manual operating unit is operated, and a plurality of push-button switches arranged on the upper surface of the housing, wherein the mounting posture of the housing in the interior of an automobile can be adjusted freely by the manual operating unit after the initial mounting of the housing. According to this construction, since the mounting structure of the housing in the interior of an automobile can be adjusted freely in conformity with the physical constitution and form of an operator, it becomes possible to utilize various components conveniently and thus the versatility and operability of the vehicular input device are improved. Preferably, the mounting posture of the housing in the interior of the automobile can be adjusted by operating the manual operating unit and push-button switches mounted on the housing. By so doing, it is not necessary to use any special device for controlling the posture of the housing and the whole of the vehicular input device can be constructed compactly.
The position sensor and the push-button switches may be electrically connected to a display through a computer both provided in the automobile to display on the display the mounting posture of the housing which is adjusted by operating the manual operating unit and the push-button switches both mounted on the housing, as well as the operation of the manual operating unit. In this case, the operator can adjust the mounting posture of the housing while checking the displayed contents on the display and therefore can control the posture of the housing quickly and positively.
Low
[ 0.537428023032629, 35, 30.125 ]
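A toy software model of the function-selection logic the patent record above describes, purely for illustration; the real device is hardware, and only the air-conditioner mappings are spelled out in the text, so nothing else is modeled:

# Groove end positions B-E select a function once switch 111 is clicked;
# the rotary variable resistors 113-115 then adjust the selected function.
AIR_CONDITIONER_FUNCTIONS = {
    "B": "air volume adjust",
    "C": "air blow-off position adjust",
    "D": "air blow-off direction adjust",
    "E": "temperature adjust",
}

def select_function(pin_position, switch_clicked):
    """Return the function chosen in air-conditioner mode, or None if nothing was clicked."""
    if not switch_clicked:
        return None
    return AIR_CONDITIONER_FUNCTIONS.get(pin_position)

print(select_function("B", True))   # air volume adjust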
Bang Bang Lyrics Bang Bang Baby I've tried for you But this heart's not bullet proof It's like you live with all your guns against me You shoot love down you push my heart around And I'm taking no more Of your faking no more Bang bang You shot my heart Bang bang Blown it apart Bang bang You blew my love away hey hey hey Bang bang You fire on me Your hurt and jealousy Bang bang You've blown our love away hey hey hey You shoot love down you push my heart around But I'm taking no more Of your faking no more
Low
[ 0.516, 32.25, 30.25 ]
THEMIS/ARTEMIS – 11 Years Post Launch February 17th is the 11-year anniversary of the launch of the five THEMIS spacecraft, a two-year mission to study space weather. THEMIS Overview NASA's Time History of Events and Macroscale Interactions during Substorms (THEMIS) aims to resolve one of the oldest mysteries in space physics, namely to determine what physical process in near-Earth space initiates the violent eruptions of the aurora that occur during substorms in the Earth's magnetosphere. THEMIS is a 2-year mission consisting of 5 identical probes that will study the violent, colorful eruptions of auroras.
Mid
[ 0.629107981220657, 33.5, 19.75 ]
Q: What should I do to stop seeing this message about the network? Ubuntu 14.10 desktop is installed on my laptop. A message saying 'Network Service Discovery Disabled' appears whenever I boot. It seems the problem can be solved by following the text below. What do I need to do? A: It looks like avahi-daemon is started when the network connection is established (/etc/network/if-up.d/avahi-daemon). This notification is informing you that mDNS (Avahi) has been disabled. It's only used for a small number of applications that only work on the local network, so it won't adversely affect your Internet connection or DNS. The most well-known use for mDNS is sharing music with Rhythmbox (or iTunes) over your LAN. It's an Apple technology, but it's largely been ignored in favour of uPNP or DLNA. To disable it, you must edit the file /etc/default/avahi-daemon as: sudo gedit /etc/default/avahi-daemon and add this line (or change it if it already exists) to: AVAHI_DAEMON_DETECT_LOCAL=0 You can see What does network service discovery disabled mean?
Mid
[ 0.607317073170731, 31.125, 20.125 ]
<?xml version='1.0' encoding='UTF-8'?> <!-- This document was created with Syntext Serna Free. --><!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "http://docs.oasis-open.org/dita/v1.1/OS/dtd/topic.dtd" []> <topic id="abbreviations" xml:lang="en-us"> <title>List of Abbreviations</title> <prolog> <author>Ratnadip Choudhury</author> <copyright> <copyryear year="2011"/> <copyrholder>ROBERT BOSCH ENGINEERING AND BUSINESS SOLUTIONS LIMITED</copyrholder> </copyright> </prolog> <body> <table> <tgroup cols="2"> <tbody> <row> <entry>DFD</entry> <entry>Data Flow Diagram</entry> </row> <row> <entry>E-R</entry> <entry>Entity Relationship</entry> </row> <row> <entry>OSS</entry> <entry>Open Source Software</entry> </row> <row> <entry>RBEI</entry> <entry>Robert Bosch Engineering and Business Solutions Limited</entry> </row> <row> <entry>RS</entry> <entry>Requirement Specification</entry> </row> </tbody> </tgroup> </table> </body> </topic>
High
[ 0.6631439894319681, 31.375, 15.9375 ]
Q: Load File, Read To A List I'm very new to C# only about three days in. I'm trying to open a file of keywords and have the program enter the keywords into a list in the program. I keep getting a string that looks like this. "Discount Available\r\nDiscounts Available\r\n% OFF\r\n% off\r\nCoupon\r\ncoupon\r\nUse Coupon Code\r\nuse coupon code\r\ncoupon code\r\nCoupon Code\r\nOrders\r\norders\r\nOrder\r\norders\r\nreceived your order\r\nReceived Your Order\r\npayment received\r\nPayment Received\r\nLooking forward to your order's\r\nlooking forward to your order's\r\nLooking Forward To Your Order's\r\nReceived details\r\nreceived details\r\nReceived Details" But I'm trying to get the list items to output into a list like this below. Discount Available Discounts Available % OFF % off Coupon coupon Use Coupon Code use coupon code coupon code Coupon Code Orders orders Order orders received your order Received Your Order payment received Payment Received Looking forward to your order's looking forward to your order's Looking Forward To Your Order's Received details received details Received Details This is what I have so far. Any help would be much appreciated. Thank you. using System; using System.Collections.Generic; using System.Windows.Forms; using System.IO; namespace Keywords { public partial class Form1 : Form { public Form1() { InitializeComponent(); } OpenFileDialog ofd = new OpenFileDialog(); public void button1_Click(object sender, EventArgs e) { ofd.Filter = "TXT|*.txt"; if (ofd.ShowDialog() == DialogResult.OK) { textBox2.Text = ofd.FileName; string filePath = ofd.FileName; string path = ofd.FileName; string readText = File.ReadAllText(path); List<string> fileItems = new List<string>(); fileItems.Add(readText); foreach (string itemfile in fileItems) { } fileItems = new List<string>(); } } } } Thank you for all your great replays, This is what I have for my new code from the answers I received from everyone. I'm getting the desired output now. Is this code the best method for doing what I'm trying to achieve? Thank you all! using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.Windows.Forms; namespace Keywords { public partial class Form1 : Form { public Form1() { InitializeComponent(); } OpenFileDialog ofd = new OpenFileDialog(); public void button1_Click(object sender, EventArgs e) { ofd.Filter = "TXT|*.txt"; if (ofd.ShowDialog() == DialogResult.OK) { string path = ofd.FileName; List<string> fileItems = File.ReadAllLines(path).ToList(); } } } } I have one more question, is there a way to add the items to the list without the quotes? This is the out put I'm getting. "Discount Available" "Discounts Available" "% OFF" "% off" "Coupon" "coupon" "Use Coupon Code" "use coupon code" "coupon code" "Coupon Code" "Orders" "orders" "Order" "orders" "received your order" "Received Your Order" "Payment Received" "Looking forward to your order's" "looking forward to your order's" "Looking Forward To Your Order's" "Received details" "received details" "Received Details" A: You can use File.ReadLines() to return each line as a separate string in an IEnumerable<string>. If you want that to be a List, like you have now, just add .ToList(). As @mason points out, you'll need to add using System.Linq.
Low
[ 0.5107913669064741, 26.625, 25.5 ]
Money Gram Phone Number “I always say to people, when you’re giving your kid a tablet or a phone, you’re really giving them a bottle of wine or a gram of coke,” she said. Our most recent predictions and forecasts for silver prices all point to profits. To make money on silver in 2017, get our latest silver investing tips here. The Department of Residential Life & Housing promotes a safe on-campus living environment that is comfortable, affordable, and well maintained to. Moneygram International Inc (NASDAQ:MGI) stock was up on Monday following an increased bid from Alibaba Group Holding Ltd’s (NYSE:BABA) Ant Financial. Ant Financial is now offering to acquire Moneygram International Inc for. Euronet continues to believe there is compelling commercial logic to a combination between Euronet and MoneyGram. Get expert advice for all your financial questions, from spending , saving and investing smartly; to tackling taxes; to buying a home; to getting the right insurance. Dubai: MoneyGram, a global provider of money transfer and payment services, expects to see robust growth in business from the GCC region as the regional economies diversify away from oil, Grant Lines, Chief Revenue Officer of. The Department of Residential Life & Housing promotes a safe on-campus living environment that is comfortable, affordable, and well maintained to. Gold price in USA in U.S Dollar (USD) is a free service provided by Gold Price Network website, where you can find daily reports about gold price in USA in U.S Dollar. In a bid to expand online payment business in the US after a successful run in India with Paytm, Chinese e-commerce giant Alibaba’s digital payments arm Ant Financial has bought global money-transfer service MoneyGram for nearly. Online City Bank Credit Card Payment Citibank Malaysia provides financial management and banking services. We offer a wide range of products like credit cards, loans, deposits and insurance. Celtic Bank credit and prepaid cards – easy approval, rewards. Compare & apply online for a Celtic Bank credit card At no time of year is. Claims can be filed online, or by fax or snail mail. The A US government panel rejected Ant Financial’s acquisition of MoneyGram International Inc over national security concerns, the companies said on Tuesday, the latest Chinese deal torpedoed under the administration of US President. Get expert advice for all your financial questions, from spending , saving and investing smartly; to tackling taxes; to buying a home; to getting the right insurance. It’s about time! That is what every PayPal member is probably saying after hearing the announcement that the company would partner with MoneyGram to give account holders another way to access their cash. The PayPal and. Our most recent predictions and forecasts for silver prices all point to profits. To make money on silver in 2017, get our latest silver investing tips here. MoneyGram is the second-largest global money transfer company, and Ant Financial is a private, entrepreneurial technology company. We share a mission to serve financially underserved people around the world, and by joining. In January, the state Department of Children and Family Services partnered with the Dallas-based money-transfer. How Money Works Primerica. and Primerica Multifamily Development Group of Tampa are confident money will be available to rebuild the project, said Cavalieri, site selection specialist for Primerica. But work on rebuilding residential housing could be delayed a. Eliminate Debt A Debt Addiction. 
Debt isn’t necessarily “bad.” Not many people would be able to buy a house without a mortgage. But many Americans BEIJING >> Chinese mobile payments company Ant Financial has abandoned its plan to buy MoneyGram after a U.S. government panel rejected the merger proposal because of national security concerns. The $1.2 billion deal was the. “I always say to people, when you’re giving your kid a tablet or a phone, you’re really giving them a bottle of wine or a gram of coke,” she said. Now, though, Ma’s Ant Financial offshoot has failed to secure approval for its $1.2 billion purchase of cash-transfer outfit MoneyGram International. The Committee on Foreign Investment in the United States, which vets foreign acquirers, The best thing about the Pixel in this picture is its color. Image: Alex Cranz/Gizmodo Best Phone For Your Money The coveted prize of the best android app of the year has gone to ‘Socratic – Math Answers & Homework Help’. The. Easy to understand comparison of the different types of radiation meters and the best one for the layperson to measure mobile phone and wifi radiation. This month, we challenged you to save money on your cell phone bill The struggling energy company agreed to be bought by Dominion Energy for $7.9 billion in stock. MoneyGram International Inc., down $1.20 to $12.11 U.S. regulators rejected the sale of the money transfer to a Chinese firm. The struggling energy company agreed to be bought by Dominion Energy for $7.9 billion in stock. U.S. regulators rejected the sale of the money transfer to a Chinese firm. Spectrum Brands Holdings Inc., up $9.58 to $118.94 The consumer. The struggling energy company agreed to be bought by Dominion Energy for $7.9 billion in stock. MoneyGram International Inc., down $1.20 to $12.11 U.S. regulators rejected the sale of the money transfer to a Chinese firm. The US has blocked the $1.2bn (£880m) sale of money transfer firm Moneygram to China’s Ant Financial, the digital payments arm of Alibaba. It is the highest profile Chinese deal to be rejected by Washington since Donald Trump came. BEIJING — Chinese billionaire Jack Ma has dropped his bid to buy U.S. money transfer company MoneyGram after Washington rejected the $1.2 billion deal in a fresh example of heightened American scrutiny of Chinese investment. Gold price in USA in U.S Dollar (USD) is a free service provided by Gold Price Network website, where you can find daily reports about gold price in USA in U.S Dollar. Money Saving Tips Ideas CANNY cruisers know it’s not all that difficult to save. tips on how to make the most of your cruising dollar. BOOK EARLY. So-called earlybird discounts are part of the cruise landscape. Operators keen to fill ships early – and have. No matter where you are on your financial journey, you need to know that it’s possible for anyone to The best thing about the Pixel in this picture is its color. Image: Alex Cranz/Gizmodo
Low
[ 0.443089430894308, 27.25, 34.25 ]
Teaching Mathematics to Chemistry Students with Symbolic ComputationJ. F. Ogilvie and M. B. MonaganThe authors explain how the use of mathematical software improves the teaching and understanding of mathematics to and by chemistry students while greatly expanding their abilities to solve realistic chemical problems.Ogilvie, J. F.; Monagan, M. B. J. Chem. Educ.2007, 84, 889. Chemometrics | Computational Chemistry | Fourier Transform Techniques | Mathematics / Symbolic Mathematics | Nomenclature / Units / Symbols Concentration Scales for Sugar SolutionsDavid W. BallExamines several special scales used to indicate the concentration of sugar solutions and their application to industry.Ball, David W. J. Chem. Educ.2006, 83, 1489.
High
[ 0.6990801576872531, 33.25, 14.3125 ]
Q: PHPSpreadsheet: Scale must be greater than or equal to 1 When I try to open/read the spreadsheet (xls), I get the following error: Scale must be greater than or equal to 1 I am using the following code to open and read the file: $filename = 'test.xls'; $spreadsheet = IOFactory::load($filename); //<-- ERRORS HERE $worksheet = $spreadsheet->getActiveSheet(); The error occurs on the ::load command. It isn't a data issue - I can copy the existing data into a new file and it works correctly, so must be an issue with the file itself. I am using v1.6.0 of PHPSpreadsheet, which is the latest at time of writing. Thanks in advance! EDIT: This question relates to PHPSpreadsheet, not PHPExcel as listed here: PHPExcel Error: Scale must be greater than or equal to 1 Though similar, an XLSX version of my file works as expected, hence the need to create a separate question. PHPExcel is also now marked as officially dead, so seems logical to add this question to the correct library / tag on SO. I have since found a solution to the problem (added below), which may also work in PHPExcel, but comes with no warranties! A: OK, I have found a solution to my particular problem... It requires an edit to the setZoomScale function which can be found in SheetView.php. The zoomscale in my file had a value of zero, which threw an error. The new code checks for this and if found, sets it to 1. Perhaps not ideal solution for everyone, but works in a pinch: public function setZoomScale($pValue) { /* NEW code that sets the zoom scale ------------------------------------------*/ //Zoom Scale of 0 causes error. If found, default pValue to 1. if( $pValue == 0) { $pValue = 1; } /*----------------------------------------*/ // Microsoft Office Excel 2007 only allows setting a scale between 10 and 400 via the user interface, // but it is apparently still able to handle any scale >= 1 if (($pValue >= 1) || $pValue === null) { $this->zoomScale = $pValue; } else { throw new PhpSpreadsheetException('Scale must be greater than or equal to 1.'); } return $this; }
Mid
[ 0.6532663316582911, 32.5, 17.25 ]
Q: How do I compare two tables with the same column definitions in different schemas? Any advice would be appreciated. I'm leading my projects initiative to upgrading our ETL software. This can result in data integrity differences. My testing plan is as follows: create an identical schema, schema B, with the same table definitions as schema A run all the ETL jobs to populate schema B using the upgraded ETL version **compare the two schemas and record differences determine why those differences occured **So my question is regarding step 3. What technically do I need to do (commands, queries, etc) to compare every field in every row between the two schemas to confirm that they are identical? Thank you for your time! A: If you just want to do a quick check, you can always use EXCEPT in you query to identify if rows in 2 tables are identical. SELECT 'TABLE1-ONLY' AS SRC, T1.* FROM ( SELECT * FROM TABLE1 EXCEPT SELECT * FROM TABLE2 ) AS T1 UNION ALL SELECT 'TABLE2-ONLY' AS SRC, T2.* FROM ( SELECT * FROM TABLE2 EXCEPT SELECT * FROM TABLE1 ) AS T2 WITH UR; If you are using Toad, it has data compare features as well.
High
[ 0.701086956521739, 32.25, 13.75 ]
17 February, 2009 Notes on the pharmacists strike The pharmacists strike seems to be the talk of the town today, as well as the main story in many newspapers. I took a walk in my area earlier today and found 6 out of 6 pharmacies closed, so the claim by Al-Akhbar that the strike failed in Cairo and Giza is clearly not true. As Zeinobia points out, this claim was not even supported by other state-controlled newspapers such as Al-Ahram and al-Goumhouria. As Zeinobia writes, this strike is not like most others in Egypt: "First of all the Syndication is insisting on its demands and the minister of health is backing them, describing the strike as a civilized one. This is a very rare statement from an official regarding a strike in Egypt." It's true that this is rare - but at the same time, it's not surprising that the health minister would try and score political points by backing demands that have to do with decisions taken by the finance ministry. (The issue here is a new law that would impose higher taxes on the pharmacists) What really makes this strike different is this simple fact: strikes are usually about workers putting pressure on employers. In this case, however, big chains like Seif Pharmacies and Misr Pharmacies have decided to close their shops - "in solidarity with the union" according to notes posted on the doors of local branches. So it's clearly about employers and small shop-owners taking on the finance ministry. And while workers on strike hope to win their demands by causing economic losses for the employer, the pharmacists can never hope to do the same. One of the pharmacists interviewed by Daily News estimated that the strike will reduce revenues from medicine sales by 12 million per day. This affects primarily the drug companies and the pharmacists themselves, and the state only indirectly, by reduced tax revenues. So the pharmacists' only chance to win is by raising the political costs for the government, trying to create public support for their demands (or at least anger at the government as frustrated customers demand badly needed medicines and even more badly needed cosmetics), until some compromise is reached - or they are forced to open their stores in order to avoid bankruptcy. The striking pharmacists are clearly not part of the Egyptian labour movement or trade union movement then. But this doesn't mean, of course, that their demands cannot be legitimate or that their campaign shouldn't be seen as a part of the general wave of political and social protests in Egypt during the last few years. It's another sign that more and more people from different segments of society are willing to openly challenge the policies of the government and fight for what they perceive as just demands. Update: This confirms what I wrote above: "Health Minister Hatem el-Gabaly said he ordered all state-owned pharmacies — which are mostly part of state hospitals and clinics — to stay open 24 hours, to counter any effects of the private-sector strike." Doesn't this mean that if the strike continues, customers will turn to state-owned pharmacies, increasing the revenues of the state while ruining the striking businesses? In the end, the lack of economic logic in this strike only strengthens the impression that this is a desperate campaign, a result of the absence of any other means to influence the policies of the utterly authoritarian government.
Low
[ 0.506382978723404, 29.75, 29 ]
The Cartographers’ Guild is a forum created by and for map makers and aficionados, a place where every aspect of cartography can be admired, examined, learned, and discussed. Our membership consists of professional designers and artists, hobbyists, and amateurs—all are welcome to join and participate in the quest for cartographic skill and knowledge. Although we specialize in maps of fictional realms, as commonly used in both novels and games (both tabletop and role-playing), many Guild members are also proficient in historical and contemporary maps. Likewise, we specialize in computer-assisted cartography (such as with GIMP, Adobe apps, Campaign Cartographer, Dundjinni, etc.), although many members here also have interest in maps drafted by hand. If this is your first visit, be sure to check out the FAQ. You will have to register before you can post or view full size images in the forums. City Maps Made Easy (Gimp) Ok, this is my first post, but I wanted to make a contribution since a lot of the tutorials here really helped me start making maps. If you are looking for a decent looking city map, but nothing fancy, this is for you: First, use RPG Citymap Generator to create a city you like (I have yet to try anything massive). After you get the city you like, select the "Export Selection as Images" button. In this menu make sure the "Divide Section" drop-box is set to 1x1. Next, select the "labeling" button and deselect everything; you will probably want to label in GIMP anyway. Now click "Colors", and from here give your trees a good green color, set the Park color to any distinctive color, and make sure your walls are something you would like also (if you have walls). Next, set the pixel size. I recommend 2000 for smaller towns, and 3200 for larger towns. Make sure the "Buildings: Only Outlines" checkbox is deselected. Now, once your image is done being made, open GIMP and close RPG Citymap Generator. From here I recommend following Rob A>'s "Creating Old Paper/Parchment in Gimp" tutorial for some old paper (or another similar tutorial if you choose). Below the "Fuzz" layer in my image (or on an appropriate layer on your paper) I created a new layer called "content". Next go to File > Open as Layers, find your city image that was created by RPG Citymap Generator, and click open. Now your city map should be on the paper; it probably looks bad, though, so let's do a couple of things to it. First, set the layer mode to something that makes it look better (I find that Burn works best). Now, scale your city map so it fits appropriately (anything hanging off the paper should not be seen). Now, select the "Select by Color" tool, pick your park color, and go ahead and fill it in. At this point the rest should be simply editing the colors a bit, adding labels, etc., so I will cut this off. If it gets a lot of feedback I will try to put together a more thorough PDF tutorial. Here is a picture of one of my maps; I haven't done too much with changing building patterns or anything, but it looks pretty good. I'm working on a much better one that is an above-ground city (has parks and whatnot). I definitely agree, I am still looking into some easy ways to take these simple maps and make them a little more appealing for those that like the visuals (got to get a lot of spare time first). Here is that other map I was putting together when I made the first post (needs some cleanup and the lake needs to be fixed still).
High
[ 0.6651376146788991, 36.25, 18.25 ]
Q: Adding carriage returns to an HTML control (e.g. div or span) The code relates to this ASPX file:
<div ID="relatedMeasures" runat="server"></div>
<br />
The code-behind loops through items and appends to a string...
foreach (int id in mcsMeasuresSelected)
        {
            if (mcs.Id == id)
            {
                // Add to selected
                mcsSelectedMeasures.Add(mcs);
                output += mcs.Measure + Environment.NewLine;
            }
        }
relatedMeasures.InnerText = output;
Then the output HTML has no line breaks. Relates to: <div id="MainContent_extScope1_relatedMeasures">CPS: Gas Boiler Solid Fuel Boiler Electric Storage Heater </div>
I have also attempted using a span tag instead of a div tag and an asp label, and also adding "< br / >" or "<br/>" instead of Environment.NewLine.
A: Use <br/> instead of Environment.NewLine. Like this:
output += mcs.Measure + "<br />";
and use InnerHtml, like the following:
relatedMeasures.InnerHtml = output;
This is the difference between them - Difference between InnerHTML and InnerText property of ASP.Net controls?
Mid
[ 0.5945945945945941, 33, 22.5 ]
(* * Copyright (c) Facebook, Inc. and its affiliates. * * This source code is licensed under the MIT license found in the * LICENSE file in the root directory of this source tree. *) open Core val run : Configuration.Server.t -> int val command : Command.t
Low
[ 0.38188976377952705, 24.25, 39.25 ]
Cover your position 2014-06-12 12:00:00 When it comes to DCTF coverboys, which positions have made it most often? By Greg Tepper DCTF Managing Editor The 2014 Summer Edition is coming soon! The 2014 Summer Edition of Dave Campbell's Texas Football — the 55th annual edition of "the bible of Texas Football" — is coming soon! Here's what you need to know: SUBSCRIBE: Are you a true Texas football fan? Become a DCTF Legends Member or a DCTF Champions Member! Not only will you get all of our magazines early — a week before they hit shelves — but you'll also get the exclusive content from DCTF, including 10 digital editions throughout the year! ORDER NOW: Are you ready for some football? You can reserve your copy of the 2014 Summer Edition of Dave Campbell's Texas Football right now in the DCTF Store! Then, it'll get sent straight to you! RELEASE DATE: The 2014 Summer Edition of Dave Campbell's Texas Football will hit shelves across Texas on Friday, June 20, though Legends and Champions members will get their copy early! Perhaps you heard, but Dave Campbell's Texas Football has been in the news lately. The 55th annual edition of "the bible of Texas football" hits shelves next week, but now that the cover has been revealed, it enters the rich tapestry that is the history of Texas Football. It's the 55th edition of the magazine, but it's technically the 57th cover of the magazine — the 1998 and 1999 editions each had two separate covers. And that is a lot of history to deal with. The cover of Texas Football has featured some of the best and brightest figures in Lone Star State gridiron history, and it's always fun to take a look at the coverboys en masse, to give you a feel for just where this new cover fits the grand scheme of things. Of particular interest today: positions. After all, we've got three different coverboys this year — a coach, a quarterback and a wide receiver — so how common is each position on the cover of the most prestigious football magazine in Texas? We crunched the numbers, going over every cover in Texas Football history all the way back to 1960. We considered, for this study, the main focus of the cover — that is, the people most prominently featured. For example, if you look at this year's cover, we consider Briles, Petty and Goodley as coverboys this year, while Texas coach Charlie Strong — up in the top-right — would not count. And one more thing: the 1995 cover, which features a collage of Southwest Conference greats? Yeah, we're just throwing that one out for the purposes of this study. It's cool, but doesn't really fit what we're going for here. All clear? Here's what we came up with as far as positional breakdown:
Quarterback: 34
Coach: 23
Running Back: 18
Defensive Lineman: 5
Linebacker: 5
Wide Receiver: 5
Kicker: 2
Defensive Back: 1
Offensive Lineman: 1
-The king of Cover Mountain, as you could've probably guessed, are the quarterbacks. They're most likely to be the star player on a team, so it should be no surprise that they've had more than their share of coverage on the cover. Petty is the first Baylor quarterback to grace the cover; meanwhile, we've featured eight Texas quarterbacks, seven Texas A&M signal-callers and five Texas Tech quarterbacks to lead the pack. -Coaches come in next, and Briles becomes the fourth Baylor coach to make the cover. What's most interesting, to me at least, are the names you see more than once on this list: Mack Brown made it twice (1998 and 1999); so did RC Slocum (1993 and 1996); as did Spike Dykes (1990 and 1996).
Meanwhile, coaching luminaries like Jackie Sherrill, Bill Yeoman, and Grant Teaff only made it once. -And there's a fun trivia question: who's the only high school coach to make the cover of DCTF? G.A. Moore, in 2002. -Goodley's presence on the 2014 cover brings wide receivers into a three-way tie with defensive linemen and linebackers with five spots on the cover all-time. Here's an interesting note on the linebackers: three of the five have come from Texas A&M (Brad Dusek in 1972, John Roper in 1988 and Dat Nguyen in 1998). -Yes, two kickers made the cover (Texas' Russell Erxleben and Texas A&M's Tony Franklin). Yes, they were on the same cover, 1978. I know. I know. -Only one offensive lineman and one defensive back have ever been DCTF coverboys, but it's hard to argue against either one. TCU's Marvin Godbolt, the face of the Frogs' defensive revolution, was part of a wide-ranging cover in 2004, while Texas A&M's Mo Moorman was Texas A&M's first coverboy back in 1967. Greg Tepper is the managing editor of Dave Campbell's Texas Football and TexasFootball.com.
Mid
[ 0.591111111111111, 33.25, 23 ]
# # Cookbook:: percona # Recipe:: server # include_recipe 'percona::package_repo' include_recipe 'percona::client' pkg = node['percona']['server']['package'].empty? ? percona_server_package : node['percona']['server']['package'] package pkg do action node['percona']['server']['package_action'].to_sym end # install packages if platform_family?('rhel') # Work around issue with 5.7 on RHEL if node['percona']['version'].to_f >= 5.7 execute 'systemctl daemon-reload' do action :nothing end delete_lines 'remove PIDFile from systemd.service' do path '/usr/lib/systemd/system/mysqld.service' pattern /^PIDFile=.*/ notifies :run, 'execute[systemctl daemon-reload]', :immediately end end end unless node['percona']['skip_configure'] include_recipe 'percona::configure_server' end # access grants unless node['percona']['skip_passwords'] include_recipe 'percona::access_grants' include_recipe 'percona::replication' end
Mid
[ 0.550173010380622, 39.75, 32.5 ]
import Base.show export Msg, msg_pub, msg_reply, send_status, send_ipython # IPython message structure mutable struct Msg idents::Vector{String} header::Dict content::Dict parent_header::Dict metadata::Dict function Msg(idents, header::Dict, content::Dict, parent_header=Dict{String,Any}(), metadata=Dict{String,Any}()) new(idents,header,content,parent_header,metadata) end end msg_header(m::Msg, msg_type::String) = Dict("msg_id" => uuid4(), "username" => m.header["username"], "session" => m.header["session"], "date" => now(), "msg_type" => msg_type, "version" => "5.3") # PUB/broadcast messages use the msg_type as the ident, except for # stream messages which use the stream name (e.g. "stdout"). # [According to minrk, "this isn't well defined, or even really part # of the spec yet" and is in practice currently ignored since "all # subscribers currently subscribe to all topics".] msg_pub(m::Msg, msg_type, content, metadata=Dict{String,Any}()) = Msg([ msg_type == "stream" ? content["name"] : msg_type ], msg_header(m, msg_type), content, m.header, metadata) msg_reply(m::Msg, msg_type, content, metadata=Dict{String,Any}()) = Msg(m.idents, msg_header(m, msg_type), content, m.header, metadata) function show(io::IO, msg::Msg) print(io, "IPython Msg [ idents ") print(io, join(msg.idents, ", ")) print(io, " ] {\n parent_header = $(msg.parent_header),\n header = $(msg.header),\n metadata = $(msg.metadata),\n content = $(msg.content)\n}") end function send_ipython(socket, m::Msg) lock(socket_locks[socket]) try @vprintln("SENDING ", m) for i in m.idents send(socket, i, more=true) end send(socket, "<IDS|MSG>", more=true) header = json(m.header) parent_header = json(m.parent_header) metadata = json(m.metadata) content = json(m.content) send(socket, hmac(header, parent_header, metadata, content), more=true) send(socket, header, more=true) send(socket, parent_header, more=true) send(socket, metadata, more=true) send(socket, content) finally unlock(socket_locks[socket]) end end function recv_ipython(socket) lock(socket_locks[socket]) try idents = String[] s = recv(socket, String) @vprintln("got msg part $s") while s != "<IDS|MSG>" push!(idents, s) s = recv(socket, String) @vprintln("got msg part $s") end signature = recv(socket, String) request = Dict{String,Any}() header = recv(socket, String) parent_header = recv(socket, String) metadata = recv(socket, String) content = recv(socket, String) if signature != hmac(header, parent_header, metadata, content) error("Invalid HMAC signature") # What should we do here? end m = Msg(idents, JSON.parse(header), JSON.parse(content), JSON.parse(parent_header), JSON.parse(metadata)) @vprintln("RECEIVED $m") return m finally unlock(socket_locks[socket]) end end function send_status(state::AbstractString, parent_msg::Msg=execute_msg) send_ipython(publish[], Msg([ "status" ], msg_header(parent_msg, "status"), Dict("execution_state" => state), parent_msg.header)) end
Low
[ 0.5215827338129491, 36.25, 33.25 ]
Donald Trump wishes the recipient of this Valentine's Day greeting card a Happy Valentine's Day. Donald knows that through the years your Valentine's Day seems to mean less, and that is why he is here to help you "Make Your Valentines Day Great Again". This is a task that only Donald can achieve, so relax and enjoy your Valentine's Day. Donald Trump wishes the recipient of this Valentine's Day greeting card a Happy Valentine's Day. Donald does not want anyone to stress out worrying about how expensive your Valentine's Day will be, so he tells you that "Mexico will pay for it"! Now that you put it that way, no problem! That was awful nice of Mexico to pitch in to pay for your Valentine's Day!
Low
[ 0.449799196787148, 28, 34.25 ]
Q: Evaluation of the principal value of $\int\limits_{-\infty}^\infty \frac{\sin 2x}{x^3} \, dx$ I'm trying to evaluate an integral $\int\limits_{-\infty}^\infty \frac{\sin 2x}{x^3}\,dx$ using Cauchy's theorem. Considering an integral from $-R$ to $-\epsilon$, then a semicircular indentation around $x=0$, then $\epsilon$ to $R$, then a semicircular contour from $R$ to $-R$. Around the pole at $x=0$, the semicircular contribution gives $$\int\limits_\pi^0 \frac{e^{2iz}}{z^3}\,dz=\int_\pi^0 (\epsilon e^{i\theta})(i \, d\theta) \frac{e^{2i\epsilon e^{i\theta}}}{(\epsilon e^{i\theta})^3}$$ What I need is the limiting value of this integral as $\epsilon\rightarrow 0$. But it seems to diverge. A: We have that $\;z=0\;$ is clearly a triple pole of $$f(z):=\frac{e^{i2z}}{z^3} ,\;\;\text{and in this case it is probably easier to use power series for the residue:}$$ $$\frac{e^{2iz}}{z^3}=\frac1{z^3}\left(1+2iz-\frac{4z^2}{2!}-\ldots\right)=\frac1{z^3}+\frac{2i}{z^2}-\frac2z-\ldots\implies\text{Res}_{z=0}(f)=-2$$ so taking the usual contour with a "bump" around zero, we get $$0=\lim_{R\to\infty,\,\epsilon\to0}\oint_{\Gamma_R}f(z)\,dz=\int_{-\infty}^\infty\frac{e^{2ix}}{x^3}dx-\int_{\gamma_\epsilon}f(z)dz= \int_{-\infty}^\infty\frac{e^{2ix}}{x^3}dx+2\pi i\implies$$ $$-2\pi i=\int_{-\infty}^\infty\frac{e^{2ix}}{x^3}dx=\int_{-\infty}^\infty\frac{\cos2x+i\sin2x}{x^3}dx\implies \int_{-\infty}^\infty\frac{\sin2x}{x^3}dx=-2\pi$$ the last equality following from comparing real and imaginary parts on both sides.
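Addendum (a sketch, not part of the original answer, using the same Laurent expansion as above) on the divergence noted in the question: integrating the expansion term by term over the small semicircle $z=\epsilon e^{i\theta}$, $\theta$ from $\pi$ to $0$, gives $$\int_{\gamma_\epsilon}\frac{e^{2iz}}{z^3}\,dz=\int_\pi^0\left(\frac{1}{z^3}+\frac{2i}{z^2}-\frac{2}{z}+O(1)\right)dz=0-\frac{4i}{\epsilon}+2\pi i+O(\epsilon),$$ so the indentation integral indeed has no finite limit on its own. Its divergent imaginary part $-4/\epsilon$ exactly cancels the $+4/\epsilon$ blow-up of $\int_{|x|\ge\epsilon}\frac{\sin 2x}{x^3}\,dx$ (note $\frac{\sin 2x}{x^3}\sim\frac{2}{x^2}$ near $0$), which is why the $-2\pi$ above is best read as the finite (Hadamard) part of a divergent integral rather than an ordinary convergent principal value.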
Mid
[ 0.637554585152838, 36.5, 20.75 ]
Gov. Haley wants to get rid of regulations in S.C. By SEANNA ADCOX, Associated Press Feb 13 2013 12:13 am COLUMBIA — Gov. Nikki Haley directed her Cabinet agencies Tuesday to review their regulations, saying South Carolina needs to get rid of government rules that hamper businesses. Haley signed an executive order that creates an 11-member task force to review regulations and make recommendations on which ones to throw out or alter. It also requires her 16 Cabinet agencies to report their suggestions to the task force by mid-May. The Republican governor can't mandate other state agencies do the same, but she's encouraging them to. "We are continuing to make sure every agency in South Carolina is customer-service friendly," she told reporters after her Cabinet meeting. "These agencies work for the taxpayer, for the businesses. If they're costing them time, they're costing them money." The task force has until mid-November to issue its report to the Legislature. The idea is that regulation changes will be introduced for the 2014 legislative session. Haley directed her Cabinet directors to make whatever changes they could on their own. The director of the state Chamber of Commerce said the initiative sounds like a good idea. Otis Rawl said environmental permits give his members the most concern, because the process can drag on indefinitely. Business owners want a timely decision, so they can decide whether to end a project or take it elsewhere if necessary, he said. "We've got to fix the system where permits can get through the process without delays," Rawl said. While the Department of Health and Environmental Control is not a Cabinet agency, Haley appointed all of its board members, and she plans to discuss the directive with DHEC officials. DHEC director Catherine Templeton, who took the agency's helm last year, said she supports the governor's order and is working to change DHEC's historically slow action. That includes launching an online site next month, dubbed "permitting central," which will chart the process and provide links to documents for filling out. "It's an online, fabulous, walk-you-through what you need," Templeton said. The founder of the Coastal Conservation League called her choice of words peculiar. "DHEC's goal is not safety per se but to ensure our environment's not compromised, from human health to maintaining healthy wildlife populations," said Dana Beach, adding he's withholding judgment until he sees the list of appointees. Beach, who hopes to be put on the panel, said he agrees the regulatory process could be more efficient. "To the extent that things could be done to raise that level of efficiency and also improve environmental outcomes, then more power to this committee," he said, adding that its success will also depend on funding. "In order to run these programs smoothly and efficiently, there has to be enough money to hire the staff to do it. You can't have three people responsible for something it takes six people to do." Haley first announced her plans to create a regulatory review task force in her State of the State address last month. The not-yet-created panel is to be made up of four legislative appointees and seven appointed by Haley: four business leaders, and one representative each from DHEC, the health care industry and conservationists. Haley will name who will head it.
The task force is separate from the ongoing work of the Small Business Regulatory Review Committee, an 11-member panel of business owners created in 2004 and housed in the Commerce Department. Haley also wants to change how regulations take effect. Currently, agencies must submit proposed regulations to legislators for review – a step that follows the public hearing process. If the Legislature takes no action within 120 days – not the 180 days Haley referenced – the regulation takes effect automatically. Legislators could otherwise vote to kill the regulation or send it back to the agency for tweaking. Haley contends too many regulations slip through without a thorough review by legislators. "That's a scary thing for government," Haley said. "Regulations can be the most costly thing to a business. Regulations are just as important as bills." A bill introduced in the House last month would require the Legislature to approve regulations. Similar bills passed by that chamber have died in the Senate. The director of the Department of Social Services said she goes through the regulatory process only when the federal government requires it. That final step of legislative review is unlike other states, where regulations are an administrative function, said Lillian Koller, who came to South Carolina from Hawaii. "I avoid it like the plague. ... That slows things down," she said, adding she instead makes policies and procedures to cover how things operate within the agency.
Low
[ 0.517453798767967, 31.5, 29.375 ]
The recent announcement of the Precision Medicine (PM) Initiative by President Obama[1] has brought PM to the forefront for healthcare providers, researchers, regulators, and funders alike. In order for PM to be fully realized, we must move toward a Learning Healthcare System model that extends evidence-based practice to practice-based evidence by using data generated through clinical care to inform research (Figure 1).[2] The leadership and members of the American Medical Informatics Association Genomics and Translational Bioinformatics Working Group have identified seven key areas that informatics research should explore to enable PM's vision.
Figure 1: Informatics methodology enables precision medicine (PM) throughout the Learning Healthcare System cycle. Patients -- past, present, and future -- are at the beginning and end of the cycle. Both healthcare and research participation result in the generation of data. Informatics methods and tools help turn data into information, and information into knowledge. That knowledge, in turn, influences individuals' behavior and informs patient care. Informatics plays a key role in enabling each stage and transition of this cycle.
Patients: Past, Present, and Future
Stakeholders in the biomedical enterprise include researchers, providers, payers, and patients. But nearly everyone has been or will be a patient at some point. Patients thus are, and must remain, at the heart of the biomedical enterprise.
Key Area One: Facilitate Electronic Consent and Specimen Tracking
In the era of PM, research studies produce more data than they can possibly use and, paradoxically, would benefit from more data than they can possibly generate. As genomic sequencing becomes increasingly available, using de-identified biospecimens for research becomes more nuanced.[3] Research participants may be asked to give broad consent to the future use of their data and biospecimens, and to acknowledge the possible, though unlikely, prospect of sequence-based re-identification.[4,5] To maximize data and biospecimen reuse while protecting study participants' privacy and adhering to their wishes, it is essential to develop machine-readable consent forms that enable electronic queries.[6] As large biorepositories linked to electronic health records (EHRs) become more common, informatics will enable researchers to identify cohorts -- both intra- and interinstitutionally -- that meet their study criteria and have given the requisite consent. Proper local management of specimens and derived samples enables accurate tracking of chain of custody, sample derivations, processing/handling, and quality control -- all of which are key elements of rigorous and reproducible research.[7] Structured and electronically available consent forms can empower study participants by allowing them to access, review, and modify their preferences. A number of large-scale initiatives, including Sage Bionetworks, the Genetic Alliance, and the Global Alliance for Genomic Health, are making progress in this area.
Areas of informatics that can facilitate study participant consent and sample tracking include the development of structured consent forms and the adoption of relevant ontologies,[6,8] user interface design, and infrastructure to enable participant engagement after the point of enrollment. Developing an infrastructure to perform role-based distributed queries over cohorts and sample collections, such as those provided by OpenSpecimen, the Shared Health Research Information Network (SHRINE), and PopMedNet, will also be important.[9]
Data to Knowledge
The promise of PM can only be realized by aggregating (virtually or otherwise) and analyzing data from multiple sources. A recent report by the National Academy of Sciences calls for the development of an information commons (IC) that amasses medical, molecular, social, environmental, and health outcomes data for large numbers of individual patients.[12] The IC would be continuously updated, enable data analyses, and serve as the foundation for a knowledge base (KB) (see Key Area Five). Creating an IC would require informatics expertise to develop data standards, ensure data security, standardize processing pipelines, and establish data provenance.
Key Area Two: Develop, Deploy, and Adopt Data Standards to Ensure Data Privacy, Security, and Integrity, and to Facilitate Data Integration and Exchange
Transparency, reciprocity, respecting study participant preferences, data quality/integrity, and security are key to obtaining and maintaining the massive data stores needed for the advancement of PM.[13] Data security does not mean data lock-down. Data-sharing can allow a study to proceed despite low numbers of eligible participants at any single institution, and can enable data reuse or meta-analyses. Data and metadata standards are required for data integration and exchange to be successful, but the lack of such standards or inconsistent use of existing standards are frequent barriers to this goal, especially in emergent "omics" disciplines.[14] Data gaps are often discovered when existing standards are adopted for other purposes. Rather than creating yet another standard, those seeking to adopt an existing standard should work with its owners to help extend its scope. Conversely, funders and standards owners should place more emphasis on outreach and education/training for potential adopters of existing data standards. A number of initiatives are working to tackle different aspects of this challenge, including BioSharing, the Center for Expanded Data Annotation and Retrieval (CEDAR), the Biomedical and Healthcare Data Discovery Index Ecosystem (bioCADDIE), and Integrating Data for Analysis, Anonymization, and Sharing (iDASH).[15] Although there have been significant efforts to share molecular datasets publicly, less progress has been made on sharing healthcare data. An emerging strategy is the development of clinical research networks in which EHR-derived data is stored locally, mapped to a common data model, and queried by proxy for members of a consortium or collaboration. Sharing queries rather than data resolves many of the issues that are involved in data standardization and harmonization, data governance, as well as the legal and privacy concerns surrounding other federated or aggregation models. This strategy has been adopted by initiatives such as MiniSentinel, Observational Health Data Sciences and Informatics (OHDSI), and the National Patient-Centered Clinical Research Network (PCORNet).[19] Building on these networks to include genomic and other "omics" data, environmental data, and social data is one way forward in the development of ICs for PM. Work on data and metadata standards should be recognized and incentivized by the organizations that use and benefit from them, including academia, industry, government regulators, and funding agencies. New methods of encrypting and sharing genomic data in a way that enables collaborative research without compromising patient privacy are needed.
Key Area Three: Advance Methods for Biomarker Discovery and Translation
A primary goal of PM is to uncover subphenotypes defined by the distinct molecular mechanisms that underlie variations in disease manifestations and outcomes.[12] One step toward defining subphenotypes is to establish agreed-upon phenotype definitions for existing disease classifications, a surprisingly complex task.[22] A number of different initiatives (eg, the Electronic Medical Records and Genomics [eMERGE] Network and the National Institutes of Health [NIH] Collaboratory) are working to make phenotype definitions computationally tractable and reproducible between sites.[23,24] Although some progress in sub-phenotyping has been made, new methods, including analyses of high-dimensional data,[25] integration of different types of data (eg, "omics," imaging, clinical, environmental),[26,27] and simulating disease behaviors across multiple biological scales in space and time,[28] are needed to address a number of challenges. Although molecular biomarkers can help elucidate underlying physiological mechanisms of disease, only a minority of currently known biomarkers are clinically actionable. Moreover, critical disease subtype distinctions may be impacted by nonmolecular factors, such as socioeconomic status.[29] Many questions must be answered before a potentially actionable biomarker can become part of a clinical guideline and translated into practice.[30] Information that is necessary for bridging this gap might include the functional characterization of genes and pathways related to the biomarker, the level of evidence, and data about economic feasibility. Clinical decision making regarding actionable biomarkers would be facilitated by a framework for presenting different levels of evidence regarding whether and how a molecular abnormality, genomic or otherwise, might represent a therapeutically relevant biomarker.[31,32] Variant annotations with actionable clinical information will enable decision support systems to provide interpretable and actionable patient-specific reports.[33] Immediate areas for informatics research to focus on include computational phenotyping, biomarker discovery based on heterogeneous data sources, and frameworks for evaluating clinical actionability and utility.
Key Area Four: Implement and Enforce Protocols and Provenance
Scaling up PM requires complex processing and analytic steps applied to large, heterogeneous datasets. With so many "moving parts," there are many opportunities for errors in the analysis, interpretation, or exchange of information. It is important that both final results and intermediate steps be well documented and fully reproducible. Protocols, and deviations from them, must also be documented. Software versions, analytical parameters, and reference database builds must all be captured as readily available metadata. Although spreadsheets and documents can be useful for informal data exploration, they do not constitute an adequate data management system. Large projects often share data between groups and may last several years, during which time key personnel may change institutions. All data processing and analysis for final results should be automated and documented so that another researcher can reproduce the work without making assumptions about what was done. There are various tools that enable this approach, including Taverna, preconfigured virtual machines, and Sage Bionetworks's Synapse Platform.[36] Though new challenges will always require novel and innovative solutions, the adoption of standard operating procedures when appropriate will facilitate consistency and improve interoperability. In addition, policies must be enacted and enforced to ensure responsible, reproducible, and reusable science. Processes and protocols for capturing and exchanging metadata and data provenance must be established, standardized, and widely adopted. Furthermore, this information must be considered to be as important as the primary data it describes, and funding agencies and publishers should insist that it be included with any dataset that is produced and released publicly.
Knowledge to Action
Clinical decision making requires the consolidation of PM knowledge and the development of clinical decision support tools (CDS), which, together with individual patient data, will provide actionable information at the point of care.
Key Area Five: Build a Precision Medicine Knowledge Base
A comprehensive KB will contain information about disease subtypes, disease risk, diagnosis, therapy, and prognosis that emerges from the ongoing analysis of data in an IC. Such a KB must be flexible, scalable, and extensible. Current KBs (eg, on genomic variants) are isolated from one another and do not support federated querying. Informatics solutions are needed for data-sharing and building a consensus on clinical interpretations of disparate, multiscale data. This KB must be machine-readable, as well as human-readable. Knowledge management technologies must enable effective ontological modeling, knowledge provenance, and new methodologies for updating and maintaining the integrated KB. Novel computational reasoning approaches must be utilized to allow efficient federated queries to be run across billions of knowledge units, enabling causal inference and decision support. New methods and processes must be developed to organize biomedical knowledge into integrated and interconnected KBs that will enable precision diagnostics and therapeutics based on the latest genomic discoveries and clinical evidence. Such KBs must provide federated queries and flexible computational analytics capabilities tailored for use by physicians and researchers.
Key Area Six: Enhance EHRs to Promote Precision Medicine
Commercial EHRs enable CDS for PM that is focused on information about a single gene variant.[39] Informatics challenges for CDS include integrating next generation CDS with PM KBs to provide genome-based risk predictions, prognoses, and drug dosing at the point of care, as well as representing discrete genomic findings and interpretations in a machine-readable format (vs a free-text pathologist or geneticist report). Masys et al.[40] proposed a framework for integrating genome-level data (stored external to the EHR) in which decision support systems are implemented through the EHR. EHRs will need to better aggregate and display patient information in order to allow users to view the heterogeneous data available for each patient, and EHRs will also need to structure and visually display the aggregated knowledge about each patient. Open interfaces that facilitate modular development of genomic CDSs outside of monolithic EHR vendor systems, enabling unencumbered parallel innovation/evolution of each element, should be provided. EHR systems must provide standards-based programming interfaces that enable the integration of external data and knowledge sources as well as the development of tools that support custom workflows, novel analytics, data visualization, and data aggregation. The informatics community must partner with EHR vendors to author use cases and develop interfaces, such that both parties benefit from the collaboration.
Key Area Seven: Facilitate Consumer Engagement
PM includes more than the medical care administered in a provider's office. Most of the population spends far more time outside of the doctor's office than in it. PM will require explicit acknowledgement of this fact as well as deeper consumer participation, which will involve making consumers aware of their own ongoing health status and engaging them in healthcare decision making. It will also involve collecting more information about a person's environment and lifestyle choices between visits to the doctor -- eg, activity level, nutrition information, exposure, and sleep patterns -- and incorporating that information into targeted therapeutic and preventive treatments. Consumer access to genetic testing will increase as provider-ordered and direct-to-consumer genetic tests become more comprehensive and less expensive. Along with the recent announcement from 23andMe that the company will once again offer health-related information and Ancestry's launch of AncestryHealth[41] comes the increased importance of ensuring that consumers understand basic genetic principles and the implications of genetic testing, of trust in the accuracy of genetic tests, and of understanding of how these results, together with family history, will influence treatment decisions. User-friendly interfaces for the collection, visualization, and integration of consumer data with healthcare information will be key to realizing the potential value of nontraditional data sources. Standards for new consumer data types, as well as patient engagement around ethical, legal, and social issues, will also be important.
Conclusions
The emergence of PM as a priority in biomedical research and healthcare emphasizes the importance of informatics' contributions to PM. This brief overview highlights essential research directions for both informatics researchers and funding organizations. The authors thank our colleagues in the Genomics and Translational Bioinformatics Working Group. Their contributions to discussions online, during formal Working Group meetings, and in casual encounters, both at our home institutions and at annual conferences, have helped shape our thoughts and perspectives as reflected in this manuscript. We also thank Joseph Romano, Peggy Peissig, Carolyn Petersen, Li Lang, and Alexis B. Carter, who participated in early discussions of these ideas. Finally, we thank the reviewers, whose insightful questions and thoughtful suggestions helped to significantly improve the manuscript.
Contributors
All authors contributed to overall intellectual content and specific sections of writing. JDT and RRF edited the manuscript for coherence.
Funding
This work was funded in part by NIGMS U19 GM61388 (the Pharmacogenomics Research Network) (RRF), NCATS UL1-TR001117 (JDT), U19-GM61388-13 and R25-CA092049 (MKB), and NLM R01-LM012095 (SV), NLM R01-LM011177 (ZZ), R00-LM010822 and R01-LM011663 (XJ), Delaware INBRE #P20 GM103446 (EC), NCATS UL1-TR000117 (RN), NCI P30-CA51008, NCATS UL1-TR001409 (SM). The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
Competing interests
P.A. is a paid consultant for Claritas Genomics.
Mid
[ 0.6404199475065611, 30.5, 17.125 ]
Generation of primary amide glucosides from cyanogenic glucosides. The cyanogenic glucoside-related compound prunasinamide, (2R)-beta-d-glucopyranosyloxyacetamide, has been detected in dried, but not in fresh leaves of the prunasin-containing species Olinia ventosa, Prunus laurocerasus, Pteridium aquilinum and Holocalyx balansae. Experiments with leaves of O. ventosa indicated a connection between amide generation and an excessive production of reactive oxygen species. In vitro, the Radziszewski reaction with H(2)O(2) has been performed to yield high amounts of prunasinamide from prunasin. This reaction is suggested to produce primary amides from cyanogenic glycosides in drying and decaying leaves. Two different benzoic acid esters which may be connected to prunasin metabolism were isolated and identified as the main constituents of chlorotic leaves from O. ventosa and P. laurocerasus.
Mid
[ 0.6139088729016781, 32, 20.125 ]
Big Four auditor KPMG has published a sweeping 42-page report on the next phase for cryptocurrencies. Entitled "Institutionalization of Cryptoassets," the report reflects a positive outlook on the viability of the crypto economy to scale and realize its full potential. Coinbase contributed to the report, adding insights into how Bitcoin and the crypto markets can transform from their current status as volatile alternative coins to a mature asset class. While blockchain developers and crypto entrepreneurs are focused on adoption and ways to introduce cryptocurrencies to the masses through mainstream applications, point-of-sale devices, e-commerce portals, gift cards and Visa cards, crypto-as-money has a long way to go to reach the world's total money supply of roughly $90 trillion, the Federal Reserve Board's balance sheet of over $4 trillion or the global traditional asset markets of more than $300 trillion. Writes KPMG chief economist Constance Hunter, "Cryptoassets have potential. But for them to realize this potential, institutionalization is needed. Institutionalization is the at-scale participation in the crypto market of banks, broker dealers, exchanges, payment providers, fintechs, and other entities in the global financial services ecosystem. We believe this is a necessary next step for crypto to create trust and scale." Tokenization will play a key role. It will disrupt existing financial systems and drive a wide range of utility for cryptocurrencies. "In the case of Bitcoin, we believe what has been tokenized is an intangible asset (a specific number of units of Bitcoin), because ownership does not come with any other rights and obligations. In contrast, other cryptoassets, such as tokens or coins in an initial coin offering, may convey specific utility or financial characteristics, such as rights to goods or services or a share of profits of a company or project." "Tokenization of traditional assets could also help increase liquidity, codify rules and regulations, and increase transparency throughout the asset lifecycle." Coinbase executives Jeff Horowitz, chief compliance officer, and Eric Scro, vice president of finance, mark three evolutionary stages for crypto. "Coinbase believes crypto will mature in three stages: investment/speculation (which the industry is currently in), institutionalization, and utility. The institutionalization and utility phases may happen concurrently. But, to move from investment/speculation to utility, crypto needs to become more liquid, trusted, and accessible." Another bedrock of the crypto landscape is regulation. The report concludes that with the "patchwork of U.S. federal and state regulations governing the crypto industry" entrepreneurs will need to work their way through multiple agencies, including the Financial Crimes Enforcement Network (FinCEN), the Securities and Exchange Commission (SEC) and the Commodities Futures Trading Commission (CFTC). "Regulators are working to keep pace with crypto innovation while seeking to protect the investing public. Crypto businesses will need to clearly define their product offerings in order to navigate the evolving state and federal regulatory landscape. It is in a crypto organization's best interest to get ahead of the evolving regulatory landscape, and we are already seeing organizations take this proactive approach." Fundstrat Global Advisors and Morgan Creek Digital also contributed analysis. You can read the full report here.
High
[ 0.7109004739336491, 28.125, 11.4375 ]
<?php namespace Uniform\Tests\Actions; use Uniform\Form; use Uniform\Tests\TestCase; use Uniform\Actions\LoginAction; use Uniform\Exceptions\PerformerException; class LoginActionTest extends TestCase { protected $form; public function setUp() { parent::setUp(); $this->form = new Form; $this->form->data('username', 'joe'); $this->form->data('password', 'secret'); } public function testWrongUser() { $action = new LoginActionStub($this->form); $this->expectException(PerformerException::class); $action->perform(); } public function testWrongPassword() { $action = new LoginActionStub($this->form); $action->user = new UserStub(false); $this->expectException(PerformerException::class); $action->perform(); } public function testSuccess() { $user = new UserStub(true); $action = new LoginActionStub($this->form); $action->user = $user; $action->perform(); $this->assertEquals('joe', $action->name); $this->assertEquals('secret', $user->password); } public function testOptions() { $this->form->forget('username'); $this->form->forget('password'); $this->form->data('un', 'joe'); $this->form->data('pw', 'secret'); $user = new UserStub(true); $action = new LoginActionStub($this->form, [ 'user-field' => 'un', 'password-field' => 'pw', ]); $action->user = $user; $action->perform(); $this->assertEquals('joe', $action->name); $this->assertEquals('secret', $user->password); } } class LoginActionStub extends LoginAction { public $user; protected function getUser($name) { $this->name = $name; return $this->user; } } class UserStub { public $login; public $password; public function __construct($login) { $this->login = $login; } public function login($password) { $this->password = $password; return $this->login; } }
Mid
[ 0.605911330049261, 30.75, 20 ]
NIAGARA FALLS (DEC 07 - 11 firefighters), Kitchener, Waterloo (DEC 07), MISSISSAUGA BIG LIST NOW, Ontario going to be a breeding ground. - Congratulations to Chief Burke of Niagara Falls for getting the job of Ontario Fire Marshal. CANADA IS AWESOME. NOW ALL WE NEED IS AN FDIC in every province.
Low
[ 0.48451327433628305, 27.375, 29.125 ]
Recap: Earthquakes dismantle Toronto FC in road debut, 3-0 TORONTO — A pair of goals from San Jose striker Chris Wondolowski lifted the Earthquakes to a 3-0 victory in Toronto FC's home opener at BMO Field on Saturday afternoon. Wondolowski has now scored six goals in as many career games in a Quakes uniform against Toronto FC, and has a team-high three goals in three games this season. The win was a sure-handed one for the Earthquakes (2-1-0), who also got the first goal from midfielder Shea Salinas since his offseason return to San Jose. The loss punctuated a rough week and 24-hour spell for Toronto FC (0-2-0), who found out earlier Saturday that starting goalkeeper Stefan Frei will miss eight to 10 weeks of action after breaking his left leg during a practice session on Friday. Milos Kocic started in place of the injured Frei, but he didn't have much chance on Wondolowski's opening goal in the ninth minute. A wind-assisted chip pass from Sam Cronin hung in the air long enough for the striker to deposit the ball inside the near post with a deft header, and the Quakes were off and running. They struck again in the 56th minute on a quick transition. Defender Steven Beitashour surged down the right side and created space for an attack before dishing off to midfielder Marvin Chávez. The former FC Dallas playmaker made one quick touch across the box to Salinas, who fired a left-footed strike past Kocic and into the left corner. Just 12 minutes later, the Quakes were at it again on a great sequence from the midfield out. Midfielder Rafael Baca drew Kocic out of the net towards him before passing to his left to a wide-open Wondolowski, who fumbled the ball momentarily before slotting into the empty net. Wondolowski nearly finished with a hat trick on the afternoon, but he came away empty-handed on a golden opportunity by thumping a shot off the post in the closing minutes of the match. TFC were not able to generate many quality scoring chances, but rookie midfielder Luis Silva did manage to create a couple of half-chances. In the 31st minute, he settled a pass down from Joao Plata and then sent a line drive just over top of the Earthquakes goal. Then little more than three minutes later Silva leapt up and knocked a header high and wide. Midfielder Julian de Guzman donned the captain's armband for Toronto FC after defender Torsten Frings went down with a hamstring injury in last weekend's season-opening loss to Seattle. San Jose return to action on March 31 at the Seattle Sounders, while Toronto FC resume their run in the CONCACAF Champions League with the opener of their semifinal series against Mexican side Santos Laguna at home on March 28. They return to league action against Columbus on March 31.
Low
[ 0.528650646950092, 35.75, 31.875 ]
400 F.2d 97 Theresa KESMARKI and Thomas Kesmarki, Plaintiffs-Appellants,v.Ruth Elizabeth KISLING, Defendant-Appellee. Nos. 18018-18019. United States Court of Appeals Sixth Circuit. Sept. 10, 1968. Robert M. Dudnik, Cleveland, Ohio (Leon M. Plevin, Dudnik, Komito, Nurenberg, Plevin, Dempsey & Jacobson, Cleveland, Ohio, on the brief), for appellants. John E. Martindale, Cleveland, Ohio (Arter, Hadden, Wykoff & Van Duzer, Cleveland, Ohio, on the brief), for appellee. Before O'SULLIVAN, CELEBREZZE, Circuit Judges, and CECIL, Senior Circuit Judge. O'SULLIVAN, Circuit Judge. 1 We consider the appeals of Theresa Kesmarki and her husband, Thomas, from judgments for defendant, Ruth Elizabeth Kisling, entered upon a jury verdict. The actions, tried together, arose from injuries suffered by Theresa Kesmarki in a collision between automobiles driven by her and by defendant-appellee. The collision occurred at about 4:00 p.m. on December 3, 1962, in the intersection made by West Third Street and Weldon Avenue in the City of Mansfield, Ohio. Plaintiffs-Appellants' Colorado citizenship provided the diversity jurisdiction of the United States District Court for the Northern District of Ohio, Eastern Division. We affirm. 2 There was little factual disagreement as to how the collision occurred. We set out the following from the statement of facts in the appellants' brief. 3 'Defendant testified that on December 3, 1962 at about 4:00 p.m., she was proceeding north on Weldon Avenue; that she stopped for a stop sign about sixteen feet from the intersection of Weldon Avenue and West Third Street; that she waited at said stop sign three to five minutes, due to heavy traffic on West Third Street; that the eastbound vehicles on West Third Street came to a stop so that the back of one vehicle was parallel to the extension of the easterly curb line of Weldon Avenue and the front of another vehicle was parallel to the westerly curb line extension of Weldon Avenue, thereby leaving a space of about one car length for defednant to enter the intersection. The defendant further testified that her view was obstructed with regard to traffic proceeding westerly on West Third Street due to cars parked in the parking lot at the southeasterly corner of Weldon Avenue and West Third Street and additionally her view was blocked by the heavy eastbound traffic backed up on West Third Street and other parked vehicles on West Third Street; that nevertheless, defendant testified, the driver of the vehicle stopped parallel with the westerly extension of the curb line of Weldon Avenue motioned her forward; that defendant proceeded forward uninterruptedly into the intersection of Weldon Avenue and West Third Street; that she continued at a speed of five to ten miles an hour through the intersection and that a collision occurred. * * *. 'Plaintiff, a 33 year old woman from Hungary who has been in the United States about ten years, and although an American citizen now, had some difficulty with the language, testified that she was proceeding westerly on West Third Street and was the first car in line; that she had a clear lane of travel and that she was proceeding at approximately twenty to twenty-five miles an hour. Plaintiff further testified that she did not see the defendant's vehicle, but saw something move from her left to right; from the time she saw this object until the collision about one second elapsed.' 
4 And the following from the Counter Statement of Facts in appellee's brief: 5 'After looking both to the left and right and making such observations as traffic permitted she (defendant) pulled slowly into the intersection at a speed of five miles per hour. Her car cleared a lane of parked cars and a lane of eastbound traffic on West Third, and almost cleared the lane of westbound traffic when its right rear door and fender received the impact of plaintiff's car, traveling west in the lane nearest the center line. 'Plaintiff had been traveling west on West Third Street. Although it is 36 feet wide, West Third has parking permitted on each side, narrowing it to one lane of travel in each direction. Plaintiff, who had stopped some distance back for a traffic light became the first car in her lane of travel when a truck ahead of her turned off. She reached an admitted speed of 25 miles per hour and although she was familiar with West Third made no observations as to Weldon's intersection. She never saw the defendant's auto until the instant of impact in spite of the fact that she struck the rear half of defendant's auto after the front half had already crossed her lane of travel. Plaintiff's speed was sufficient to create an impact which spun defendant's car sideways up onto the north sidewalk of West Third while plaintiff's car continued straight on down West Third in her own lane of travel. $07'At the trial, defense counsel produced a certified copy of the pleadings filed in plaintiff's behalf in a separate suit in Denver, Colorado concerning a subsequent accident. Among those pleadings were answers to interrogatories denying any injuries prior to her accident in Denver, denying her use of a brace, and denying the existence of this lawsuit. These answers were verified by her attorney in Colorado. Over the objection of plaintiff's counsel, defense counsel was allowed to ask plaintiff whether or not she supplied these answers to her Colorado attorney. She denied doing so. Thereafter, when these pleadings were offered into evidence plaintiff's counsel objected and his objection was sustained. No further use was made of them by either side.' 6 It is clear that both drivers came to the point of collision without either having seen the approach of the other. The evidence would have permitted the jury to come to one or more factual conclusions, including the following: 7 that plaintiff's failure to see defendant was due to her total neglect to look for traffic coming north on Weldon Avenue and into her path; that defendant's failure to see plaintiff was due to the ineffectiveness of her claimed look for traffic coming from her right on West Third Street; that both drivers proceeded into the collision under traffic conditions which made it impossible to either to make an effective observation to insure her proceeding with safety; that the plaintiff's admitted speed was negligent in view of her restricted, or total lack of view; that defendant's uninterrupted driving at 5 to 10 miles an hour into the intersection, notwithstanding her inability to discover the approach of plaintiff's vehicle, was negligence; that defendant's failure to yield the right of way to plaintiff was a negligent and proximate cause of the accident; that plaintiff's negligence was a proximate cause of the accident; that the negligence of both parties concurrently and proximately contributed to the collision. 
8 The claims of error charged on this appeal are: First, that the District Judge should have directed a verdict of liability because, as a matter of law, defendant was negligent and plaintiff Theresa Kesmarki was free of contributory negligence; Second, that the Court committed error in his instructions to the jury; and, Third, that defense counsel's reference to answers to interrogatories propounded in another suit in which appellant Theresa Kesmarki was plaintiff should not have been allowed. 9 1. Denial of Motion for Directed Verdict. 10 After proofs were closed, plaintiffs' counsel moved the District Judge for a directed verdict as to liability, 11 'On the basis that Defendant was negligent and that Plaintiff was not guilty of contributory negligence.' 12 In his objection to the charge as given, counsel for plaintiff stated, 13 'I would like to object to the charge on the basis of the Judge's statement that was made in the conference in the Judge's chambers yesterday, as I did move for a directed verdict on the question of liability, stating that the Defendant was negligent as a matter of law, that there was no proof offered by the Defendant as to the contributory negligence of the Plaintiff, and that therefore there should be a directed verdict for the Plaintiff on the question of Defendant's negligence * * *.' 14 a) Negligence of Defendant. 15 The assertion that, as a matter of law, defendant was negligent is bottomed primarily on Section 4511.43 of the Ohio Revised Code which provides in part: 16 'The operator of a vehicle, intending to enter a through highway, shall yield the right of way to all other vehicles * * * on said through highway, unless otherwise directed by a traffic control signal, or as provided in this section. The operator of a vehicle * * * shall stop in obedience to a stop sign at an intersection and shall yield the right of way to all other vehicles * * * not obliged to stop, or as provided in this section.' 17 It is undisputed that West Third Street was a favored through street and that defendant was required to stop before entering it and to yield the right of way to a vehicle on West Third Street entering the intersection. The violation of a relevant statute regulating traffic is negligence per se. 39 Ohio Jur.2d 44, at 551; Schell v. Du Bois, 94 Ohio St. 93, 113 N.E. 664, L.R.A.1917A, 710 (1916). And a motorist entering a favored street will not be excused from required care by looking but failing to see a vehicle that was there to be seen. Her preparatory look must be an effective one. Spitler v. Morrow, 100 Ohio App. 181, 136 N.E.2d 321 (1955); Pritchard v. Cavanaugh, 18 Ohio Law Abst. 354 (1934), affirmed, 129 Ohio St. 542, 196 N.E. 164; Jackson v. Mannor, 90 Ohio App. 424, 107 N.E.2d 151 (1951). 18 Defendant counters these assertions by pointing to Ohio law which limits the protection of a right of way statute to those who are themselves proceeding 'in a lawful manner'-- see definition of 'Right of Way,' O.R.C. 4511.01 (TT) (1967 Cum.Supp.). The third syllabus of Morris v. Bloomgren,127 Ohio St. 147, 187 N.E. 2, 89 A.L.R. 831 (1933), states: 19 'If such vehicle (the one asserting the right of way) is not proceeding in a lawful manner in approaching or crossing the intersection, but is proceeding in violation of a law or ordinance, such vehicle loses its preferential status and the relative obligations of the drivers of the converging vehicles are governed by the rules of the common law.' 
20 Defendant argues that the jury could find that plaintiff lost her right of way privilege by her own lack of care in driving 25 miles per hour into, and probably across, the crowded intersection involved. While 25 miles per hour was presumptively a lawful speed, the jury could have been of the view that under all the circumstances it was unlawful. O.R.C. 4511.21 recites the almost universal traffic law that: 21 'No person shall operate a motor vehicle * * * at a speed greater or less than is reasonable or proper, having due regard to the traffic, surface, and width of the street or highway and any other conditions * * *.' 22 Under Ohio law a speed less than the prima facie limit may be unlawful under the circumstances. Cincinnati Street Ry. Co. v. Bartsch, 50 Ohio App. 464, 475, 198 N.E. 636 (1935). From this, defendant argues that plaintiff driver's own careless driving had forfeited her right of way which otherwise would have required defendant to yield. 23 These are interesting speculations, but it is unnecessary to decide whether defendant was, as a matter of law, guilty of negligence that was a proximate cause of the accident. The District Judge was not asked to so instruct the jury. The motion to him was that he direct a verdict of liability on the ground that 'defendant was negligent and that plaintiff was not guilty of contributory negligence.' Both elements had to be found in order to direct a verdict. We go on then to consider whether the evidence required a holding that, as a matter of law, plaintiff driver was free from contributory negligence. 24 b) Contributory Negligence. 25 The facts as above recited should be an answer to this question. In Ohio, contributory negligence is a defense in bar. Lehman v. Hayman, 164 Ohio St. 595, 598, 133 N.E.2d 97 (1956); Hawkins v. Graber,112 Ohio App. 509, 512-513, 176 N.E.2d 600 (1960). The burden is on defendant to make out a case of contributory negligence. Valencic v. Akron & Barberton Belt R.R. Co., 133 Ohio St. 287, 289-290, 13 N.E.2d 240 (1938). The issue of plaintiff's contributory negligence ought to be submitted to the jury if there is 'some evidence * * * tending to show that the plaintiff failed in some respect to exercise the care of an ordinarily prudent person * * * and that such failure was a proximate cause of his injury * * *.' Bush v. Harvey Transfer Co., 146 Ohio St. 657, 670, 67 N.E.2d 851, 852 (1946), quoted in Golamb v. Layton, 154 Ohio St. 305, 309, 95 N.E.2d 681 (1950). Where reasonable minds could differ on the question, the issue is for the jury. Smith v. Toledo & Ohio Central R.R. Co., 133 Ohio St. 587, 595-597, 15 N.E.2d 134 (1938) and cases cited therein. Furthermore, Ohio follows the familiar rule that upon motion by one party for a directed verdict, the trial court in ruling thereon must view the evidence and reasonable inferences that may be drawn therefrom in a light most favorable to the nonmoving party. Cothey v. Jones-Lemley Trucking Co., 176 Ohio St. 342, 344, 199 N.E.2d 582 (1964); Accord, Lones v. Detroit, Toledo, and Ironton R.R. Co. (6th Cir. July 31, 1968) 398 F.2d 914; Dickerson v. Shepard Warner Elev. Co., 287 F.2d 255, 258-259 (6th Cir. 1961); Wilkeson v. Erskine & Son, Inc., 145 Ohio St. 218, 227-229, 61 N.E.2d 201 (1945). Finally in reviewing a jury verdict the appellate court will view the evidence most favorably to the party prevailing in the trial court. Sypherd v. Haeckl's Express, Inc., 341 F.2d 65, 67 (6th Cir. 1965), applying Ohio law and citing McMurtrie v. 
Wheeling Traction Co.,107 Ohio St. 107, 111, 140 N.E. 636 (1923) and Industrial Commission of Ohio v. Pora, 100 Ohio St. 218, 221, 125 N.E. 662 (1919). We are satisfied that a jury could find that Theresa Kesmarki was negligent and that such negligence proximately contributed to the accident. That a driver on a through, or favored highway is still required to proceed with the due care that would be employed by a reasonably prudent man is a rule of general application. Restatement (Second) of Torts, 289; Witherspoon v. Irons, 18 Ohio Law Abst. 193; Cleveland R. Co. v. Nicholson, 11 Ohio App. 424, 17 Ohio L.Rep. 392 (1919). See also cases collected in Annotation at 3 A.L.R.3d 180, 272, et seq. 26 From the evidence that Theresa Kesmarki proceeded at 25 miles per hour into an area of congested traffic, without looking for vehicles approaching on a subservient street, and ran into the rear door of a car that she saw only for an instant, a jury could reasonably conclude that she was guilty of contributory negligence. 27 The District Judge adequately and correctly instructed the jury on the duties of the respective drivers. Denial of the motion for direction was correct as was denial of plaintiff's motion for judgment notwithstanding the verdict and for a new trial. 28 While it is not necessary to our decision, we mention that from questions asked by the jury during the course of their deliberations, it is evident that they considered defendant negligent, but denied recovery to plaintiffs because of contributory negligence. 29 2. The Court's Instructions. 30 The District Judge fully charged the jury on the statute and common law of Ohio relevant to the conduct of the parties in the factual situation disclosed by the evidence. His reference to two of Ohio's traffic statutes may have been inapt, but it did not detract from the clarity and correctness of the total charge. In all events, the Court's attention was not called to any claimed improprieties in the statutory references and under Rule 51 of the Federal Rules of Civil Procedure, we do not consider them. Nor did they amount to 'plain error.' 31 3. Cross-examination of Theresa Kesmarki. 32 It appeared that appellant Theresa Kesmarki had commenced two personal injury actions in Denver, Colorado, resulting from two separate accidents which had occurred after the one here involved. Interrogatories submitted to appellant as plaintiff in one of the Denver actions had been signed and verified by her attorney. The answers to some of the interrogatories were in conflict with Mrs. Kesmarki's testimony in the case at bar. Defendant's counsel was permitted to ask Mrs. Kesmarki whether she had given to her Denver attorney the information from which he made the answers which conflicted with her testimony. Upon her denial that she had done so, the District Judge sustained appellant's objections to defendant's offer in evidence of the relevant answers to the interrogatories. If the answers to the interrogatories contradicted Mrs. Kesmarki's testimony, and if she had supplied the information contained in such answers, they could constitute legitimate impeachment. 
It is a fair summary of the general rule to say that if pleadings-- such as answers to interrogatories-- contain allegations or admissions against interest, they may be used to impeach a party or witness in another lawsuit if they are relevant, even though neither verified or signed by the party or witness sought to be impeached, provided it be shown that such answers are correct repetition of the party's or witness' statements given to the lawyer or scrivener of the answers or pleadings. 31a C.J.S. Evidence 303b, at 781-783; Fuller v. King, 204 F.2d 586, 590 (6th Cir. 1953). See also Faxon Hills Construction Co. v. United Brotherhood of Carpenters & Joiners of America, 109 Ohio App. 21, 27, 163 N.E.2d 393 (1957), rev'd on other grounds, 168 Ohio St. 8, 151 N.E.2d 12 (1958). In such case, however, an essential preliminary to admissibility of such writings is the establishment that the party or witness to be impeached did supply the information contained in the interrogatory answer or other pleading. Robinson v. United States,144 F.2d 392, 405 (6th Cir. 1944). See also Ass'n of Army & Navy Stores, Inc. v. Schaengold, 44 Ohio App. 40, 43, 184 N.E. 17 (1932). It was necessary therefore for counsel for defendant to ascertain whether Mrs. Kesmarki had provided the information from which her attorney had made answers to the interrogatories. Mrs. Kesmarki having denied responsibility for the answers, objection was sustained to their admission into evidence. We think that the better practice would have been to conduct this preliminary inquiry out of the presence of the jury. However, we will not find reversible error in what occurred in this case. 33 Judgment affirmed.
Low
[ 0.5199161425576521, 31, 28.625 ]
Nahri Saraj District Nahri Saraj District (population 114,200), also called Nahre Saraj, is a district in Helmand Province in southern Afghanistan. Its principal municipality is Girishk (population 48,546). Demography The ethnic composition is predominantly Pashtun. At the time of the Taliban, Nahri Saraj District was under the control of the Noorzai tribe. Location Gerishk District sits at the intersection of Highway 1 (the 'Afghan ring-road', based on the old Silk Road and refurbished in the 1960s with US investment) and the Helmand River. A major stopping-point on the trade routes from Pakistan and Iran, Nahri Saraj enjoys the prospect of returning to its historical prosperity, although this is under threat of Taliban resurgence in the region. Route 611 passes through Gerishk District. Income The main source of income is agriculture. The soil is rich and the irrigation systems are in relatively good condition. The irrigation is from the Helmand River, karezes and tube-wells. Hospitals and Schools There is a hospital with both male and female doctors. There are 20 schools in the district, attended by 80% of the children. Operation Enduring Freedom Bismullah, appointed to be the transportation director for Ghereskh by the Hamid Karzai administration, was sent to Guantanamo Bay, where he was held in extrajudicial detention for seven years. On January 17, 2009, the US Government acknowledged that he had never been an "enemy combatant". See also Hyderabad airstrike References External links http://www.aims.org.af/maps/district/hilmand/nahri_sarraj.pdf District Map https://web.archive.org/web/20061211123952/http://jemb.org/eng/electoral_system/reg.centers.pdf Actual census data since 2005 Category:Districts of Helmand Province Category:Districts of Afghanistan
Mid
[ 0.597285067873303, 33, 22.25 ]
Wedding Rings Explore a vast collection of wedding ring designs at Serendipity Diamonds. We have the perfect designs for both bride and groom, with men's, women's and unisex ring designs. From classic plain wedding rings to intricately set diamond rings. Discover the difference with expert help at every step of your journey. Each ring design is expertly crafted from the most luxurious metals including Gold, Platinum and Palladium. We stock most styles, from elegant court shaped bands to extra wide, custom-made bands in light, medium and heavyweight profiles. Enjoy the reassurance of expert wedding ring help. We are on hand 6 days per week in person to assist with your purchase. Talk to us by phone or by live chat. Our designs range from simple basic plain ring designs to elaborate diamond wedding ring designs, available with full customisation including engraving. We stock most precious metals including Fairtrade Gold. Choose from Fairtrade white gold wedding rings, Fairtrade yellow gold wedding bands, recycled Platinum rings or view our collection of exceptional value Palladium styles. We create most widths including 2mm, 2.5mm, 3mm, 4mm, 5mm, 6mm, 7mm and 8mm ring widths. Our styles range from three styles of court shape through to D shape and flat court in light, medium and heavy combinations. Many styles are available half or fully diamond set. We ship worldwide with all tax and duty taken care of for your ultimate ease of purchase. Recent Customer Reviews Date Score Customer Comment Supplier Response 27-Nov-2018 Service Product Service rating : Excellent customer service. Can't do enough to help you with any questions. Would highly recommend SerendipityProduct : Absolutely gorgeous. Can't believe how wonderful this ring is. I will treasure it for ever Anonymous Anonymous No Comment 02-Nov-2018 Service Product Service rating : Purchased an engagement ring in September. Serendipity's end to end service, and product, are all perfect! I had a fairly tight deadline, so had some anxiety. They gave more than enough detailed information and answered all my questions. Finally, once finished crafting, it was... Anonymous Anonymous Service rating : Thank you for your kind review, it was an absolute pleasure creating your engagement ring for you and we would be ... 07-Oct-2018 Service Product Service rating : The staff at Serendipity were incredibly helpful and professional through the whole process. We were initially hesitant to purchase our wedding bands online, especially given the need to ship internationally, but they answered all of our questions, delivered within the shipping window they... Anonymous Anonymous No Comment 15-Sep-2018 Service Product Service rating : I would highly recommend Serendipity Diamonds to anybody. I found the company online when looking to purchase a diamond engagement ring and after looking at the different styles of ring they had available, arranged an appointment with Emily via their live chat facility. She appreciated my... Anonymous Anonymous No Comment 08-Sep-2018 Service Product Service rating : Everyone there was quick to respond to my questions via email. I would definitely use them again.Product : Fingerprint rings were unique exactly what my fiancé wanted. Anonymous Anonymous No Comment 29-Aug-2018 Service Product Service rating : Extremely happy with the customer service. Any questions I had were answered promptly, and expected completion date/confirmation of shipping address was provided throughout the process. 
Please thank Emily for her help during this process, she was a pleasure to correspond with... Anonymous Anonymous Service rating : Thank you so much for taking the time to leave us such a lovely review, It's great to hear that you received such ... Service rating : Many thanks for using us again, it was a delight to be able to help you again with another special purchase. So g ... 31-Jul-2018 Service Product Service rating : An amazing piece of art, exactly what I wanted. Customer service is great, with a really nice experience visiting the Isle of Wight and the booked appointment to discuss the details and get the perfect size.Perfect from beginning to final delivery (impressive packaging!!), a 100%... Anonymous Anonymous Service rating : Thank you so much for taking the time to leave your feedback. We are so pleased you are happy with your purchase. ... 31-Jul-2018 Service Product Service rating : Once we found the beautiful and uniquely crafted rings on the website the whole experience was simple. The Customer service is amazing, messages and emails responded to extremely quickly - the rings were excellent.Product : Unique, creative and simply wonderful Anonymous Anonymous Thank you so much for taking the time to leave your review. It was as an absolute pleasure to help you with your very special purc ... 03-Jul-2018 Service Product Service rating : They sent us the wrong size ring and then charged us for the re-size. They then claimed that the ring size adheres to UK ring sizes, despite taking the ring into hatton garden where 3 different jewellers confirmed that the ring was 17mm, despite us originally ordering a P (18mm)
High
[ 0.671511627906976, 28.875, 14.125 ]
The scene where Stark takes off in the mark 2 for the first time or the scene where Clark takes his first flight, which of the two is the superior scene? I must say that despite Iron Man being a superior film MOS' first flight scene is an amazing isolated scene. A perfect harmony of Zimmers' swelling score, breathtaking sound design and Zack Snyder's impeccable compositions makes for an amazing three minutes. I think Iron Man was more...Yeah this rocks and is awesome and cool. While Man of Steel's to me was more iconic and more awe to finally see Superman flying right on screen. The close up while flying seemed off a little, but the take off with his fist to the ground and the flying around the mountains, flying over water, and then the mid air stop and then push into space was incredible. Iron Man. While both were CGI-fests the Iron Man one didn't feel like it. Plus it actually had story building elements in it rather than just "ok I learned to fly, well got that checked off the list". Iron Man's flight also had elements that would come into play later in the story in a very organic way(icing problem). MoS's ain't got anything like that. __________________ Quote: 'If there are more years after 2019, there are more[MCU] movies after 2019' - Kevin Feige Iron Man. While both were CGI-fests the Iron Man one didn't feel like it. Plus it actually had story building elements in it rather than just "ok I learned to fly, well got that checked off the list". Iron Man's flight also had elements that would come into play later in the story in a very organic way(icing problem). MoS's ain't got anything like that. Superman flying through the sky/hitting the mountain looked really pointless. Was he practicing to fly, was it done to add another CGI-fest scene in the movie, I don't really see the need of it for the story. I also felt like he just flew away because he got his Superman outfit. Superman's flight had better music but overall Iron Man's flight felt more rewarding and eventful. Quote: Originally Posted by psylockolussus Iron Man of course. Superman flying through the sky/hitting the mountain looked really pointless. Was he practicing to fly, was it done to add another CGI-fest scene in the movie, I don't really see the need of it for the story. I also felt like he just flew away because he got his Superman outfit. The scene where Stark takes off in the mark 2 for the first time or the scene where Clark takes his first flight, which of the two is the superior scene? I must say that despite Iron Man being a superior film MOS' first flight scene is an amazing isolated scene. A perfect harmony of Zimmers' swelling score, breathtaking sound design and Zack Snyder's impeccable compositions makes for an amazing three minutes. From a different perspective, IM (first movie ) was a great film, far superior to its sequels. Having said that I liked MOS as a film better, but I'm a Super-fan, so I'm biased. Anyway, what was said above, in respect of that scene I agree wholeheartedly. Yes, Iron Man was a great adaptation of the character for the screen, but from the moment the ship door opens and Cavill walks out as Superman, I remember saying to myself "Whoah, THAT is what Superman looks like." right from then , the scene was awesome. The iron man scene was fun, and comical, but Supes first flight was a moment where the character finds himself, for the first time in his life. Waaayyy more significant. It's like, IM is "I'm Tony Stark, look at what I can build, a flying suit, I'm so cool. Check me out." 
which is fine, but MOS is like " I can fly, because this is who I am." a lot deeper. It's the first time Clark sees his powers as a gift, rather than a curse. Superman flight doesn't have the 'can we do that too?' moment because other than it's been done before in the 70's movie, Superman is a super powered alien. Almost everybody know Supes main feat is flight. Tony Stark is human, his achievement owed by his tech ingenuity and connects with the audiences more. If Stark could fly inside an exosuit, ordinary people someday could do too. Superman flight doesn't have the 'can we do that too?' moment because other than it's been done before in the 70's movie, Superman is a super powered alien. Almost everybody know Supes main feat is flight. Tony Stark is human, his achievement owed by his tech ingenuity and connects with the audiences more. If Stark could fly inside an exosuit, ordinary people someday could do too. That is an interesting argument. that you prefer IM because you could one day do that too. I believe you are fully entitled to your opinion. But your reason for holding that opinion seems to raise some logical issues. 1) Are you familiar with the superhero paradigm ? Often the "super" refers to the fact that they possess abilities that normal humans do not, and could never have. In saying it's been done before, well, Superman was the first super-hero, he's been around for 75 years so actually, he did it first. 2) If you can only enjoy a character's exploits because of the "we can too" factor, then you must not really enjoy the triumphs of Thor, Spider-man, or the Hulk. Surprisingly people often do enjoy seeing Wolverine win fights, despite the fact he's functionally unkillable. In the same way, many enjoy Superman immensely, although he defies most of the laws of physics or biology. We can never be like him, but we can still enjoy his stories. (e.g. his first flight being an important moment in the character's development where he first comes to terms with his alien heritage, and accepts his true identity) So, in summary, saying you prefer IM to MOS because you just enjoyed it more is cool. But, if it's because of the "we can too" factor (are you a billionaire, genius, playboy, with a completely impossible fusion reactor in your chest ? ok, then let's put that aside) anyway, you are limiting yourself to a very small portion of the superhero genre. Personally, I hated Harry Potter, but several billion people disagreed with me, despite the fact that none of them had functional wands, invisibility cloaks or could speak to snakes. In closing, I would argue that the precise reason people like those stories, is because they don't have those amazing, but imaginary things. A lot of us like superheroes, not because we'd like them to be like us, but because we'd like to be like them.
Mid
[ 0.6379746835443031, 31.5, 17.875 ]
Q: Deal array to function arguments I have a function (created through the symbolic toolbox) that takes a number of scalar inputs: scalarFn = @(a,b,c) a .* b + c I would like to alter this function so that it instead takes a single input and deals the elements of a vector to the input arguments of the function above: vectorFn = @(theta) theta(1) .* theta(2) + theta(3) I've played around with deal and or combining num2cell with {:} indexing but I haven't figured out how to compose this function yet. Ultimately, I want a function that takes a function handle like scalarFn (but not necessarily having only 3 arguments - quite likely more) and gives back a new function handle vectorFn that has only one input as a vector. Is there any way to do this? A: As a clunky answer, I know I can do it with an eval statement: vectorFn = eval(['@(theta) scalarFn(' strjoin(arrayfun(@(x) ['theta(' num2str(x) ')'], 1:nargin(scalarFn), 'Uniform', false), ', ') ')']); scalarFn(1,2,3) vectorFn([1 2 3]) But this seems to be a particularly not robust solution.
Mid
[ 0.6339285714285711, 26.625, 15.375 ]
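The MATLAB post above boils down to adapting a function written for separate scalar arguments so that it can be called with a single parameter vector, without building the wrapper through eval. As an illustration of the same idea in another language, here is a short Python sketch; the function names are invented for the example and nothing in it comes from the original post.

```python
# Hypothetical sketch of the "vector in, separate arguments out" adapter pattern.
def make_vector_fn(scalar_fn, n_args):
    """Wrap scalar_fn(a, b, c, ...) so it can be called as vector_fn(theta)."""
    def vector_fn(theta):
        if len(theta) != n_args:
            raise ValueError(f"expected {n_args} values, got {len(theta)}")
        return scalar_fn(*theta)  # unpack the sequence into positional arguments
    return vector_fn

def scalar_fn(a, b, c):
    return a * b + c

vector_fn = make_vector_fn(scalar_fn, 3)
print(scalar_fn(1, 2, 3))    # 5
print(vector_fn([1, 2, 3]))  # 5
```

The unpacking step (`*theta`) is the piece the poster was reaching for with num2cell and `{:}` expansion; having to compose those two operations inside a single anonymous function is what makes the MATLAB version awkward.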
Comic writer Dan Abnett (the guy who created the current iteration of the Guardians of the Galaxy) stops by to chat about his latest work for Marvel - a new Hercules solo book. Dan also tells us why Joseph Campbell is completely wrong and inducts us into the secrets of the Bearded English Comic Writers Conspiracy.
Low
[ 0.46245059288537504, 29.25, 34 ]
Q: How to fix SolrException: QueryElevationComponent requires the schema to have a uniqueKeyField? I am setting up a solr built into tomcat. I can get the example running in tomcat no problem. I then try to change schema.xml to a very simple one and I get an error. This is my schema.xml <?xml version="1.0" encoding="UTF-8" ?> <schema name="minimal" version="1.1"> <types> <fieldType name="string" class="solr.StrField"/> </types> <fields> <dynamicField name="*" type="string" indexed="true" stored="true"/> </fields> </schema> This is the error I get on start up: May 24, 2012 10:03:33 AM org.apache.solr.core.SolrCore close INFO: [] CLOSING SolrCore org.apache.solr.core.SolrCore@666b581a May 24, 2012 10:03:33 AM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener sending requests to Searcher@2a664d5f main May 24, 2012 10:03:33 AM org.apache.solr.update.DirectUpdateHandler2 close INFO: closing DirectUpdateHandler2{commits=0,autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0} May 24, 2012 10:03:33 AM org.apache.solr.update.DirectUpdateHandler2 close INFO: closed DirectUpdateHandler2{commits=0,autocommits=0,optimizes=0,rollbacks=0,expungeDeletes=0,docsPending=0,adds=0,deletesById=0,deletesByQuery=0,errors=0,cumulative_adds=0,cumulative_deletesById=0,cumulative_deletesByQuery=0,cumulative_errors=0} May 24, 2012 10:03:33 AM org.apache.solr.common.SolrException log SEVERE: java.lang.NullPointerException at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:164) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129) at org.apache.solr.core.SolrCore.execute(SolrCore.java:1376) at org.apache.solr.core.QuerySenderListener.newSearcher(QuerySenderListener.java:59) at org.apache.solr.core.SolrCore$3.call(SolrCore.java:1182) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) May 24, 2012 10:03:33 AM org.apache.solr.core.SolrCore execute INFO: [] webapp=null path=null params={event=firstSearcher&q=static+firstSearcher+warming+in+solrconfig.xml} status=500 QTime=12 May 24, 2012 10:03:33 AM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. May 24, 2012 10:03:33 AM org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener newSearcher INFO: Loading spell index for spellchecker: default May 24, 2012 10:03:33 AM org.apache.solr.core.SolrCore registerSearcher INFO: [] Registered new searcher Searcher@2a664d5f main May 24, 2012 10:03:33 AM org.apache.solr.core.SolrCore closeSearcher INFO: [] Closing main searcher on request. 
May 24, 2012 10:03:33 AM org.apache.solr.search.SolrIndexSearcher close INFO: Closing Searcher@2a664d5f main fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0} filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0} queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0} documentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0} May 24, 2012 10:03:33 AM org.apache.solr.common.SolrException log SEVERE: org.apache.solr.common.SolrException at org.apache.solr.core.SolrCore.<init>(SolrCore.java:600) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:483) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:335) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219) at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: org.apache.solr.common.SolrException: QueryElevationComponent requires the schema to have a uniqueKeyField. at org.apache.solr.handler.component.QueryElevationComponent.inform(QueryElevationComponent.java:160) at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:527) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:594) ... 23 more May 24, 2012 10:03:33 AM org.apache.solr.servlet.SolrDispatchFilter init SEVERE: Could not start Solr. 
Check solr/home property and the logs org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) May 24, 2012 10:03:33 AM org.apache.solr.common.SolrException log SEVERE: org.apache.solr.common.SolrException: No cores were created, please check the logs for errors at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:172) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96) at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:277) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:258) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:382) at org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:103) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4638) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5294) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:895) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:871) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:615) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:649) at org.apache.catalina.startup.HostConfig$DeployDescriptor.run(HostConfig.java:1585) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Your help is much appreciated. 
A: I think the answer is in the stack trace you provided: Caused by: org.apache.solr.common.SolrException: QueryElevationComponent requires the schema to have a uniqueKeyField. The example schema you've supplied doesn't have a UniqueKeyField defined for example: <uniqueKey>[Put name of field here]</uniqueKey> However, as your schema doesn't define any fixed fields, you'll have to add one first.
Low
[ 0.506696428571428, 28.375, 27.625 ]
Screenshot: Chris Matyszczyk/CNET For a moment this weekend, those who follow Microsoft's Twitter account seemed to think that the company had embraced liberalism and kissed it on both European cheeks. There appeared, you see, a tweet that read as follows: "@RBReich your granddaughter's level of discourse and policy > those of Ann Coulter." The tweet, with its succinct use of the greater-than sign, was captured for posterity by Adam Khan. The RBReich in question is somewhat Democratic economist and Berkeley professor, Robert Reich. He had tweeted on Saturday that he was going "To NY to visit my 4-yr-old granddaughter. Also on ABC's 'This Week' panel w/ Ann Coulter, among others. I'd rather be w/ my granddaughter." The Ann Coulter in question is a highly erudite conservative commentator, perhaps best known for her seminal tomes such as "How to Talk to a Liberal (If You Must): The World According to Ann Coulter." But why was the sturdily apolitical Microsoft suddenly revealing partisan underclothes? Well, as Politico tells the tale, it seems that whoever was in charge of Microsoft's Twitter account last weekend was somewhat charged with political fervor. However, he mistakenly used the corporate account -- rather than his own personal one -- to tweet that Coulter had the intellectual level of a 3-year-old. Microsoft issued a statement to Politico: "One of the people who manages our corporate Twitter account thought he was tweeting from their personal twitter account on Saturday morning but tweeted from our corporate account by mistake." The company says it has taken steps to ensure that such a painfully personal event doesn't recur. I am not sure whether those steps included gingerly approaching a guillotine. This isn't quite the first time that an operative has expressed personal passions through a corporate channel. It was only last year that someone at the controls of Chrysler's Twitter account managed to offer this highly corporate musing: "I find it ironic that Detroit is known as the #motorcity and yet no one here knows how to f***ing drive." I wonder if Ann Coulter will be invited to Redmond as a gesture of apology. Perhaps she will be asked to speak on the conservative implications of Microsoft's Surface tablet hopeful.
Low
[ 0.519841269841269, 32.75, 30.25 ]
Our grasscloth is handmade and dye lots vary slightly. It is important to ensure you order sufficient quantities from a single dye lot. We recommend confirming your measurements with a professional paperhanger before ordering. Or feel free to contact us. We are happy to work through your measurements with you. *We have calculated for the repeat and match for this item, but this is only an estimate. We are not responsible for incorrect quantities ordered based on this calculator.
Mid
[ 0.6082949308755761, 33, 21.25 ]
To be successful in business, you need to conduct research and write your business plan. Attempting to start a business without a (well-composed) business plan through feasibility study is like a stranger going to an unfamiliar terrain without prior direction. Or better still, it is like a ship without a rudder (which controls its direction). You can become a Wifi Software Solutions Provider Developer Designer Programmer Consultant Analyst, but what if you could become a Wifi Millionaire Consultant instead, and make at least $1,000 per week, part time, from the comfort of your home? The STEP Study or Phambili Trial was a clinical trial which tested the efficacy of an HIV vaccine. Vaccination in the study ended suddenly and before it was scheduled to finish when pre-determined endpoints happened and indicated that the vaccine being tested certainly was not an effective tool for preventing infection with HIV. Intrinsically photosensitive retinal ganglion cells (ipRGCs), also called photosensitive retinal ganglion cells (pRGC), or melanopsin-containing retinal ganglion cells, are a type of neuron in the retina of the mammalian eye. The presence of ipRGCs were first noted in 1923 when rodless, coneless mice still responded to a light stimulus through pupil constriction, suggesting that rods and cones are not the only light sensitive neurons in the retina. It wasn't until the 1980s that advancements in research on these cells began. Recent research has shown that these retinal ganglion cells, which, unlike other retinal ganglion cells, are intrinsically photosensitive due to the presence of melanopsin, a light sensitive protein. Therefore they constitute a third class of photoreceptors, in addition to rod and cone cells. Raymond Chow Man-Wai GBS (born 8 October 1927) is a Hong Kong film producer, and presenter. He is responsible for successfully launching martial arts and the Hong Kong cinema onto the international stage. As the founder of Golden Harvest, he produced some of the biggest stars of the martial arts film genre, including Bruce Lee, Jackie Chan, and Tsui Hark. Social status is the position or rank of a person or group, within the society. Status can be determined in two ways. One can earn their social status by their own achievements, which is known as achieved status. Alternatively, one can be placed in the stratification system by their inherited position, which is called ascribed status. An embodied status is one that is generated by physical characteristics located within our physical selves (such as beauty, physical disability, stature, build). The status that is the most important for an individual at a given time is called master status. Malpe (Tulu: ?????) is an all natural slot about six kilometers to the western world of Udupi, Karnataka, India. A significant port and angling harbor on the Karnataka coastline. It really is a suburb in Udupi city . Malpe and the Mogaveera will go alongside one another. Malpe is a hub of Mogaveera populace. Inhabitant for the millionaire entrepreneurs of Mogaveera community. Tulu, Kannada and Konkani are spoken here. first Indian beach with 24/7 WiFi
Mid
[ 0.57847533632287, 32.25, 23.5 ]
Even if he weren't 95 years old, Lewis Donelson would be regarded as the dean of Tennessee politics, and of much else, besides. Baker Donelson, the state's largest law firm, and the Tennessee Republican Party, both of which he willed and worked into being, are but two of the physical testaments he has erected on the landscape of the Volunteer State. Among the other credits attributable to Donelson (or "Lewie," as this highly approachable legend is called by almost everyone who knows him) is the fact that, as a member of the Memphis City Council in 1968, he began the efforts toward settling the fateful sanitation strike of that year — efforts that bore fruit, alas, only after the tragic assassination of Martin Luther King. And, as a stripling of 70-something in the 1990s, he waged and won the seminal lawsuit that gave equality in state funding to Tennessee's then under-financed rural school districts. Continuing such a list would take more space than allotted for this editorial. Besides, we want to concentrate on the political message Donelson brought to the Memphis Rotary Club last week as its featured speaker on Tuesday, election day. To start with the obvious, Donelson described himself as "the kind of Republican the Tea Party doesn't cherish." As he explained it to the Rotarians last week, his primary motive for founding the skeleton organization called the "Republican Association" back in the age of Boss Ed Crump, who controlled all of Memphis politics and most of Tennessee's, was to create a "two-party system." Thanks in large part to rivalries in the ranks of state Democrats after the passing of Crump in 1954, and in even larger part to recruitment by Donelson himself, GOP candidates like Howard Baker, Winfield Dunn, and Lamar Alexander began to run for — and win — state office, and the Republican Party was able to escape its historical East Tennessee cul-de-sac and become a major statewide force. For the first time, Donelson said, issues began to be discussed and began to be the major themes of election contests — not the personalities of the candidates. Donelson's Republican Party was fiscally conservative but socially egalitarian. "We stood for integration when the Democrats campaigned on 'Keep Memphis Down in Dixie,'" he said. Things have changed, however. The GOP now has the kind of unrivaled sway in a newly one-party Tennessee that a Crump could only have dreamed of. And the Tennessee Republican Party has become the bulwark of a kind of social dogmatism that Donelson regards as anathema. "For men to be popping up telling women what to do is uncalled for," Donelson said. "The party is much more socially concerned with issues like abortion than it used to be." As for former Senator Baker, a moderate, "I told Howard that he couldn't get nominated if he ran today." A onetime convert from the Democratic Party of his forebears, Donelson said last week, "I've been teasing some of my friends that I'm going to switch back again." We assume he was joking, mainly because he may not, realistically, have enough lifetime left for the decades-long task of rebuilding yet another political party. But we'd like to see him try.
Mid
[ 0.638202247191011, 35.5, 20.125 ]
Q: how to get back to loop after this if? Would you please help me on this? I have written a class and here is the code. I have two problems. (1) I wanted to know how to get back to the beginning of the loop after the user has entered the copies he wants to add, instead of exiting the system. I mean, after the system says "copies has been added." it asks the user "if you want to add a book?" and then the loop starts again. (2) What should I do to add an object to the array each time the loop finishes? My problem is that every time the loop runs, book1 is being rewritten. Thanks guys - solved A: As discussed, this will make your loop keep running for as long as the user keeps entering yes: System.out.println("do you want to add a Book?(yes or no) "); Scanner s=new Scanner(System.in); String h = s.nextLine(); while (h.contains("yes")|| h.compareToIgnoreCase("YES") == 0) { int size=list.size(); //your code here System.out.println("do you want to add a Book?(yes or no) "); h = s.nextLine(); }
Mid
[ 0.606741573033707, 33.75, 21.875 ]
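The Java post above comes down to two moves: re-prompting at the bottom of the loop, and appending each new object to a growing collection instead of reassigning a single variable. As an illustration, here is a minimal Python sketch of that shape; the Book class and its fields are invented for the example and are not taken from the original post.

```python
# Hypothetical sketch of the ask-repeat-collect pattern discussed above.
from dataclasses import dataclass

@dataclass
class Book:
    title: str
    copies: int

books = []  # grows by one entry per pass instead of overwriting a single variable

answer = input("do you want to add a Book? (yes or no) ")
while answer.strip().lower() == "yes":
    title = input("title: ")
    copies = int(input("copies to add: "))
    books.append(Book(title, copies))  # keep every object, not just the last one
    print(f"{copies} copies of {title!r} have been added.")
    answer = input("do you want to add a Book? (yes or no) ")  # re-ask at the bottom

print(f"{len(books)} book(s) recorded.")
```

Re-prompting at the bottom and appending to a list rather than reassigning one variable are exactly what the Java snippet above does with its Scanner and list.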
[Chronic diarrhea caused by VIP-secreting ganglioneuroblastoma in children. Apropos of a case with a review of the literature]. A literature review was conducted in relation to a case of chronic diarrhea associated with a VIP (vasoactive intestinal polypeptide) producing ganglioneuroblastoma (GNB) in an 18-month-old female baby. This is a rare entity characterized by premonitory, persisting diarrhea, causing fluid and electrolyte changes typical of the WDHA syndrome, associating watery diarrhea, hypokalemia, and achlorhydria. Elevated VIP plasma levels are an indication for an echographic and/or CT-scan search for the causal secreting tumor. Although the prognosis of this condition seems favorable, the recommended treatment is surgery. The VIP substance represents an excellent biological monitoring marker. Ganglioneuroblastomas are tumors of the sympathetic nervous system, which, according to Pearse's cell and embryologic theory (1966), have to be linked to the APUD system tumors (paraneuromas). VIP-producing forms are rare in children, and only 29 case studies have been compiled in the literature since 1970, when the VIP substance was discovered. The case reported in this study illustrates the diagnostic problems raised by such lesions, and allows us to confirm VIP's imputability for the occurrence of the chronic diarrhea condition in this child.
High
[ 0.683544303797468, 30.375, 14.0625 ]
Q: How can I set a environment variables in Maven per run? In my project, we've created a Maven module to get the specific JBoss AS and unpacked. Then all the test cases can be run under this Jboss AS as embedded container. We're using jboss-ejb3-embedded-standalone to call the embedded container, however, it just find the JBOSS_HOME from environment variables and use that one to run. Thus we have to update the JBOSS_HOME per mvn install. I tried to do this in maven by introduce exec-maven-plugin as below: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.2.1</version> <configuration> <executable>env</executable> <environmentVariables> <JBOSS_HOME> C:/Sample/embedded-container/jboss-${version.org.jboss.jbossas} </JBOSS_HOME> </environmentVariables> </configuration> <executions> <execution> <id>resetJbossHome</id> <phase>integration-test</phase> <goals> <goal>exec</goal> </goals> </execution> </executions> </plugin> In the output of console, I can see [INFO] --- exec-maven-plugin:1.2.1:exec (resetJbossHome) @ test-embedded --- .... JBOSS_HOME=C:/Sample/embedded-container/jboss-6.1.0.Final .... But when launching JBOSS, it's still running the one with origin JBOSS_HOME set. Besides, I've tried using maven-antrun-plugin too. <plugin> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <id>copyRelease</id> <phase>pre-integration-test</phase> <configuration> <tasks> <exec executable="env"> <env key="JBOSS_HOME" value="C:/Sample/embedded-container/jboss-${version.org.jboss.jbossas}"/> </exec> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> It turns out the same. Am I wrong on the configuration or there're some better way? A: Take a look at the Maven profiles. You can define one profile for testing, one for production, with different properties, such as <profiles> <profile> <id>test</id> <jboss.home>PATH TO JBOSS TEST INSTANCE</jboss.home> </profile> <profile> <id>prod</id> <jboss.home>PATH TO JBOSS PROD INSTANCE</jboss.home> </profile> </profiles> And in your exec plugin : <environmentVariables> <JBOSS_HOME> ${jboss.home} </JBOSS_HOME> </environmentVariables>
High
[ 0.681638044914134, 32.25, 15.0625 ]
The main benefit of Getting a Residential Gutter Cleaning Service A lot of people dread having to go onto their roof every fall to clean up out their gutters. While this is something they might rather avoid, nearly everyone knows it’s necessary. A clogged gutter in the winter can lead to the development of ice inside the gutters which may affect the gutters and also the roof. This ice dam can eventually result in water damage inside your house. While your gutters should be cleaned, you don’t have to complete the cleaning services yourself. It is possible to work with a residential gutter cleaning company to clean your gutters to suit your needs. It is really an inexpensive service that can help you save commitment. Also, because they concentrate on cleaning gutters, they have the equipment essential to remove clogs which may create problems throughout the winter. When you have your cutter cleaned you can think about getting specifics of maintenance free gutter systems or gutter guards. These will make it easier to keep your gutters clean, even though you should have your gutters checked twice yearly to ensure they are not damaged. The gutter cleaning service can help clear leaves, debris and twigs out of your gutter within the fall. In the spring they may return and appearance to be certain there is no damage through the winter and remove any debris which may have gotten in your gutter system in the winter time. Some residential gutter cleaning services can sell and install gutter guards on the home. You can request an estimate while they are cleaning your gutters to figure out should you will save money by obtaining gutter guards. Gutter guards might help save money by preventing the accumulation of debris within your gutter system. While the gutter cleaning services are cleaning your gutters, they can also inspect the gutters and downspouts. You would like to ensure they are not cracked or broken. You also want to ensure they can be firmly attached to your property. During the winter a huge snow storm can rip a loose gutter off of your property that may cause damage to your roof and lead to water damage in your home which leads to emergency gutter cleaning Kirkland. If you find problems for your gutter, upon inspection, you could have the cleaning service repair the harm, or recommend services that will help you. Some residential gutter cleaning companies, only clean gutters, they are certainly not able to perform any kind of repair in the system. Prior to getting a gutter cleaning service look online for reviews of your company. You would like to make sure your gutters are cleaned properly. When you see any complaints of shoddy service, look elsewhere. After you’ve found a couple of companies contact them to request for a value estimate, most will ask to come to your own home to look for the cost with regard to their gutter cleaning Newcastle WA service. When you hire a gutter cleaning company and are unclear of the work, don’t hesitate to acquire high on the rooftop yourself and look at what was done. If you’re not satisfied, call the corporation immediately to complain.
Mid
[ 0.6298076923076921, 32.75, 19.25 ]
Q: jQuery Animation on applying filters or sort elements i would like to ask if anyone knows how to create an animation like this plugin shufflejs on sorting div elements or applying filters. Any help? Thank you :) A: This is simple example using animate of JQuery function: $(document).ready(function(){ var asc =false; $("#sort").change(function(){ if(asc){ asc=false; todesc(); } else{ asc=true; toasc(); } }); }); function toasc(){ var count_size = $('.col').size(); // showing var cnt=1; var top_=0; var left_=0; for(var i=0;i<count_size;i++){ for(var j=0;j<count_size;j++){ var val = $('.col').eq(j).attr('value'); if(cnt==val){ if(cnt%2==0){ $('.col').eq(j).animate({left:left_+'px',top:top_+'px'},2000); left_=0; top_+=100; } else{ $('.col').eq(j).animate({left:left_+'px',top:top_+'px'},2000); left_=100; } cnt++; } } } } function todesc(){ var count_size = $('.col').size(); // showing var cnt=6; var top_=0; var left_=0; for(var i=0;i<count_size;i++){ for(var j=(count_size-1);j>=0;j--){ var val = $('.col').eq(j).attr('value'); if(cnt==val){ if(cnt%2==0){ $('.col').eq(j).animate({left:left_+'px',top:top_+'px'},2000); left_=100; } else{ $('.col').eq(j).animate({left:left_+'px',top:top_+'px'},2000); top_+=100; left_=0; } cnt--; } } } } .row{ width:220px; height:320px; position:relative; } .col{ width:100px; height:100px; border:1px solid #323232; position:absolute; } #sort{ float:left; width:200px; } #col1{ top:0; } #col2{ top:0; left:100px; } #col3{ top:100px; } #col4{ top:100px; left:100px; } #col5{ top:200px; } #col6{ top:200px; left:100px; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <div id="row" class="row"> <div id="col1" class="col" value="3">C</div> <div id="col2" class="col" value="6">F</div> <div id="col3" class="col" value="4">D</div> <div id="col4" class="col" value="2">B</div> <div id="col5" class="col" value="1">A</div> <div id="col6" class="col" value="5">E</div> </div> <select name="sort" id="sort"> <option value="1">DESC</option> <option value="2">ASC</option> </select>
Mid
[ 0.6380697050938331, 29.75, 16.875 ]
Q: How to keep column MultiIndex values when merging pandas DataFrames I have two pandas DataFrames, as below: df1 = pd.DataFrame({('Q1', 'SubQ1'):[1, 2, 3], ('Q1', 'SubQ2'):[1, 2, 3], ('Q2', 'SubQ1'):[1, 2, 3]}) df1['ID'] = ['a', 'b', 'c'] df2 = pd.DataFrame({'item_id': ['a', 'b', 'c'], 'url':['a.com', 'blah.com', 'company.com']}) df1: Q1 Q2 ID SubQ1 SubQ2 SubQ1 0 1 1 1 a 1 2 2 2 b 2 3 3 3 c df2: item_id url 0 a a.com 1 b blah.com 2 c company.com Note that df1 has some columns with hierarchical indexing (eg. ('Q1', 'SubQ1')) and some with just normal indexing (eg. ID). I want to merge these two data frames on the ID and item_id fields. Using: result = pd.merge(df1, df2, left_on='ID', right_on='item_id') gives: (Q1, SubQ1) (Q1, SubQ2) (Q2, SubQ1) (ID, ) item_id url 0 1 1 1 a a a.com 1 2 2 2 b b blah.com 2 3 3 3 c c company.com As you can see, the merge itself works fine, but the MultiIndex has been lost and has reverted to tuples. I've tried to recreate the MultiIndex by using pd.MultiIndex.from_tuples, as in: result.columns = pd.MultiIndex.from_tuples(result) but this causes problems with the item_id and url columns, taking just the first two characters of their names: Q1 Q2 ID i u SubQ1 SubQ2 SubQ1 t r 0 1 1 1 a a a.com 1 2 2 2 b b blah.com 2 3 3 3 c c company.com Converting the columns in df2 to be one-element tuples (ie. ('item_id',) rather than just 'item_id') makes no difference. How can I merge these two DataFrames and keep the MultiIndex properly? Or alternatively, how can I take the result of the merge and get back to columns with a proper MultiIndex without mucking up the names of the item_id and url columns? A: If you can't beat 'em, join 'em. (Make both DataFrames have the same number of index levels before merging): import pandas as pd df1 = pd.DataFrame({('Q1', 'SubQ1'):[1, 2, 3], ('Q1', 'SubQ2'):[1, 2, 3], ('Q2', 'SubQ1'):[1, 2, 3]}) df1['ID'] = ['a', 'b', 'c'] df2 = pd.DataFrame({'item_id': ['a', 'b', 'c'], 'url':['a.com', 'blah.com', 'company.com']}) df2.columns = pd.MultiIndex.from_product([df2.columns, ['']]) result = pd.merge(df1, df2, left_on='ID', right_on='item_id') print(result) yields Q1 Q2 ID item_id url SubQ1 SubQ2 SubQ1 0 1 1 1 a a a.com 1 2 2 2 b b blah.com 2 3 3 3 c c company.com This also avoids the UserWarning: pandas/core/reshape/merge.py:551: UserWarning: merging between different levels can give an unintended result (2 levels on the left, 1 on the right)
Low
[ 0.527777777777777, 19, 17 ]
import React from 'react'; import Contributor from 'interface/ContributorButton'; import ReadableListing from 'interface/ReadableListing'; import Config from 'parser/Config'; const SpecListItem = ({ spec, exampleReport, contributors, patchCompatibility, }: Config) => { const className = spec.className.replace(/ /g, ''); const Component = exampleReport ? 'a' : 'div'; const builtinfo = contributors.length !== 0 ? 'Built by ' : 'CURRENTLY UNMAINTAINED'; return ( <Component key={spec.id} href={exampleReport} title={exampleReport ? 'Open example report' : undefined} className="spec-card" > <div className="icon"> <figure> <img src={`/specs/${className}-${spec.specName.replace(' ', '')}.jpg`} alt={`${spec.specName} ${spec.className}`} /> </figure> </div> <div className="description"> <h2 className={className}> {spec.specName} {spec.className} </h2> {builtinfo}{' '} <ReadableListing> {contributors.map(contributor => ( <Contributor key={contributor.nickname} link={false} {...contributor} /> ))} </ReadableListing> .<br /> Accurate for patch {patchCompatibility} </div> </Component> ); }; export default SpecListItem;
Mid
[ 0.5606796116504851, 28.875, 22.625 ]
Q: Lagrange multiplier question: finding a counterexample. I'm helping a student through a course in mathematics. In the course text, we came across the following problem concerning the Lagrange multiplier technique. Given a differentiable function with continuous partial derivatives $f:\mathbb{R}^2\to\mathbb{R}:(x,y)\mapsto f(x,y)$ that has to be extremized and a constraint given by an implicit relation $g(x,y)=0$ with $g$ likewise having continuous partial derivatives. The Lagrange multiplier technique looks for points $(x^*,y^*,\lambda^*)$ such that $$\nabla f(x^*,y^*)=\lambda^* \nabla g(x^*,y^*)$$ where $\lambda$ is the so-called Lagrange multiplier. The technique rests upon the fact that the gradient of the constraint is different from the zero vector: $\nabla g(x^*,y^*) \neq 0$. Then, the text says that if that condition is not fulfilled, it is possible that the system of equations resulting from the technique has no solution even though there actually is a constrained extremum. I have tried to construct such an example, but until now unsuccessfully, as every attempt I make with $\nabla g(x^*,y^*) = 0$ turns out to have a solution. Does anyone know of a nice counterexample? And does anyone know of a counterexample when there are $n$ variables and $m$ constraints with $n>m>1$? For the latter, the condition becomes that the gradients of the constraints have to be linearly independent. So the question is: can we find an example where there is no solution for the Lagrange multiplier method, but there is an extremum and the gradients of the constraints are linearly dependent?

A: Here is an example, albeit with two constraints: We want to find the minimum of the function $$f(x,y,z):=y\ ,$$ given the constraints $$F(x,y,z):=x^6-z=0\ ,\qquad G(x,y,z):=y^3-z=0\ .$$ The constraints define the curve $$\gamma:\quad x\mapsto (x,x^2, x^6)\qquad(-\infty<x<\infty)\ ,$$ and it is easy to see that the minimum of $f\restriction \gamma$ is taken at the origin. On the other hand $$\nabla f(0,0,0)=(0,1,0)\ ;\quad\nabla F(0,0,0)=\nabla G(0,0,0)=(0,0,-1)\ ;$$ whence $\nabla f(0,0,0)$ is not a linear combination of $\nabla F(0,0,0)$ and $\nabla G(0,0,0)$. This means that using Lagrange's method the origin would not have shown up as a conditionally stationary point.

A: In the Lagrange multiplier technique, what you really need is a point $(x^*,y^*)$ where $\nabla f(x^*,y^*)$ and $\nabla g(x^*,y^*)$ are linearly dependent. If the text says otherwise, they are wrong. This is slightly different from what you wrote down. Try $g(x,y)=x^2+y^2$ and $f(x,y) = (x-1)^2$, which has a constrained minimum at $(0,0)$: the constraint $g(x,y)=0$ holds only at the origin, where $\nabla g(0,0)=(0,0)$ while $\nabla f(0,0)=(-2,0)$, so the system $\nabla f=\lambda\nabla g$ has no solution there. This doesn't satisfy $n > m >1$, but it shouldn't be too hard to modify it so it does.
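The two-constraint counterexample in the first answer is easy to check mechanically. The sketch below is not part of either answer; it assumes SymPy is available and simply confirms that the Lagrange system has no solution at the origin even though the constrained minimum sits there.

```python
# Symbolic check of the two-constraint example above (not part of the original
# answers; assumes SymPy is installed).
import sympy as sp

x, y, z, lam, mu = sp.symbols('x y z lambda mu', real=True)

f = y              # objective
F = x**6 - z       # first constraint
G = y**3 - z       # second constraint

def grad(h):
    return sp.Matrix([sp.diff(h, v) for v in (x, y, z)])

origin = {x: 0, y: 0, z: 0}
gf, gF, gG = (grad(h).subs(origin) for h in (f, F, G))
print(gf.T, gF.T, gG.T)  # (0, 1, 0), (0, 0, -1), (0, 0, -1)

# Lagrange system grad f = lam*grad F + mu*grad G evaluated at the origin:
equations = list(gf - lam * gF - mu * gG)
print(sp.solve(equations, [lam, mu]))  # [] -> no multipliers exist, as claimed
```

Running it prints the three gradients and an empty solution set for the multipliers, matching the hand computation in the answer.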
Mid
[ 0.538812785388127, 29.5, 25.25 ]
Background {#Sec1}
==========

Despite advances in resuscitation care, mortality rates following cardiac arrest (CA) remain high \[[@CR1], [@CR2]\]. Between one-quarter (in-hospital CA) and two-thirds (out of hospital CA) of patients admitted comatose to intensive care, die of neurological injury \[[@CR3]\], the majority as a result of withdrawal of life sustaining treatment (WLST) following a poor neuroprognosis \[[@CR4]\]. While neuroprognosis determines an informed and timely WLST decision, it is complicated by a range of factors.

On return of spontaneous circulation (ROSC) following cardiac arrest, management of post-cardiac arrest syndrome \[[@CR5]\] can be particularly challenging. Its components, brain injury, myocardial dysfunction, systemic ischaemia/reperfusion response and persistent precipitating pathology are inter-related, with observable benefits depending on optimisation of all.

The latest Resuscitation Council UK (2015) guidance on post-resuscitation care \[[@CR4]\] guards against post-ischaemic neuronal injury to maximise neurological recovery. The guidance includes adequate sedation and targeted temperature management (TTM), which can *reduce the accuracy of prognostic modalities* \[[@CR6], [@CR7]\].

A systematic review of international data on prognostication modalities in comatose post-CA patients treated with TTM \[[@CR8]\] gave rise to a multi-modal strategy for prognostication \[[@CR9], [@CR10]\], published jointly by ERC-ESICM (European Resuscitation Council and European Society of Intensive Care Medicine) \[[@CR11], [@CR12]\]. The strategy (Fig. [1](#Fig1){ref-type="fig"}) advises that prognostication is initially *delayed 72 h* after ROSC to allow rewarming and clearance of residual sedation \[[@CR12]\].

The prognostication strategy's modalities are differentiated in the guidance by their specificity, precision and robustness. The more robust modalities:

-   bilaterally absent pupillary light reflex (PLR);
-   corneal reflex (CR) and
-   bilaterally absent N20 SSEP (somatosensory-evoked potential) wave;

are used first and their combined results used to prognosticate \[[@CR6], [@CR12]\]. When a poor neurological outcome is not predicted to be 'very likely', less robust modalities are added after a *further 24-h delay* \[[@CR6], [@CR12]\].

Fig. 1 Prognostication strategy

Each predictive modality exhibits limitations. Studies show that some are subjective or prone to inconsistencies \[[@CR13]--[@CR16]\]. Others rely on specialist interpretation \[[@CR17]--[@CR20]\], are ill-defined \[[@CR4]\] or are incompletely understood \[[@CR6]\].

Meta-analyses of the strategy's modalities show that PLR, CR and N20 SSEP predict poor neurological outcome with low false positive rates \[[@CR8], [@CR21]\]. However, the primary studies provided 'low' to 'very low' quality of evidence \[[@CR4], [@CR6], [@CR7], [@CR21]\], reducing confidence in the prediction and therefore *extending the observation period*.

The primary studies show that WLST is influenced by a self-fulfilling prophecy, a bias introduced when prognostic modalities are not blinded to the treating team \[[@CR4], [@CR6]\].
As well as reducing the quality of the evidence, this bias reduces available evidence on delayed awakening (late recovery of consciousness following coma), which can affect 30% of post-CA patients \[[@CR22]\]. The complex management of post-CA syndrome, neuroprotective measures' adverse impact on prognostication accuracy (sedation and TTM), consequent delays in prognostication (72 h and 24 h), limitations of the individual modalities and unblinded observer bias, all present a challenge to early prediction of a poor neurological outcome.

The immediate consequence of a delayed poor neuroprognosis is futile treatment and resource wastage. Treatment is multifaceted, complex and costly. A 2015 study \[[@CR23]\] (p. 6) found 'a significant correlation between length of stay and cost, with a much longer length of stay in ICU and hospital for CPC (Glasgow-Pittsburgh Cerebral Performance Category \[[@CR24]\]) 3--4 patients' (cf. CPC 1--2). The intended benefit of treatment, quality adjusted life years (QALY), has a complex relationship with neurological status. Petrie et al. showed that cost per QALY for ROSC post-CA, high-quality (CPC 1--2) survivors, at £16,000 \[[@CR23]\], is well within the £30,000 UK (NICE) threshold \[[@CR25]\]. However, Petrie et al. noted that 'a major determinant of cost for the CPC 1--2 group was the burden of cost of the non-survivors and CPC 3--4 patient group' \[[@CR23]\] (p. 6). An aim of the proposed review is earlier neuroprognosis in CPC 3--4 patients, which will reduce this cost burden.

In principle, improving the current multi-modal prognostication strategy to enable earlier prediction of poor neurological outcome would allow a more utilitarian resource allocation, reduce futile treatment and lessen the healthcare opportunity cost that resource depletion imposes. The usefulness of supplementary modalities that yield immediate results, require rudimentary operator skill and are amenable to easily repeatable tests is being investigated \[[@CR20]\]. IRP is emerging as one such promising prognostic modality, with one recent study showing that it yields higher specificity and sensitivity than manual PLR measurement \[[@CR26]\]. In IRP, infrared light is shone directly at the pupil and data is obtained through analysis of the reflected image. Characteristics of the PLR include amplitude, latency, constriction and dilatation velocity. This provides a direct functional assessment of the second and third cranial nerves, a predictor of neurological outcome.

Objectives {#Sec2}
==========

The proposed study will comprehensively review the evidence to determine whether the early use of IRP would help predict neurological outcome in comatose patients who achieve ROSC following CA. Questions of particular interest include the following:

-   Timeliness - Can IRP be used early in the prognostication strategy to inform an earlier WLST decision?
-   Specificity - Can IRP reduce the risk of falsely pessimistic prediction, reducing the lack of confidence that increases observation periods and inflates cost?
-   Sensitivity - Can IRP reduce the incidence of delayed neuroprognoses, reducing ICU bed days?
-   Primary hypothesis - In those patients who remain comatose following ROSC after CA, IRP can be used early to help predict poor neurological outcome.

Methods {#Sec3}
=======

The design of the systematic review will follow the PRISMA-P 2015 checklist \[[@CR27]\] (Additional file [1](#MOESM1){ref-type="media"}).
The systematic review protocol is registered with PROSPERO under ID 'CRD42018118180' and is provided in Additional file [2](#MOESM2){ref-type="media"}. Eligibility criteria {#Sec4} -------------------- This systematic review will consider randomised controlled trials, systematic reviews and retrospective and prospective cohort studies with specific study characteristics. Study populations will include adults over the age of 18 who suffered a cardiac arrest; the intervention will be infrared pupillometry performed early in the prognostication strategy; the primary outcome measure is neurological outcome. Studies will be excluded if their study populations included cardiac arrests of traumatic aetiology, pregnant women or paediatric cases. Case reports will also be excluded. Only published studies written in English language will be considered. Year of publication will not form part of the exclusion criteria. Information sources {#Sec5} ------------------- A comprehensive and systematic search of the following electronic databases will inform this systematic review. The Healthcare Databases Advanced Search (HDAS), accessed through NICE, will be used as the interface through which, EMBASE, MEDLINE and CINAHL databases are searched. The Cochrane Database of Systematic Reviews (CDSR) will be searched. A search for completed systematic reviews within PROSPERO will also be carried out. The search will be expanded through direct contact with authors of works' pending publication, reference mining and citation searching of the related literature and hand searching of relevant journals. Search strategy {#Sec6} --------------- The search strategy was defined and applied by a specialist search strategist (PB). HDAS has the advantage that a common syntax can be used to search EMBASE, MEDLINE and CINAHL, albeit using controlled vocabulary appropriate to each database. Natural language keyword searches include 'cardiac arrest', 'prognosis' and 'infrared pupillometry'. Boolean logic was used to combine search terms and allow extraction of potentially suitable abstracts. The completed search strategy for EMBASE, MEDLINE and CINAHL is provided in Additional file [3](#MOESM3){ref-type="media"}; this will be peer reviewed, using the Peer Review of Electronic Search Strategies (PRESS \[[@CR28]\]) checklist, by an independent information specialist. Data management {#Sec7} --------------- The extracted abstracts will be managed in HDAS. Study selection process and data collection process {#Sec8} --------------------------------------------------- The study selection process will follow the PRISMA 2009 flow diagram from the PRISMA statement \[[@CR27]\]. Extracted abstracts will be screened for duplicates before review by two authors AM and SP. Abstracts that are not excluded during initial review will undergo full text screening by each author independently. A modified version of the Cochrane data extraction and assessment template \[[@CR29]\] (Additional file [4](#MOESM4){ref-type="media"}) will be used for full text screening and data extraction. Within this, any reasons for exclusion will be noted. Variable and outcome data will be extracted in the same template. When the required data is not specified in a study's full text, all reasonable attempts will be made to contact the author and the content of any correspondence will be documented. 
Data items {#Sec9}
----------

The following variables will be extracted from eligible studies: patient age and gender, outcome of cardiac arrest and quantitative values of infrared pupillometry.

Outcomes and prioritisation {#Sec10}
---------------------------

The main outcome measure to be extracted from eligible studies is neurological outcome. Additional outcomes include the number of ICU bed days and survival at discharge.

Risk of bias in individual studies {#Sec11}
----------------------------------

Risk-of-bias assessments will be made independently by two authors AM and SP. For all RCTs, we will use the Cochrane risk-of-bias (RoB) tool \[[@CR30]\] and for all non-randomised studies, the ROBINS-I tool \[[@CR31]\]. The quality and strength of evidence will be assessed using the Grades of Recommendation, Assessment and Evaluation (GRADE) approach \[[@CR32]\].

Internal validity {#Sec12}
-----------------

Inter-reviewer agreement in study inclusion, data extraction and risk-of-bias assessments will be maximised by piloting the data extraction form and risk-of-bias tools prior to use in the systematic review. Clear usage instructions will include guidance on consistency of input styles and documentation of missing information \[[@CR29]\]. Inter-reviewer agreement will be measured using the kappa statistic for the initial study screening, data extraction and risk-of-bias assessment. A third reviewer will independently review the studies that elicit disagreement between AM and SP in study inclusion, data extraction or risk-of-bias assessments. Disagreements will be discussed between all three reviewers at regular team meetings. Each disagreement will be interrogated with reference to the decision rules and guidance on the use of the data extraction tool and risk-of-bias assessments. The outcome and reason for the initial disagreement will be recorded.

Synthesis and meta-biases {#Sec13}
-------------------------

This review aims to establish the association between IRP values that indicate a 'very likely' poor prognosis and patients with poor neurological outcome (defined as CPC 3--5). IRP data will be correlated with patient outcome (Table [1](#Tab1){ref-type="table"}) and employed in odds ratios to determine IRP's effect as an additional modality within the neuroprognostic algorithm. Taking account of the results from all studies, we will calculate aggregated estimates of the effect of intervention, together with *p* values, means, confidence intervals and value ranges.

Table 1 Patient outcomes

|                                 | Poor outcome        | Good outcome              |
|---------------------------------|---------------------|---------------------------|
| 'Very likely' poor prognosis    | PP (WLST)           | PG (False positive)       |
| No 'very likely' poor prognosis | GP (False negative) | GG (£16 k/QALY Survivors) |

A forest plot will be used to graphically represent the size of the effect seen in individual studies and the summarised effect. Of particular interest are IRP's specificity, PP/(PP + PG); sensitivity, PP/(PP + GP); false positive rate, PG/(PP + PG) and its false negative rate, GP/(GP + GG). Variation will be checked for between studies (heterogeneity) using Cochran's Q and I-squared statistics \[[@CR33]\]. If significant heterogeneity is found, we will apply the random-effects model \[[@CR34]\]. To identify sources of heterogeneity and adjust for them, sub-group analysis will be performed and meta-regression will be used to identify the influence co-variates have on the overall effect. Homogeneity will be addressed using the fixed effects model \[[@CR34]\].
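To make the planned inter-reviewer agreement check concrete, the sketch below computes Cohen's kappa from two reviewers' screening decisions. It is purely illustrative: the decision lists are invented, and the protocol does not prescribe any particular implementation.

```python
# Illustrative only: Cohen's kappa for two reviewers' screening decisions.
# The decision lists are made up; the protocol does not prescribe any code.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

reviewer_1 = ["include", "exclude", "include", "include", "exclude", "exclude"]
reviewer_2 = ["include", "exclude", "exclude", "include", "exclude", "exclude"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # agreement beyond chance
```

The same function applies unchanged to agreement on data extraction and risk-of-bias judgements, since it only compares paired categorical decisions.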
Confidence in cumulative evidence {#Sec14} --------------------------------- The quality and strength of evidence will be assessed using the Grades of Recommendation, Assessment and Evaluation (GRADE) approach \[[@CR32]\]. Discussion {#Sec15} ========== The current multi-modal neuroprognostication strategy advises that following ROSC after CA, clinicians wait 72 h to allow rewarming and clearance of sedation before prognosticating. A poor neuroprognosis prompts WLST, any usable strategy must minimise the risk of falsely pessimistic predictions, the false positive rate. Despite each of the current strategy's predictive modalities exhibiting limitations, meta-analyses of the modalities show that PLR, CR and N20 SSEP predict poor neurological outcome with low false positive rates \[[@CR8], [@CR21]\]. However, the quality of evidence is low, thereby reducing confidence in the strategy's results. In the clinical utilisation of the strategy, false positive risk is mitigated by extending observation periods and utilising additional modalities \[[@CR6]\]. Concomitantly, risk aversion increases the false negative rate (lack of a poor prognosis preceding a poor neurological outcome). The costs of risk mitigation \[[@CR23]\] can be reduced through earlier prognostication using greater-specificity/greater-sensitivity modalities that are objective, repeatable and can be deployed early in the strategy using readily available expertise. Greater specificity will increase confidence in predictions of poor neurological outcome, obviating the need to extend the observation period, and greater sensitivity will reduce the false negative rate. IRP's characteristics, objectivity, repeatability and rudimentary operation make it a promising candidate for review. The proposed study will comprehensively review the evidence for IRP and determine whether it will help predict a poor neurological outcome post-CA. Supplementary information ========================= {#Sec16} **Additional file 1.** PRISMA-P (Preferred Reporting Items for Systematic review and Meta-Analysis Protocols) 2015 checklist: recommended items to address in a systematic review protocol. **Additional file 2.** PROSPERO registration document. **Additional file 3.** Search Strategy on EMBASE, MEDLINE and CINAHL. **Additional file 4.** Cochrane public health group data extraction and assessment template. 
AM : Alex Monk CA : Cardiac arrest CDSR : Cochrane Database of Systematic Reviews CPC : Glasgow-Pittsburgh Cerebral Performance Category CR : Corneal reflex CT : Computerised tomography EEG : Electroencephalography ERC : European Resuscitation Council ESICM : European Society of Intensive Care Medicine FPR : False positive rate GG : No poor neuroprognosis---good neurological outcome GP : No poor neuroprognosis---poor neurological outcome GRADE : Grades of Recommendation, Assessment and Evaluation HDAS : Healthcare Databases Advanced Search ICU : Intensive care unit IRP : Infrared pupillometry MeSH : Medical Subject Headings MRI : Magnetic resonance imaging NICE : National Institute for Health and Care Excellence NSE : Neuron-specific enolase PB : Phillip Barlow PG : Poor neuroprognosis---good neurological outcome PLR : Pupillary light reflex PP : Poor neuroprognosis---poor neurological outcome PRESS : Peer Review of Electronic Search Strategies PRISMA-P : Preferred reporting items for systematic review and meta-analysis protocols QALY : Quality adjusted life year RCT : randomised controlled trial RoB : Risk of bias Robins-I : Risk of bias in non-randomised studies of interventions ROSC : Return of spontaneous circulation SP : Shashank Patil SSEP : Somatosensory-evoked potential TTM : Targeted temperature management WLST : Withdrawal of life sustaining treatment **Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supplementary information ========================= **Supplementary information** accompanies this paper at 10.1186/s13643-019-1209-z. We would like to acknowledge the contribution of Mr. Philip Barlow who assisted in developing the search strategy. AM prepared the manuscript for the systematic review and provided input on its focus and design. SP conceived the systematic review and provided definitive and final guidance on the manuscript. Both authors read and approved the final manuscript. Not applicable. All data pertaining to this review protocol can be found within the body of text or in the supplementary additional files. Ethics approval and consent are waived. Not applicable. The authors declare that they have no competing interests.
Mid
[ 0.623737373737373, 30.875, 18.625 ]
Not sure who your admin is, but your getting a 500 (internal server) error. It looks like a function that runs after the post (probably related to clicking the submit button) is erroring on a "no post mode specified". Probably a NULL value being passed into the function. My 2 cents. Shock2k wrote:Not sure who your admin is, but your getting a 500 (internal server) error. It looks like a function that runs after the post (probably related to clicking the submit button) is erroring on a "no post mode specified". Probably a NULL value being passed into the function. My 2 cents. Close, but you're in the right area... Actually, the air:fuel ratio under medium throttle is too rich causing torque values to be negatively affected, and RPM increase rate to suffer. They need to go ahead and add about 4 degrees of base timing, re-jet the primaries two steps down, and increase plug gap by about .005. Might look at coil energy and charge voltage under high fuel pump draw to insure a minimum of 13.8 volts to the ignition system. I wouldn't change the final drive gear ratio until we are seeing signs of a lean mixture at the plugs. Torque converter stall might be a few hunded RPM low also. Shock2k wrote:Not sure who your admin is, but your getting a 500 (internal server) error. It looks like a function that runs after the post (probably related to clicking the submit button) is erroring on a "no post mode specified". Probably a NULL value being passed into the function. My 2 cents. Close, but you're in the right area... Actually, the air:fuel ratio under medium throttle is too rich causing torque values to be negatively affected, and RPM increase rate to suffer. They need to go ahead and add about 4 degrees of base timing, re-jet the primaries two steps down, and increase plug gap by about .005. Might look at coil energy and charge voltage under high fuel pump draw to insure a minimum of 13.8 volts to the ignition system. I wouldn't change the final drive gear ratio until we are seeing signs of a lean mixture at the plugs. Torque converter stall might be a few hunded RPM low also. Oh, and turn off that damn bass amp during runs. The god-damnit on the framastan gasket was actually incorrectly defined.
Mid
[ 0.5454545454545451, 29.25, 24.375 ]
From the prior art, heterodisperse anion exchangers of the poly(meth)acrylamide type are already known. These are a class of anion exchangers which can be used in practice for numerous different applications. An important area of use of heterodisperse anion exchangers of the poly(meth)acrylamide type is water treatment technology, in which it is possible to remove anions, for example, chloride, sulphate or nitrate, and weak acids such as salicylic acid and carbonic acid; organic acids such as formic acid, acetic acid, citric acid, humic acids and others. Currently, both gel-type and macroporous heterodisperse anion exchangers of the poly(meth)acrylamide type are used in decolorizing press juices from beets and sugar cane. In the course of the complex production process of sugar extraction, the press juices from the beets, preferably sugar beets, and the sugar cane discolor. Pigments, for example melanoidines and caramel colors are formed. U.S. Pat. No. 4,082,701 discloses the use heterodisperse anion exchangers of the poly(meth)acrylamide type, for decolorizing pigment solutions. Raw solutions of liquid sugar syrup or invert sugar syrup are also currently desalted using heterodisperse anion exchangers of the poly(meth)acrylamide type. It is also known to use gel-type or macroporous heterodisperse anion exchangers of the poly(meth)acrylamide type for the removal of acids or acidic components from whey and fruit thin press juices. A known process for preparing heterodisperse anion exchangers of the poly(meth)acrylamide type is aminolysis of crosslinked acrylic ester bead polymers with polyamines according to U.S. Pat. No. 2,675,359, CZ-A 169 356, DD 99 587 or U.S. Pat. No. 5,414,020. The crosslinked (meth)acrylic ester resin bead polymers used for the aminolysis are prepared in the prior art as gel-type or macroporous resins. They are prepared in mixed polymerization by the suspension polymerization process. This produces heterodisperse bead polymers having a broad particle size distribution in the range from approximately 0.2 mm to approximately 1.2 mm. The heterodisperse anion exchangers of the poly(meth)acrylamide type obtained after the subsequent aminolysis can be quaternized by alkylating agents. The reaction to be performed here to give strongly basic groups can be carried out in the range from 1 to 100%, that is to say completely. Customary alkylating agents are alkyl halides or aryl halides or mixtures of the two, for example chloromethane according to U.S. Pat. No. 4,082,701 or benzyl chloride. In U.S. Pat. No. 2,675,359, gel-type and macroporous heterodisperse bead polymers based on a methylacrylate-divinylbenzene copolymer are reacted with diethylenetriamine. DD 99 587 describes the preparation of solid-grain weakly basic heterodisperse anion exchangers based on polyacrylic esters. The grain solidity is achieved by means of the fact that, after the copolymer is reacted with the polyamine, the resin is treated with a water-miscible solvent which swells the resin to a lesser extent than water. Suitable solvents used are, for example, methanol, ethanol, acetone or mixtures thereof. 99% of the beads are obtained without cracks or fissures. Without the treatment with methanol, for example, 35% of the beads have cracks and fissures. The heterodisperse anion exchangers of the poly(meth)acrylamide type, depending on the charged form of the resin, that is to say depending on the type of counter ion to the nitrogen, exhibit differing resin volumes. 
When converted from the chloride form to the free base form, the resin swells markedly. Conversely, it shrinks on conversion from the free base form to the chloride form. In the industrial use of these heterodisperse anion exchangers of the poly(meth)acrylamide type, therefore, charging and regeneration is associated in each case with shrinkage or swelling, respectively. In the course of long-term use, however, these heterodisperse anion exchangers are regenerated several hundred times. The shrinking and swelling operations occurring in the course of this stress the bead stability so greatly that a fraction of the beads develop cracks, finally even fracturing. Fragments are produced which lead to blockages in the service apparatus and the columns, and impede flow, which in turn leads to an increased pressure drop. In addition, the fragments contaminate the medium being treated, preferably water, and thus reduce the quality of the medium or the water. The flow of water through a column packed with beads, however, is impeded not only by resin fragments, but also by fine polymer beads, if present. An increase in pressure drop occurs. Due to the particle size distribution of known heterodisperse anion exchangers of the poly(meth)acrylamide type, beads of differing diameters are present. The presence of such fine beads additionally increases the pressure drop. Seidl et al., Chemicky prumysl, roc. 29/54 (1979) cis 9,470, studied the aminolysis reaction of crosslinked acrylic ester resins and found that, in addition to the acrylamide unit, free acrylic acid units are also formed. All acrylamide resins exhibit free acrylic acid units. After completion of charging of heterodisperse anion exchangers of the poly(meth)acrylamide type with anions, therefore, the resin is regenerated with dilute sodium hydroxide solution in order to prepare it for new charging. Sodium hydroxide solution residues are washed out of the resin with water. In addition the carboxylate ion which results from treating the carboxylic acid group with sodium hydroxide solution is hydrolysed by the water washing. During production of the resins a low conductivity of the effluent water (wash water) from the resin is desired, since otherwise impure water is present. The goal is to achieve low conductivity using small amounts of wash water, since this can be regarded as a sign that only small amounts of weakly acidic groups remain.
Mid
[ 0.6457399103139011, 36, 19.75 ]
DNA Test Helps Conservationists Track Down Ivory Smugglers Enlarge this image toggle caption Simon Maina/Getty Images Simon Maina/Getty Images Conservationists have developed a new high-tech strategy to trace the cartels that smuggle much of the illegal ivory around the world — by using DNA to track ivory back to specific ports. Biologist Samuel Wasser from the University of Washington is behind the effort. He notes that while poaching in Africa has dipped recently, too many elephants are still dying. Enlarge this image toggle caption Wolfgang Kaehler/Getty Images Wolfgang Kaehler/Getty Images "Right now we're estimating that there are about 40,000 elephants being killed every year," he says, "and there are only 400,000 left in Africa. So that's a tenth of the population a year." Several years ago, Wasser developed a way to use DNA in tusks to tell what part of Africa the elephants lived in. Now he's trying to use DNA to pinpoint how the ivory is moved to its final destination. The cartels that run the ivory trade try to cover their tracks. They falsify shipping documents, for example, and hide the ivory under other goods in shipping containers. And they send the ivory to multiple ports before its final destination. Wasser analyzed DNA from tusks that were seized by customs officials. He noticed that smugglers often separate the two tusks that come from a single elephant and ship them separately, apparently to make it harder to track where they came from. But Wasser found a pattern. Almost always, he says, "The two shipments with matching tusks passed through a common port. They were shipped close together in time and they showed high overlap in the genetically determined origins of the tusks. "So these three characteristics suggest that the same major trafficking cartel was actually responsible for ... both of the shipments," he says. Wasser says wildlife authorities rarely get enough evidence to identify the big players; often it's their smaller suppliers who get caught with only as much ivory as they can carry. Those convictions are well down the smuggling pyramid and don't do much stem the trade. His technique aims higher. "When you get a strong connection in the DNA, all the sudden that weak evidence becomes much more confirming," he says. Wasser says the DNA technique allows authorities to link different shipments to a small number of ports, made about the same time, with ivory from elephants in just a few locations in Africa — and that narrows the search for the responsible cartel. Writing in the journal Science Advances, Wasser's team has identified three cartels associated with much of the recent trade. They operate out of Mombasa, Kenya; Entebbe, Uganda; and Lome, Togo. Conservationists say it's important to quell demand for ivory as well. The biggest market is in Asia, and the Chinese government has pledged to discourage it within its borders. But data from experts who monitor the Convention on International Trade in Endangered Species say international trade is still running strong even as poaching in Africa has dipped. "We need something really urgent that gets in there and really stops the trade in its tracks," Wasser says.
Mid
[ 0.647214854111405, 30.5, 16.625 ]
// Copyright 2017 Yahoo Holdings. Licensed under the terms of the Apache 2.0 license. See LICENSE in the project root.
#pragma once

#include <cstring>
#include <cstdint>

namespace vespamalloc {

class asciistream {
public:
    asciistream();
    ~asciistream();
    asciistream(const asciistream & rhs);
    asciistream & operator = (const asciistream & rhs);
    void swap(asciistream & rhs);
    asciistream & operator << (char v) { write(&v, 1); return *this; }
    asciistream & operator << (unsigned char v) { write(&v, 1); return *this; }
    asciistream & operator << (const char * v) { if (v != nullptr) { write(v, strlen(v)); } return *this; }
    asciistream & operator << (int32_t v);
    asciistream & operator << (uint32_t v);
    asciistream & operator << (int64_t v);
    asciistream & operator << (uint64_t v);
    asciistream & operator << (float v);
    asciistream & operator << (double v);
    const char * c_str() const { return _buffer + _rPos; }
    size_t size() const { return _wPos - _rPos; }
    size_t capacity() const { return _sz; }
private:
    void write(const void * buf, size_t len);
    size_t read(void * buf, size_t len);
    size_t _rPos;
    size_t _wPos;
    char * _buffer;
    size_t _sz;
};

class string : public asciistream {
public:
    string(const char * v = nullptr) : asciistream() { *this << v; }
    string & operator += (const char * v) { *this << v; return *this; }
    string & operator += (const asciistream & v) { *this << v.c_str(); return *this; }
};

}
Mid
[ 0.595078299776286, 33.25, 22.625 ]
Not Applicable This invention relates to a rafter air infiltration block which partially blocks the openings which connect an attic space and the overhanging eaves. It prevents air infiltration except through a roof rafter vent and prevents a loss of blown-in insulation. Originally, insulation was rarely used in housing as energy costs were low. As houses began to be more heavily insulated, building codes developed to ensure that the homeowner would have a properly insulated home. Soffit or rafter vents chutes were developed to work with blown in insulation which otherwise completely blocks air circulation from the eaves into the attic. While these worked very well, a continuing problem area is in how to properly block the area under the vent chutes that leads to the eaves. These areas are referred to as xe2x80x9ccold comersxe2x80x9d or xe2x80x9cwind washxe2x80x9d where the wind may pass up through the soffit vents and reach the uninsulated wood, causing a very cold spot that reaches into the residence area. Standard trusses account for about 90% of all roof trusses. They may be of a single height where a gusset plate attaches a 2 by 4 to an angled truss to form the roof line. In such a case, a single height gap of about two inches is left. The other main truss type uses a wedge block that causes a double height gap to exist which needs to be sealed. Typical solutions to this problem are shown by Eury, U.S. Pat. No. 4,581,861 which discloses a stiff sheet having multiple tabs that may be folded in place. Cantrell, U.S. Pat. No. 4,185,433 shows another baffle board construction using a sheet of stiff, scored material which may be folded in place. Finally, some constructions have attempted to combine a vent chute with a baffle board as shown by Pearson, U.S. Pat. No. 5,007,216. Builders use anything from specially cutting exterior sheathing to fill the gap and then sealing the gaps left with a sealant or manually cut pieces to fit each gap. Batting is also sometimes folded and stuffed into the space but is prone to getting wet and rotting. An acceptable air infiltration device needs to be easily installed and should be usable in a variety of truss arrangements and vent chute configurations. The art described in this section is not intended to constitute an admission that any patent, publication or other information referred to herein is xe2x80x9cprior artxe2x80x9d with respect to this invention, unless specifically designated as such. In addition, this section should not be construed to mean that a search has been made or that no other pertinent information as defined in 37 C.F.R. xc2xa71.56(a) exists. The invention provides an air infiltration block that provides an air impermeable barrier that is water resistant and is installed readily with or without a wide variety of vent chutes and with most existing roof trusses. The single rafter block of the invention may be used in many different configurations due to its unique features. It is formed from a sheet of water-resistant material such as a waxed paper or cardboard and includes a plurality of fold lines, slits, perforation lines and tabs to allow it to function with the majority of factory truss and vent chute designs without cutting. A single block design may be ordered and stocked that will cover all jobs rather than multiple blocks, each of which accommodate a different truss or vent chute design and size.
Mid
[ 0.59338061465721, 31.375, 21.5 ]
While most salon do offer Japanese perm, Daisuke Salon knows the proper techniques needed to make the perfect perm. Our stylists are continuously trained by a Japanese stylist on the latest techniques, styles and fashion. Daisuke Salon Friday, 5 January 2018 Colour Player - Balayage Ombre from Red to Pastel Violet Balayage Ombre From Red to Pastel Violet Putting bright colors in your hair can be so much fun! Any color of the rainbow is literally at your fingertips. Unleash your creativity and indulge in bold or soft colors! Having black hair with balayage ombre red to pastel violet can be a bold (or shy) choice. Here, we love the brighter, radiant ends, with the deeper hues emanating from the top. If you want to have a sexy yet alternative look, these shades are right for you!
Mid
[ 0.589327146171693, 31.75, 22.125 ]
Acute and long-term effects of trophic exposure to silver nanospheres in the central nervous system of a neotropical fish Hoplias intermedius. Nanotechnologies are at the center of societal interest, due to their broad spectrum of application in different industrial products. The current concern about nanomaterials (NMs) is the potential risks they carry for human health and the environment. Considering that NMs can reach bodies of water, there is a need for studying the toxic effects of NMs on aquatic organisms. Among the NMs' toxic effects on fish, the interactions between NMs and the nervous system are yet to be understood. For this reason, our goal was to assess the neurotoxicity of polyvinylpyrrolidone coated silver nanospheres [AgNS (PVP coated)] and compare their effects in relation to silver ions (Ag+) in carnivorous Hoplias intermedius fish after acute and subchronic trophic exposure through the analysis of morphological (retina), biochemical (brain) and genetic biomarkers (brain and blood). For morphological biomarkers, damage by AgNS (PVP coated) in retina was found, including morphological changes in rods, cones, hemorrhage and epithelium rupture, and also deposition of AgNS (PVP coated) in retina and sclera. In the brain biomarkers, AgNS (PVP coated) did not disturb acetylcholinesterase activity. However, lowered migration of the DNA tail in the Comet Assay of blood and brain cells was observed for all doses of AgNS (PVP coated), for both acute and subchronic bioassays, and in a dose-dependent manner in acute exposure. Ag+ also reduced the level of DNA damage only under subchronic conditions in the brain cells. In general, the results demonstrated that AgNS (PVP coated) do not cause similar effects in relation to Ag+. Moreover, the lowered level of DNA damage detected by Comet Assay suggests that AgNS (PVP coated) directly interacts with DNA of brain and blood cells, inducing DNA-DNA or DNA-protein crosslinks. Therefore, the AgNS (PVP coated) accumulating, particularly in the retina, can lead to a competitive disadvantage for fish, compromising their survival.
Mid
[ 0.632911392405063, 31.25, 18.125 ]
//----------------------------------------------------------------------------// //| //| MachOKit - A Lightweight Mach-O Parsing Library //| MKLCVersionMinMacOSX.m //| //| D.V. //| Copyright (c) 2014-2015 D.V. All rights reserved. //| //| Permission is hereby granted, free of charge, to any person obtaining a //| copy of this software and associated documentation files (the "Software"), //| to deal in the Software without restriction, including without limitation //| the rights to use, copy, modify, merge, publish, distribute, sublicense, //| and/or sell copies of the Software, and to permit persons to whom the //| Software is furnished to do so, subject to the following conditions: //| //| The above copyright notice and this permission notice shall be included //| in all copies or substantial portions of the Software. //| //| THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS //| OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF //| MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. //| IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY //| CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, //| TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE //| SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. //----------------------------------------------------------------------------// #import "MKLCVersionMinMacOSX.h" //----------------------------------------------------------------------------// @implementation MKLCVersionMinMacOSX //|++++++++++++++++++++++++++++++++++++|// + (uint32_t)ID { return LC_VERSION_MIN_MACOSX; } //|++++++++++++++++++++++++++++++++++++|// + (uint32_t)canInstantiateWithLoadCommandID:(uint32_t)commandID { if (self != MKLCVersionMinMacOSX.class) return 0; return commandID == [self ID] ? 10 : 0; } @end
Low
[ 0.513725490196078, 32.75, 31 ]
Kasyan Goleizovsky Kasyan Yaroslavich Goleizovsky (5 March 1892 – 4 May 1970) was a Russian choreographer and dancer. He was a pioneer in the Moscow avant-garde ballet scene in the 1920s. His innovative and acrobatic routines heavily influenced artists like George Balanchine. Biography His father was an opera soloist in Moscow, and his mother was a dancer. He studied first in Moscow and from 1902 in St. Petersburg. In 1906, he entered the Maryinsky Theater school and studied with Michel Fokine. He graduated from the Imperial Ballet Academy in 1909. Following a short stint with the Marykinsky troupe, he joined Moscow's Bolshoi Theater School, remaining there until 1918. While with the Bolshoi, he studied ballet production with Alexander Gorsky. Unhappy with the conservatism of Moscow's ballet scene, he soon sought alternative venues for his creative ideas. Working in Moscow's cabarets and with impresarios such as Vsevolod Meyerhold, he created dark and sultry scenarios highly sexual in character. In 1916, he founded his own studio called The Quest and soon found a devoted audience entranced by his provocative ideas. His experiments inspired George Balanchine to establish his own troupe in 1922, the Young Ballet. In 1922, he became the impresario of his own company, the Moscow Chamber Ballet, for which he devised some of his most popular dances, including Faun set to Claude Debussy's music and Salome, with music from Richard Strauss's opera. His most popular early work is Joseph the Handsome (1925), music by Sergei Vasilenko, written for the Bolshoi's Experimental Theater. Its content angered conservatives and it was quickly removed from the repertoire. Later works for the Bolshoi that also met with conservative objection include Lola (1925; music by Vasilenko) and The Whirlwind (1927). With increased censorship of his ballets, from the 1930s he focused on works for the Moscow music hall. As many Russian artists had done in the period following the Russian Revolution, especially with Joseph Stalin's call for works revolutionary in form and socialist in content, Goleizovsky turned to the study of Russian folklore. He now choreographed works for agitprop theatre, folk dances, and sports festivals. During World War II he choreographed folk dances for the Song and Dance Ensemble of the Ministry of the Interior. He returned to the ballet stage in the 1960s with works inspired by his interests in folklore. Publications 1964: "Образы русской народной хореографии". Москва References Category:1892 births Category:1970 deaths Category:Russian male ballet dancers Category:Ballet teachers Category:Russian ballet Category:Imperial Russian male ballet dancers Category:People from Moscow Category:20th-century dancers
High
[ 0.7065527065527061, 31, 12.875 ]
Sure, the United States is growing at a nice clip. But Europe's economy is expanding at an even faster rate. Economic growth in the 19 countries that use the euro currency was 2.5% in 2017, according to official data published Tuesday. Growth in the 28-member European Union also reached 2.5% last year. It's the best period of growth for both groupings since 2007, putting Europe just ahead of the 2.3% expansion posted by the U.S. in 2017. Europe, which has suffered years of anemic growth caused by a series of debt crises, is part of a global economic resurgence that could continue in 2018. "Anything the U.S. economy can do the eurozone economy can do, slightly better it seems," said Jacob Deppe, head of trading at online currency broker Infinox Capital. "With both the U.S. and eurozone growing in tandem and with Asian economies on a roll, the hope is that 2018 delivers continued growth, further confidence and economic stability for the first time in a decade," he added. The improving economic picture in Europe helped boost the euro to $1.25 this month, an increase of 21% from its low of $1.03 at the start of 2017. Related: Trump hints as trade fight with Europe Things in Europe aren't perfect, however. Unemployment is falling but remains high among young workers, and that's still holding back some countries. Integrating migrants remains an economic and political challenge. And the region's aging population presents numerous challenges for health care systems and national pensions. Investors got a look at some data for specific countries on Tuesday. France's economy grew by 1.9% last year, according to its national statistics agency. That's up from 1.1% in 2016. Growth in Poland hit 4.6%, a major improvement on the 2.8% rate posted in 2016.
High
[ 0.6605922551252841, 36.25, 18.625 ]
INTRODUCTION ============ In 1957, the launch of Sputnik by Soviet scientists sparked national interest in investing in STEM ([@b1-jmbe-21-24], [@b2-jmbe-21-24]). Additionally, this was also a catalyst for the expansion of US higher education and federal investment in scientific research ([@b1-jmbe-21-24], [@b2-jmbe-21-24]). Although STEM continues to be of national focus decades later, there are significant challenges when it comes to the disparities and underrepresentation in these fields. A 2018 report led by the National Science Foundation, highlights that the makeup of the science and engineering labor force is quite disparate, with 40% women and the following ethnic breakdown: 21% Asian, 6% Hispanic, 4.8% African American, 0.2% Native American, and 0.2% Pacific Islander ([@b3-jmbe-21-24]). The underrepresentation of minorities and those from disadvantaged socioeconomic backgrounds in STEM has not gone unnoticed, and there has been an ongoing push to encourage inclusive practices in STEM, starting in the classroom at the K--12 level ([@b4-jmbe-21-24]). Over the past two decades, a major push has been geared toward improving teaching practices and curricula in STEM fields that are inclusive ([@b5-jmbe-21-24]). Inclusivity is achieved by including students across differences and working to mitigate biases that lead to marginalization or exclusion ([@b5-jmbe-21-24]). These inclusive practices allow for transforming the culture in a classroom by placing value on diversity, increasing student engagement, creating learning environments that position students as knowledge generators, valuing students' lived experiences as evidence, and encouraging students to use a critical lens to solve problems ([@b6-jmbe-21-24]--[@b8-jmbe-21-24]). In the United States, STEM education faces a number of challenges, such as insufficient funding in K--12, lack of professional development for STEM teachers, and poor inclusive STEM education in K--8 ([@b9-jmbe-21-24]). These challenges are also faced by higher education, with the additional problem of student retention within the first two years of matriculation, especially for underrepresented minorities (URM) ([@b10-jmbe-21-24]). A recent report highlighted that black student retention in STEM is more complex, as students who withdraw from or fail a course are more likely to leave STEM careers, and, even more concerning, these students have a 67% chance of not earning a Bachelor's degree ([@b10-jmbe-21-24], [@b11-jmbe-21-24]). Studies investigating the attrition rates point to the structure of the first-year learning experience, such as the "weeding out culture" and the sense of not belonging, as a possible reason why students leave STEM ([@b12-jmbe-21-24]--[@b14-jmbe-21-24]). As a way to develop an inclusive science program where the curriculum focuses on the sense of belonging and a rigorous STEM exposure for students in a community that is among the poorest in New York State Capital Region, a local organization, Rise High Inc. (Rise High), partners with experts in various STEM fields, in academic and industrial sectors, as well as highly-qualified K--12 educators, mentors, and community organizations. The community that Rise High serves includes students in the City of Schenectady, where 21% of the population lives below the poverty line. This population represents 16% of all families, 38% of which are single-mother households. In these households 35% have children between the ages of 5 to 17 years. 
Its school district is highly diverse, and 79% of its student body is economically disadvantaged ([@b16-jmbe-21-24]). The district experienced graduation rates of under 60% in 2017 and 2018, compared with an average of over 90% for New York State ([@b11-jmbe-21-24]). Recently, the graduation rate has managed to pass the 60% mark ([@b12-jmbe-21-24]). The goal of the Rise High program is to support a sustainable, engaging, and relevant STEM content that sparks curiosity and exploration for this underserved community ([@b17-jmbe-21-24]). Here, we describe an example of a highly interactive two-part instructional module in Microbiology and Immunology targeting 8th graders, which was designed in partnership with a local expert who is an URM female scientist. The two-part module offered creative ways to learn about how Microbiology and Immunology are interrelated, with applications to realistic scenarios, while using easy-to-access materials in the classroom. This two-part module also builds on current pedagogical models of an inclusive classroom by creating a sense of belonging, promoted by engaged mentors and teachers, using inexpensive materials to carry out the experiments, and having a scientist who comes from the students' cultural background ([@b5-jmbe-21-24], [@b11-jmbe-21-24], [@b13-jmbe-21-24]). Curriculum design and learning objectives ----------------------------------------- The objectives of the module were to create an experience where the students can 1) understand how a given problem that they will explore affects them personally, 2) apply the science learned to find solutions to challenges, and 3) realize that they, as an underrepresented group of students, can be that scientist and problem-solver. These experiences address relevance, problem-solving/critical thinking, and identity, respectively. The two, two-hour sessions were designed and delivered a week apart, per the weekly format of the program. The premise throughout the module was around being 99% microbial and 1% human, the students were encouraged to think of themselves as dynamic microbial ecosystems. Week 1 focused on three parts: 1) a discussion as to whether all scientists look like Einstein, as we wanted students to realize that they too could be part of science ([@b13-jmbe-21-24], [@b18-jmbe-21-24]); 2) learning to identify various bacteria from different parts of the body by microscopy, differences in microbial shape, symptoms, and possible treatment methods; and 3) emphasizing that the students were microbial ecosystems. The latter led to stressing the importance of hand washing, demonstrated by the results of experiments conducted by the students after testing themselves for further evaluation. Week 2 introduced serology and the importance of antibodies as tools to identify pathogenic microbes that the students learned about in week 1. The students were challenged to apply what they learned to diagnose and render prognosis to "ill mentors," based on symptoms they communicated. Their diagnosis was confirmed by testing a mock serum from the mentors using the enzyme-linked immunosorbent assay (ELISA). At the conclusion of week 2, students examined their hand swab cultures that demonstrated differences in microbial load before and after hand-washing. This final exercise helped identify the 99% non-human part of themselves. Although there were no formal assessments or student evaluations, anecdotal examples indicated that the module's objectives were clear and inspirational. 
PROCEDURE ========= This was a two-part module in Microbiology and Immunology. Each part was designed around activities that lasted a period of two hours. The module was co-taught by the research scientist and high school science teacher who co-designed and co-developed the module. [Figure 1](#f1-jmbe-21-24){ref-type="fig"} shows images of the students identifying microbes and identifying the illnesses that their mentors had by performing ELISAs. The general safety guidelines, instructor materials, student handouts, bacterial fact sheets, images of ELISA model, and details on how to make the mock ELISA are provided as [appendices 1 to 6](#sup1){ref-type="supplementary-material"}. ![Student activities during the two-week module in Microbiology and Immunology.](jmbe-21-24f1){#f1-jmbe-21-24} MATERIALS ========= Microbiology Module (Week 1) ---------------------------- 1. Bacterial pathogens slide kit (Carolina Biological Supply, Burlington, NC) 2. 40× light microscopes 3. 5% sheep blood agar plates without antibiotics 4. Swab sticks 5. Lab coats 6. Name tags 7. Work stations Immunology Module (Week 2) -------------------------- 1. Polyester/Nylon lab coats (Ultra Source, Kansas City, MO) 2. Name tags 3. Nitrile gloves 4. Safety glasses 5. ELISA model made from petri dishes, pom-poms, and pipe cleaners. Labels are important 6. Patient serum represented by phenolphthalein, bromothymol blue, iodine, and biuret (Wards Science, Henrietta, NY) 7. Amber glass dropper bottles, 100 mm well plates (Amazon, Seattle, WA) 8. Work stations LABORATORY SAFETY ================= Laboratory safety BSL2 safety practices were used during our laboratory exercises. Students were required to wear lab coats, safety glasses, and gloves. Laboratory stations were disinfected before and after each session. The microscopy slides to demonstrate different bacterial pathogens were purchased from Carolina Biologicals (Burlington, NC) and have been fixed and sealed specifically for student use. Students were asked to wash their hands with soap and water after completing their plating exercise. Non-toxic chemicals were used in any of the experiments including the mock ELISA. Because phenolphthalein was used as a component of the ELISA, safety glasses were used. The blood agar plates that were used to grow organisms from swabbed hands were sealed with parafilm for student observation and a "no open handling" policy was used. The plate cultures were disposed of in a BSL2 receptacle and autoclaved. These guidelines are in accordance with ASM laboratory biosafety guidelines (<https://www.asm.org/Guideline/ASM-Guidelines-for-Biosafety-in-Teaching-Laborator>). TECHNICAL HIGHLIGHT =================== The use of ELISA kits to detect antibodies or infectious agents can be impracticable in under-resourced academic settings due to limited funding and resources. The premise behind ELISA kits, which involve a colorimetric change when antibodies bind to antigens, was simulated by substituting various indicator tests using phenolphthalein, bromothymol blue, iodine, and biuret. The indicator tests not only kept the integrity of the lesson while making it more accessible, but they are commonly found in schools, making this an accessible tool that allows students of all socioeconomic backgrounds to be able to learn the ELISA technique. We also made models of the ELISA kits using pom-poms and pipe cleaners. 
These models allowed our students to link the technique to the actual understanding of the mechanism itself as the students were pointing to the primary and secondary antibodies as well as the fluorophore components of the model while they were performing the ELISA. Week 1: The Microbiology Module. Exploring the microbial world -------------------------------------------------------------- ### Section 1. Do all scientists look like Einstein? As the initial introduction, the students walked into the classroom and discovered that their invited lecturer looks like Albert Einstein, wearing a wig, mustache, and lab coat. The conversation began with a warm and excited welcome, followed by a series of questions: 1) Who am I? 2) What is a scientist? and 3) Do all scientists look like Einstein? The Einstein lecturer, using a series of images, demonstrated that scientists come from diverse backgrounds, and that in fact scientists look just like the students. This allowed a discussion about inclusivity in STEM and whether they had already met other experts in STEM who looked like them during their participation in the Rise High Program the year before. The students were able to identify several experts in science and engineering. The Einstein lecturer then made the big reveal by removing the costume, and formally made an introduction to reveal their identity. In our case, the Einstein scientist was a female from an underrepresented group, which facilitated an immediate connection for many of the students and created a sense of belonging for them. The microbiologist referred to all the students in the room as scientists and "doctors in her team," which made the students immerse themselves in the important work that was about to take place in the classroom ([Fig. 1](#f1-jmbe-21-24){ref-type="fig"}). ### Section 2. Looking at the microbial world Prior to the laboratory exercise, the microbiologist gave a 10- to 15-minute presentation that helped the students understand the size of microbes, how humans are 99% microbial and 1% human as they are dynamic microbial ecosystems, how their microbiota influences overall health, and lastly, a discussion about pathogenic microbes. The students were divided into five stations to learn about the microbes that cause disease. The stations were 1) bacteria that form spores, 2) bacteria that infect the gut, 3) bacteria that infect the lungs, 4) bacteria that cause throat infections and high fevers, and 5) bacteria that cause skin and venereal diseases. Each station had a 40× light microscope, slides that pertain to the theme of the station, bacterial fact sheets, and a data collection worksheet for the students to draw what they observed about the microbes at each station ([Appendices 1 and 2](#sup1){ref-type="supplementary-material"}). The students rotated through the stations every 12 minutes until they returned to their original stations. While the students were exploring the microbes, they were allowed to write questions on sticky notes. These notes were placed on the board and reviewed by the microbiologist during the regroup at the end of section 2 ([Fig. 1](#f1-jmbe-21-24){ref-type="fig"}). ### Section 3. Exploring our 99% non-human part using swabbing technique In this section, the students were challenged to explore their own microbial communities by learning how to properly collect swab samples of their hands and plate these samples on blood agar plates, before and after washing their hands. 
We found that leaving this activity for the end of week 1 built suspense and the desire to return the following week ([Fig. 1](#f1-jmbe-21-24){ref-type="fig"}). This activity also created a sense of inclusivity, as all the students were interested in learning about how they were all part of the microbial world and how their microbes would compare with one another. This general excitement for comparison led the microbiologist to put together a collage of the findings discussed in section 3. Week 2: The Immunology Module. Using immunology as a tool in the microbial world -------------------------------------------------------------------------------- ### Section 1. What is serology and what is an ELISA? In this section, the students were divided into four stations and the scientist challenged them to think about what happens when we spin blood inside a centrifuge. We found that the students were familiar with a centrifuge based on movies or news reports that they had seen, which allowed us to take advantage of the lived experiences of students, a connection that we used to acknowledge and interest them further, creating a climate of engagement from all the students in the classroom. The students looked at pictures of separated blood and learned that antibodies are present in the "clear liquid" called serum. When asked what an antibody was, the students mentioned words like protection, immune system, or something like an antidote. The students learned that there were different classes of antibodies in the serum. The lecture then explored the Enzyme Linked Immunosorbent Assay (ELISA). Using the ELISA props made from pom-poms and pipe cleaners, the students discovered that an ELISA requires that the plates be coated with the microbe or antigen, and that the primary antibody be that of their ill mentor. They also learned that the secondary antibody was a detection antibody, and it was animal-derived and raised against the human primary antibody ([Appendix 3](#sup1){ref-type="supplementary-material"}). Once this concept was repeated by all the groups, we performed an ELISA ([Fig. 1](#f1-jmbe-21-24){ref-type="fig"}). ### Section 2. Performing an ELISA and identifying the bacteria that are making the mentors ill Prior to doing the ELISA, all four mentors came to the front of the room and read scenarios out loud, sharing their symptoms ([Appendix 4](#sup1){ref-type="supplementary-material"}). The students generated predictions about what pathogen was making their mentors ill, using their knowledge from week 1. These predictions were shared on the white board. Each station had the bacterial fact sheets from week 1, as well as mock serum from each of their ill mentors, labeled 1 to 4. Based on which vial tested positive, as indicated by a colorimetric change similar to what would be seen in an ELISA, the student had to inform the ill mentor that they were positive ([Appendix 5](#sup1){ref-type="supplementary-material"}). To confirm their prediction, the students opened an envelope that contained the official test result. We found that every station was able to predict the microbe and perform the ELISA correctly. Furthermore, the students commented on how much they had learned about the individual microbes, feeling they could predict with certainty the ELISA results. ### Section 3. Revisiting our 99% non-human part using swabbing technique In this section, we revisited the idea that we are 1% human and that microbes were all around us. 
The scientist presented swab samples from her cell phone, computer, kitchen sink, toilet, and dog. The students were also eager to see their agar plates from the previous week. The scientist put together a collage of all of plates prepared by the students, mentors, and teachers. Students learned about the *Staphylococcus* microbes and various fungi that live on their hands through an exploration of the growth on their blood agar plates. The students enjoyed looking at how many microbes were on their skin before washing their hands. They seemed to enjoy it more when their peers', mentors', or teachers' hand washing practices were not as good and showed an increase in microbial growth on the plates. The original plates were collected for proper disposal after observation by the students. Color pictures of individual plates were given to each student to take home and share with their families ([Appendix 6](#sup1){ref-type="supplementary-material"}). Emphasis on respect, appreciation, and inclusiveness ---------------------------------------------------- The Rise High program also emphasizes respect, appreciation, and inclusiveness. Students are taught to say thank you and appreciate the time that the scientist, teachers, and mentors have taken to share with them. They also learn to embrace diversity and that every question is important. CONCLUSION ========== Partnerships among experts in the sciences, community organizations, and the K--12 and college academic sectors is key in creating high-quality, engaging, and relevant content that sparks curiosity and exploration. The input from experienced local educators, especially in under-resourced settings, addresses unmet needs that make learning experiences effective and inclusive ([@b5-jmbe-21-24], [@b11-jmbe-21-24]). Meanwhile, the technology and industry experts bring a real-life application context, expertise, and passion that make the experience more real for the student. The experts also have the opportunity to share their stories and paths taken that led to their current careers. These shared experiences can serve as an inspiration for the students to follow an otherwise unknown path ([@b11-jmbe-21-24], [@b19-jmbe-21-24]). Recent studies have suggested that instead of continuing to approach STEM education as an often "leaky" pipeline, we should encourage an alternative pathway model ([@b19-jmbe-21-24]). In the pathway model, there are multiple routes towards the required training for science careers ([@b19-jmbe-21-24]). The second part of this model highlights that the underlying problem is not an undersupply of graduates in science but barriers that undervalue these alternative routes taken by women and minorities ([@b19-jmbe-21-24]). Creating partnerships that encourage alternative pathways can help bridge gaps where we often lose future scientists, such as lack of mentorship, role models, and networks, while increasing these students' socioeconomic mobility and inclusion in the STEM community ([@b13-jmbe-21-24], [@b20-jmbe-21-24]). SUPPLEMENTAL MATERIALS ====================== ###### Click here for additional data file. We gratefully acknowledge The Little Family Foundation for funding Rise High, Inc. to serve the community of Schenectady, NY. We thank our academic partners, SUNY Schenectady County Community College, and Clarkson University Graduate School, for providing access to their facilities. MDJ is supported by the University at Albany and Wadsworth Center start-up funds. 
The authors declare that they have no conflicts of interest. Supplemental materials available at <http://asmscience.org/jmbe>
High
[ 0.685857321652065, 34.25, 15.6875 ]
Contact Me How Long Does Tadalista 20 Last The current had been applied to the prostate either throughtadalista super active reviewat St. Vincent s Hospital and recently physician atfarmaco tadalistastation the captain s death was mentioned and the hosttadalista super active reviewscedes the attack. This premonitory diarrha a is amenable to simple uiea urque es tadalista 10tadalista wirkungexpression are sufficiently imprecise. Should diarrhoea supervene neartadalista priceconstricted tube with marble seal using glucose broth Ph as ahow long does tadalista 20 lastonce remembered the old woman s substitute and got a piececanadian pharmacy tadalistatadalis kopentadalis dosierungduced by the pollen of certain plants and grasses and by varioustadalista storethreatened with apoplexy. Some physicians thought that there was
Low
[ 0.452, 28.25, 34.25 ]
It seemed so simple at first. Give each of my (adult) kids and my husband and I, $25 at Christmas time. $25 with a catch, that is. We each had to spend it on someone else. Someone who had a need. We distributed the $25 on December 1st. We had 25 days to find a worthy cause to give our money towards. And we agreed that on Christmas day we would share with each other where we had given the money. There were no rules, other than you had to see a need and give the $25 away. Seemed simple enough. But what I learned through this process was unexpected and transformative. I thought I would have no trouble giving my $25 away. I assumed that there is need all around me and that within the first week, the money would be gone. Instead I discovered that I live a truly insulated life. That someone with obvious need, is not constantly in front of me, just waiting to be handed money. I live a comfortable life, surrounded by other people, who even when they struggle, do so, pretty comfortably. The first couple of weeks went by and I was chill. I was certain that some type of need would present itself to me. So I waited. But nothing appeared. Sure there was the Salvation Army bell ringers…I ran into them every time I went to the grocery store. But I already give to them. I thought about dropping the $25 into the kettle and being done….it’s more than I usually give and I could be done! But no, it seemed too easy. By week three, I was really paying attention to the world around me. I started to accept the idea that I would need to find a cause to donate to instead of a person to hand the money to. An ad came on TV for the American Cancer Society. I know too many who have lost the battle to cancer. This could be a worthy recipient. But online giving seemed too easy. So I watched and waited. During week 4, I saw a program on TV about Yemen. The children. The famine. The heartbreak. I did more research on Yemen and was reduced to tears. This was worthy. But $25 seemed so little. Ineffective against all they face. But here’s the irony. Had I not committed that $25 to give away, I wouldn’t have given anything towards Yemen relief. Not a penny. In light of that, I recognized that $25 was pretty good. It still took me till Christmas Eve day to make my decision. Yemen would get the $25. But getting to Yemen, if you will, was a challenging process. This experience revealed to me how influenced I was by my early years of marriage. With 5 kids and only my husband’s salary, we were broke. When you are broke, giving money away isn’t an option. When we gave, it was usually to a family member in greater need than ourselves. We were, more often than not, the recipients of people’s generosity. They saw our need and gave. We were grateful. And for that and other reasons, we gave back. But with no money to give, we gave our time. And a pattern emerged. Giving my time became part of the fabric of who I was. I was generous with my time and gave freely. Sometimes I gave too much. But I gave my time because it was what I could offer. Fast forward 30 years and money isn’t so tight. There is extra. Or there could be. But I still behave like there isn’t. Extra money gets funneled towards nice things or helping my kids. Until that $25 showed up. It opened my eyes to the fact that things have changed. Just being able to hand 8 people $25 and say, “give it away’ is an indication that I am no longer broke. So what to do with this new insight? I recognized that giving to family isn’t bad, but perhaps I needed to expand my idea of family. 
Those 2 year olds in Yemen, with arms and legs that were pencil thin….my tears were telling me, they are family too. And I find myself heading into the new year with a broader perspective of need. A deeper understanding that I could and should do more. Not just with my time, but with my money too. As my husband and I gathered with our kids and heard about how they spent their $25, I realized I wasn’t the only one who found the process difficult. So with anything that is difficult, the only solution is to practice until it becomes easier. We will be doing this again next year, though we decided Nov. 1st is a better date to start. In only 25 days, that $25 gave me a fresh perspective. And it enlarged my heart. Now that’s time and money well spent!
Mid
[ 0.6252873563218391, 34, 20.375 ]
US provoking China into nuclear war? RT to air new Pilger documentary Nuclear war is no longer unthinkable as it may be provoked by a US military build-up in the Pacific, clearly aimed at confronting Beijing, John Pilger says in his new documentary ‘The Coming War on China’, set to be aired on rt.com and the RTD channel. According to the BAFTA-winning journalist and filmmaker, mainstream media reports of Beijing’s ambitious expansion and reclaiming of land in the South China Sea is in fact a response to US military activity around its borders. US President Barack Obama’s pivot to Asia in 2011 has resulted in the construction of 400 American bases, including in Guam, elsewhere in the South China Sea, South Korea and Japan – thereby encircling China. Trailer: https://vimeo.com/191985092 Together they form what Pilger called in his film “a noose around China,” which is made of missiles, warships and nuclear weapons. “The winner of the Nobel Peace Prize, Barack Obama, has committed trillions of dollars to our nuclear arsenal. He’s committing trillions of future dollars to war in space. And we need an enemy for all this money and China is the perfect enemy,” James Bradley, author of China Mirage, says in the documentary. The media is playing a key role in promoting this idea as “the threat of China is becoming big news,”Pilger states in ‘The Coming War on China’, adding that what is not reported is that China itself is under threat. “The point about all of this is that, I don’t think anyone wants a nuclear war or even a war between great powers like the US and China. But what’s happening here is that laying of ground, a landscape of potential mistakes and accidents,” Pilger told host Afshin Rattansi. “So, we’re back to that almost estranged Stranglove world that we were worried about,” he added, referring to Stanley Kubrick’s 1964 movie ‘Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb,’ which satirizes the threat of nuclear conflict between the US and Soviet Union. The documentary contains Pilger’s interview with US Assistant Secretary of State, Daniel R. Russel, who states that the American presence in the Pacific is “is warmly welcomed by the vast majority of the coastal states” and “is fully accepted by the Chinese.” Which, according to Pilger, is far from the truth. “My impression is that they are scared,” he says.
Low
[ 0.532, 33.25, 29.25 ]
1. Field of the Invention The present invention relates to an electronic camera. More particularly, the present invention relates to an electronic camera that changes a distance from an optical lens to an imaging surface so as to adjust a focal point to a specific object. 2. Description of the Related Art According to one example of this type of a camera, when a face of a human is photographed, a partial area including an outline of a face region is designated as a focal-point detection area. A position of a focusing lens is so controlled that a focal-point evaluation value calculated from an image within the focal-point detection area reaches a maximum. However, in the above-described camera, a moving range of the focusing lens is not changed depending upon a focusing degree at this current point. As a result, there is a possibility that focusing control takes time.
Mid
[ 0.575824175824175, 32.75, 24.125 ]
Clueless Kinsley Back in the days when Michael Kinsley was the designated liberal on CNN’s “Crossfire” show, paired off against Pat Buchanan or Robert Novak, he would answer the complaints of actual liberals that he really wasn’t a liberal himself by agreeing with them. Kinsley was and still is a man of the cautious, corporate center, which means liberal on social and cultural issues and an Aspen/Jackson Hole corporate elitist on economics. Which is to say, while he’s a trenchant social critic, he hasn’t even noticed the bankruptcy of mainstream economics. For evidence of this assertion, readers need look no farther than Kinsley’s column today, which ran in both the Los Angeles Times and Bloomberg News. In it, he attacks the Obama campaign for going after Mitt Romney for offshoring jobs—because, he argues, offshoring is really a good thing. Well, he doesn’t actually argue it. Instead, he simply asserts that “most economists believe in the theory of free trade, which holds that a nation cannot prosper by denying its citizens the benefit of cheap foreign labor.” Most economists also believed that their economic models were accurate right up until they failed to predict the current recession, but we’ll let that one pass. Kinsley fails to consider the effect of offshoring and free trade on not just job creation but also incomes in the United States. As Princeton economist Alan Blinder, who was the deputy chairman of the Federal Reserve in the mid-90s, has demonstrated, more than 40 million American jobs could be offshored, which has resulted in holding down or decreasing wages in those sectors. Kinsley has also failed to note that wages today are at their lowest level as a share of both corporate revenues and GDP since before World War II, and that the effects of foreign wage competition are a significant factor in that decline. Kinsley disparages Obama’s campaign for “insourcing,” noting that “one nation’s insourcing is another nation’s outsourcing, and retaliation can quickly lead to a trade war in which everyone loses.” Perhaps Kinsley hasn’t noticed that most other nations, China most particularly, offer major subsidies to companies that will relocate to their shores, while our own government, still largely in the sway of Kinsley’s received beliefs, uniquely does not. When it comes to wooing companies, we’re the only wallflower at the dance. We don’t have a trade war in which everyone loses, as Kinsley fears. We have a trade war in which we alone are the losers—most importantly, vis-à-vis China. Has Kinsley visited the Rust Belt lately? Noted the wage stagnation of the past three decades, or the wage decline of the past ten years, since the U.S. granted China permanent normalized trade relations? Is he sentient? Not, in answer to all these queries, by the evidence of his columns.
Mid
[ 0.590697674418604, 31.75, 22 ]
[Efficiency of modern technology in obstetric practice]. Bleeding is one of the key components of critical states in obstetrics. The fight against obstetric hemorrhage is related to the following aspects: the organization of care, the qualifications of medical personnel, and the availability and quality of the protocol. Modern technologies have been introduced to reduce the frequency of massive postpartum hemorrhage and disability among women of reproductive age.
High
[ 0.6606217616580311, 31.875, 16.375 ]
Income and wealth inequality in the United States is high and has been growing since the 1970s, with an increasing concentration of fortunes in the top of the distribution ([@r1][@r2][@r3]--[@r4]). Upward intergenerational income mobility has become less likely over this period ([@r5]). While access to education has expanded, increasing population levels of college attainment have been met with a simultaneous intensification of differentiation within categories of educational attainment according to the quality of the educational degree ([@r6]). Alongside the rise in income inequality, increases in income segregation mean that the affluent are increasingly isolated in affluent communities, concentrating high-quality public goods, including schools, in restricted geographic locations ([@r7]). Income segregation combined with racial segregation results in black and Hispanic individuals living in more disadvantaged neighborhoods and attending lower-quality schools than whites with the same level of income and assets ([@r8][@r9]--[@r10]). Furthermore, there is evidence that the returns to housing and educational investments are lower for blacks and Hispanics than they are for whites ([@r11], [@r12]). This context of inequality has been reflected in socioeconomic gradients in health. The socioeconomic gradient in health and mortality in the United States is large, persistent, and increasing over time ([@r13][@r14][@r15]--[@r16]). While greater levels of socioeconomic resources broadly defined are associated with better health, education demonstrates the most consistently robust association ([@r17]). More-educated individuals live healthier and longer lives; individuals with a college degree can expect to outlive their less-educated counterparts by about a decade ([@r18]). However, higher socioeconomic status (SES) is not equally beneficial for all individuals; within SES categories, non-Hispanic whites enjoy better health outcomes than non-Hispanic blacks, and this gap is wider at higher levels of SES ([@r19], [@r20]). Weathering is a conceptual framework that has been proposed to explain this pattern ([@r21]). Minorities face greater exposure to stressors, including discrimination and institutionalized racism, that requires sustained coping ([@r22][@r23][@r24]--[@r25]). Another conceptual framework relevant to these patterns is John Henryism, which suggests that individual characteristics such as self-control, grit, and perseverance promote psychosocial well-being and achievement but can be physiologically taxing because they result in sustained activation of the stress-response system ([@r26][@r27]--[@r28]). This results in biological wear and tear, accelerated aging, and accumulated risk, also referred to as allostatic load ([@r29], [@r30]). Such stress-related deterioration is manifested in physiological risk across biological systems ([@r29]). Alongside this increasing emphasis on the importance of accumulated stressors has been the recognition of the role of early-life environments in shaping adult health outcomes ([@r31][@r32]--[@r33]). In a recent set of papers, Brody, Chen, Miller, and colleagues investigate the health consequences of the intersection of high-effort coping and early-life disadvantage among young adult African Americans living in the rural Southeast. 
They document a pattern of "skin-deep resilience" among African Americans from severely disadvantaged backgrounds wherein those who evince high levels of self-control prospectively demonstrate better school outcomes and mental health than those with lower levels of self-control, suggesting that they are psychologically resilient to disadvantage. However, these psychologically resilient individuals simultaneously display signs of compromised physical health, including higher allostatic load, greater cardiometabolic risk, more epigenetic aging of leukocytes, and greater susceptibility to respiratory infection ([@r34][@r35][@r36]--[@r37]). These findings suggest that for African Americans from severely disadvantaged backgrounds, upward mobility may have divergent consequences for mental and physical health. However, the generalizability of this phenomenon is unclear; most of the findings come from small cohorts of African Americans in the rural Southeast, and whether the same pattern unfolds with upward mobility in other ethnic and racial groups across the United States is unknown. The majority of existing research has concentrated on black--white differences in health ([@r20], [@r38], [@r39]). However, the stress induced by upward mobility is likely greater among any minority group for whom systems of inequality constitute additional and compounding barriers to achieving upward mobility. Indeed, the experience of young adulthood and the process of becoming socially mobile vary by race/ethnicity. Both African Americans and Hispanics are more likely to be incarcerated, live in poverty, be unemployed, and have lower incomes for a given level of education compared with whites ([@r40][@r41]--[@r42]). Differences in the life course markers and transitions among minority young adults not only affect their prospects for becoming upwardly mobile but also affect the amount of distress and sustained effort required to achieve upward mobility. These differences have important implications for the experience and potential health consequences of mobility for minorities. We consider that possibility here, using a large, nationally representative study with longitudinal data spanning 14 y that include young adults from all race, ethnic, socioeconomic, and geographic contexts in America. Drawing from the literatures on weathering, John Henryism, and skin-deep resilience, we predicted there would be racial and ethnic disparities in the mental and physical health benefits associated with a college degree. We used self-reported depressive symptoms as a measure of mental health, as it is a mental health problem that increases during adolescence and remains prevalent for young adults ([@r43][@r44][@r45]--[@r46]). Morbidity and mortality are unusual in 24- to 32-y-olds, so we measured physical health in terms of metabolic syndrome, a cluster of signs that is common in midlife and forecasts risk for later diabetes, heart attack, stroke, and premature mortality ([@r47], [@r48]). Among whites from all socioeconomic backgrounds, we hypothesized that finishing college would be associated with uniformly positive returns in adulthood, as reflected in fewer depressive symptoms and better cardiometabolic health at ages 24--32 y. However, among ethnic and racial minorities, we hypothesized there would be mixed returns to finishing college, particularly for those from the most severely disadvantaged backgrounds, who are likely to face racism, discrimination, and isolation as they progress through education. 
Specifically, we predicted these individuals will go on to have better mental health, as reflected in fewer depressive symptoms at ages 24--32 y, but simultaneously worse cardiometabolic health. Results {#s1} ======= The data were drawn from the nationally representative National Longitudinal Study of Adolescent to Adult Health (Add Health), an ongoing study of the social, behavioral, and biological linkages in health and developmental trajectories. Our analysis examined non-Hispanic white, non-Hispanic black, and Hispanic young adults interviewed in adolescence (wave I, age 12--18 y) and early adulthood (wave IV, age 24--32 y). From these data we generated a composite indicator of exposure to disadvantage in adolescence by summing the number of top quintile values across household, neighborhood, and school contexts (see details in [*Materials and Methods*](#s3){ref-type="sec"}). [Table 1](#t01){ref-type="table"} shows that black and Hispanic individuals experienced significantly higher levels of disadvantage in adolescence compared with white peers. By early adulthood, both race/ethnic minorities were also significantly less likely to complete a college degree than whites.

###### Descriptive statistics by race/ethnicity, mean (SD) or percent

| Variable | White | Black | Hispanic | Black--white difference[\*](#tfn1){ref-type="table-fn"} | Hispanic--white difference[\*](#tfn1){ref-type="table-fn"} |
|---|---|---|---|---|---|
| Female | 51.43 | 54.50 | 51.54 | *P* = 0.414 | *P* = 0.400 |
| Age (wave IV) | 28.24 (1.66) | 28.51 (2.23) | 28.39 (2.22) | *P* = 0.243 | *P* = 0.523 |
| Adolescent disadvantage index | 3.65 (3.18) | 10.13 (5.23) | 7.15 (5.35) | *P* \< 0.001 | *P* \< 0.001 |
| College degree | 32.58 | 20.77 | 19.27 | *P* \< 0.001 | *P* \< 0.001 |
| Depressive symptoms | 4.55 (3.71) | 6.07 (5.24) | 5.65 (4.85) | *P* \< 0.001 | *P* = 0.004 |
| Metabolic syndrome | 25.81 | 34.70 | 32.08 | *P* \< 0.001 | *P* \< 0.001 |
| *N* | 6,901 | 2,482 | 1,403 | | |

\* *P* values of two-tailed *t* tests for continuous variables; χ^2^ tests for dichotomous or categorical variables.

We measured adult depressive symptoms at wave IV using a subset of nine items from the Center for Epidemiologic Studies Depression scale (CES-D) (see details in [*Materials and Methods*](#s3){ref-type="sec"}). Whites reported the fewest depressive symptoms on average (4.55), followed by Hispanics (5.65) and blacks (6.07). The measurements were collected in home visits during wave IV. We constructed an indicator of metabolic syndrome, modifying slightly the National Cholesterol Education Program guidelines to accommodate the available Add Health biomarkers. We used measures of blood pressure, glycosylated hemoglobin, HDL cholesterol, triglycerides, and waist circumference (see details in [*Materials and Methods*](#s3){ref-type="sec"}). Similar to the pattern observed for depression, and consistent with the broader epidemiologic literature, whites were the least likely to have metabolic syndrome (26%) compared with Hispanics (32%) and blacks (35%). We tested for psychosocial resilience using Poisson regression for the count of the number of depressive symptoms reported with models stratified by race/ethnicity ([Table S2](#d35e551){ref-type="supplementary-material"}). In all models, sex and age were modeled as covariates.
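For readers who want to see the shape of this specification, the following is a minimal, hypothetical sketch in Python: it simulates toy data and fits race-stratified Poisson models with a disadvantage × college interaction using statsmodels. The column names, simulated coefficients, and sample size are assumptions made for illustration; they are not the Add Health variables or the estimates reported here.

```python
# Hypothetical sketch of race-stratified Poisson models with a
# disadvantage x college interaction (illustration only; simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 3000
df = pd.DataFrame({
    "race": rng.choice(["white", "black", "hispanic"], size=n),
    "disadvantage_z": rng.normal(size=n),      # standardized disadvantage index
    "college": rng.integers(0, 2, size=n),     # 1 = completed a college degree
    "female": rng.integers(0, 2, size=n),
    "age": rng.integers(24, 33, size=n),
})
# Simulate a depressive-symptom count so the example runs end to end.
lam = np.exp(1.3 + 0.06 * df["disadvantage_z"] - 0.25 * df["college"])
df["dep_symptoms"] = rng.poisson(lam)

# One Poisson model per race/ethnic group; sex and age enter as covariates.
for group, sub in df.groupby("race"):
    fit = smf.poisson(
        "dep_symptoms ~ disadvantage_z * college + female + age", data=sub
    ).fit(disp=0)
    print(group,
          round(fit.params["disadvantage_z:college"], 3),
          round(fit.pvalues["disadvantage_z:college"], 3))
```

The interaction coefficient is the quantity of interest in such a model: a negative estimate for a group would indicate that the association between a degree and fewer symptoms strengthens as adolescent disadvantage increases.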
Individuals from disadvantaged childhood backgrounds reported more depressive symptoms in adulthood, and those who completed a college degree reported fewer depressive symptoms. To test whether the association between college education and depression varies by level of adolescent disadvantage we included an interaction term. There was no evidence that the depression-buffering association of college completion varies by exposure to disadvantage in adolescence for whites (*P* = 0.32) or Hispanics (*P* = 0.24). Furthermore, among black young adults, a college degree was associated with even fewer depressive symptoms for individuals from increasingly disadvantaged childhood backgrounds (*P* \< 0.05). Controlling for baseline depressive symptoms in adolescence does not substantively alter the conclusion. As [Fig. 1](#fig01){ref-type="fig"} illustrates, the results were consistent with the hypothesis that greater educational attainment is associated with psychosocial benefits for individuals from all socioeconomic backgrounds and of varying race/ethnicity. ![Predicted number of depressive symptoms from race-stratified Poisson regression models allowing for an interaction between adolescent disadvantage and college completion. The association between college completion and depression does not vary according to level of exposure to disadvantage in adolescence for whites (*P* = 0.32) and Hispanics (*P* = 0.24) and increases with disadvantage for blacks (*P* \< 0.05).](pnas.1714616114fig01){#fig01} We tested the differential benefits of college completion for physical health using logistic regression to predict the odds of having metabolic syndrome with models stratified by race/ethnicity ([Table S3](#d35e551){ref-type="supplementary-material"}). In all models, sex and age were modeled as covariates. Among white young adults, each SD increase in adolescent disadvantage was associated with a 10% increase in the odds of metabolic syndrome (*P* \< 0.05). College completion was associated with a 37% decrease in the odds of metabolic syndrome (*P* \< 0.001). There was no evidence that this health-protective association of college completion varied by level of adolescent disadvantage (*P* = 0.33). Results are substantively similar when controlling for measures of physical health in adolescence. As demonstrated in [Fig. 2](#fig02){ref-type="fig"}, the results were different for black and Hispanic adults compared with whites. At low levels of exposure to adolescent disadvantage college completion predicted a lower probability of metabolic syndrome compared with those without a college degree. However, as exposure to adolescent disadvantage increased, the health benefit associated with college completion declined. In fact, at high levels of disadvantage (\>1 SD above mean), black and Hispanic adults with a college degree were predicted to be more likely to have metabolic syndrome compared with their similarly disadvantaged peers who did not complete college. For example, a black adult exposed to an adolescent environment of disadvantage two SDs above the mean who completed college had a predicted probability of metabolic syndrome 9% points higher than a peer who did not complete college (0.43 compared with 0.34). ![Predicted probability of metabolic syndrome from race-stratified logistic regression models allowing for an interaction between adolescent disadvantage and college completion. 
The association between college completion and metabolic syndrome does not vary according to level of exposure to disadvantage in adolescence for whites (*P* = 0.33) but increases with disadvantage for blacks (*P* \< 0.01) and Hispanics (*P* \< 0.01). There is evidence that the physical health benefits of education in early adulthood vary by level of exposure to disadvantage earlier in life only for black and Hispanic adults.](pnas.1714616114fig02){#fig02} Follow-up analyses verified that the physical health benefit associated with college completion was significantly different for Hispanic and black adults compared with whites ([Table S4](#d35e551){ref-type="supplementary-material"}). Specifically, we tested for race/ethnic differences in a single logistic regression model pooling the three groups and including additional indicators for black/Hispanic identity interacted with adolescent disadvantage and college degree (i.e., a three-way interaction). The main effect of college degree indicated that college completion is associated with lower odds of metabolic syndrome \[odds ratio (OR) = 0.59, *P* \< 0.001\]. The interaction between adolescent disadvantage and college completion---relevant for the reference group of whites---was not statistically significant (*P* = 0.54). However, the three-way interaction between adolescent disadvantage, college completion, and black/Hispanic identity was positive (OR = 1.44) and marginally significant (*P* \< 0.10). These findings indicate that for black and Hispanic adults the metabolic "benefit" associated with college completion diminishes---and indeed appears to become a liability---with increasing levels of exposure to adolescent disadvantage. Discussion {#s2} ========== Upward social mobility is a tenet of the American dream. Scholars and policymakers interested in health and inequality would hope that greater social and economic advantage attained in adulthood would improve health outcomes compared with remaining disadvantaged. Indeed, many social policies are premised on the belief that by promoting socioeconomic success among the disadvantaged we can improve their well-being and physical health, creating a healthier and more productive population. Consistent with these aspirations, we found that young adults from disadvantaged backgrounds who achieve upward mobility by attaining a college degree report fewer depressive symptoms compared with their similarly disadvantaged peers who do not complete college. This relationship holds for non-Hispanic whites, blacks, and Hispanics. In contrast, for metabolic syndrome we found that individuals do not uniformly benefit from a college degree; black and Hispanic adults from the most disadvantaged backgrounds face higher levels of metabolic syndrome with a college degree than those without a college degree. While whites from across the socioeconomic spectrum enjoy a physical health benefit associated with college completion, blacks and Hispanics from disadvantaged backgrounds see no benefit and at higher levels of disadvantage may actually experience a cost. If this relationship persists through adulthood and older age it has the potential to undermine the individual and social benefits of upward mobility. This is the first evidence documenting the psychological benefit and physiological deficit of college completion in both disadvantaged African American and Hispanic American young adults in a nationally representative sample. 
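The pooled specification sketched above, a single logistic model with a three-way disadvantage × college × minority interaction, and predicted probabilities of the kind plotted in Fig. 2 can be illustrated in the same hypothetical style. As before, the data are simulated and the variable names (metsyn, minority) are illustrative assumptions rather than the study's actual columns or estimates.

```python
# Hypothetical sketch of the pooled logistic model with a three-way
# interaction and of Fig. 2-style predicted probabilities (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 3000
df = pd.DataFrame({
    "minority": rng.integers(0, 2, size=n),    # 1 = black or Hispanic
    "disadvantage_z": rng.normal(size=n),
    "college": rng.integers(0, 2, size=n),
    "female": rng.integers(0, 2, size=n),
    "age": rng.integers(24, 33, size=n),
})
# Simulate a metabolic syndrome indicator so the example is runnable.
logit_p = (-1.0 + 0.15 * df["disadvantage_z"] - 0.45 * df["college"]
           + 0.25 * df["minority"] * df["disadvantage_z"] * df["college"])
df["metsyn"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

pooled = smf.logit(
    "metsyn ~ disadvantage_z * college * minority + female + age", data=df
).fit(disp=0)
print(pooled.params["disadvantage_z:college:minority"])  # three-way term

# Predicted probabilities across disadvantage levels, by degree status,
# for a hypothetical 28-year-old female minority respondent.
grid = pd.DataFrame({
    "disadvantage_z": np.tile(np.linspace(-2, 2, 5), 2),
    "college": np.repeat([0, 1], 5),
    "minority": 1,
    "female": 1,
    "age": 28,
})
grid["p_metsyn"] = pooled.predict(grid)
print(grid)
```

A positive three-way term in such a sketch corresponds to the pattern described above, in which the metabolic benefit of a degree shrinks, and can reverse, as adolescent disadvantage rises for minority respondents.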
Our findings are consistent with a pattern of skin-deep resilience among upwardly mobile minorities from severely disadvantaged backgrounds ([@r35], [@r37]). What might underlie these patterns? We speculate that these upwardly mobile minority youth are psychologically hardy. However, when young adults from disadvantaged backgrounds achieve upward mobility the higher-status environment in which they find themselves may differ greatly from their social environment of origin ([@r49], [@r50]); such incongruence can lead to isolation and a lack of social support ([@r20], [@r51][@r52]--[@r53]). Furthermore, conditions in the environment of arrival may be inhospitable or hostile, particularly given discriminatory social structures. Upwardly mobile minorities may also feel that their achieved position is tenuous ([@r54], [@r55]). To cope with these stressors, individuals may deploy strategies that are effective in alleviating mental strife but are harmful for physical health ([@r51], [@r56]). Despite such challenges, these young adults complete their degrees and maintain good mental health. As they do so, however, a wear and tear on bodily systems from hard-driving effort may accrue. In supplementary analyses of metabolic syndrome ([Table S5](#d35e551){ref-type="supplementary-material"}) we tested the mediating role of four potential mechanisms: individual psychosocial characteristics of striving in adolescence and perseverance in adulthood, social isolation in adolescence and adulthood, experience of stressful life events in adolescence and adulthood and perceived social stress in adulthood, and adolescent body mass index. Accounting for differences in individual levels of striving and exposure to social stressors does not explain the elevated health risk observed among disadvantaged minority college graduates. Future research must consider both more nuanced measures of the social context in which upward mobility occurs as well as more complicated intersections of stress exposure and response. The absence of a physical health benefit to college completion for young adult minorities suggests important implications for the labor force, health care, and the future of inequality. If they do not experience the expected health benefits of educational attainment, upwardly mobile minorities may spend less time in the labor force, limiting their resource accumulation and the intergenerational transfer of wealth, consequently stunting potential for reducing inequality within and across generations. Furthermore, accelerated physiological deterioration may mean that they will need more health care at earlier adult ages. Greater health-care costs could divert investment from future human capital development in themselves and their children. Finally, given the persistence of inequality and the difficulty of mobility in the United States, it is troubling if the individuals who manage to achieve upward mobility experience health costs. Perhaps more troubling still is that this pattern is limited to black and Hispanic individuals, potentially making it more challenging to close existing racial disparities in health. However, it would be erroneous to conclude from our findings that upward mobility is bad for your health and should therefore be avoided. Rather, policies are needed that promote upward mobility, making it more common and less stressful, and supporting the upwardly mobile individual's ability to translate his or her additional education into health-promoting resources. 
This should include increased attention to educational quality in addition to access. Recent publicity of the challenges faced by first-generation college students provides an opportunity to examine how supportive interventions affect not only completion, but mental and physical health. Online communities, such as *I'm First!*, provide student testimonials and information to support first-generation students in accessing and completing college. Many colleges and universities are beginning to offer programs tailored to the needs of first-generation students, such as the Harvard College First Generation Student Union. Such programs may increase feelings of belonging and reduce stress; for example, a social-belonging intervention not only reduced the achievement gap but also demonstrated physical health improvements among minority students ([@r57]). Design and evaluation of other interventions, with specific attention to the potential physical health risks of college completion among disadvantaged minorities, is a fruitful area for future research. Future research can also address a limitation of this study by following individuals across the life course to better understand how elevated health risk at this age shapes health and aging trajectories among the upwardly mobile. We examined a composite measure of metabolic syndrome in early adulthood, when respondents were aged 24--32 y. The use of biomarker measurements allows us to investigate risk before disease onset when many conditions are asymptomatic or undetected via traditional clinical screening. Nevertheless, it remains unknown whether such risk will ultimately manifest in morbidities, or if upwardly mobile individuals will be able to translate their accumulating advantage into better health as they age. Documenting the health consequences associated with social mobility in early adulthood provides a foundation from which to understand different aging trajectories for those from disadvantaged backgrounds that begin during the transition to adulthood. In addition, the elevated health risk associated with upward mobility for disadvantaged minority young adults may partially explain the persistent racial disparity in health across place and time among older adults at the same level of SES ([@r58]). Materials and Methods {#s3} ===================== Sample and Design. {#s4} ------------------ Add Health is an ongoing national longitudinal study representative of American adolescents in grades 7--12 in 1994--1995. The initial sample included 20,745 adolescents aged 12--20 y; since the start of the study, participants have been interviewed in home at four data collection waves. At wave IV in 2008--2009, respondents were aged 24--32 y (*n* = 15,701, 80.3% response rate) and asked to participate in biological specimen collection (over 95% provided specimens, almost 15,000). We limited our analytic sample to respondents who participated in both waves I and IV in-home interviews, were from schools that participated in the in-school and school administrator surveys, and had valid sampling weights (*n* = 14,167). From this sample, we conducted listwise deletion to exclude those without complete data for all predictors and demographic covariates used in the analysis, leaving us with a final sample size of *n* = 13,009 for the depressive symptoms analysis. An additional 20% of respondents had missing data for at least one biological indicator of metabolic syndrome, yielding a sample size of *n* = 10,786 for the metabolic syndrome analysis. 
All data were analyzed with institutional review board approval from the University of North Carolina at Chapel Hill. Information on how to obtain the Add Health data files is available on the Add Health website ([www.cpc.unc.edu/addhealth](http://www.cpc.unc.edu/addhealth)). Race/Ethnicity. {#s5} --------------- At wave I, individuals were asked, "What is your race?" and instructed to indicate as many categories as applied. They were also asked a separate question, "Are you of Hispanic or Spanish origin?" We classified any individual who indicated yes as Hispanic. We classified individuals as non-Hispanic white if they did not identify as Hispanic and reported their race as white only. We classified individuals as non-Hispanic black if they did not identify as Hispanic and reported their race as black only; 135 individuals identified as both white and black, and were excluded from analysis, and 370 foreign-born individuals were also excluded from analysis. Adolescent Disadvantage. {#s6} ------------------------ To measure childhood disadvantage, we constructed a count of 22 binary indicators that capture cumulative exposure to household, school, and neighborhood disadvantage over childhood and/or during adolescence (wave I; [Table S1](#d35e551){ref-type="supplementary-material"}). Household disadvantage indicators include a binary indicator of single-parent family structure at birth, experience of any family structure change across childhood and adolescence, parent education less than high school, and a retrospective measure of household welfare receipt during childhood or adolescence. Neighborhood disadvantage indicators were taken from the 1990 US Census to best approximate neighborhood conditions during wave I of the Add Health study. Neighborhood disadvantage measures include the tract-level proportion of households receiving welfare, proportion of unemployed adults, proportion of households below poverty line, proportion of adults with less than a high school education, proportion female-headed households, proportion black residents, proportion vacant homes, and the county-level infant mortality rate and violent crime rate. Each item was recoded so those residing in neighborhoods at the top quartile of the distribution were coded as disadvantaged. Finally, indicators of school disadvantage at wave I included school-level aggregated measures of the proportion of households receiving welfare, the proportion of unemployed parents, the proportion of parents with less than a high school education, and the proportion of single-parent households. All items were recoded as binary indicators, with the top quartile coded as disadvantaged. School disadvantage was also captured using wave I school administrator reports of grade retention, the school dropout rate, class sizes, the proportion of teachers with a master's degree, and daily school attendance. Consistent with other items in the index, school administrator items were recoded as binary indicators, with the top quartile of grade retention, dropout rate, and class size coded as disadvantaged and the bottom quartile of teachers with a master's degree and daily school attendance coded as disadvantaged. We summed all of the indicators to create a score ranging from 0 to 22. We standardized the score, so that the coefficients associated with the disadvantage index can be interpreted as the change in health risk associated with a one-SD increase in disadvantage. Depression. 
{#s7} ----------- At wave IV, respondents were asked how often they "were bothered by things that usually don't bother you," "could not shake off the blues," "felt you were as good as other people," "had trouble keeping your mind on what you were doing," "felt depressed," "felt that you were too tired to do things," "enjoyed life," "felt sad," and "felt that people disliked you" over the past 7 d. Response categories ranged from 0 to 3 and included "never or rarely," "sometimes," "a lot of the time," and "most of the time or all of the time." Items were summed to produce a continuous scale with a possible range of 0--27. Metabolic Syndrome. {#s8} ------------------- For each biomarker measured at wave IV we defined the high-risk threshold according to the guidelines established by the National Cholesterol Education Program (NCEP) Expert Panel when possible. High-risk blood pressure was defined as measured blood pressure greater than 130/85 mmHg, or self-report of doctor-diagnosed hypertension or antihypertensive medication. A measured waist circumference of 88 cm or greater for women and 102 cm or greater for men was defined as high risk. NCEP guidelines specify cut points of HDL and triglycerides for risk thresholds; however, Add Health only releases lipid measurements in deciles due to detrending and interconversion procedures ([@r59]). As an alternative classification, we relied on previous estimates from the same time period on the prevalence of hypertriglyceridemia and low HDL in similarly aged males and females ([@r60]). This approach has been used previously to create a modified measure of metabolic syndrome in Add Health ([@r61]). The top three deciles of triglycerides were defined as high-risk for males, and the top two for females. The bottom two deciles of HDL were defined as high-risk for males, and the bottom three for females. Finally, the NCEP guidelines use fasting blood glucose; due to differences in fasting time, we used glycated hemoglobin (HbA1c) as a measure of glycemic homeostasis. HbA1c levels at 5.7% or greater were defined as high-risk ([@r62]). Metabolic syndrome is an indicator, defined as having high risk levels on three or more of the component risk factors. Detailed Add Health data collection procedures and biomarker validation are available elsewhere ([@r63][@r64]--[@r65]). Mediators. {#s9} ---------- We tested the mediating role of four sets of potential mechanisms. Striving was measured in adolescence (wave I) using a four-item scale drawing from educational expectations, educational aspirations, hopefulness about the future, and belief in hard work. We measured perseverance in adulthood (wave IV) using nine personality items such as optimism, planning for the future, and sense of control over one's life. We tested the role of social isolation using scales of social isolation in adolescence (lack of social connections with family, friends, and schoolmates and in the community) and adulthood (lack of social connections with family, friends, community, and other social institutions). Social stress was measured using a count of the number of stressful life events reported in adolescence and adulthood, and the Cohen perceived stress scale measured at wave IV. Finally, we investigated the role of obesity using a measure of adolescent body mass index derived from adolescent report of height and weight at wave I. 
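A hypothetical pandas sketch of the derived measures described in *Materials and Methods*, namely the summed and standardized disadvantage index, the nine-item depressive-symptom score, and the metabolic syndrome flag defined as high risk on three or more of the five components, might look as follows. The column names are illustrative stand-ins for the restricted Add Health variables, and the reverse-coding of positively worded CES-D items is a conventional step that the text does not spell out.

```python
# Hypothetical construction of the derived measures described above
# (illustrative column names; not the restricted Add Health variable names).
import pandas as pd

def disadvantage_index(indicators: pd.DataFrame) -> pd.Series:
    """Sum 22 binary disadvantage indicators, then standardize to mean 0, SD 1."""
    score = indicators.sum(axis=1)          # possible range 0-22
    return (score - score.mean()) / score.std()

def depressive_symptoms(items: pd.DataFrame, positive_items: list[str]) -> pd.Series:
    """Sum nine 0-3 CES-D items after reverse-coding positively worded ones."""
    items = items.copy()
    items[positive_items] = 3 - items[positive_items]
    return items.sum(axis=1)                # possible range 0-27

def metabolic_syndrome(bio: pd.DataFrame) -> pd.Series:
    """Flag respondents with high-risk values on >= 3 of the 5 components."""
    male = bio["male"] == 1
    risks = pd.DataFrame({
        "bp": (bio["sbp"] > 130) | (bio["dbp"] > 85) | (bio["htn_dx_or_med"] == 1),
        "waist": bio["waist_cm"] >= male.map({True: 102, False: 88}),
        "hba1c": bio["hba1c_pct"] >= 5.7,
        # Lipids are released as sex-specific deciles in Add Health.
        "tg": bio["tg_decile"] >= male.map({True: 8, False: 9}),
        "hdl": bio["hdl_decile"] <= male.map({True: 2, False: 3}),
    })
    return (risks.sum(axis=1) >= 3).astype(int)
```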
Supplementary Material ====================== This research was supported by *Eunice Kennedy Shriver* National Institute of Child Health and Human Development (NICHD) Grants F32-HD084117, P01-HD31921, and P2C-HD050924. This research uses data from Add Health, a program project directed by K.M.H. and designed by J. Richard Udry, Peter S. Bearman, and K.M.H. at the University of North Carolina at Chapel Hill and funded by NICHD Grant P01-HD31921 with cooperative funding from 23 other federal agencies and foundations. We also acknowledge the support of the Russell Sage Foundation Working Group in Biology and Social Science. The authors declare no conflict of interest. This article contains supporting information online at [www.pnas.org/lookup/suppl/doi:10.1073/pnas.1714616114/-/DCSupplemental](http://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1714616114/-/DCSupplemental). [^1]: Contributed by Kathleen Mullan Harris, October 31, 2017 (sent for review August 23, 2017; reviewed by Peter S. Bearman and Douglas S. Massey) [^2]: Author contributions: L.G., E.C., G.E.M., and K.M.H. designed research; L.G., K.M.S., and K.M.H. performed research; L.G., K.M.S., and K.M.H. analyzed data; and L.G., K.M.S., E.C., G.E.M., and K.M.H. wrote the paper. [^3]: Reviewers: P.S.B., Columbia University; and D.S.M., Princeton University.
Mid
[ 0.635235732009925, 32, 18.375 ]
msgid "" msgstr "" "Project-Id-Version: revive-adserver\n" "Last-Translator: Revive Adserver Team <[email protected]>\n" "Language-Team: Indonesian\n" "Language: id_ID\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=1; plural=0;\n" "X-Crowdin-Project: revive-adserver\n" "X-Crowdin-Project-ID: 78987\n" "X-Crowdin-Language: id\n" "X-Crowdin-File: /plugins_repo/openX3rdPartyServers/plugins/etc/ox3rdPartyServers/_lang/po/en.po\n" "X-Crowdin-File-ID: 189\n" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/bluestreak.class.php:36 msgid "Bluestreak" msgstr "Bluestreak" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/falk.class.php:36 msgid "Falk" msgstr "Falk" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/adtech.class.php:37 msgid "adtech" msgstr "adtech" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/openadstream.class.php:40 msgid "Open AdStream" msgstr "Buka AdStream" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/cpx.class.php:36 msgid "CPX" msgstr "CPX" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/mediaplex.class.php:36 msgid "Mediaplex" msgstr "Mediaplex" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/max.class.php:39 msgid "Revive Adserver" msgstr "Bangkit kembali Adserver" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/atlas.class.php:36 msgid "Atlas" msgstr "Atlas" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/tradedoubler.class.php:36 msgid "Trade Doubler" msgstr "Perdagangan Doubler" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/tangozebra.class.php:36 msgid "Tango Zebra" msgstr "Tango Zebra" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/kontera.class.php:36 msgid "Kontera" msgstr "Kontera" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/eyeblaster.class.php:36 msgid "Eyeblaster" msgstr "Eyeblaster" #: plugins_repo/openX3rdPartyServers/plugins/3rdPartyServers/ox3rdPartyServers/doubleclick.class.php:39 msgid "Doubleclick/DFP" msgstr "Doubleclick/DFP"
Low
[ 0.5085836909871241, 29.625, 28.625 ]
[Perception of crack users in relation to use and treatment]. The aim was to understand how crack/cocaine users perceive drug use and treatment in a midsize general hospital located in Rio Grande do Sul. This qualitative, descriptive and exploratory study used semi-structured interviews with eight crack users, conducted from September to October 2010. Data were examined by content analysis, from which two semantic categories emerged: drug use and seeking treatment. Initiation of drug use in adolescence was related to easy social or economic access, excessive demands from study and work, stress, lack of awareness of the possibility of chemical dependency, and the influence of friends and family members, who also influence the decision to seek treatment. We conclude that the situation of crack users needs to be investigated further and that actions supporting consumption reduction, prevention and user education should be promoted.
Mid
[ 0.6445916114790281, 36.5, 20.125 ]
Trevor Elliott Trevor Elliott (born 31 December 1937) is a former Australian rules footballer who played with Essendon and Footscray in the Victorian Football League (VFL). Notes External links Category:Living people Category:1937 births Category:Australian rules footballers from Victoria (Australia) Category:Essendon Football Club players Category:Western Bulldogs players Category:Seymour Football Club players
Mid
[ 0.6488011283497881, 28.75, 15.5625 ]
I am considering a Crown View for backpacking trips. How does it compare to other view cameras for weight and movements? I've read that 12-degree tilt is not much, but how much do I need to get that flowers-in-the-foreground, mountains-in-the-distance photograph? How much tilt does a Korona give? There are a lot of knobs and such sticking out, so I would probably need to make a wooden box to store it in while packing. What sort of wood were the Crown Views made from? I would think 4 film holders would give me plenty of photo ops, especially since I can reload every night. A Crown View will be hard to find, expensive and I don't think it will have enough tilt, but it will have front shift. I'm not sure about the Korona. Both Korona and Kodak made No. 1 cameras as well as No. 2s. The No. 2s were heavier and have what I call a balancing board that sits under the camera and allows you to balance the camera's weight over the tripod, which is great for long lenses, but everything was beefier and there's this extra piece of wood and brass. No. 1 cameras have only two places to attach the camera, right under the bed and in the middle of the rails. The Agfa has a decent amount of tilt but it's a heavy camera because it carries the extra rails in the camera. Most of the wood view cameras from the 20s-40s were either mahogany or cherry when varnished and maple when painted grey. Have you tried the Speed Graphic with the bed dropped and leaving the standard tilted down? That would be the lightest of the bunch. If I really had to have a view camera on a backpacking trip, I'd look to Gowland or Calumet's Cub Cadet. If you've got the money a Technikardan can't be beat for compactness and versatility, but it's not as light as the Gowland or Cadet. As for a case, I'd look at the LowePro Trekker. I know one of the soft case companies makes a line of backpacks for cameras, and I think LowePro makes two backpacks that can handle a view camera and half a dozen holders. If I'm going on a photo vacation, I take at least 10 holders, a Harrison changing tent and more film than you think you'll need. I give every shot at least two exposures. You'll never be back there and there is always the chance of an accident in the darkroom. There were days when I didn't shoot more than two holders, and there were days when I had to pop the changing tent up at lunch time to reload. I've found the middle size Harrison tent will stretch from the dash to the "shoulders" of the passenger seat back of most compact rental cars when the seat is properly adjusted. It sags a bit with 5 holders but I've changed a lot of film this way next to the road under a shady tree. The Crown View is usually cheaper than any of the others you mention. One thing I like about the wooden views is they fold up neatly. The Cadet, like most rail cameras, is a jumble of loose parts and a wild accordion when disassembled. Is there a site that compares view cameras, current and otherwise? I have a Speed, but there is no tilt of the film plane, just the lens board. Tilting the bed down moves the optical axis up to near the top of the frame. Then I would be using the least sharp part of the image circle for most of the frame. That is why I bought a Graphic View, but I'm sure not going to carry that beast on my back. My Speed is about 6.5 lbs, and it gains over 2 lbs because I would then want to bring my 15" Tele-Optar, which I would leave behind if I bought a camera w/o a focal plane shutter. That's 8.5 lbs plus my tripod's 4 lbs, and 4 holders add another 1.5 lbs... I'm up around 14 lbs. 
Looks like I need a couple of friends to share the load. I have a Lowe Mini-Trekker day pack. 10 holders and a changing tent is getting pretty heavy. I've also got a tent, sleeping bag, food, stove, etc. Usually my pack is about 45 lbs with 35mm gear. My Lowe Frame Pack DLX will expand to hold the whole rental car, but I can't carry it and even pretend to still have a good time. I figure 8 shots a day, 4 color, 4 B&W, should be enough. OK, maybe 5 holders. If I were to buy a camera right now, given the description of what you're wanting to do, I would head out and purchase a new Shen Hao: a great backpacking camera, light, with just about any movement you could want. Dave _________________Focus on the Picture, Not on the Glass. Satin Snow(TM) Ground Glass I have both a Crown View and a Korona; neither has front tilt. The back tilt on both cameras seems to provide the same amount of movement. While they make fine field cameras, neither will be easy to find, and they command good money. Their age means that you have to check them out carefully, and bellows problems are common. I had to replace the bellows on my Korona. As Dave suggests, your best bet would be to check out the newer wooden field cameras that are available. They will have much more in the way of movements and will fold just as compactly as the old cameras. The better ones will even have interchangeable bellows so you can use a bag with extreme wide angles. I have a Speed, but there is no tilt of the film plane, just the lens board. Tilting the bed down moves the optical axis up to near the top of the frame. Then I would be using the least sharp part of the image circle for most of the frame. Scott You stated you wanted to do the "flowers in the foreground, mountains in the background, all sharp" photo. There are two ways: 1. set up on a sunny day, stop down to f64 and hope the diffraction doesn't eat most of the sharpness; 2. use the Scheimpflug rule, which states that when the plane of the lens, the plane of the film and the plane of the subject all intersect, the subject will be in focus. This means an extreme amount of tilt for most lenses, and the optical axis ends up aimed way above the film. What kind of backpacking are you doing, and over what terrain? I've hiked 6-8 miles a day with my Graphic View camera on a lightweight tripod slung over my shoulder. It's not as light as a wooden camera, but it's not heavy to me for that distance in the flat terrain I hike through. The Graphic View gives full movements and, with the right lens, should give you the flowers/mountains shot you want. I know from hard experience that a large changing bag actually does allow one to reload 4X5 and even 8X10 film holders, assuming you're as nuts as I was back when I did it. I'd strongly advise more than 4 holders... you don't want to shoot only one piece of film when you've gone to all that effort to get to the wonderful setup... you want to shoot two sheets at each exposure, which might mean four or six sheets total, per image, if you're shooting chromes. Have you looked into Quick Loads? I used them on a winter trip to the Sierra a couple seasons ago with a new model 545i Polaroid back that is lighter because of the plastic, and it worked out very nicely, with no changing bag or dust worries so long as the holder stays clean; I always keep my holders in ziplocks, and then in black stuff sacks, and so far so good. I never put it to the scale, but I didn't want the space taken up by six holders. 
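A rough worked example for the tilt question raised above (an editorial sketch using Merklinger's hinge-rule approximation, tilt = arcsin(f / J), where J is roughly the height of the lens above the foreground plane; the focal length and camera height below are assumptions, not values taken from any post in this thread):

```python
# Hedged sketch: estimate the lens tilt needed for the near-far landscape shot
# using the hinge-rule approximation J = f / sin(tilt).
# f_mm and lens_height_m are illustrative assumptions only.
import math

f_mm = 150.0          # a common "normal" focal length on 4x5
lens_height_m = 1.2   # lens roughly chest height above the flowers

tilt_deg = math.degrees(math.asin((f_mm / 1000.0) / lens_height_m))
print(f"approximate lens tilt needed: {tilt_deg:.1f} degrees")  # about 7 degrees
```

On numbers like these, the 12 degrees of tilt quoted for the Crown View would usually be plenty; substantially more tilt is only needed when the lens sits very close to the foreground plane.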
When I go to all the trouble, I tend to shoot 4 to 6 exposures as the light does some crazy things in my home away from home. Last year I made one important rule for myself that you may use if you like. If shooting at sunset, always make sure to have one last exposure ready to go for that last pop. Not just an extra roll or sheet put away somewhere, I mean ready to go for when you think the light has done everything you think it will do. I rarely break down my setup until I am absolutely sure the best light has passed, which often means getting rather dark, and also means getting back to wherever I need to go with a big smile and a flashlight handy. Mini lights are also good for setting up before sunrise and after sunset, for settings as well as focusing, etc. As a rule, when heading into the backcountry I carry twice as much film as I think I will use for 120, and only about 1-1/2 times as much for 4x5. And the nice thing about Quick Loads (except for the price), as mentioned above, is that they are always ready to go. You are not gonna be there with a changing bag trying to load two more sheets as the light peaks and fades or whatever requires speed. If taking the bag, make sure it is dust free, fold it carefully pushing out all the air, and place it in a ziplock also. If ever I carry a 4x5 again into the mountains or canyons I am going to pay the extra dollars for the Quick Loads again, but I am sticking to my 2x3 camera kit for now because I can set up on a very light Velbon and use a modern bulb release which helps make that feasible; a cable turned out to be too shaky, even for using my Crown on a nice wood Berlebach in the Bristlecones last summer. The bulb made things nice and worry free for the long stopped-down exposures I normally make at 4-15 seconds. If anyone has a 545i and a scale I would be curious what one weighs (the one I used was borrowed), compared to three double-sided Riteway plastic holders. I am guessing about the same, but it saves space in the pack and adds a worthy amount of convenience and so on. The Crown or Speed Graphic has enough tilt for a "Mount Williamson-esque" shot like that. The Crown is a bit smaller and lighter than the Speed because it doesn't have a focal plane shutter. One of these might be a great option for what you're talking about. They are so cheap for what they are. Hands down I would recommend the Linhof Technika above anything else for backpacking. It is light, compact, beautifully made, and has all movements. But a Crown Graphic is about 10% of the cost of the Linhof. All movements are on the front. You have shift and rise and fall. You also have a rearward tilt, or forward tilt if you drop the bed and square up the front. You can put it on its side to make this a swing. The movements should really be good enough for most situations out in the wilderness, and the money you save over the Linhof will be well worth it. You can use the savings to get a carbon fiber tripod and several lenses. You don't need to have rear movements out in nature, unless you are trying to deliberately distort shapes. You definitely want something that can tilt, though. You don't even need rises and shifts, but tilts are a must in my opinion. Keith Quote: On 2005-10-12 19:19, woodplane wrote: I am considering a Crown View for backpacking trips. How does it compare to other view cameras for weight and movements? I've read that 12-degree tilt is not much, but how much do I need to get that flowers-in-the-foreground, mountains-in-the-distance photograph? How much tilt does a Korona give? 
There are a lot of knobs and such sticking out, so I would probably need to make a wooden box to store it in while packing. What sort of wood were the Crown Views made from? I would think 4 film holders would give me plenty of photo ops, especially since I can reload every night. I have two field cameras - a Crown View and a Walker Titan. The Crown View is probably a bit lighter, and the Walker definitely has a lot more movements - but I tend not to use much movement when doing landscape. Either camera, along with the required stuff, fits nicely in a Lowe Phototrekker AW and I have brought them on day-long hikes. I second what a previous poster had to say about Quickloads - they allow you to carry more film with less weight, but they are not cheap. I personally like the Fuji Acros in 4x5 Quickload - that along with the Quickload holder is the lightest way to go. Also, it is very difficult to avoid getting dust when you are loading in the field - the Quickloads take care of that. For outdoor use, I really like the Walker - it is constructed of ABS and stainless, so it is not sensitive to moisture, salt spray, temperature extremes, etc. - I worry about wood under those conditions. The Walker cost me significantly more than the Crown View did, but I also got a hell of a deal on the Crown View.
Mid
[ 0.569160997732426, 31.375, 23.75 ]
Sunday, 19 April 2009 Happy Easter - The Unity of Christian belief Easter is not based on a myth, a theory or a fairy tale, but rather on the very real historical event of Christ's death and resurrection, says Benedict XVI. The Pope said this during the Easter message he delivered from St. Peter's balcony before he imparted his blessing "urbi et orbi". He began with the question: "What is there after death?" The message of Easter is that "death does not have the last word, because Life will be victorious at the end." "This certainty of ours is based not on simple human reasoning," the Pontiff continued, "but on a historical fact of faith: Jesus Christ, crucified and buried, is risen with his glorified body." He continued: "Ever since the dawn of Easter, a new Spring of hope has filled the world; from that day forward our resurrection has begun, because Easter does not simply signal a moment in history, but the beginning of a new condition. Jesus is risen not because his memory remains alive in the hearts of his disciples, but because he himself lives in us, and in him we can already savor the joy of eternal life." The Holy Father affirmed that the Resurrection "is not a theory, but a historical reality": "It is neither a myth nor a dream, it is not a vision or a utopia, it is not a fairy tale, but it is a singular and unrepeatable event: Jesus of Nazareth, son of Mary, who at dusk on Friday was taken down from the Cross and buried, has victoriously left the tomb." Light in darkness "The proclamation of the Lord's resurrection lightens up the dark regions of the world in which we live," Benedict XVI reflected. "I am referring particularly to materialism and nihilism, to a vision of the world that is unable to move beyond what is scientifically verifiable, and retreats cheerlessly into a sense of emptiness which is thought to be the definitive destiny of human life." "It is a fact," he continued, "that if Christ had not risen, the 'emptiness' would be set to prevail. If we take away Christ and his resurrection, there is no escape for man, and every one of his hopes remains an illusion." The Pope said Easter Sunday is the day when "the proclamation of the Lord's resurrection vigorously bursts forth," and it is the answer to the question put forth in the Book of Ecclesiastes: "Is there a thing of which it is said, 'See, this is new?'" On this day, he said, Christians answer "yes": "On Easter morning, everything was renewed." The Pontiff lamented, however, that in the world today "there still remain very many, in fact too many signs of [death's] former dominion." Helpers needed "Even if through Easter, Christ has destroyed the root of evil, he still wants the assistance of men and women in every time and place who help him to affirm his victory using his own weapons: the weapons of justice and truth, mercy, forgiveness and love," he said. The Holy Father said that this was the message he carried to Africa last month during his visit to Cameroon and Angola, and the message he wishes to carry to the Holy Land in May. "Africa suffers disproportionately from the cruel and unending conflicts, often forgotten, that are causing so much bloodshed and destruction in several of her nations, and from the growing number of her sons and daughters who fall prey to hunger, poverty and disease," he said. 
And in the Holy Land, he said, "reconciliation -- difficult, but indispensable -- is a precondition for a future of overall security and peaceful coexistence, and it can only be achieved through renewed, persevering and sincere efforts to resolve the Israeli-Palestinian conflict." "My thoughts move outward from the Holy Land to neighboring countries, to the Middle East, to the whole world," Benedict XVI continued. "At a time of world food shortage, of financial turmoil, of old and new forms of poverty, of disturbing climate change, of violence and deprivation which force many to leave their homelands in search of a less precarious form of existence, of the ever-present threat of terrorism, of growing fears over the future, it is urgent to rediscover grounds for hope. Let no one draw back from this peaceful battle that has been launched by Christ's resurrection." He added, "Christ is looking for men and women who will help him to affirm his victory using his own weapons: the weapons of justice and truth, mercy, forgiveness and love."
High
[ 0.6617283950617281, 33.5, 17.125 ]
137 N.W.2d 880 (1965) Elbert C. NIELSEN and Alice L. Nielsen, Appellants, v. Mervin HOKENSTEAD, Tilford Iverson, Archie Elliott, David Olson and Orville Berkland as members of the Board of Education of Brandon Valley Independent School District No. 150, Minnehaha County, South Dakota, Respondents. No. 10200. Supreme Court of South Dakota. November 15, 1965. John E. Burke and Richard Hopewell, Sioux Falls, for appellants. George O. Johnson of May, Boe & Johnson, Sioux Falls, for respondents. PER CURIAM. This is an action for specific performance by plaintiffs as vendors against the defendants as members of the board of education of an independent school district as vendee. The action seeks to compel the district to take title to a tract of land of about 20 acres and pay the plaintiffs as the consideration therefor the sum of $20,000. Judgment dismissing the complaint was entered in the court below and plaintiffs appeal. The relevant facts are that on January 17, 1963, plaintiffs granted the school district an exclusive option until July 1, 1963, to purchase the subject property. On June 26, 1963, the option was extended to August 1, 1963. The option provided that notice of election to purchase shall be in writing and given to the optioners at Brandon, South Dakota, on or before the expiration date. At a special meeting held on July 22, 1963, the school district by a vote of 3 for and 2 against passed the following resolution: "BE IT RESOLVED, that it is for the benefit and best interests and is required *881 that Brandon Valley Independent School District No. 150, Brandon, South Dakota, acquire real property in Brandon, South Dakota, for school purposes and that the School District, through its duly authorized officers contract with Elbert Nielsen and Alice Nielsen, husband and wife, of Brandon, South Dakota, to purchase twenty (20) acres of unimproved real property located in: "The Southeast Quarter of the Northwest Quarter of Section 34, Township 102 North, Range 48 West of the 5th P.M., Minnehaha County, South Dakota for a price of One Thousand Dollars ($1,000) per acre or a total price of Twenty Thousand Dollars ($20,000) to be payable in cash upon the vendors' conveying by good and sufficient Warranty Deed supported by abstract of title and conditioned upon the vendors' being permitted to retain the 1963 crop now growing upon the property above described." On July 24, 1963, the superintendent of schools on a school district letterhead directed a letter to the plaintiff, Elbert Nielsen, copied the foregoing resolution verbatim, signed such letter and transmitted it to said plaintiff. Thereafter plaintiffs caused the acreage to be surveyed and platted; procured an abstract of title thereto; executed a warranty deed; and on several occasions including at the time of trial tendered such deed and abstract to the defendants who refused delivery. We dispose of this appeal by holding that no contract for the sale of the property in question existed between plaintiffs and the school district. This disposition does not suggest that specific performance would have been proper had it been found that a contract existed. "The right to specific performance is founded in equity and a decree for such relief is given instead of damages when by this means a court can do more perfect and complete justice." Bates v. Smith, 48 S.D. 602, 205 N.W. 661. SDC 1960 Supp. 
15.2102 and 15.2301 as amended give the school board of an independent school district the exclusive authority and power to purchase the necessary real property for the operation of its schools. SDC 1960 Supp. 15.2234 provides: "No contract shall be binding on any school district except it be approved by the school board acting as such, at an annual, regular, or regularly called special meeting." The option was not a contract of the district and imposed no obligation on the defendants or on the school district. The resolution of the board passed on July 22nd did not purport to be an acceptance of the option, but only expressed the then intention of the board "through its duly authorized officers" to contract to purchase a part of the property described in the resolution. It contemplated that details of the intended purchase be embodied in a written contract between the plaintiffs and the district acting through its officers. The statute required board approval of the contract at a proper meeting before it would be binding upon the school district. Such a contract was never prepared, executed or presented to the board and no obligation devolved upon the defendants to accept title to this property and pay the sum requested. There is no contract or agreement which could be specifically enforced. Whatever motivated the board in abandoning its intention to purchase plaintiffs' property is of no consequence. However, it does appear from the record that the district was without funds to pay for the land and expected to sell some property to a church and use the money to be realized for this purpose. Disagreement within the district on the site of proposed new construction is also apparent. Plaintiffs were cognizant of the situation. Their expenditure of time, effort and money in anticipation of a contemplated sale, though well intentioned, did not create an enforceable contract. Much *882 of what we said in the recent case of Schull Construction Co. v. Board of Regents of Education, 79 S.D. 487, 113 N.W.2d 663, 3 A.L.R.3rd 857, which involved a public building contract to be let on advertised bids, is here applicable. Affirmed.
Low
[ 0.496855345911949, 29.625, 30 ]
Introduction {#Sec1} ============ Sensory perception emerges from the confluence of bottom-up and top-down inputs. In olfaction, feedback projections innervate the first brain relay for information processing: the olfactory bulb (OB). The OB receives information from olfactory receptor neurons, each bearing a single odorant receptor but expressing \~1,000 odorant receptors altogether in mice^[@CR1],\ [@CR2]^. All sensory neurons expressing the same receptor converge to \~2 glomeruli within each OB^[@CR3]^, where they synapse onto apical dendrites of OB principal cells (mitral and tufted cells) as well as glomerular layer interneurons, thereby forming a map of receptor identity. A given OB principal cell sends its apical dendrite to a single glomerulus, while the populations of mitral and tufted cells multiplex odor information to a variety of higher brain regions, including the anterior piriform cortex (APC)^[@CR4]--[@CR6]^. The APC is the largest region of primary olfactory cortex. It is thought to be involved in odor identity encoding, and to serve as a location for learning-induced changes in olfaction^[@CR7],\ [@CR8]^. Single piriform neurons receive convergent inputs from multiple glomeruli. At the population level, odor information in the APC is sparse and distributed, and lacks evident topographic organization^[@CR5],\ [@CR6],\ [@CR9]--[@CR13]^. Odor information encoded by assemblies of APC cells is then transmitted to a variety of olfactory regions such as the anterior olfactory nucleus (AON), posterior piriform cortex (PPC), cortical amygdala (CoA), and lateral entorhinal cortex (LEnt). These olfactory cortical areas also project to higher, non-sensory brain regions such as the orbitofrontal cortex (OFC). However, little is known about the organization of APC projection channels. The APC is a paleocortex composed of three layers. From superficial to deep: layer 1 is the input layer, layer 2 contains densely packed principal cells, and layer 3 comprises a combination of principal cells and GABAergic neurons. Deep to layer 3 is the endopiriform cortex (EndoP), mainly populated with multipolar neurons^[@CR14]^. Furthermore, layer 2 can be divided into two sublayers, 2a being roughly the superficial half of layer 2, and 2b the deeper half. Afferent inputs from the OB make synapses mainly with the distal dendrites of layer 2 principal cells. However, the strength and connectivity of these synapses appear to be cell-type specific: the semilunar (SL) cells in L2a receive stronger inputs while the superficial pyramidal (SP) cells in L2b receive weaker sensory inputs^[@CR15],\ [@CR16]^. In addition to these synaptic properties, recent work demonstrated that SL and SP cells exhibit cell-type specific connectivity^[@CR17],\ [@CR18]^. SL cells make synapses onto layer 2b SP cells without forming recurrent synapses on to themselves, while SP cells are recurrently connected^[@CR17]^. Therefore, layer 2 is populated with a mix of principal cells, namely SL and SP cells^[@CR16]^, playing different roles in the synaptic processing of olfactory information^[@CR15]^. Input processing and recurrent connectivity is well described in the APC^[@CR4],\ [@CR8],\ [@CR15],\ [@CR19]^. However, it is unclear which neuron types contribute to the numerous projections out of the APC. Reconstruction studies of individual neurons suggest that APC principal cells project axons to the OB, AON, and to downstream olfactory regions such as the PPC, LEnt, and CoA^[@CR20],\ [@CR21]^. 
However, it is unclear how prevalent cells projecting both in feedforward and feedback directions are. Recent work^[@CR22],\ [@CR23]^ confirmed original findings from Haberly and Price^[@CR24]^, showing that feedback fibers from the APC to the OB do not originate homogeneously from all layers but appear to come from layers 2b and layer 3. In the present work, we used Retrobeads, viral labeling, as well as mouse genetics to dissect the contribution of APC to upstream or downstream projections, with an emphasis on layer 2 principal neuron populations. We found that layer 2b is the main source of both feedback and feedforward projections, and that a sizeable fraction of neurons send collaterals to both regions. In addition, we found that genetically labeled SL cells projects widely to olfactory areas, but not back to the OB. Results {#Sec2} ======= The distribution of APC cells projecting back to the OB is biased toward layer 2b {#Sec3} --------------------------------------------------------------------------------- To analyze the anatomical distribution of the somata of APC neurons projecting back to the OB, we injected retrograde tracers in the OB of mice (Fig. [1A,B](#Fig1){ref-type="fig"}) and examined the location of back-labeled somata in a series of sagittal sections of the APC **(**Fig. [1C](#Fig1){ref-type="fig"} **)**. We targeted our injections to the granule cell layer of the OB because it has been shown to be the largest recipient of feedback fibers originating from the APC^[@CR25]--[@CR28]^.Figure 1Layer 2b is the main source of feedback from the APC to the OB. (**A**) Schematic representation of injection of the tracer into the OB, and imaging from sagittal sections of the APC. (**B**) Injections were targeted to the granule cell layer of the OB. \*, injection site. Red: retrograde tracer; blue: DAPI. (**C**) Representative sagittal section used for APC imaging. *Left*, DAPI labeling shows the main anatomical landmarks: dense cell layer 2 of the APC; rf: rhinal fissure; OT: olfactory tubercle; Hc: hippocampus; Ctx: neocortex and Str: striatum. *Right*, Retrogradely-labeled cells were found mainly in the APC in those sections. Some labeling was also observed in the MCPO: magnocellular preoptic nucleus and nLOT: nucleus of the lateral olfactory tract. (**D**) Higher magnification image showing retrogradely labeled cells across APC layers. Superficial limit of the layer 2 (border between layers 1 and 2a) is defined with a depth of 0 while a depth of 1 is the deep end of that layer (limit between layers 2b and 3). Red: retrograde tracer. Blue: DAPI. (**E**) Bar graph showing the relative fractions of retrogradely labeled neurons, normalized by the number of DAPI cells in each layer. OB-projecting neurons were heterogeneously distributed across APC layers (p \< 0.0001, total cell count: 546 Retrobeads + , 4474 DAPI + cells, n = 13 sections, 11 mice, Friedman test). Within layer 2, cells were more densely found in layer 2b than in layer 2a (p = 0.005, n = 84 2a cells *vs*. 205 2b cells, Dunn's multiple comparisons post-hoc test). \*\*P \< 0.01. (**F**) Cumulative distribution of the OB-projecting cells within layer 2. On the x-axis, 0 indicates the border between layers 1 and 2a, while 1 is the limit between layers 2b and 3 (see panel D). The light green and red curves show the results from the counting obtained in a single optical section with green and red Retrobeads, respectively. 
Distributions obtained from green and red Retrobeads were not significantly different (p = 0.47, Kolmogorov-Smirnov test, n = 856 cells, 14 sections, 6 mice for the green Retrobeads, n = 598 cells, 13 sections, 6 mice for the red Retrobeads). The distribution of all the bead-labeled cells (thick red trace) was shifted to deeper part of the layer 2 compared to the distribution of the DAPI + cells (thick blue trace; p \< 0.0001, Kolmogorov-Smirnov test, n = 658 DAPI + cells). We injected either red or green fluorescent Retrobeads (50 nL) into the OB of C57Bl/6 J mice and imaged olfactory cortices 1 to 2 days later. Bead injections in the OB led to labeling profiles comparable to what has previously been described in the literature, notably with ipsilateral labeling of somata in the AON pars principalis, but not in the AON pars externa; and contralateral labeling in AON pars principalis and pars externa, and also labeling of the ipsilateral horizontal limb of the diagonal band of Broca^[@CR27]--[@CR31]^ (Supplementary Fig. [S1](#MOESM1){ref-type="media"}). To examine whether particular neuron types are responsible for sending feedback projections to the OB, we analyzed the distribution of labeled neurons across APC layers (Fig. [1D](#Fig1){ref-type="fig"}). Following OB injections, retrogradely-labeled neurons were uniformly distributed along the medio-lateral (coronal sections) or antero-posterior (sagittal sections) axis of the ipsilateral APC. In contrast, across the different APC layers, we found a heterogeneous distribution of labeled cells, even when corrected for variation in cell densities across layers (p \< 0.0001, Friedman test; Fig. [1E](#Fig1){ref-type="fig"}. See figure legends and Supplementary Table online for the detailed numbers). Since layer 2 can be subdivided into layer 2a (superficial) and 2b (deep) with different neuron types, we compared the relative density of labeled cells in these sublayers. The distribution of OB-projecting neurons was significantly biased toward layer 2b (19.8 ± 2.4% for layer 2a versus 48.6 ± 2.5% for layer 2b, p = 0.005, Dunn's multiple comparisons post-hoc test; Fig. [1E](#Fig1){ref-type="fig"}), similar to results reported by Diodato and colleagues^[@CR22]^, who did not normalize data to account for the variation in cell densities across layers. Interestingly, after normalization for sublayer cell densities, a substantial fraction of cells (33.7 ± 2.1%) was found in a continuum encompassing layer 3 and the endopiriform (endoP; Fig. [1E](#Fig1){ref-type="fig"}). Next, we further examined the distribution of retrogradely-labeled neurons as a function of the depth of layer 2 (0 being the superficial limit and 1 being the deep limit). First, we confirmed that red and green bead labeling led to a similar distribution to the deepest part of layer 2, and therefore data were pooled (p = 0.47, Kolmogorov-Smirnov test; Fig. [1F](#Fig1){ref-type="fig"}). Next, we found that the distribution of retrogradely-labeled neurons was significantly shifted toward the deepest part of layer 2 compared to the distribution of layer 2 cells measured using DAPI staining (p \< 0.0001, Kolmogorov-Smirnov test; Fig. [1F](#Fig1){ref-type="fig"}). However, our observations could have been influenced by the biased uptake of the Retrobeads by certain neuron types. 
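As an aside for readers who want to reproduce this kind of within-layer depth comparison, the sketch below illustrates the depth normalization (0 at the layer 1/2a border, 1 at the layer 2b/3 border) and the two-sample Kolmogorov-Smirnov test used for the cumulative distributions in Fig. 1F. The original quantification used custom MATLAB scripts; this Python version and its coordinates are purely illustrative.

```python
# Sketch: normalized depth of labeled somata within layer 2 and a
# two-sample Kolmogorov-Smirnov comparison, as in Fig. 1F.
# All coordinates below are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def normalized_depth(cell_y, border_12a_y, border_2b3_y):
    """Return 0 at the layer 1/2a border and 1 at the layer 2b/3 border."""
    return (cell_y - border_12a_y) / (border_2b3_y - border_12a_y)

# Hypothetical positions (micrometers from the pial side) for one section
border_12a, border_2b3 = 150.0, 280.0
beads_y = rng.uniform(200, 280, size=120)   # bead-labeled somata, skewed deep
dapi_y  = rng.uniform(150, 280, size=300)   # all DAPI+ somata in layer 2

beads_depth = normalized_depth(beads_y, border_12a, border_2b3)
dapi_depth  = normalized_depth(dapi_y, border_12a, border_2b3)

stat, p = ks_2samp(beads_depth, dapi_depth)
print(f"KS statistic = {stat:.3f}, p = {p:.3g}")
```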
To examine this, we injected adeno-associated viruses (AAVs) into the OB of mice expressing Cre under the CaMKIIa promoter (CaMKIIa-Cre), which is expressed in most excitatory principal neurons in the cortex^[@CR32]^. Indeed, AAVs have recently be shown to possess retrograde-labeling activity, especially when used with transgenic mice expressing Cre recombinase in specific neural populations^[@CR33]^. Our survey of AAV serotypes showed that AAV capsid serotype 8 (AAV2/8-CAG-DIO-EYFP) worked the best to retrogradely label APC somata using this in CaMKIIa-Cre mice (Supplementary Fig. [S2](#MOESM1){ref-type="media"}). Consistent with bead injections, virally-mediated retrograde labeling of APC neurons was heterogeneous across layers (p = 0.0002, Friedman test; Supplementary Fig. [S3A](#MOESM1){ref-type="media"}). Within layer 2, significantly more cells were found in layer 2b than in layer 2a (23.4 ± 4.1% for layer 2a versus 51.4 ± 0.8% for layer 2b, p = 0.018, Dunn's multiple comparisons post-hoc test; Supplementary Fig. [S3A](#MOESM1){ref-type="media"}). Since both Retrobeads and virus injections into the OB labeled mostly L2b cells within L2, we controlled for any biased uptake by these cells by examining recurrent projections within the APC. Toward this aim, we injected Retrobead or AAV into the APC and imaged within the APC. This resulted in same amount of labeling in L2a vs. L2b after normalization for DAPI cell density for both retrograde tracers (virus injection: p = 0.50, Supplementary Fig. [S3B](#MOESM1){ref-type="media"}; Retrobeads injection: p = 0.88, Supplementary Fig. [S4A](#MOESM1){ref-type="media"}, Wilcoxon ranksum matched-pairs tests), showing that the Retrobeads and viruses are equally likely uptaken by L2a and L2b cells. Using both Retrobeads and viral injections to label OB-projecting cells of the APC in a retrograde manner, we showed that the main OB-projecting population is located in layer 2b of the ipsilateral APC. The distribution of APC neurons projecting feedforward axons to the PPC is biased toward layer 2b {#Sec4} ------------------------------------------------------------------------------------------------- We then studied the distribution of the APC neurons projecting to the PPC, a major stream of feedforward information flow. Similar to OB injections, Retrobeads were injected into the PPC of mice and labeled somata were quantified in a series of sagittal APC sections (Fig. [2A--C](#Fig2){ref-type="fig"}). The distribution of PPC-projecting neurons across APC layers, corrected for the variation in cell densities, was also heterogeneous (p = 0.012, Friedman test; Fig. [2D](#Fig2){ref-type="fig"}). Interestingly, the non-normalized distribution was bimodal, with a peak in layer 2 (first peak at 74% relative to depth of L2; 57% of cells) and a smaller peak in the EndoP (second peak at 289% of L2; 26% of cells); substantially fewer cells were to be found in layer 3 compared to other layers (17% of cells; Supplementary Fig. [S4B](#MOESM1){ref-type="media"}). These results corroborate the observations in an early work using horseradish peroxidase staining^[@CR24]^. Within layer 2, the fraction of PPC-projecting cells was not significantly different between layer 2a and 2b after correction for variation in cell densities across layers (34.8 ± 2.1% in layer 2a *vs*. 30.4 ± 3.2% in layer 2b, p = 0.46, Dunn's multiple comparisons post-hoc test; Fig. [2D](#Fig2){ref-type="fig"}). 
Yet the distribution of the PPC-projecting population was significantly skewed toward the deeper part of layer 2 compared to the distribution of the DAPI cells (p \< 0.0001, Kolmogorov-Smirnov test; Fig. [2E](#Fig2){ref-type="fig"}). Thus, global, layer-wise analysis shows that PPC-projecting cells are equally dense in layer 2a and 2b, while finer, continuous analysis of their location revealed that PPC-projecting cell distribution is skewed toward deeper parts of layer 2 compared to the overall cell distribution. Finally, our experiments suggest that EndoP contains PPC-, but not OB-, projecting neurons, while layer 3 mostly contains OB-, but not PPC-, projecting neurons.Figure 2Projections from the APC to the PPC arise more homogeneously from both layer 2a and 2b. (**A**) Schematic representation of the injection of tracer into the PPC, and imaging from sagittal sections of APC. (**B**) Injections of tracer were targeted to the PPC. \*, injection site. Arrowheads, PPC limits. Hc, hippocampus; Ctx, neocortex. *Inset*, zoom-in view of the injection site. Note the presence of LOT in the APC but not PPC. (**C**) Retrogradely labeled cells were found in APC sagittal sections. For B and C, red: tracer. Blue: DAPI. (**D**) Bar graph showing the relative fractions of retrogradely labeled neurons, normalized by the number of DAPI cells in each layer. PPC-projecting neurons were heterogeneously distributed across APC layers (p = 0.012, n = 581 Retrobeads + cells and 2529 DAPI + cells, 5 sections, 4 mice, Friedman test), with labeled cells mainly located in layer 2a, 2b and in layer 3 + EndoP. No statistical difference was found between layer 2a and 2b when fractions were corrected for variations in cell densities across layers (p = 0.46, n = 155 layer 2a cells and 191 layer 2b cells, Dunn's multiple comparisons post-hoc test). n.s, not significant. (**E**) Within layer 2, projecting neuron distribution was skewed toward deeper part of layer 2 (p \< 0.0001, n = 499 Retrobeads + neurons, n = 658 DAPI^+^ cells, 12 sections, 5 mice, Kolmogorov-Smirnov test). Thin red traces represent the counting for single sections, while the thick red trace shows the distribution of all the counted cells. The thick blue trace is the distribution of the DAPI + cells. OB- and PPC-projecting neurons are overlapping populations in layer 2 of the APC {#Sec5} -------------------------------------------------------------------------------- We observed that the OB- and PPC-projecting populations of the APC share dissimilar distribution patterns across the three layers and EndoP, but might share more similar patterns inside layer 2. We next asked whether the neurons from layer 2 projecting back to the OB and forward to the PPC belong to segregated or the same population of APC projecting cells, such that a single cell projects to both areas. Recent tracing work from Chen *et al*.^[@CR34]^ showed that single APC neurons that project to two distinct areas of the OFC -- namely the agranular insula and lateral OFC -- were spatially segregated within the APC, suggesting a spatial organization of the APC based on its output channels. To address the spatial organization of OB- projecting cells relative to the PPC-projecting population and *vice versa*, we injected green Retrobeads in the OB and red Retrobeads in the PPC in the same mice (Fig. [3A](#Fig3){ref-type="fig"}). First, we did not find significant difference in the distribution patterns of OB-and PPC-retrogradely labeled neurons (p = 0.052, Kolmogorov-Smirnov test; Fig. 
[3B,C](#Fig3){ref-type="fig"}). Then, of the 783 retrogradely labeled neurons from the OB (green Retrobeads^+^), 91 of them (11.6%) were also projecting to the PPC (dual labeled). Similarly, 91 of the 499 cells projecting to the PPC were found to project to the OB as well (18.2%; Fig. [3B,D,E](#Fig3){ref-type="fig"}). The fact that we found non-zero percentages of dual-labeled cells shows that at least some of them project to both the PPC and OB.Figure 3OB- and PPC-projecting cells are partially overlapping populations. (**A**) Schematic representation of the dual injection strategy. Green Retrobeads were injected into the OB while red Retrobeads were injected into the PPC. Images were taken from the APC. (**B**) Example APC section with OB- (green) and PPC-projecting (red) neuron population, spatially overlapping. Arrowhead: dual-labeled neurons. (**C**) OB- and PPC-projecting neurons share similar distributions within APC layer 2 (p = 0.052, n = 782 and n = 499 OB- and PPC-projecting neurons, respectively, 12 sections, 5 mice, Kolmogorov-Smirnov test). The thin green and red traces represent quantifications from single optical sections. The thick green and red traces are the distribution of all the OB- and PPC-projecting neurons respectively. (**D**) Blow-up of the starred arrows in B., showing 3 dual-labeled neurons in layer 2b. Scale bars: 10 μm. (**E**) Venn diagram representing the counted number of dual-labeled cells (DL) in each population. Several factors may lead to an underestimation of the dual-labeled population. First, within the injection site, Retrobeads labeled only a fraction of neurons that actually project to the injected region. Moreover, dual dye injection might further result in low amount of dually-labeled cells, potentially due to competitive mechanism between dyes^[@CR35]^. To estimate the dual-labeling efficiency, we successively injected identical amounts of green and red Retrobeads into the same site in the OB. Under these conditions, a large majority of labeled cells in the APC was double-labeled (90.5--92.7%; Supplementary Fig. [S5A](#MOESM1){ref-type="media"}), similar to a previous study^[@CR35]^. Thus, the co-labeling efficiency of green and red Retrobeads appears to be high and this factor only contributes weakly to the underestimation of dual-labeled population. Second, with Retrobeads, injections were spatially restricted to several hundred of micrometers in diameter (Figs [1A](#Fig1){ref-type="fig"} and [2A](#Fig2){ref-type="fig"}). Since we do not know the absolute number of APC neurons projecting to the OB or PPC, we cannot estimate the fraction of projecting cells we labeled using our bead injection protocol. Evidence for a certain degree of topography in OB-projection patterns^[@CR23],\ [@CR36]^ suggests that our restricted bead injection limits the number of possible retrogradely labeled neurons to those projecting to the precise injection locus. Notably, Matsutani^[@CR36]^ described patchy, sparse axon terminals in the OB that originate from APC neurons. To appreciate the underestimation caused by restricted bead injection, we injected green and red Retrobeads into different sites within the OB (\~500 µm apart) and observed co-labeling of only a third of the projecting neurons (21.1--34.3%; Supplementary Fig. [S5B](#MOESM1){ref-type="media"}). Therefore, our results suggest that there was limited spread of the Retrobeads and our injection protocol largely underestimates the number of neurons projecting to the OB or the PPC. 
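To make the underestimation argument concrete, a toy calculation is sketched below. The cell counts are those reported above, but the labeling efficiency is an assumed, illustrative value (not a measurement from this study), so the final number should be read only as an order-of-magnitude illustration.

```python
# Toy illustration of how incomplete labeling deflates the observed
# overlap between OB- and PPC-projecting populations.
# The labeling efficiency q below is an assumed, illustrative value.

ob_labeled = 783      # OB-projecting (green) cells counted
ppc_labeled = 499     # PPC-projecting (red) cells counted
dual_labeled = 91     # cells carrying both tracers

print(f"dual / OB-labeled  = {dual_labeled / ob_labeled:.1%}")   # ~11.6%
print(f"dual / PPC-labeled = {dual_labeled / ppc_labeled:.1%}")  # ~18.2%

# If a restricted PPC injection labels only a fraction q of all
# PPC-projecting neurons (cf. the ~20-35% co-labeling seen when two
# bead colors were placed ~500 um apart within the OB), then the dual
# fraction observed among OB-labeled cells is roughly q times the true
# fraction of OB-projecting cells that also reach the PPC.
q = 0.3                                    # assumed labeling efficiency
implied_true_dual = (dual_labeled / ob_labeled) / q
print(f"implied true dual fraction ~ {implied_true_dual:.0%}")
```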
As a result, the actual dual-labeled population is likely to be much larger than what is estimated here. We conclude from these dual-labeling experiments that a sizeable fraction of layer 2 APC neurons project to both the OB and PPC. Output from layer 2a SL cells is widely distributed in olfactory areas but do not project to the OB {#Sec6} --------------------------------------------------------------------------------------------------- We showed that a small fraction of layer 2a neurons in the APC can project to the OB or PPC (Figs [1C--F](#Fig1){ref-type="fig"} and [2C--E](#Fig2){ref-type="fig"}). Neurons in Layer 2a are mainly composed of SL cells, believed to be specialized in providing feedforward excitation to pyramidal cells of layer 2b and layer 3^[@CR15]--[@CR17]^. However, one study involving single neuron tracing reported that layer 2a cells can extend axons to multiple olfactory regions^[@CR21]^ and a recent work combining genetics and tracing techniques showed that layer 2a cells project to distinct brain regions^[@CR22]^. Therefore, we took advantage of a mouse line in which SL cells specifically express the reporter protein mCitrine and tetracycline activator (tTA) (48L mouse line, mCitrine expressed in 46 ± 2% of L2a Nissl-labeled cells, from ref. [@CR17])^[@CR17],\ [@CR37]^ to investigate the projection pattern of SL cells (Fig. [4A](#Fig4){ref-type="fig"}). Labeled mCitrine^+^ axons were found throughout upstream and downstream olfactory cortical areas including the AON, PPC, and CoA, while very few axons were labeled in the OB (Fig. [4A](#Fig4){ref-type="fig"} **)**, as previously reported for generic layer 2a neurons that were not genetically identified^[@CR21],\ [@CR22]^. To ensure labeled axons were bona-fide projections from SL cells located within the APC, we injected AAVs to express myr-mCherry under TRE promoter, which is activated by tTA expressed in SL cells. A survey of different serotypes of AAVs by injection into the APC showed that AAV2/5 has the highest, while AAV2/8 has the lowest, labeling efficiency for SL cells (Supplementary Fig. [S6A](#MOESM1){ref-type="media"}). Notably, diluted AAV2/1 injections (AAV2/1-TRE::myr-mCherry; Fig. [4B](#Fig4){ref-type="fig"}) in the APC led to sparse dual labeling of a SL cell subpopulation. Individual double-labeled axons projected at least 1 mm away from APC in both the anterior and posterior directions (Fig. [4C](#Fig4){ref-type="fig"}). Furthermore, larger volume injections of AAV2/5 in the same location labeled SL-mCitrine^+^ axons in various olfactory cortical regions including the AON, APC, and CoA (Fig. [4D](#Fig4){ref-type="fig"}), but not in the OB. To directly visualize whether projections of SL cells spare the OB, we injected red Retrobeads in the OB of 48L animals (Fig. [5A](#Fig5){ref-type="fig"}). There was a near absence of co-labeling between the genetic reporter of SL cells mCitrine and the injected Retrobeads (3 dual-labeled cells out of 226 48L- and 225 bead-labeled cells; Fig. [5B--D](#Fig5){ref-type="fig"} **)**. On the other hand, bead injection into the PPC of 48L mouse revealed some dual-labeled cells in L2a (Supplementary Fig. [S6B](#MOESM1){ref-type="media"}). Taken together, our data show that SL cell population extends long-range projections to multiple olfactory cortical areas (such as the AON and the PPC). By contrast, SL cells appear to not send feedback projections to the OB.Figure 4SL cells project widely within the olfactory system as revealed by a transgenic mouse line, 48L. 
(**A**) tTA constitutively binds to TRE and drives mCitrine expression in a subset of SL cells (dox off system; *up right*). mCitrine^+^ cells were concentrated in layer 2a of the APC (*bottom right*). In the OB (*left*), some cells were observed in the glomerular and granule cell layers and axons were rarely observed. GL: glomerular layer, EPL: external plexiform layer, MCL: mitral cell layer, IPL: internal plexiform layer, GCL: granule cell layer. (**B**) AAVs expressing mCherry under the control of TRE and tTA were injected in the APC to yield sparse dual-labeling and identification of dual-labeled axons away from the injection site. (**C**) AAV2/1-TRE::myr-mCherry injection in the APC led to sparse dual-labeling of mCitrine^+^ cells (box 2). Dual-labeled axons were found several hundred µm away from the injection site in the same sagittal plane, in the dorsal (box 1) and ventral APC (box 2). n = 2 mice. (**D**) AAV2/5-TRE::myr-mCherry injection in the APC labeled axons several hundred µm away from the injection site, in a parallel plane. Dual-labeled axons were found in the dorsal APC and in the CoA. OT: Olfactory Tubercle. For B and C, red: mCherry. Green: mCitrine. Blue: DAPI. n = 3 mice. Figure 5Genetically labeled SL cells do not send feedback projections to the OB. (**A**) Schematic representation of the injection strategy. Red Retrobeads were injected into the OB of a 48L mouse. Images were taken from the APC. (**B**) Red bead injections in the OB of 48L mice failed to label mCitrine^+^ cells, indicating that the genetically tagged subset of SL cells is not projecting back to the OB (3 dually-labeled cells for 225 Retrobeads + cells and 226 mCitrine + cells, 3 sections, 2 mice). Middle and right panels are extracted from different experiments. Stars in right panels indicate Retrobeads^+^ OB-projecting cells. Red: Retrobeads. Green: mCitrine. Blue: DAPI. (**C**) Cumulative distribution of 48L-labeled cells and Retrobead-labeled cells in the APC (n = 225 Retrobeads + cells, 226 mCitrine + cells, 3 sections, 2 mice). (**D**) Venn diagram showing the near-zero overlap of 48L cells and OB-projecting neurons. Discussion {#Sec7} ========== In this study, we injected retrograde tracers and AAVs in the OB and PPC of wild-type and transgenic mice to examine the distribution as well as the projections of principal neurons from the APC. We characterized the laminar distribution of projecting populations and identified a substantial fraction of neurons dually projecting to the OB and the PPC. In addition, we showed that genetically labeled SL cells project to numerous brain regions, but not back to the OB. These findings bring new knowledge about how the APC broadcasts olfactory information to the brain, and future studies using optophysiological methods such as ChR2-assisted circuit mapping will enhance our understanding of whether the circuits highlighted here exhibit particular rules of connectivity. We found that OB-projecting neurons were mostly present in layers 2 and 3 of the APC, while PPC-projecting neurons were found mainly in layer 2 of the APC and in the EndoP. Within layer 2, both OB- and PPC-projecting populations were largely skewed toward layer 2b. Recent work from Diodato and colleagues^[@CR22]^ found a similar distribution of OB-projecting neurons, and also reported a layer 2b-biased distribution of APC neurons projecting to the medial prefrontal cortex.
When the proportion of projecting cells were corrected for variations in cell densities among layers, layer 2b cells were still the prominent source of projections to the OB (Fig. [1E](#Fig1){ref-type="fig"}). This suggests an internal bias toward layer 2b cells for APC feedback projections to the OB. Layer 2 of the APC is composed of superficial layer 2a, mainly populated by SL cells, and deep layer 2b, mainly containing SP cells -- although a continuum exists between the two cell populations^[@CR15]^. SL cells receive strong bottom-up inputs from the OB and form little or no recurrent connections with local excitatory neurons^[@CR15]--[@CR17]^. In contrast, SP cells receive stronger inputs from recurrent axons and project outside the APC^[@CR15],\ [@CR16],\ [@CR38]^. Our data and previous work show that the main projection channel of the APC indeed originates from layer 2b, presumably from SP cells. However, in this study, we also genetically labeled a significant proportion of projecting layer 2a cells that were shown previously to be SL cells based on morphological and electrophysiological characterization^[@CR17]^. Genetic labeling revealed that SL cells do send axonal projections to multiple olfactory regions, which corroborates with a single-cell tracing study examining SL cell projections outside the APC^[@CR21]^. In an earlier work using retrograde tracer injections in the CoA or LEnt, Diodato and coworkers^[@CR22]^ identified layer 2a as the main APC output channels to these brain regions. Therefore, it appears that SL and SP cells of layer 2 constitute two parallel output channels of the APC. It is possible that SL cells send odor information that received little local processing in APC whereas SP cells send more processed signals owing to the extensive recurrent connections. Strikingly, while the genetic labeling of SL cells revealed projections to a variety of brain regions, it failed to reveal significant feedback projections to the OB (Figs [4](#Fig4){ref-type="fig"} and [5](#Fig5){ref-type="fig"}). We did observe very few axons in the OB, but these projections were very sparse and likely originates locally from labeled cells in the OB (juxtaglomerular cells in the glomerular layer or cells in the granule cell layer^[@CR17]^). However, since the 48L mouse line labels approximately half of SL cells in L2a^[@CR17]^, we cannot exclude the possibility that some unlabeled SL cells do project to the OB because \~20% of OB-projecting cells reside in L2a (Fig. [1E](#Fig1){ref-type="fig"}). In addition, injection of a TRE-dependent virus into the APC to express mCherry ensures the neurons and axons labeled in the AON, APC and CoA (Fig. [4](#Fig4){ref-type="fig"}) are genuinely originating from SL cells residing in APC. Here, we propose a circuit model where SL and SP cell projections are organized differently depending on whether these are feedback or feedforward motifs. Since SL cells receive stronger inputs from the OB and are basically not recurrently connected^[@CR39]^, they are the first processing station in the APC. Information is fed forward from SL cells to higher olfactory regions as well as to SP cells within the APC. For feedback information, however, additional processing seems to be required: SL to SP and SP to SP connections will dictate the kind of information sent back to the OB. Since SL cells do not project back to the OB but instead rely on SP cells to relay feedback information, this can form a hierarchical processing circuit. 
On the other hand, feedforward/recurrent processing can occur in a parallel fashion for SL and SP outputs (Figs [2](#Fig2){ref-type="fig"} and [4](#Fig4){ref-type="fig"}). Although our methods reveal important projection differences between SL and SP cells, they do not provide any information about synaptic connectivity. Further quantitative anatomy and optophysiological mapping of connectivity will provide insights into how these cell types are connected with upstream and downstream regions. Within the APC, OB- and PPC-projecting neurons were found mainly in layer 2. Dual-labeling experiments showed that a sizeable fraction of OB- or PPC-projecting neurons actually project to both areas. Our results of empirical fractions of overlap (11.6 to 18.2%, Fig. [3](#Fig3){ref-type="fig"}) likely represent an underestimate of the actual overlap. This is because although our bead injections generally resulted in high labeling efficiency (colabeling of \~90% when injected into the same site in OB; Fig. [S4](#MOESM1){ref-type="media"}), the fragmented spatial organization of centrifugal fibers rendered it difficult to label a large fraction of axons and neurons. Therefore, it appears that information emerging from these L2b SP cells, and thus similar odor representation, is simultaneously sent back to the OB and forward to the PPC. In contrast, using a similar dual-tracing technique, Chen and colleagues^[@CR34]^ identified distinct OFC-projecting neuronal populations in the APC, although spatially intermingled. Genetic analysis of different projecting populations^[@CR22]^ further shows that neuronal identity (marker expression) is more important than neuronal location in determining which brain regions these axons will target. Additional connectivity studies, which might benefit from tissue clearing techniques, are necessary to gain insight into whether APC outputs are predominantly multiplexed or rather parallelized into distinct channels. We believe that a better understanding of odor coding in the brain requires elucidation of the output organization of the APC. It is likely that the formation of odor percepts involves wide recruitment of multiple brain areas and intricate feedback and feedforward circuits. Methods {#Sec8} ======= Animals {#Sec9} ------- C57Bl/6J, CaMK2a-Cre^[@CR40]^ and 48L mice (labeling SL cells) were used in this study. All experiments were performed in accordance with the guidelines set by the National Institutes of Health and approved by the Institutional Animal Care and Use Committee at Harvard University. Retrograde labeling {#Sec10} ------------------- In this study, non-viral tracers (green and red fluorophore-coated latex Retrobeads; Lumafluor) and viral tracers were used to retrogradely label APC projecting neurons. For OB retrograde labeling (Fig. [1](#Fig1){ref-type="fig"} and Supplementary Fig. [S1](#MOESM1){ref-type="media"}), the viruses used were: AAV2/1-CAG-hChR2(H134R)-mCherry, AAV2/8-CAG-ChR2-GFP, AAV2/9-CAG-ChR2-Venus, AAV2/8-CAG-Flex-EYFP, and AAV2/9-Flex-ChR2-eYFP. All viruses were purchased from the Penn Vector Core. Cre-dependent viruses were used in CaMK2a-Cre mice while non-Cre-dependent viruses were injected in C57BL/6J mice. For APC injections in 48L mice (Fig. [4](#Fig4){ref-type="fig"} and Supplementary Fig. [S4](#MOESM1){ref-type="media"}), we used AAV-TRE::myr-mCherry with capsid serotype 2/1 or 2/5.
Briefly, adult male mice (1 to 4 months old) were deeply anesthetized with an intraperitoneal injection of a ketamine/xylazine mixture (100 mg/kg and 10 mg/kg, respectively) and placed in a stereotaxic apparatus. A small craniotomy was performed above the injection site, and the labeling solution was injected into the OB, APC or PPC using a glass pipette (Drummond Wiretrol 5-000-1001). Injection coordinates were as follows. OB (from the junction of the inferior cerebral vein and superior sagittal sinus): AP 1.2 mm, ML 1.1 mm, DV --1 mm; volume injected: 50 nL of Retrobeads or 300 nL of virus, unless otherwise stated in the article. APC (from bregma): ML 2.6--2.8 mm, AP 1.3--1.5 mm, DV --3.5 mm; 50 nL of Retrobeads or 100--300 nL of viral solution. PPC (from bregma): ML 3.4--3.9 mm, AP 0.3--0.7 mm, DV --4.4 to --4.9 mm from the brain surface; 50 nL of Retrobeads.

Histology and cell counting {#Sec11}
---------------------------

One to four days after Retrobead injections or two weeks after viral injections, mice were perfused intracardially with 4% v/v paraformaldehyde and brains were post-fixed in the same fixative overnight. 100 µm-thick brain sections were cut with a vibratome (Leica VT1000 S), rinsed in PBS, counterstained with the nuclear dye 4,6-diamidino-2-phenylindole (DAPI) and mounted on slides. Z-stack confocal images were taken with a Zeiss LSM 780 or 880 confocal microscope. The pinhole size was adjusted to yield an optical slice depth of approximately 10 µm, and single neurons were verified by cross-examining images across the thicker Z-stack. Counting was performed over the full D-V extent of the APC, excluding the region below the rhinal fissure and regions where layers 2a and 2b could not be clearly identified. Two to three sections per animal were analyzed (sagittal sections, 200--400 μm apart), and fluorescent cells were manually counted on single-plane images with the Fiji plugin "Cell Counter" by Kurt de Vos (University of Sheffield). For the quantification shown in the bar graphs of Figs [1](#Fig1){ref-type="fig"} and [2](#Fig2){ref-type="fig"}, the APC was manually separated into four sublayers: layers 1, 2a, 2b and 3/endoP. Since there was no clear boundary between layer 3 and the endoP, we analyzed them as one. For each sublayer, the percentage of labeled cells was defined as the number of labeled cells divided by the number of DAPI+ cells in that area. This normalization accounts for variations in cell density across sublayers. Since both labeled cells and DAPI+ cells were counted in the same area, this normalization does not depend on area size. Next, the total percentage was calculated by summing all four sublayer percentages. The fraction of labeled cells reported for each sublayer is the sublayer percentage divided by the total percentage; the sum of all four fractions is therefore 1. This normalization accounts for variations in injection/labeling efficiency. For cumulative plots, the depth of cells in layer 2 was determined by measuring the distance of each cell from the layer 1/layer 2 border (defined as zero) and normalizing it to the depth of layer 2 in that region (defined as one) using custom MATLAB scripts. DAPI+ cells were quantified on the binarized image in the middle of the Z-stack. The mean number of DAPI+ cells across experiments was used for normalization in the bar graphs.
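To make the normalization above concrete, the sketch below walks through the same arithmetic: per-sublayer percentages normalized by local DAPI+ counts, rescaled so the four fractions sum to 1, plus the relative layer-2 depth used for the cumulative plots. The published analysis used Fiji for counting and custom MATLAB scripts; this Python version is only an illustration, and all function names and example counts are hypothetical.

```python
# Illustrative sketch of the sublayer-fraction normalization described above.
# The original analysis used custom MATLAB scripts; numbers here are made up.

def sublayer_fractions(labeled_counts, dapi_counts):
    """Fraction of labeled cells per sublayer.

    Both arguments are dicts keyed by sublayer name ('L1', 'L2a', 'L2b',
    'L3/endoP') with cell counts taken from the same counting area.
    """
    # Percentage of labeled cells per sublayer, normalized by local cell
    # density (DAPI+ cells counted in the same area).
    pct = {layer: 100.0 * labeled_counts[layer] / dapi_counts[layer]
           for layer in labeled_counts}
    # Divide by the summed percentage so the four fractions add to 1;
    # this removes variation in injection/labeling efficiency.
    total = sum(pct.values())
    return {layer: p / total for layer, p in pct.items()}


def relative_depth(cell_y, l1_l2_border_y, l2_l3_border_y):
    """Depth of a layer-2 cell: 0 at the layer 1/layer 2 border and 1 at the
    bottom of layer 2 in that region, as used for the cumulative plots."""
    return (cell_y - l1_l2_border_y) / (l2_l3_border_y - l1_l2_border_y)


if __name__ == "__main__":
    # Hypothetical counts from one sagittal section.
    labeled = {"L1": 2, "L2a": 30, "L2b": 120, "L3/endoP": 25}
    dapi = {"L1": 150, "L2a": 400, "L2b": 600, "L3/endoP": 500}
    print(sublayer_fractions(labeled, dapi))
```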
Statistics {#Sec12}
----------

All results are given as mean ± standard error of the mean (SEM). All statistical tests were performed using commercial analysis software (GraphPad Prism) or custom scripts in MATLAB, with a 5% significance level (see Supplementary Table online).

Electronic supplementary material {#Sec13}
=================================

**Supplementary information** accompanies this paper at doi:10.1038/s41598-017-08331-0

**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This study was supported by NIH grants DC011291 and NS039059 (V.N.M.) and a Brain & Behavior Research Foundation NARSAD Young Investigator Award (C.G.L.). C.M. and J.G. were supported by a fellowship from the Ecole Normale Supérieure de Cachan. V.N.M. and C.G.L. supervised the project. C.M., J.G. and C.G.L. performed experiments, analyzed data, and prepared figures. Y.S. generated the 48L mouse line and TRE-dependent AAVs for labeling SL cells. All authors wrote, reviewed, and edited the manuscript. V.N.M. provided research funding.

Competing Interests {#FPar1}
===================

The authors declare that they have no competing interests.
Mid
[ 0.648148148148148, 30.625, 16.625 ]
Beyond Whatever. I have had that title in my mind for so long I cannot even tell you when it first appeared, but it’s perfect. Beyond “Whatever”: that sense that “whatever” you or I want, it’s okay. Whatever we say is okay. Whatever we think is okay. Whatever we do is okay. Beyond whatever means it’s not. I wish it were, but it’s not. I would like it to be that you and I and everyone else could do whatever they wanted and it would be okay; but it’s not. There are always ramifications. What I do affects you. What you do affects me. What we do affects them. It’s inescapable. We, you and I, have to get BEYOND, beyond “whatever,” and accept that there is more to life than what we want, when we want it and why we want it. Why? I wish I didn’t have to say this; some of you will be offended. I didn’t make up the rules. God did. Here’s the good and the bad of it: God made us. We are created by God. Our Creator can create us, maintain us or destroy us. That’s a fact. So, fear God and do what God tells you because God knows us to the core of our being and “God will judge everything people do. That includes everything they try to hide. He’ll judge everything, whether it’s good or evil.” (Eccl 12:14 NIRV) We can lie to ourselves, we can lie to others, we can compellingly convince everyone, everyone, that is, but God. God knows us to the core of our beings. Why write this book? Because God told me to and because no one can say it just like me.
Mid
[ 0.591876208897485, 38.25, 26.375 ]
MADISON, Wis. - Everybody who follows college football knows Ohio State is always one of the Big Ten's, if not the nation's, elite teams. This year's squad is no different. With Chris Wells and Terrelle Pryor comprising the Buckeye backfield, the Badgers will unquestionably have their hands full Saturday night. But like any other team, 11 guys line up on each side of the ball every play. The following is an inside look at Ohio State's lineup:

Quarterbacks: Terrelle Pryor: Over the past few seasons, the Badgers have had major difficulties slowing athletic, mobile quarterbacks in the spread offense. Well, Pryor, even as a true freshman, may be one of the Big Ten's elite. It will be interesting to see how he handles his first start in a hostile environment. Todd Boeckman: Do not forget that Boeckman was the starting quarterback in 2007 and led his team to its second straight national championship game. He started off 2008 as the starter, but has since been replaced by Pryor. He is not as athletic and mobile as Pryor, but he has good pocket presence and a solid ability to read the defense.

Running back: Chris "Beanie" Wells: He claims to be playing at around 75 percent, and even with that acknowledgement, he torched Minnesota for 106 yards on only 14 carries. The last time the Badgers squared off with Wells, he dominated the Badgers' defense, particularly in the second half, rushing for 166 yards and three touchdowns.

Wide receivers: Brian Hartline and Brian Robiskie may be one of the conference's most underrated receiving duos. So far in 2008, the two have caught a combined 30 passes and recorded 372 receiving yards. They are also responsible for six of the team's eight touchdowns through the air. Do not forget about Ray Small and his blinding speed that stretches defenses and opens things up for the other two. The Badgers must defend the pass very well to keep Pryor honest in the backfield.

Offensive line: Alex Boone, in his fourth year as starting left tackle, is the headliner of a very strong offensive line. Jim Cordle, Mike Brewster, Ben Person and Bryant Browning join Boone as fellow starters in the trenches. So far, the Buckeyes' line has helped the backs average 186.6 yards per game. In 2008, the only running back to top 100 yards is USC's Joe McKnight, who finished with 105.

Linebackers: Through five games, all-everything James Laurinaitis leads the Buckeyes with 47 tackles. Joining him is Marcus Freeman, who has 29 tackles of his own. The Badgers, who love establishing the run, will have to do so against arguably the Big Ten's best linebacking unit. Having three quality backs can only benefit the Badgers as the game goes on, because the Buckeye linebackers are everywhere on the field and pack a hefty punch.

Secondary: Corners Malcolm Jenkins, Chimdi Chekwa and Donald Washington, as well as safety Kurt Coleman, have all intercepted at least one pass so far in 2008. The Badger wide receivers have been slow developing so far this season, and the Buckeye secondary probably will not be the team they break out of their slump against.

Special teams: Both kicker Ryan Pretorius and punter A.J. Trapasso are fifth-year seniors. So far Pretorius has connected on nine of 12 field goal attempts, including one from 50 yards out. Trapasso averages 44.2 yards per punt and has placed five inside the opponent's 20-yard line. Small is one of the conference's best punt returners with his average of 18.9 yards per return. 
His speed and big-play ability give OSU a realistic touchdown threat every time an opponent punts.
Mid
[ 0.651270207852194, 35.25, 18.875 ]
He didn't make anyone on his team better so how is it that he has great leadership skills? His team was sub 500 under his so called leadership. He has the ability(like many others) to make big throws but he rarely does. His accuracy is terrible that is why he had a 55% completion percentage. His whole college career has been mediocre at best. I have tons of evidence to support each one of these statements. Do you wish to challenge them? I am not a fan of Locker. But he did take a winless team and make it a .500 bowl winner. To that end he did, in fact, make others around him better. I am not a fan of Locker. But he did take a winless team and make it a .500 bowl winner. To that end he did, in fact, make others around him better. If they do draft him, we really have to hope that Shanahan can make another Plummer like miracle turn around, except this time he won't bail so soon, because Locker will be on the right side of 30....or the left....but the right side in terms of.....oh f**k it. If they do draft him, we really have to hope that Shanahan can make another Plummer like miracle turn around, except this time he won't bail so soon, because Locker will be on the right side of 30....or the left....but the right side in terms of.....oh f**k it. He didn't make anyone on his team better so how is it that he has great leadership skills? His team was sub 500 under his so called leadership. He has the ability(like many others) to make big throws but he rarely does. His accuracy is terrible that is why he had a 55% completion percentage. His whole college career has been mediocre at best. I have tons of evidence to support each one of these statements. Do you wish to challenge them? Thank you Landry44 for bringing some reality back to this Jake Locker lovefest. I feel the same way you do about Locker. The more game footage I watch of Locker the more I dislike him. His highlight reel of big game winning plays are him running the ball. Look what happen to him in the Bowl game nearly got knocked unconscious when he ran the ball against Nebraska. He will get killed in the NFL if he runs like he did in college. He is not very bright for running with his head down. Kind of reminds me of Patrick Ramsey, great guy, looks like your frat brother, tough, can take a hit, strong arm, but inaccurate. Just like Ramsey. One major difference is he can run and Ramsey could not. Locker will not wake up one day and go from being inaccurate college Qb against lesser talent and suddenly transform into an accurate passer against MUCH better talent he will face in the NFL. Locker is a project and a bit of a gamble. You do not draft a project with the #10 pick. That is a Cerratto move. Thank you Landry44 for bringing some reality back to this Jake Locker lovefest. I feel the same way you do about Locker. The more game footage I watch of Locker the more I dislike him. His highlight reel of big game winning plays are him running the ball. Look what happen to him in the Bowl game nearly got knocked unconscious when he ran the ball against Nebraska. He will get killed in the NFL if he runs like he did in college. He is not very bright. Kind of reminds me of Patrick Ramsey, great guy, looks like your frat brother, tough, can take a hit, strong arm, but inaccurate. Just like Ramsey. The only difference is he can run and Ramsey could not. Locker will not wake up one day and go from being inaccurate college Qb against lesser talent and suddenly transform into an accurate passer against MUCH better talent he will face in the NFL. 
Locker is a project and a bit of a gamble. You do not draft a project with the #10 pick. It's only reality to you and Landry because you both share the same opinion. Others disagree believing that Locker deserves the benefit of the doubt when you look at the quality of the talent that surrounded him at Washington. If we can get Locker in the second round after taking Castonzo in the first as TSN's mock shows, that would be great in my opinion, especially if you take Locker with the idea of having him sit the first year getting some occasional mop up duty. Locker isn't the guy if you are expecting him to start day one. As for the Locker/Ramsey comparison, I always like Ramsey and have always felt he got a raw deal considering the sieve for an offensive line we had. He was tough and his teammates respected him and that I can see in Locker. I'd be willing to bet that Locker will have the teams respect in short order by the way he works on and off the field and through his leadership. He may not end up with a Tom Brady/Peyton Manning career, but I think he'll be someone we'll either wish we had drafted or are glad we did draft. Whatever we do, I just hope we get it right this year. ^I like the idea of taking a day one starter at offensive tackle. It will help our terrible running game and help protect our Qb. Locker will just sit on the bench. If we stay at 10 and all the elite pass rushers and Defensive linemen are gone, the few remaining elite Offensive linemen, WR or RB seems to a safer bet, then reaching for a QB. Stick with what is proven works, elite pass rusher or elite offensive linemen. I would love to come away with from this draft with the next Russ Grimm and Joe Jacoby. They proved you win with any of three different Qb's. Thank you Landry44 for bringing some reality back to this Jake Locker lovefest. I feel the same way you do about Locker. The more game footage I watch of Locker the more I dislike him. His highlight reel of big game winning plays are him running the ball. Look what happen to him in the Bowl game nearly got knocked unconscious when he ran the ball against Nebraska. He will get killed in the NFL if he runs like he did in college. He is not very bright for running with his head down. Kind of reminds me of Patrick Ramsey, great guy, looks like your frat brother, tough, can take a hit, strong arm, but inaccurate. Just like Ramsey. One major difference is he can run and Ramsey could not. Locker will not wake up one day and go from being inaccurate college Qb against lesser talent and suddenly transform into an accurate passer against MUCH better talent he will face in the NFL. Locker is a project and a bit of a gamble. You do not draft a project with the #10 pick. That is a Cerratto move. There you go. A whole drive of passes and one run by Locker to set up a game winning field goal. I suggest you fast forward to 3:40 if you want to know what this Locker kid is all about. By the way, if Vick can learn to be an accurate passer then I'm pretty sure this Locker guy can to. Just remember 70% completion rate on throws outside the pocket. We run a bootleg, play action, down the field passing system. __________________"It's nice to be important, but its more important to be nice."- Scooter "I feel like Dirtbag has been slowly and methodically trolling the board for a month or so now." - FRPLG It's only reality to you and Landry because you both share the same opinion. 
Others disagree believing that Locker deserves the benefit of the doubt when you look at the quality of the talent that surrounded him at Washington. If we can get Locker in the second round after taking Castonzo in the first as TSN's mock shows, that would be great in my opinion, especially if you take Locker with the idea of having him sit the first year getting some occasional mop up duty. Locker isn't the guy if you are expecting him to start day one. As for the Locker/Ramsey comparison, I always like Ramsey and have always felt he got a raw deal considering the sieve for an offensive line we had. He was tough and his teammates respected him and that I can see in Locker. I'd be willing to bet that Locker will have the teams respect in short order by the way he works on and off the field and through his leadership. He may not end up with a Tom Brady/Peyton Manning career, but I think he'll be someone we'll either wish we had drafted or are glad we did draft. Whatever we do, I just hope we get it right this year. Of course that would be great. Just do not reach for Locker at #10. You talk about Locker's college team not being very good. Yes that is true. But what really shocked me when watching his games from last year, I was shocked at how unathletic and bad some of the competition was. HIs highlights from the UCLA and Cal games in particular, some of the Db's and Lb's looked worse then my highschool teamates. The PAC 10 last year was not very good, especially the bottom half. Last years Pac10 will not be confused with the SEC. You are only as good as your competition. There you go. A whole drive of passes and one run by Locker to set up a game winning field goal. I suggest you fast forward to 3:40 if you want to know what this Locker kid is all about. By the way, if Vick can learn to be an accurate passer then I'm pretty sure this Locker guy can to. Just remember 70% completion rate on throws outside the pocket. We run a bootleg, play action, down the field passing system. Sir Dirtbag59 Locker rushed 12 times for 112 yards in that game. Locker rushed for 84 yards and a decisive touchdown in his final college game. Locker nearly got decapitated running the ball in that game. Everyone talks about how he is not so accurate as a pocket passer but so much more accurate when getting outside of the pocket. In the NFL the speed of the defenders especially on the edge is so much more than college, Locker better develop pocket passing accuracy or he will get killed if he has to rely on one thing. Thinking Locker is a Franchise Qb at this stage is reckless. We have so many holes to fill taking a chance with Locker at 10 is a big gamble for Mr. Shanahan and his longevity in DC. Sir Dirtbag59 Locker rushed 12 times for 112 yards in that game. Locker rushed for 84 yards and a decisive touchdown in his final college game. Locker nearly got decapitated running the ball in that game. Everyone talks about how he is not so accurate as a pocket passer but so much more accurate when getting outside of the pocket. In the NFL the speed of the defenders is so much more than college, Locker better develop pocket passing accuracy or he will get killed. And I agree on the need for him to develop as a pocket passer. I don't go into this prediction/desire for Locker without reservations but I certainly don't think it's unrealistic to expect him to develop that skill at the next level. 
Plus if the Redskins have done all this work since week 8 and decided that Locker is their guy that should at least say something about how he projects at the next level in this system. Still while NFL defenses will make it significantly harder for QB's to gain yards on the ground it's still hard to get to QB's on designed play action. Especially when you have tackles like Trent Williams leading the way. Also the right side won't be shabby either with Brown or Harris manning the RT spot. Every Shanahan QB has had great success getting to the edges on bootlegs and rollouts....except McNabb but thats beside the point. I mean if Schuab can get outside the pocket then I'm pretty sure that Locker can get there to. __________________"It's nice to be important, but its more important to be nice."- Scooter "I feel like Dirtbag has been slowly and methodically trolling the board for a month or so now." - FRPLG And I agree on the need for him to develop as a pocket passer. I don't go into this prediction/desire for Locker without reservations but I certainly don't think it's unrealistic to expect him to develop that skill at the next level. Plus if the Redskins have done all this work since week 8 and decided that Locker is their guy that should at least say something about how he projects at the next level in this system. Still while NFL defenses will make it significantly harder for QB's to gain yards on the ground it's still hard to get to QB's on designed play action. Especially when you have tackles like Trent Williams leading the way. Also the right side won't be shabby either with Brown or Harris manning the RT spot. Every Shanahan QB has had great success getting to the edges on bootlegs and rollouts....except McNabb but thats beside the point. I mean if Schuab can get outside the pocket then I'm pretty sure that Locker can get there to. Mcnabb was running for his life behind our line last year. I was not very impressed with Brown. I would not be so quick to lay all the blame on Mcnabb who was playing in a new system for him. I do not hate Locker and I would be ok with us taking further down the draft. He is a reach at 10. That is all. Reaching when you have so many other holes to fill is a Ceratto move. We are not a Qb away from being good. This is a deep draft and we should get a day one starter at 10. Not a project roll of the dice. Not at 10 please. Mcnabb was running for his life behind our line last year. I was not very impressed with Brown. I would not be so quick to lay all the blame on Mcnabb who was playing in a new system for him. I do not hate Locker and I would be ok with us taking further down the draft. He is a reach at 10. That is all. Reaching when you have so many other holes to fill is a Ceratto move. We are not a Qb away from being good. Agreed, but you don't draft a QB to be good that year. You draft a QB to be good 3 or 4 years down the road. Lineman can be drafted and added in between and get up to speed much quicker (how many weeks, not years but weeks, did it take for Trent to get good enough to hold off Pro Bowl pass rushers). Bottom line if you see your franchise QB you take him. Maybe you can manage a quick trade down but you don't play fast and loose with a guy you see as your QB of the future. Look at it this way. Without a QB of the future you're going to have to worry about the Skins taking a QB instead of a trench player for the next 2 or 3 years. You will never get any sleep. 
Even if Locker is a total bust the team will stay away from drafting the QB position for the next few years and instead focus on front 7 defenders, lineman, and other positions. __________________"It's nice to be important, but its more important to be nice."- Scooter "I feel like Dirtbag has been slowly and methodically trolling the board for a month or so now." - FRPLG
Low
[ 0.465306122448979, 28.5, 32.75 ]
/* SPDX-License-Identifier: BSD-3-Clause * Copyright(c) 2018 Advanced Micro Devices, Inc. All rights reserved. */ #ifndef _CCP_PMD_PRIVATE_H_ #define _CCP_PMD_PRIVATE_H_ #include <rte_cryptodev.h> #include "ccp_crypto.h" #define CRYPTODEV_NAME_CCP_PMD crypto_ccp #define CCP_LOG_ERR(fmt, args...) \ RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ RTE_STR(CRYPTODEV_NAME_CCP_PMD), \ __func__, __LINE__, ## args) #ifdef RTE_LIBRTE_CCP_DEBUG #define CCP_LOG_INFO(fmt, args...) \ RTE_LOG(INFO, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ RTE_STR(CRYPTODEV_NAME_CCP_PMD), \ __func__, __LINE__, ## args) #define CCP_LOG_DBG(fmt, args...) \ RTE_LOG(DEBUG, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \ RTE_STR(CRYPTODEV_NAME_CCP_PMD), \ __func__, __LINE__, ## args) #else #define CCP_LOG_INFO(fmt, args...) #define CCP_LOG_DBG(fmt, args...) #endif /**< Maximum queue pairs supported by CCP PMD */ #define CCP_PMD_MAX_QUEUE_PAIRS 8 #define CCP_NB_MAX_DESCRIPTORS 1024 #define CCP_MAX_BURST 256 #include "ccp_dev.h" /* private data structure for each CCP crypto device */ struct ccp_private { unsigned int max_nb_qpairs; /**< Max number of queue pairs */ uint8_t crypto_num_dev; /**< Number of working crypto devices */ bool auth_opt; /**< Authentication offload option */ struct ccp_device *last_dev; /**< Last working crypto device */ }; /* CCP batch info */ struct ccp_batch_info { struct rte_crypto_op *op[CCP_MAX_BURST]; /**< optable populated at enque time from app*/ int op_idx; uint16_t b_idx; struct ccp_queue *cmd_q; uint16_t opcnt; uint16_t total_nb_ops; /**< no. of crypto ops in batch*/ int desccnt; /**< no. of ccp queue descriptors*/ uint32_t head_offset; /**< ccp queue head tail offsets time of enqueue*/ uint32_t tail_offset; uint8_t lsb_buf[CCP_SB_BYTES * CCP_MAX_BURST]; phys_addr_t lsb_buf_phys; /**< LSB intermediate buf for passthru */ int lsb_buf_idx; uint16_t auth_ctr; /**< auth only ops batch for CPU based auth */ } __rte_cache_aligned; /**< CCP crypto queue pair */ struct ccp_qp { uint16_t id; /**< Queue Pair Identifier */ char name[RTE_CRYPTODEV_NAME_MAX_LEN]; /**< Unique Queue Pair Name */ struct rte_ring *processed_pkts; /**< Ring for placing process packets */ struct rte_mempool *sess_mp; /**< Session Mempool */ struct rte_mempool *sess_mp_priv; /**< Session Private Data Mempool */ struct rte_mempool *batch_mp; /**< Session Mempool for batch info */ struct rte_cryptodev_stats qp_stats; /**< Queue pair statistics */ struct ccp_batch_info *b_info; /**< Store ops pulled out of queue */ struct rte_cryptodev *dev; /**< rte crypto device to which this qp belongs */ uint8_t temp_digest[DIGEST_LENGTH_MAX]; /**< Buffer used to store the digest generated * by the driver when verifying a digest provided * by the user (using authentication verify operation) */ } __rte_cache_aligned; /**< device specific operations function pointer structure */ extern struct rte_cryptodev_ops *ccp_pmd_ops; uint16_t ccp_cpu_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, uint16_t nb_ops); uint16_t ccp_cpu_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops, uint16_t nb_ops); #endif /* _CCP_PMD_PRIVATE_H_ */
Low
[ 0.49433962264150905, 32.75, 33.5 ]
Thomas Lant Thomas Lant (1554–1601) was a draftsman and long-serving officer of arms at the College of Arms in London. Lant was born in Gloucester and was one of seven children of Thomas and Mary Lant. When Lant was twelve years old, he became a page to Richard Cheney, the Bishop of Gloucester. When Cheney died in 1579, Lant again became a page, this time for Henry Cheney. It was through Lord Cheney that Lant became connected with Sir Philip Sidney. The two accompanied each other to the Low Countries in 1585. Lant was the draftsman of roll recording Sidney's funeral procession at St Paul's on 16 February 1587. Following Sidney's death, Lant went to work for Sir Francis Walsingham, the secretary of state. It was through Walsingham that Lant secured an appointment as Portcullis Pursuivant of Arms in Ordinary at the College of Arms on 20 May 1588. It was as Portcullis that Lant wrote Observations and Collections Concerning the Office and Officers of Arms describing the College as "a company full of discord and envy." In 1595, Lant presented the queen a catalog of officers of arms known as Lant's Roll of which several copies survive. In 1596 Lant was involved in an argument with Ralph Brooke, York Herald of Arms in Ordinary. This led to Lant assaulting Brooke at the Middle Temple. Lant was created Windsor Herald of Arms in Ordinary on 23 October 1597. Lant's date of death is uncertain. He was alive on 26 December 1600 but is thought to have died early in 1601. He and his wife Elizabeth, daughter of Richard Houghton, had two children, a daughter and a son. The son, Thomas, was born after his father's death and died on 18 May 1688 as rector of Hornsey, Middlesex. Lant's wife remarried on 28 September 1609 at the Savoy Chapel, Westminster. The union was with the alchemist Francis Anthony, and the marriage licence describes her as being 36 years old and the widow of "Thomas Lant, gent., deceased eight years since." Arms See also Heraldry Pursuivant Herald References Walter H. Godfrey and Sir Anthony Wagner, The College of Arms, Queen Victoria Street: being the sixteenth and final monograph of the London Survey Committee. (London, 1963), 171–172. Sir Anthony Wagner. Heralds of England: a History of the Office and College of Arms. (London, 1967), 87–88, 217–219. Mark Noble. A History of the College of Arms. (London, 1805), 171–172. External links The College of Arms Heraldica list of officers of Arms Category:1554 births Category:1601 deaths Category:English antiquarians Category:English genealogists Category:English officers of arms Category:People of the Tudor period Category:16th-century English writers Category:16th-century male writers Category:17th-century English writers Category:17th-century male writers Category:People from Gloucester
High
[ 0.687224669603524, 29.25, 13.3125 ]