Pharmaceutical company wins $1M in legal fees from patent troll and Stanford - vmarsy
http://www.prnewswire.com/news-releases/hi-tech-pharmaceuticals-wins-almost-1-million-in-legal-fees-from-thermolife-and-stanford-after-winning-major-patent-case-300436949.html

====== yourapostasy
I thought patent trolls set up a shell company for each litigation effort, then "license" the patents to the shell, and proceed to litigate from within the shell, with the intent that if it turns out badly for them, they fold the asset-less shell, and collection is thus impossible. Is anyone familiar with patent trolls' business model, and can comment on the feasibility of actually collecting a judgment like this?

~~~ jjn2009
Can't courts work around this in extraordinary cases?

~~~ sjg007
This is why they sue the patent holder too, I guess.

------ jakelarkin
This is a p/r piece from the winning pharma company. Would be nice to have a more independent source. There are varying degrees of scummy-ness in the patent world. It's possible that ThermoLife is like an Intellectual Ventures or an MPEG-LA, particularly if they are somehow affiliated with Stanford. Recognize that some people may consider the latter two to be patent trolls, as well.

------ aj7
Hi-Tech isn't a pharmaceutical company in the way most HN readers would interpret. It's a "nutraceutical" company whose major market appears to be gym rats. See [https://hitechpharma.com](https://hitechpharma.com).

------ zdean
For clarity, that's "Leland Stanford Junior University"

~~~ vmarsy
Submission title was already too long; "Stanford University" wouldn't fit, and "Leland Stanford Junior University"[1] even less :)

[1] [https://en.wikipedia.org/wiki/Stanford_University](https://en.wikipedia.org/wiki/Stanford_University)

~~~ justboxing
"Pharma Company Wins $1M in legal fees from Patent Troll & Stanford University"

How about that? 78 characters long (within the 80-char HN title limit).
Sumner, WA

Are you looking to start work as early as tomorrow? Do you consider yourself to be a manufacturing and production professional? Look no further! Logic Staffing has multiple temp-to-perm opportunities within a rapidly growing production and manufacturing environment. These positions start at $10+ per hour and offer 40+ hour work weeks. There are both day and swing shift opportunities. Applicants must have a flexible working schedule to include additional overtime and weekend work. If you are looking to earn a good-sized paycheck, look no further!

The following are a few of the opportunities we are currently recruiting for, plus new openings daily:

Manufacturing Positions - Day & Swing Shifts
Must possess the ability to lift up to 50 pounds for the entire duration of a 10-14 hour shift. Recent manufacturing experience, line production machinery preferred. Excellent hand-eye coordination is a MUST! This position requires a keen eye for attention to detail. $11.00 starting pay - ample opportunity for overtime - day & swing shifts available.

Loading/Unloading - Sumner & Auburn
If you enjoy fast-paced, physical work, apply now! These positions require you to load and unload product - MUST be able to lift at least 45 lbs on a consistent basis. Entry-level positions available. $10 per hour - Sumner, Auburn.

Production Positions - Day, Swing & Graveyard - Tacoma
Temp to perm! These positions require attention to detail and a keen eye for perfection! $10.00 starting pay - must have an open and flexible schedule.

WE HAVE MORE POSITIONS AVAILABLE IN AUBURN, KENT AND RENTON!

Work for a company that cares about YOU and helps you find the right position! Join the Logic Family and experience the difference for yourself. You must have your OWN transportation and a flexible and open schedule! Criminal background check and pre-employment drug screens are required! Logic Staffing does conduct reference checks; ALL applicants must have 2 recent and verifiable professional references.
References WILL be checked prior to being offered any position with our organization. If you meet all of the minimum requirements as listed above, apply today: http://logicstaffing.net/employees/search-jobs

Logic Staffing's pay range is between $10 and $11 per hour, all based on experience! ALL employees are eligible for Medical, Dental & Vision Insurance once you have met the full-time eligibility requirement! Yes! Even as a temporary associate, we offer a robust benefits package.
Cardiomyocyte DNA synthesis and binucleation during murine development. Cardiomyocyte DNA synthesis and binucleation indexes were determined during murine development. Cardiomyocyte DNA synthesis occurred in two temporally distinct phases. The first phase occurred during fetal life and was associated exclusively with cardiomyocyte proliferation. The second phase occurred during early neonatal life and was associated with binucleation. Collectively, these results suggest that cardiomyocyte reduplication ceases during late fetal life. Northern and Western blot analyses identified several candidate genes that were differentially expressed during the reduplicative and binucleation phases of cardiomyocyte growth.
Q: DateTimeFormatter and ZonedDateTime - Throw exception on ambiguous and nonexistent datetimes (DST)

I am trying to create a really strict DateTimeFormatter/ZonedDateTime factory. If the datetime doesn't make sense as is, then it should throw an exception - no adjustments. Example: if an hour is skipped during DST. Currently these tests fail; no exception is thrown. ZonedDateTime guesses/adjusts based on rules as described in the docs.

    @Test
    fun `Ambiguous hour during +0200 to +0100 DST change`() {
        val input = "2019-10-27T02:30:00Europe/Prague"
        val formatter = createFormatter()
        Assertions.assertThrows(DateTimeParseException::class.java) {
            // Gets adjusted to 2019-10-27T02:30:00Europe/Prague+02:00 instead
            ZonedDateTime.from(formatter.parse(input))
        }
    }

    @Test
    fun `Skipped hour during +0100 to +0200 DST change`() {
        val formatter = createFormatter()
        Assertions.assertThrows(DateTimeParseException::class.java) {
            // Gets adjusted to 2019-03-31T03:30:00Europe/Prague+02:00 instead
            ZonedDateTime.from(formatter.parse("2019-03-31T02:30:00Europe/Prague"))
        }
    }

    @Test
    fun `Test impossible offset during DST change`() {
        val input = "2019-10-27T02:30:00Europe/Prague+05:00"
        val formatter = createFormatter()
        Assertions.assertThrows(DateTimeParseException::class.java) {
            // Gets adjusted to 2019-10-26T23:30:00Europe/Prague+02:00 instead
            ZonedDateTime.from(formatter.parse(input))
        }
    }

Changes for the Europe/Prague region are from here. Code I have:

    fun createFormatter(): DateTimeFormatter {
        return DateTimeFormatterBuilder()
            .parseCaseSensitive()
            .parseStrict()
            .append(DateTimeFormatter.ISO_LOCAL_DATE_TIME)
            .appendZoneRegionId()
            .optionalStart()
            .appendOffsetId()
            .optionalEnd()
            .toFormatter()
            .withResolverStyle(ResolverStyle.STRICT)
    }

ZonedDateTime adjusts as described in the docs. I would like to change this behavior to throw anytime something isn't completely clear.

A: You need to add extra logic to validate the result. The code below is in Java, but you can easily convert it to Kotlin.
    static ZonedDateTime parseVeryStrict(String text) {
        TemporalAccessor parsed = createFormatter().parse(text);
        ZonedDateTime zonedDateTime = ZonedDateTime.from(parsed);
        if (parsed.isSupported(OFFSET_SECONDS)) {
            // Verify the given offset was correct
            ZoneOffset zoneOffset = ZoneOffset.from(parsed);
            if (!zoneOffset.equals(zonedDateTime.getOffset()))
                throw new DateTimeParseException("Incorrect offset: '" + text + "'", text, 0);
        } else {
            // Without an offset, fail if in the DST overlap time range
            if (!zonedDateTime.withEarlierOffsetAtOverlap().isEqual(zonedDateTime.withLaterOffsetAtOverlap()))
                throw new DateTimeParseException("Ambiguous time (DST overlap): '" + text + "'", text, 0);
        }
        // Verify the time wasn't adjusted because it was in the DST gap time range
        LocalTime localTime = LocalTime.from(parsed);
        if (!localTime.equals(zonedDateTime.toLocalTime()))
            throw new DateTimeParseException("Invalid time (DST gap): '" + text + "'", text, 0);
        return zonedDateTime;
    }

Test

    public static void main(String[] args) {
        test("2019-10-27T02:30:00");
        test("2019-10-27T02:30:00Europe/Prague");
        test("2019-03-31T02:30:00Europe/Prague");
        test("2019-10-27T02:30:00Europe/Prague+05:00");
        test("2019-10-27T02:30:00Europe/Prague+02:00");
        test("2019-10-27T02:30:00Europe/Prague+01:00");
    }

    static void test(String text) {
        try {
            System.out.println(text + " -> " + parseVeryStrict(text));
        } catch (DateTimeParseException e) {
            System.out.println(text + " - " + e.getMessage());
        }
    }

Output

    2019-10-27T02:30:00 - Text '2019-10-27T02:30:00' could not be parsed at index 19
    2019-10-27T02:30:00Europe/Prague - Ambiguous time (DST overlap): '2019-10-27T02:30:00Europe/Prague'
    2019-03-31T02:30:00Europe/Prague - Invalid time (DST gap): '2019-03-31T02:30:00Europe/Prague'
    2019-10-27T02:30:00Europe/Prague+05:00 - Incorrect offset: '2019-10-27T02:30:00Europe/Prague+05:00'
    2019-10-27T02:30:00Europe/Prague+02:00 -> 2019-10-27T02:30+02:00[Europe/Prague]
    2019-10-27T02:30:00Europe/Prague+01:00 -> 2019-10-27T02:30+01:00[Europe/Prague]
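A complementary sketch, not part of the answer above: java.time also exposes a zone's transition rules directly through ZoneRules, whose getValidOffsets method returns an empty list for a local time that falls in a DST gap and two offsets for one in an overlap. The ZoneRules calls are standard java.time API; the classify helper and class name are made up for illustration.

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.zone.ZoneRules;
import java.util.List;

public class DstCheck {
    // Classify a local date-time in a zone as NORMAL, GAP (skipped), or OVERLAP (ambiguous).
    static String classify(LocalDateTime ldt, ZoneId zone) {
        ZoneRules rules = zone.getRules();
        List<ZoneOffset> offsets = rules.getValidOffsets(ldt);
        if (offsets.isEmpty()) return "GAP";      // this local time never occurs in the zone
        if (offsets.size() > 1) return "OVERLAP"; // this local time occurs twice
        return "NORMAL";
    }

    public static void main(String[] args) {
        ZoneId prague = ZoneId.of("Europe/Prague");
        System.out.println(classify(LocalDateTime.parse("2019-10-27T02:30:00"), prague)); // OVERLAP
        System.out.println(classify(LocalDateTime.parse("2019-03-31T02:30:00"), prague)); // GAP
        System.out.println(classify(LocalDateTime.parse("2019-06-15T12:00:00"), prague)); // NORMAL
    }
}
```

Checking getValidOffsets on the parsed LocalDateTime before (or instead of) building the ZonedDateTime lets you reject gap and overlap times without comparing the adjusted result against the input.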
And the stories stressed the same point. The LA Times led with, “Hormonally speaking, becoming a father may make you less of a man.” Fox News led with, “A father’s testosterone level drops steeply after his baby arrives.” They were writing about the research finding that a new father’s testosterone levels dropped temporarily when a new baby came home. But this does an injustice to the real news.

This fascinating just-published study by three anthropologists at the Institute for Policy Research at Northwestern University and a researcher at the Office of Population Studies Foundation at the University of San Carlos in the Philippines is the first to prove the link between paternal nurturing of children and testosterone. The researchers tracked the testosterone levels of 624 young men over roughly five years at different stages of their lives: single nonfathers, fathers of newborns, and nurturing or non-nurturing fathers. The study found that the biggest and longest-lasting (though not permanent) drop occurred in fathers involved in daily child care.

To be sure, a few news outlets got it right. The Wall Street Journal’s headline nailed it: “Men Biologically Wired to be Nurturing Fathers.” The New York Times came close: “Fatherhood Cuts Testosterone, Study Finds, for Good of the Family.”

The researchers were clear about the nurturing finding. They wrote, “[C]aregiving fathers had lower [testosterone] than fathers who did not invest in care.” Furthermore, they reported that the more hours the fathers spent, the lower their hormone levels. “We found that [testosterone] … was lowest among fathers reporting more hours spent in childcare.” Typically, a news story about the research mentioned somewhere that nurturing affected testosterone levels. But only rarely did one mention that more nurturing meant even lower levels.

The researchers were not surprised to find the hormone-nurturing link. Earlier research has shown that testosterone drops in other male mammals that parent.
Their question was: would this be true for humans? They found their nurturing fathers in the Philippines, in Cebu City, where, the report stated, “it is common for fathers to be involved in day-to-day care of their children.” Now there’s an interesting fact. Could this research have been conducted in the United States? Perhaps we are not yet as evolutionarily advanced. The researchers agree that evolution is involved. “Our findings suggest that human males have evolved neuroendocrine architecture … supporting a role of men as … caregivers.”

All of this is not so surprising, really. If testosterone is the hormone that contributes to aggression in men, then it makes sense that nature would reduce that hormone when men are caring regularly for children. Gloria Steinem observed hopefully decades ago, “If men spent more time raising small children, they would be forced to develop more patience and flexibility.” But who knew the mechanism would be hormonal?

Of course the interesting question is why the media has mostly downplayed the nurturing finding. One answer, to be fair, is the way the findings were written up. The report, published in the Sept. 13, 2011 issue of the Proceedings of the National Academy of Sciences, bears the misleading headline, “Longitudinal evidence that fatherhood decreases testosterone in human males.” And the researchers sprinkled their report with sentences that equated fatherhood with nurturing fatherhood. The news stories followed suit. The equation was unexamined.

And that is the problem. For as feminists, exhausted mothers, and single mothers have been observing for a very long time, and as fathers who nurture know too, the two are not the same. Fatherhood is a biological achievement, accomplished in partnership with a mother.
A male nurturing parent is something else entirely: it is a man who spends attentive, loving, extended time with his child or children daily, connecting with them emotionally, seeing the world through their eyes, teaching them, and feeling, at times, the delight that selfless nurturing can give.

The benefits for an increase in male nurturing might be more than personal. Feminists have wondered for at least a century whether politics would change if men became more involved in child care. Jane Addams, the Nobel Peace prize winner and women’s suffrage advocate, writing in 1913, thought that if the government were under women’s control instead of men’s, its chief purpose would not be war, but the nurture of children and the protection of the weak and sick, not because women were inherently nurturing but because historically women “had always exercised these functions.”

Women have long suspected that the experience of child care was what mattered, but the debate has always been framed as nature versus nurture. Now it turns out that experience changes biology, which also means that biology is not destiny. And what could be more groundbreaking news than that?

[Picture 2: Jane Addams rejected the concept of either gender being inherently nurturing.]

The views expressed in this commentary are those of the author alone and do not represent WMC. WMC is a 501(c)(3) organization and does not endorse candidates.
This special edition of the Seeking Delphi™ podcast provides a preview of the 2018 IEEE Technology Time Machine, to be held October 31-November 1 at the Hilton Resort and Spa, San Diego, CA. Joining host Mark Sackler to discuss the upcoming program is Roberto Saracco, who heads the IEEE initiative on Symbiotic Autonomous Systems and is one of the conference organizers. He was previously interviewed on Seeking Delphi Podcast #22. See links below the embedded podcast audio and YouTube slide show to access event information and registration.

“I can’t imagine a future without robots.”–Nolan Bushnell

In the popular HBO series Westworld, robotic hosts are depicted as being placed into a kind of psychiatric analysis by their creators. Could this actually happen one day? Joanne Pransky thinks it will. She bills herself as the World’s First Robotic Psychiatrist® (yes, she even registered that title!). She was dubbed the real-life Susan Calvin by Isaac Asimov, after the robot psychologist he created in his classic 1950 short story anthology, I, Robot. In this episode of the Seeking Delphi™ podcast, host Mark Sackler talks to her about this and other significant issues in the man/machine relationships to come.

“We are losing privacy at an alarming rate–we have none left.”–John McAfee

“Privacy is becoming irrelevant.”–Gray Scott

Is privacy dead? The answer may be more indifferent than you suspect. Gray Scott says it’s becoming irrelevant. People and politicians may squawk, but if you look at their behavior, it looks as if they just don’t really care. It seems we’d rather have free content–even at the cost of privacy–than pay even nominal amounts to access online materials. In this wide-ranging interview, conducted just hours before Mark Zuckerberg’s senate testimony in the wake of the Cambridge Analytica data breach, Gray provides us with his nuanced view of the state of privacy, both present and future.
“The main thing in life is not to be afraid of being human.”–Aaron Carter

You’ve heard it all, and lately you’re hearing it more. The singularity is near. Robots are going to take our jobs. Robots are going to take over altogether. Robots are even going to take over our sex lives. Yadda yadda yadda. I’m not saying it won’t happen; I just think it’s farther away than the impression most people are getting from all the news. What’s here right now is genetic editing, and with it, the possibility of directing human evolution. The very real and very near possibility of changing what it means to be human. Read all the artificial intelligence and future of work articles–yes. But listen to what Elizabeth Parrish has to say about modifying the human genome to reverse aging and to keep up–cognitively and physically–with robots.

Seeking Delphi™ will be on vacation next week. Enjoy the peace and quiet.

Self-driving vehicles–Have we been hearing altogether too much about autonomous vehicle development lately? Satirical web site The Onion seems to think so. They released the image below with the headline "Tesla Debuts Carless Driver."

Image Credit: The Onion

Thanks for visiting and reading. See you in two weeks. While you’re reading about all this week’s future-related news, don’t forget that you can subscribe to Seeking Delphi™ podcasts on iTunes, PlayerFM, or YouTube (audio with slide show), and you can also follow us on Twitter and Facebook.

Max Tegmark is not one to shy away from bold scientific pronouncements. The MIT cosmologist and physics professor is perhaps best known for his taxonomy of a four-level multiverse—some levels of which are predicted by certain theories, but none of which have been proven to exist. In his previous book, Our Mathematical Universe, My Quest for the Ultimate Nature of Reality, he offers the astounding conjecture that the whole of reality may be nothing more than pure mathematics.
So, what, if anything, makes Life 3.0, Being Human in The Age of Artificial Intelligence different? Unlike a universe of multiverses, or of pure mathematics, it deals with issues that are right in front of our faces. And his taxonomy of Life 1.0, 2.0 and 3.0 is not a mere conjecture that can’t yet—or might never—be tested. Artificial intelligence is happening right in front of us, and we have a multiplicity of issues to deal with, while we still can control it.

Even as Stephen Hawking and Elon Musk are shouting loudly about the potential dangers of artificial intelligence, and many actual AI researchers are countering that the dangers are overblown and distorted, Tegmark is doing something to bridge hype and reality. Or at least, he’s trying to. The problem is, there is no consensus even among the experts. He provides the reader with a wide range of scenarios. Many are not pretty—from a corporation using advanced AI to control global markets and ultimately governments, to a runaway AI that discards human intervention to rule the world itself. And yet, he asserts, all of the scenarios he presents have actual expert believers in their possibility.

The ultimate answer is, we don’t know. Tegmark is not so much warning against its development—it’s probably impossible to stop—as he is advising about its challenges, opportunities and dangers. He knows that the experts don’t really know, and neither does he. But he’s not afraid to present bold scenarios to awaken our awareness. He sums it up best in Chapter 5, Intelligence Explosion:

The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI. For this reason, we’ve spent this chapter exploring a broad spectrum of scenarios. I’ve attempted to be quite inclusive, spanning the full range of speculations I’ve seen or heard discussed by AI researchers and technologists: fast takeoff/slow takeoff/no takeoff, humans/machines/cyborgs in control.
I think it’s wise to be humble at this stage and acknowledge how little we know, because for each scenario discussed above, I know at least one well-respected AI researcher who views it as a real possibility.

Tegmark makes it clear that, for all the unknowns, we need to proceed with caution. Bold conjectures and scenarios sometimes turn into realities. And some of these potential realities are not where we want to go. Decisions we make about machine intelligence in the next few decades will go a long way to deciding the future of humanity—our evolution or even our continued existence. He goes on to present possible scenarios for what we might look like in 10,000 and even 1 billion years. It’s fascinating, but mind-numbing. We simply might not be able to control any of it.

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”–Eliezer Yudkowsky

Just two weeks after the first Emotion AI Summit–an event that might not have been possible even a year ago–there is an explosion of news around artificial intelligence. The sum of the stories might best be described by the subtitle of my other blog: ridiculous and sublime. As sure as there is the potential to use new technology for both good and evil, there is also the likelihood that someone will use it, well, to be just plain silly. So here is the good, the bad, and the positively daft. And be sure to check out the Seeking Delphi™ Podcast on the Emotion AI Summit, if you missed it last week.

-Vladimir Putin has more to say about artificial intelligence. A few weeks back he said that whoever controls artificial intelligence will control the world. Now he’s warning–get this–artificially intelligent robots might eat us. Sorry for the spoiler alert, but in Will Mitchell’s sci-fi novel, Creations, they sort of do.
—A new report by the World Economic Forum projects that the global market for artificial intelligence will grow at a compound rate of over 17%, to an annual value of US$14 billion by 2023. It also spews the now commonplace doom and gloom about job displacement.

“The moon is a friend for the lonesome to talk to.”–Carl Sandburg

I don’t know about the moon, but when it comes to Mars, it does not appear that NASA will be doing much more than saying hello in the foreseeable future. It seems there’s this little problem of money. On the other hand, it’s full speed ahead to the lunar surface for at least one private enterprise.
The work in this laboratory includes three major projects: (1) Studies involve an assessment of cell-mediated immune responses after the host has responded to a variety of alloantigenic stimulation to the same and other antigens. The significance of this work comes from attempts to understand regulatory mechanisms of cellular immune responses. This work has resulted in the development of a new mechanism of T cell control - the alloantigen elimination hypothesis. (2) Studies on the use of activated killer cells in the destruction of tumor cells have been undertaken. The receptor on tumor cells for killer lymphocytes is the subject of current investigation. An adjuvant immunotherapy attack on cancer cells remaining after surgery is planned using cells activated in vitro to lyse tumor cells. (3) A new mechanism of specific suppression to facilitate the transplantation of tissue and skin allografts is being attempted. The use of antigen specific suicide with 3H-Thymidine to produce clonally depleted cell populations is under investigation.
A British parliamentary candidate bidding to become a Liberal Democrat MP has come under fire after mocking a Labour police commissioner as “little more than Barbie doll reading from a script” during a rant on YouTube.

Jonathan Wallace, the pro-EU party’s candidate for Blaydon, north-east England, made the derogatory comment after Labour’s Kim McGuinness was successfully elected as Northumbria’s police and crime commissioner in July.

The 55-year-old made the remark in a now-deleted YouTube video in which he dismissed fellow candidates as “lightweights at best” before turning his ire to McGuinness: “We now have a new police commissioner, Kim McGuinness, Barbie doll as she is dubbed by many.”

The comments are sure to come as an embarrassment for new Liberal Democrat leader Jo Swinson, a self-declared feminist who only last year suggested that misogyny should be classed as a “hate crime.”

Responding to the emergence of Wallace’s video clip, McGuinness hit back at the Lib Dem, telling the Huffington Post that “No woman should have to put up with sexist language” when putting themselves forward for senior elected roles.

Wallace has since apologized for his offensive remarks, claiming that he wouldn’t be repeating “similar comments in the future.”
[Hemorrhage treatment: Evaluation of the proper use of fibrinogen concentrate]. An increase in fibrinogen concentrate prescriptions was noticed in 2015 after several guidelines regarding their use were published. We tried to evaluate whether they were used appropriately. To evaluate the conformity of the prescriptions to these guidelines, we checked for each prescription whether blood fibrinogen had been measured, whether the result was below the threshold recommended for prescribing fibrinogen concentrate, and whether the dose was in line with the recommendations. The effect and safety of the treatment were also evaluated. We analyzed 202 prescriptions for 117 patients. The indications were respected, except for one prescription for which no indication could be found. Blood fibrinogen was measured for 76% of the prescriptions; 59% of the results were below the threshold recommended for prescribing. The dose conformed to the guidelines for 73% of the prescriptions and was below the recommended dose for 20%. Patients who were prescribed low doses seemed less at risk than the others, which calls into question the necessity of those prescriptions. Adherence to the guidelines depends on the urgency of the prescribing situation. It would be interesting to conduct a prospective study to better explain why doses below those recommended are prescribed.
/****************************************************************************
** COPYRIGHT (C) 1994-1997 INTEL CORPORATION **
** DEVELOPED FOR MICROSOFT BY INTEL CORP., HILLSBORO, OREGON **
** HTTP://WWW.INTEL.COM/ **
** THIS FILE IS PART OF THE INTEL ETHEREXPRESS PRO/100B(TM) AND **
** ETHEREXPRESS PRO/100+(TM) NDIS 5.0 MINIPORT SAMPLE DRIVER **
****************************************************************************/
/****************************************************************************
Module Name:
e100_557.h (82557.h)
This driver runs on the following hardware:
- 82558 based PCI 10/100Mb ethernet adapters
(aka Intel EtherExpress(TM) PRO Adapters)
Environment:
Kernel Mode - Or whatever is the equivalent on WinNT
*****************************************************************************/
#ifndef _E100_557_H
#define _E100_557_H
//-------------------------------------------------------------------------
// D100 Stepping Defines
//-------------------------------------------------------------------------
#define D100_A_STEP 0 // NEVER SHIPPED
#define D100_B_STEP 1 // d100 first shipped silicon
#define D100_C_STEP 2 // d100' (c-step) with vendor/id and hw fix
#define D101_A_STEP 4 // first silicon of d101
//-------------------------------------------------------------------------
// E100 Stepping Defines - used in PoMgmt Decisions
//-------------------------------------------------------------------------
#define E100_82557_A_STEP 1
#define E100_82557_B_STEP 2
#define E100_82557_C_STEP 3
#define E100_82558_A_STEP 4
#define E100_82558_B_STEP 5
#define E100_82559_A_STEP 6
#define E100_82559_B_STEP 7
#define E100_82559_C_STEP 8
#define E100_82559ER_A_STEP 9
//-------------------------------------------------------------------------
// D100 PORT functions -- lower 4 bits
//-------------------------------------------------------------------------
#define PORT_SOFTWARE_RESET 0
#define PORT_SELFTEST 1
#define PORT_SELECTIVE_RESET 2
#define PORT_DUMP 3
//-------------------------------------------------------------------------
// CSR field definitions -- Offsets from CSR base
//-------------------------------------------------------------------------
#define SCB_STATUS_LOW_BYTE 0x0
#define SCB_STATUS_HIGH_BYTE 0x1
#define SCB_COMMAND_LOW_BYTE 0x2
#define SCB_COMMAND_HIGH_BYTE 0x3
#define SCB_GENERAL_POINTER 0x4
#define CSR_PORT_LOW_WORD 0x8
#define CSR_PORT_HIGH_WORD 0x0a
#define CSR_FLASH_CONTROL_REG 0x0c
#define CSR_EEPROM_CONTROL_REG 0x0e
#define CSR_MDI_CONTROL_LOW_WORD 0x10
#define CSR_MDI_CONTROL_HIGH_WORD 0x12
//-------------------------------------------------------------------------
// SCB Status Word bit definitions
//-------------------------------------------------------------------------
//- Interrupt status fields
#define SCB_STATUS_MASK BIT_12_15 // ACK Mask
#define SCB_STATUS_CX BIT_15 // CU Completed Action Cmd
#define SCB_STATUS_FR BIT_14 // RU Received A Frame
#define SCB_STATUS_CNA BIT_13 // CU Became Inactive (IDLE)
#define SCB_STATUS_RNR BIT_12 // RU Became Not Ready
#define SCB_STATUS_MDI BIT_11 // MDI read or write done
#define SCB_STATUS_SWI BIT_10 // Software generated interrupt
//- Interrupt ACK fields
#define SCB_ACK_MASK (BIT_9 | BIT_12_15 | BIT_8) // ACK Mask
#define SCB_ALL_INTERRUPT_BITS BIT_8_15 // if all the bits are set, no interrupt to be served
#define SCB_ACK_CX BIT_15 // CU Completed Action Cmd
#define SCB_ACK_FR BIT_14 // RU Received A Frame
#define SCB_ACK_CNA BIT_13 // CU Became Inactive (IDLE)
#define SCB_ACK_RNR BIT_12 // RU Became Not Ready
#define SCB_ACK_MDI BIT_11 // MDI read or write done
#define SCB_ACK_SWI BIT_10 // Software generated interrupt
#define SCB_ACK_ER BIT_9 // Early Receive interrupt
#define SCB_ACK_FCP BIT_8 // Flow Control Pause interrupt
//- CUS Fields
#define SCB_CUS_MASK (BIT_6 | BIT_7) // CUS 2-bit Mask
#define SCB_CUS_IDLE 0 // CU Idle
#define SCB_CUS_SUSPEND BIT_6 // CU Suspended
#define SCB_CUS_ACTIVE BIT_7 // CU Active
//- RUS Fields
#define SCB_RUS_IDLE 0 // RU Idle
#define SCB_RUS_MASK BIT_2_5 // RUS 4-bit Mask
#define SCB_RUS_SUSPEND BIT_2 // RU Suspended
#define SCB_RUS_NO_RESOURCES BIT_3 // RU Out Of Resources
#define SCB_RUS_READY BIT_4 // RU Ready
#define SCB_RUS_SUSP_NO_RBDS (BIT_2 | BIT_5) // RU No More RBDs
#define SCB_RUS_NO_RBDS (BIT_3 | BIT_5) // RU No More RBDs
#define SCB_RUS_READY_NO_RBDS (BIT_4 | BIT_5) // RU Ready, No RBDs
//-------------------------------------------------------------------------
// SCB Command Word bit definitions
//-------------------------------------------------------------------------
//- CUC fields
#define SCB_CUC_MASK BIT_4_6 // CUC 3-bit Mask
#define SCB_CUC_START BIT_4 // CU Start
#define SCB_CUC_RESUME BIT_5 // CU Resume
#define SCB_CUC_DUMP_ADDR BIT_6 // CU Dump Counters Address
#define SCB_CUC_DUMP_STAT (BIT_4 | BIT_6) // CU Dump statistics counters
#define SCB_CUC_LOAD_BASE (BIT_5 | BIT_6) // Load the CU base
#define SCB_CUC_DUMP_RST_STAT BIT_4_6 // CU Dump and reset statistics counters
#define SCB_CUC_STATIC_RESUME (BIT_5 | BIT_7) // CU Static Resume
//- RUC fields
#define SCB_RUC_MASK BIT_0_2 // RUC 3-bit Mask
#define SCB_RUC_START BIT_0 // RU Start
#define SCB_RUC_RESUME BIT_1 // RU Resume
#define SCB_RUC_ABORT BIT_2 // RU Abort
#define SCB_RUC_LOAD_HDS (BIT_0 | BIT_2) // Load RFD Header Data Size
#define SCB_RUC_LOAD_BASE (BIT_1 | BIT_2) // Load the RU base
#define SCB_RUC_RBD_RESUME BIT_0_2 // RBD resume
// Interrupt fields (assuming byte addressing)
#define SCB_INT_MASK BIT_0 // Mask interrupts
#define SCB_SOFT_INT BIT_1 // Generate a software interrupt
//-------------------------------------------------------------------------
// EEPROM bit definitions
//-------------------------------------------------------------------------
//- EEPROM control register bits
#define EN_TRNF 0x10 // Enable turnoff
#define EEDO 0x08 // EEPROM data out
#define EEDI 0x04 // EEPROM data in (set for writing data)
#define EECS 0x02 // EEPROM chip select (1=high, 0=low)
#define EESK 0x01 // EEPROM shift clock (1=high, 0=low)
//- EEPROM opcodes
#define EEPROM_READ_OPCODE 06
#define EEPROM_WRITE_OPCODE 05
#define EEPROM_ERASE_OPCODE 07
#define EEPROM_EWEN_OPCODE 19 // Erase/write enable
#define EEPROM_EWDS_OPCODE 16 // Erase/write disable
//- EEPROM data locations
#define EEPROM_NODE_ADDRESS_BYTE_0 0
#define EEPROM_FLAGS_WORD_3 3
#define EEPROM_FLAG_10MC BIT_0
#define EEPROM_FLAG_100MC BIT_1
//-------------------------------------------------------------------------
// MDI Control register bit definitions
//-------------------------------------------------------------------------
#define MDI_DATA_MASK BIT_0_15 // MDI Data port
#define MDI_REG_ADDR BIT_16_20 // which MDI register to read/write
#define MDI_PHY_ADDR BIT_21_25 // which PHY to read/write
#define MDI_PHY_OPCODE BIT_26_27 // MDI opcode (read or write)
#define MDI_PHY_READY BIT_28 // PHY is ready for another MDI cycle
#define MDI_PHY_INT_ENABLE BIT_29 // Assert INT at MDI cycle completion
//-------------------------------------------------------------------------
// MDI Control register opcode definitions
//-------------------------------------------------------------------------
#define MDI_WRITE 1 // Phy Write
#define MDI_READ 2 // Phy read
//-------------------------------------------------------------------------
// D100 Action Commands
//-------------------------------------------------------------------------
#define CB_NOP 0
#define CB_IA_ADDRESS 1
#define CB_CONFIGURE 2
#define CB_MULTICAST 3
#define CB_TRANSMIT 4
#define CB_LOAD_MICROCODE 5
#define CB_DUMP 6
#define CB_DIAGNOSE 7
//-------------------------------------------------------------------------
// Command Block (CB) Field Definitions
//-------------------------------------------------------------------------
//- CB Command Word
#define CB_EL_BIT BIT_15 // CB EL Bit
#define CB_S_BIT BIT_14 // CB Suspend Bit
#define CB_I_BIT BIT_13 // CB Interrupt Bit
#define CB_TX_SF_BIT BIT_3 // TX CB Flexible Mode
#define CB_CMD_MASK BIT_0_2 // CB 3-bit CMD Mask
//- CB Status Word
#define CB_STATUS_MASK BIT_12_15 // CB Status Mask (4-bits)
#define CB_STATUS_COMPLETE BIT_15 // CB Complete Bit
#define CB_STATUS_OK BIT_13 // CB OK Bit
#define CB_STATUS_UNDERRUN BIT_12 // CB A Bit
#define CB_STATUS_FAIL BIT_11 // CB Fail (F) Bit
//misc command bits
#define CB_TX_EOF_BIT BIT_15 // TX CB/TBD EOF Bit
//-------------------------------------------------------------------------
// Config CB Parameter Fields
//-------------------------------------------------------------------------
#define CB_CFIG_BYTE_COUNT 22 // 22 config bytes
#define CB_SHORT_CFIG_BYTE_COUNT 8 // 8 config bytes
// byte 0 bit definitions
#define CB_CFIG_BYTE_COUNT_MASK BIT_0_5 // Byte count occupies bit 5-0
// byte 1 bit definitions
#define CB_CFIG_RXFIFO_LIMIT_MASK BIT_0_4 // RxFifo limit mask
#define CB_CFIG_TXFIFO_LIMIT_MASK BIT_4_7 // TxFifo limit mask
// byte 3 bit definitions --
#define CB_CFIG_B3_MWI_ENABLE BIT_0 // Memory Write Invalidate Enable Bit
// byte 4 bit definitions
#define CB_CFIG_RX_MIN_DMA_MASK BIT_0_6 // Rx minimum DMA count mask
// byte 5 bit definitions
#define CB_CFIG_TX_MIN_DMA_MASK BIT_0_6 // Tx minimum DMA count mask
#define CB_CFIG_DMBC_EN BIT_7 // Enable Tx/Rx minimum DMA counts
// byte 6 bit definitions
#define CB_CFIG_LATE_SCB BIT_0 // Update SCB After New Tx Start
#define CB_CFIG_TNO_INT BIT_2 // Tx Not OK Interrupt
#define CB_CFIG_CI_INT BIT_3 // Command Complete Interrupt
#define CB_CFIG_SAVE_BAD_FRAMES BIT_7 // Save Bad Frames Enabled
// byte 7 bit definitions
#define CB_CFIG_DISC_SHORT_FRAMES BIT_0 // Discard Short Frames
#define CB_CFIG_URUN_RETRY BIT_1_2 // Underrun Retry Count
// byte 8 bit definitions
#define CB_CFIG_503_MII BIT_0 // 503 vs. MII mode
// byte 9 bit definitions -- pre-defined all zeros
// byte 10 bit definitions
#define CB_CFIG_NO_SRCADR BIT_3 // No Source Address Insertion
#define CB_CFIG_PREAMBLE_LEN BIT_4_5 // Preamble Length
#define CB_CFIG_LOOPBACK_MODE BIT_6_7 // Loopback Mode
// byte 11 bit definitions
#define CB_CFIG_LINEAR_PRIORITY BIT_0_2 // Linear Priority
// byte 12 bit definitions
#define CB_CFIG_LINEAR_PRI_MODE BIT_0 // Linear Priority mode
#define CB_CFIG_IFS_MASK BIT_4_7 // CSMA level Interframe Spacing mask
// byte 13 bit definitions -- pre-defined all zeros
// byte 14 bit definitions -- pre-defined 0xf2
// byte 15 bit definitions
#define CB_CFIG_PROMISCUOUS BIT_0 // Promiscuous Mode Enable
#define CB_CFIG_BROADCAST_DIS BIT_1 // Broadcast Mode Disable
#define CB_CFIG_CRS_OR_CDT BIT_7 // CRS Or CDT
// byte 16 bit definitions -- pre-defined all zeros
// byte 17 bit definitions -- pre-defined 0x40
// byte 18 bit definitions
#define CB_CFIG_STRIPPING BIT_0 // Stripping Disabled
#define CB_CFIG_PADDING BIT_1 // Padding Disabled
#define CB_CFIG_CRC_IN_MEM BIT_2 // Transfer CRC To Memory
// byte 19 bit definitions
#define CB_CFIG_FORCE_FDX BIT_6 // Force Full Duplex
#define CB_CFIG_FDX_ENABLE BIT_7 // Full Duplex Enabled
// byte 20 bit definitions
#define CB_CFIG_MULTI_IA BIT_6 // Multiple IA Addr
// byte 21 bit definitions
#define CB_CFIG_MULTICAST_ALL BIT_3 // Multicast All
//-------------------------------------------------------------------------
// Receive Frame Descriptor Fields
//-------------------------------------------------------------------------
//- RFD Status Bits
#define RFD_RECEIVE_COLLISION BIT_0 // Collision detected on Receive
#define RFD_IA_MATCH BIT_1 // Indv Address Match Bit
#define RFD_RX_ERR BIT_4 // RX_ERR pin on Phy was set
#define RFD_FRAME_TOO_SHORT BIT_7 // Receive Frame Short
#define RFD_DMA_OVERRUN BIT_8 // Receive DMA Overrun
#define RFD_NO_RESOURCES BIT_9 // No Buffer Space
#define RFD_ALIGNMENT_ERROR BIT_10 // Alignment Error
#define RFD_CRC_ERROR BIT_11 // CRC Error
#define RFD_STATUS_OK BIT_13 // RFD OK Bit
#define RFD_STATUS_COMPLETE BIT_15 // RFD Complete Bit
//- RFD Command Bits
#define RFD_EL_BIT BIT_15 // RFD EL Bit
#define RFD_S_BIT BIT_14 // RFD Suspend Bit
#define RFD_H_BIT BIT_4 // Header RFD Bit
#define RFD_SF_BIT BIT_3 // RFD Flexible Mode
//- RFD misc bits
#define RFD_EOF_BIT BIT_15 // RFD End-Of-Frame Bit
#define RFD_F_BIT BIT_14 // RFD Buffer Fetch Bit
#define RFD_ACT_COUNT_MASK BIT_0_13 // RFD Actual Count Mask
#define RFD_HEADER_SIZE 0x10 // Size of RFD Header (16 bytes)
//-------------------------------------------------------------------------
// Receive Buffer Descriptor Fields
//-------------------------------------------------------------------------
#define RBD_EOF_BIT BIT_15 // RBD End-Of-Frame Bit
#define RBD_F_BIT BIT_14 // RBD Buffer Fetch Bit
#define RBD_ACT_COUNT_MASK BIT_0_13 // RBD Actual Count Mask
#define SIZE_FIELD_MASK BIT_0_13 // Size of the associated buffer
#define RBD_EL_BIT BIT_15 // RBD EL Bit
//-------------------------------------------------------------------------
// Size Of Dump Buffer
//-------------------------------------------------------------------------
#define DUMP_BUFFER_SIZE 600 // size of the dump buffer
//-------------------------------------------------------------------------
// Self Test Results
//-------------------------------------------------------------------------
#define CB_SELFTEST_FAIL_BIT BIT_12
#define CB_SELFTEST_DIAG_BIT BIT_5
#define CB_SELFTEST_REGISTER_BIT BIT_3
#define CB_SELFTEST_ROM_BIT BIT_2
#define CB_SELFTEST_ERROR_MASK ( \
CB_SELFTEST_FAIL_BIT | CB_SELFTEST_DIAG_BIT | \
CB_SELFTEST_REGISTER_BIT | CB_SELFTEST_ROM_BIT)
//-------------------------------------------------------------------------
// Driver Configuration Default Parameters for the 557
// Note: If the driver uses any defaults that are different from the chip's
// defaults, it will be noted below
//-------------------------------------------------------------------------
// Byte 0 (byte count) default
#define CB_557_CFIG_DEFAULT_PARM0 CB_CFIG_BYTE_COUNT
// Byte 1 (fifo limits) default
#define DEFAULT_TX_FIFO_LIMIT 0x08
#define DEFAULT_RX_FIFO_LIMIT 0x08
#define CB_557_CFIG_DEFAULT_PARM1 0x88
// Byte 2 (IFS) default
#define CB_557_CFIG_DEFAULT_PARM2 0x00
// Byte 3 (reserved) default
#define CB_557_CFIG_DEFAULT_PARM3 0x00
// Byte 4 (Rx DMA min count) default
#define CB_557_CFIG_DEFAULT_PARM4 0x00
// Byte 5 (Tx DMA min count, DMA min count enable) default
#define CB_557_CFIG_DEFAULT_PARM5 0x00
// Byte 6 (Late SCB, TNO int, CI int, Save bad frames) default
#define CB_557_CFIG_DEFAULT_PARM6 0x32
// Byte 7 (Discard short frames, underrun retry) default
// note: disc short frames will be enabled
#define DEFAULT_UNDERRUN_RETRY 0x01
#define CB_557_CFIG_DEFAULT_PARM7 0x01
// Byte 8 (MII or 503) default
// note: MII will be the default
#define CB_557_CFIG_DEFAULT_PARM8 0x01
// Byte 9 - Power management for 82558B, 82559
#define CB_WAKE_ON_LINK_BYTE9 0x20
#define CB_WAKE_ON_ARP_PKT_BYTE9 0x40
#define CB_557_CFIG_DEFAULT_PARM9 0
// Byte 10 (src addr insertion, preamble, loopback) default
#define CB_557_CFIG_DEFAULT_PARM10 0x2e
// Byte 11 (linear priority) default
#define CB_557_CFIG_DEFAULT_PARM11 0x00
// Byte 12 (IFS,linear priority mode) default
#define CB_557_CFIG_DEFAULT_PARM12 0x60
// Byte 13 (reserved) default
#define CB_557_CFIG_DEFAULT_PARM13 0x00
// Byte 14 (reserved) default
#define CB_557_CFIG_DEFAULT_PARM14 0xf2
// Byte 15 (promiscuous, broadcast, CRS/CDT) default
#define CB_557_CFIG_DEFAULT_PARM15 0xea
// Byte 16 (reserved) default
#define CB_557_CFIG_DEFAULT_PARM16 0x00
// Byte 17 (reserved) default
#define CB_557_CFIG_DEFAULT_PARM17 0x40
// Byte 18 (Stripping, padding, Rcv CRC in mem) default
// note: padding will be enabled
#define CB_557_CFIG_DEFAULT_PARM18 0xf2
// Byte 19 (reserved) default
// note: full duplex is enabled if FDX# pin is 0
#define CB_557_CFIG_DEFAULT_PARM19 0x80
// Byte 20 (multi-IA) default
#define CB_557_CFIG_DEFAULT_PARM20 0x3f
// Byte 21 (multicast all) default
#define CB_557_CFIG_DEFAULT_PARM21 0x05
#pragma pack(1)
//-------------------------------------------------------------------------
// Ethernet Frame Structure
//-------------------------------------------------------------------------
//- Ethernet 6-byte Address
typedef struct _ETH_ADDRESS_STRUC {
UCHAR EthNodeAddress[ETHERNET_ADDRESS_LENGTH];
} ETH_ADDRESS_STRUC, *PETH_ADDRESS_STRUC;
//- Ethernet 14-byte Header
typedef struct _ETH_HEADER_STRUC {
UCHAR Destination[ETHERNET_ADDRESS_LENGTH];
UCHAR Source[ETHERNET_ADDRESS_LENGTH];
USHORT TypeLength;
} ETH_HEADER_STRUC, *PETH_HEADER_STRUC;
//- Ethernet Buffer (Including Ethernet Header) for Transmits
typedef struct _ETH_TX_BUFFER_STRUC {
ETH_HEADER_STRUC TxMacHeader;
UCHAR TxBufferData[(TCB_BUFFER_SIZE - sizeof(ETH_HEADER_STRUC))];
} ETH_TX_BUFFER_STRUC, *PETH_TX_BUFFER_STRUC;
typedef struct _ETH_RX_BUFFER_STRUC {
ETH_HEADER_STRUC RxMacHeader;
UCHAR RxBufferData[(RCB_BUFFER_SIZE - sizeof(ETH_HEADER_STRUC))];
} ETH_RX_BUFFER_STRUC, *PETH_RX_BUFFER_STRUC;
//-------------------------------------------------------------------------
// 82557 Data Structures
//-------------------------------------------------------------------------
//-------------------------------------------------------------------------
// Self test
//-------------------------------------------------------------------------
typedef struct _SELF_TEST_STRUC {
ULONG StSignature; // Self Test Signature
ULONG StResults; // Self Test Results
} SELF_TEST_STRUC, *PSELF_TEST_STRUC;
//-------------------------------------------------------------------------
// Control/Status Registers (CSR)
//-------------------------------------------------------------------------
typedef struct _CSR_STRUC {
USHORT ScbStatus; // SCB Status register
UCHAR ScbCommandLow; // SCB Command register (low byte)
UCHAR ScbCommandHigh; // SCB Command register (high byte)
ULONG ScbGeneralPointer; // SCB General pointer
ULONG Port; // PORT register
USHORT FlashControl; // Flash Control register
USHORT EepromControl; // EEPROM control register
ULONG MDIControl; // MDI Control Register
ULONG RxDMAByteCount; // Receive DMA Byte count register
} CSR_STRUC, *PCSR_STRUC;
//-------------------------------------------------------------------------
// Error Counters
//-------------------------------------------------------------------------
typedef struct _ERR_COUNT_STRUC {
ULONG XmtGoodFrames; // Good frames transmitted
ULONG XmtMaxCollisions; // Fatal frames -- had max collisions
ULONG XmtLateCollisions; // Fatal frames -- had a late coll.
ULONG XmtUnderruns; // Transmit underruns (fatal or re-transmit)
ULONG XmtLostCRS; // Frames transmitted without CRS
ULONG XmtDeferred; // Deferred transmits
ULONG XmtSingleCollision; // Transmits that had 1 and only 1 coll.
ULONG XmtMultCollisions; // Transmits that had multiple coll.
ULONG XmtTotalCollisions; // Transmits that had 1+ collisions.
ULONG RcvGoodFrames; // Good frames received
ULONG RcvCrcErrors; // Aligned frames that had a CRC error
ULONG RcvAlignmentErrors; // Receives that had alignment errors
ULONG RcvResourceErrors; // Good frame dropped due to lack of resources
ULONG RcvOverrunErrors; // Overrun errors - bus was busy
ULONG RcvCdtErrors; // Received frames that encountered coll.
ULONG RcvShortFrames; // Received frames that were too short
ULONG CommandComplete; // A005h indicates cmd completion
} ERR_COUNT_STRUC, *PERR_COUNT_STRUC;
//-------------------------------------------------------------------------
// Command Block (CB) Generic Header Structure
//-------------------------------------------------------------------------
typedef struct _CB_HEADER_STRUC {
USHORT CbStatus; // Command Block Status
USHORT CbCommand; // Command Block Command
ULONG CbLinkPointer; // Link To Next CB
} CB_HEADER_STRUC, *PCB_HEADER_STRUC;
//-------------------------------------------------------------------------
// NOP Command Block (NOP_CB)
//-------------------------------------------------------------------------
typedef struct _NOP_CB_STRUC {
CB_HEADER_STRUC NopCBHeader;
} NOP_CB_STRUC, *PNOP_CB_STRUC;
//-------------------------------------------------------------------------
// Individual Address Command Block (IA_CB)
//-------------------------------------------------------------------------
typedef struct _IA_CB_STRUC {
CB_HEADER_STRUC IaCBHeader;
UCHAR IaAddress[ETHERNET_ADDRESS_LENGTH];
} IA_CB_STRUC, *PIA_CB_STRUC;
//-------------------------------------------------------------------------
// Configure Command Block (CONFIG_CB)
//-------------------------------------------------------------------------
typedef struct _CONFIG_CB_STRUC {
CB_HEADER_STRUC ConfigCBHeader;
UCHAR ConfigBytes[CB_CFIG_BYTE_COUNT];
} CONFIG_CB_STRUC, *PCONFIG_CB_STRUC;
//-------------------------------------------------------------------------
// MultiCast Command Block (MULTICAST_CB)
//-------------------------------------------------------------------------
typedef struct _MULTICAST_CB_STRUC {
CB_HEADER_STRUC McCBHeader;
USHORT McCount; // Number of multicast addresses
UCHAR McAddress[(ETHERNET_ADDRESS_LENGTH * MAX_MULTICAST_ADDRESSES)];
} MULTICAST_CB_STRUC, *PMULTICAST_CB_STRUC;
//-------------------------------------------------------------------------
// WakeUp Filter Command Block (FILTER_CB)
//-------------------------------------------------------------------------
typedef struct _FILTER_CB_STRUC {
CB_HEADER_STRUC FilterCBHeader;
ULONG Pattern[16];
} FILTER_CB_STRUC, *PFILTER_CB_STRUC;
//-------------------------------------------------------------------------
// Dump Command Block (DUMP_CB)
//-------------------------------------------------------------------------
typedef struct _DUMP_CB_STRUC {
CB_HEADER_STRUC DumpCBHeader;
ULONG DumpAreaAddress; // Dump Buffer Area Address
} DUMP_CB_STRUC, *PDUMP_CB_STRUC;
//-------------------------------------------------------------------------
// Dump Area structure definition
//-------------------------------------------------------------------------
typedef struct _DUMP_AREA_STRUC {
UCHAR DumpBuffer[DUMP_BUFFER_SIZE];
} DUMP_AREA_STRUC, *PDUMP_AREA_STRUC;
//-------------------------------------------------------------------------
// Diagnose Command Block (DIAGNOSE_CB)
//-------------------------------------------------------------------------
typedef struct _DIAGNOSE_CB_STRUC {
CB_HEADER_STRUC DiagCBHeader;
} DIAGNOSE_CB_STRUC, *PDIAGNOSE_CB_STRUC;
//-------------------------------------------------------------------------
// Transmit Command Block (TxCB)
//-------------------------------------------------------------------------
typedef struct _GENERIC_TxCB {
CB_HEADER_STRUC TxCbHeader;
ULONG TxCbTbdPointer; // TBD address
USHORT TxCbCount; // Data Bytes In TCB past header
UCHAR TxCbThreshold; // TX Threshold for FIFO Extender
UCHAR TxCbTbdNumber;
ETH_TX_BUFFER_STRUC TxCbData;
ULONG pad0;
ULONG pad1;
ULONG pad2;
ULONG pad3;
} TXCB_STRUC, *PTXCB_STRUC;
//-------------------------------------------------------------------------
// Transmit Buffer Descriptor (TBD)
//-------------------------------------------------------------------------
typedef struct _TBD_STRUC {
ULONG TbdBufferAddress; // Physical Transmit Buffer Address
unsigned TbdCount :14;
unsigned :1 ; // always 0
unsigned EndOfList:1 ; // EL bit in Tbd
unsigned :16; // field that is always 0's in a TBD
} TBD_STRUC, *PTBD_STRUC;
//-------------------------------------------------------------------------
// Receive Frame Descriptor (RFD)
//-------------------------------------------------------------------------
typedef struct _RFD_STRUC {
CB_HEADER_STRUC RfdCbHeader;
ULONG RfdRbdPointer; // Receive Buffer Descriptor Addr
USHORT RfdActualCount; // Number Of Bytes Received
USHORT RfdSize; // Number Of Bytes In RFD
ETH_RX_BUFFER_STRUC RfdBuffer; // Data buffer in RFD
} RFD_STRUC, *PRFD_STRUC;
//-------------------------------------------------------------------------
// Receive Buffer Descriptor (RBD)
//-------------------------------------------------------------------------
typedef struct _RBD_STRUC {
USHORT RbdActualCount; // Number Of Bytes Received
USHORT RbdFiller;
ULONG RbdLinkAddress; // Link To Next RBD
ULONG RbdRcbAddress; // Receive Buffer Address
USHORT RbdSize; // Receive Buffer Size
USHORT RbdFiller1;
} RBD_STRUC, *PRBD_STRUC;
#pragma pack()
//-------------------------------------------------------------------------
// 82557 PCI Register Definitions
// Refer To The PCI Specification For Detailed Explanations
//-------------------------------------------------------------------------
//- Register Offsets
#define PCI_VENDOR_ID_REGISTER 0x00 // PCI Vendor ID Register
#define PCI_DEVICE_ID_REGISTER 0x02 // PCI Device ID Register
#define PCI_CONFIG_ID_REGISTER 0x00 // PCI Configuration ID Register
#define PCI_COMMAND_REGISTER 0x04 // PCI Command Register
#define PCI_STATUS_REGISTER 0x06 // PCI Status Register
#define PCI_REV_ID_REGISTER 0x08 // PCI Revision ID Register
#define PCI_CLASS_CODE_REGISTER 0x09 // PCI Class Code Register
#define PCI_CACHE_LINE_REGISTER 0x0C // PCI Cache Line Register
#define PCI_LATENCY_TIMER 0x0D // PCI Latency Timer Register
#define PCI_HEADER_TYPE 0x0E // PCI Header Type Register
#define PCI_BIST_REGISTER 0x0F // PCI Built-In SelfTest Register
#define PCI_BAR_0_REGISTER 0x10 // PCI Base Address Register 0
#define PCI_BAR_1_REGISTER 0x14 // PCI Base Address Register 1
#define PCI_BAR_2_REGISTER 0x18 // PCI Base Address Register 2
#define PCI_BAR_3_REGISTER 0x1C // PCI Base Address Register 3
#define PCI_BAR_4_REGISTER 0x20 // PCI Base Address Register 4
#define PCI_BAR_5_REGISTER 0x24 // PCI Base Address Register 5
#define PCI_SUBVENDOR_ID_REGISTER 0x2C // PCI SubVendor ID Register
#define PCI_SUBDEVICE_ID_REGISTER 0x2E // PCI SubDevice ID Register
#define PCI_EXPANSION_ROM 0x30 // PCI Expansion ROM Base Register
#define PCI_INTERRUPT_LINE 0x3C // PCI Interrupt Line Register
#define PCI_INTERRUPT_PIN 0x3D // PCI Interrupt Pin Register
#define PCI_MIN_GNT_REGISTER 0x3E // PCI Min-Gnt Register
#define PCI_MAX_LAT_REGISTER 0x3F // PCI Max_Lat Register
#define PCI_NODE_ADDR_REGISTER 0x40 // PCI Node Address Register
//-------------------------------------------------------------------------
// PHY 100 MDI Register/Bit Definitions
//-------------------------------------------------------------------------
// MDI register set
#define MDI_CONTROL_REG 0x00 // MDI control register
#define MDI_STATUS_REG 0x01 // MDI Status register
#define PHY_ID_REG_1 0x02 // Phy identification reg (word 1)
#define PHY_ID_REG_2 0x03 // Phy identification reg (word 2)
#define AUTO_NEG_ADVERTISE_REG 0x04 // Auto-negotiation advertisement
#define AUTO_NEG_LINK_PARTNER_REG 0x05 // Auto-negotiation link partner ability
#define AUTO_NEG_EXPANSION_REG 0x06 // Auto-negotiation expansion
#define AUTO_NEG_NEXT_PAGE_REG 0x07 // Auto-negotiation next page transmit
#define EXTENDED_REG_0 0x10 // Extended reg 0 (Phy 100 modes)
#define EXTENDED_REG_1 0x14 // Extended reg 1 (Phy 100 error indications)
#define NSC_CONG_CONTROL_REG 0x17 // National (TX) congestion control
#define NSC_SPEED_IND_REG 0x19 // National (TX) speed indication
#define PHY_EQUALIZER_REG 0x1A // Register for the Phy Equalizer values
// MDI Control register bit definitions
#define MDI_CR_COLL_TEST_ENABLE BIT_7 // Collision test enable
#define MDI_CR_FULL_HALF BIT_8 // FDX =1, half duplex =0
#define MDI_CR_RESTART_AUTO_NEG BIT_9 // Restart auto negotiation
#define MDI_CR_ISOLATE BIT_10 // Isolate PHY from MII
#define MDI_CR_POWER_DOWN BIT_11 // Power down
#define MDI_CR_AUTO_SELECT BIT_12 // Auto speed select enable
#define MDI_CR_10_100 BIT_13 // 0 = 10Mbs, 1 = 100Mbs
#define MDI_CR_LOOPBACK BIT_14 // 0 = normal, 1 = loopback
#define MDI_CR_RESET BIT_15 // 0 = normal, 1 = PHY reset
// MDI Status register bit definitions
#define MDI_SR_EXT_REG_CAPABLE BIT_0 // Extended register capabilities
#define MDI_SR_JABBER_DETECT BIT_1 // Jabber detected
#define MDI_SR_LINK_STATUS BIT_2 // Link Status -- 1 = link
#define MDI_SR_AUTO_SELECT_CAPABLE BIT_3 // Auto speed select capable
#define MDI_SR_REMOTE_FAULT_DETECT BIT_4 // Remote fault detect
#define MDI_SR_AUTO_NEG_COMPLETE BIT_5 // Auto negotiation complete
#define MDI_SR_10T_HALF_DPX BIT_11 // 10BaseT Half Duplex capable
#define MDI_SR_10T_FULL_DPX BIT_12 // 10BaseT full duplex capable
#define MDI_SR_TX_HALF_DPX BIT_13 // TX Half Duplex capable
#define MDI_SR_TX_FULL_DPX BIT_14 // TX full duplex capable
#define MDI_SR_T4_CAPABLE BIT_15 // T4 capable
// Auto-Negotiation advertisement register bit definitions
#define NWAY_AD_SELCTOR_FIELD BIT_0_4 // identifies supported protocol
#define NWAY_AD_ABILITY BIT_5_12 // technologies that are supported
#define NWAY_AD_10T_HALF_DPX BIT_5 // 10BaseT Half Duplex capable
#define NWAY_AD_10T_FULL_DPX BIT_6 // 10BaseT full duplex capable
#define NWAY_AD_TX_HALF_DPX BIT_7 // TX Half Duplex capable
#define NWAY_AD_TX_FULL_DPX BIT_8 // TX full duplex capable
#define NWAY_AD_T4_CAPABLE BIT_9 // T4 capable
#define NWAY_AD_REMOTE_FAULT BIT_13 // indicates local remote fault
#define NWAY_AD_RESERVED BIT_14 // reserved
#define NWAY_AD_NEXT_PAGE BIT_15 // Next page (not supported)
// Auto-Negotiation link partner ability register bit definitions
#define NWAY_LP_SELCTOR_FIELD BIT_0_4 // identifies supported protocol
#define NWAY_LP_ABILITY BIT_5_9 // technologies that are supported
#define NWAY_LP_REMOTE_FAULT BIT_13 // indicates partner remote fault
#define NWAY_LP_ACKNOWLEDGE BIT_14 // acknowledge
#define NWAY_LP_NEXT_PAGE BIT_15 // Next page (not supported)
// Auto-Negotiation expansion register bit definitions
#define NWAY_EX_LP_NWAY BIT_0 // link partner is NWAY
#define NWAY_EX_PAGE_RECEIVED BIT_1 // link code word received
#define NWAY_EX_NEXT_PAGE_ABLE BIT_2 // local is next page able
#define NWAY_EX_LP_NEXT_PAGE_ABLE BIT_3 // partner is next page able
#define NWAY_EX_PARALLEL_DET_FLT BIT_4 // parallel detection fault
#define NWAY_EX_RESERVED BIT_5_15 // reserved
// PHY 100 Extended Register 0 bit definitions
#define PHY_100_ER0_FDX_INDIC BIT_0 // 1 = FDX, 0 = half duplex
#define PHY_100_ER0_SPEED_INDIC BIT_1 // 1 = 100mbs, 0= 10mbs
#define PHY_100_ER0_WAKE_UP BIT_2 // Wake up DAC
#define PHY_100_ER0_RESERVED BIT_3_4 // Reserved
#define PHY_100_ER0_REV_CNTRL BIT_5_7 // Revision control (A step = 000)
#define PHY_100_ER0_FORCE_FAIL BIT_8 // Force Fail is enabled
#define PHY_100_ER0_TEST BIT_9_13 // Test field
#define PHY_100_ER0_LINKDIS BIT_14 // Link integrity test is disabled
#define PHY_100_ER0_JABDIS BIT_15 // Jabber function is disabled
// PHY 100 Extended Register 1 bit definitions
#define PHY_100_ER1_RESERVED BIT_0_8 // Reserved
#define PHY_100_ER1_CH2_DET_ERR BIT_9 // Channel 2 EOF detection error
#define PHY_100_ER1_MANCH_CODE_ERR BIT_10 // Manchester code error
#define PHY_100_ER1_EOP_ERR BIT_11 // EOP error
#define PHY_100_ER1_BAD_CODE_ERR BIT_12 // bad code error
#define PHY_100_ER1_INV_CODE_ERR BIT_13 // invalid code error
#define PHY_100_ER1_DC_BAL_ERR BIT_14 // DC balance error
#define PHY_100_ER1_PAIR_SKEW_ERR BIT_15 // Pair skew error
// PHY TX Register/Bit definitions
#define PHY_TX_STATUS_CTRL_REG 0x10
#define PHY_TX_POLARITY_MASK BIT_8 // register 10h bit 8 (the polarity bit)
#define PHY_TX_NORMAL_POLARITY 0 // register 10h bit 8 =0 (normal polarity)
#define PHY_TX_SPECIAL_CTRL_REG 0x11
#define AUTO_POLARITY_DISABLE BIT_4 // register 11h bit 4 (0=enable, 1=disable)
#define PHY_TX_REG_18 0x18 // Error counter register
// National Semiconductor TX phy congestion control register bit definitions
#define NSC_TX_CONG_TXREADY BIT_10 // Makes TxReady an input
#define NSC_TX_CONG_ENABLE BIT_8 // Enables congestion control
#define NSC_TX_CONG_F_CONNECT BIT_5 // Enables congestion control
// National Semiconductor TX phy speed indication register bit definitions
#define NSC_TX_SPD_INDC_SPEED BIT_6 // 0 = 100mb, 1=10mb
#endif // _E100_557_H
| Low | [ 0.46619217081850506, 32.75, 37.5 ] |
A Weekly Digest of the Mathematical Internet Tag Archives: kinetic sculpture Meet Nalini Joshi, a mathematician at the University of Sydney in Australia. I’ll let her introduce herself to you. Nalini has an amazing story and amazing passion. What does her video make you think? To hear more from Nalini, you can watch this talk she gave last month at the Women in Mathematics conference at the Isaac Newton Institute in Cambridge, England. Her talk is called “Mathematics and life: a personal journey.” You might also enjoy reading this interview or others on her media page. Nalini Joshi lecturing about solitons. I’d like to share three clumps of ideas that might give you a flavor for the math that Nalini enjoys doing. Most of it is way over my head, but I’m reaching for it! You can, too, if you try. Here’s clump number one. Two of the main objects that Nalini studies are dynamical systems and differential equations. You can think of a dynamical system as some objects that interact with each other and evolve over time. Think of the stars that Nalini described in the video, heading toward each other and tugging on each other. Differential equations are one way of describing these interactions in a mathematically precise way. They capture how tiny changes in one amount affect tiny changes in another amount. Vlasov billiards. To play around with some simple dynamical systems that can still produce some complex behaviors, check out dynamical-systems.org. Vlasov billiards was new to me. I think it’s really cool. The three-body problem is one of the oldest and most famous dynamical systems, and you can tinker around with examples of it here and here. There’s even a three-body problem game you can try playing. I’m not too crazy about it, but maybe you’ll enjoy it. It certainly gives you a sense for how chaotic the a three-body system can be! Nalini doesn’t study just any old dynamical systems. 
She’s particularly interested in ones where the chaotic parts of the system cancel each other out. Remember in the video how she described the stars that go past each other and don’t destroy each other, that are “transparent to each other”? Places where this happens in dynamical systems are called soliton solutions. They’re like steady waves that can pass through each other. Check out these four videos on solitons, each of which gives a different perspective on them. If you’re feeling adventurous, you could try reading this article called What is a Soliton? Making a water wave soliton in the Netherlands. A computer animation of interacting solitons. Japanese artist Takashi Suzuki tests a soliton to be used in a piece of performance art. Students studying and building solitons in South Africa. Level curves that are generalized Cassini curves.Also, it kind of looks like a four-body problem.(click for video) The second idea that Nalini uses that I’d like to share is level curves, or contours. Instead of studying complicated differential equations directly, it’s possible to get at them geometrically by studying families of curves—contours—that are produced by related algebraic equations. They’re just like the lines on a topographic map that mark off areas of equal elevation. Here’s a blog post by our friend Tim Chartier about colorful contour lines that arise from the differential equation governing heat flow. The temperature maps by Zachary Forest Johnson from a few weeks ago also used contour lines. And I found some great pieces of art that take contours as their inspiration. Click to check these out! The last idea clump I’ll share involves integrable systems. In an integrable system, it’s possible to uniquely “undo” what has happened—the rules are such that there’s only one possible past that could lead to the present. Most systems don’t work this way—you can’t tell what was in your refrigerator a week ago by looking at it now! 
Nalini mentions on her research page that “ideas on integrable differential equations also extend to difference equations, and even to extended versions of cellular automata.” I enjoyed reading this article about reversible cellular automata, especially the section about Critters.

What move did Black just play? A puzzle by Raymond Smullyan.

And this made me think of a really nifty kind of chess puzzle called retrograde analysis—a fancy way of saying “thinking backwards”. Instead of trying to find the best chess move to play next, you instead have to figure out what move was made to get to the position in the puzzle. Most chess positions could be arrived at through multiple moves, but the positions in these puzzles are specially designed so that only one move will work. There’s a huge index of this kind of problem at The Retrograde Analysis Corner, and there are some great starter problems on this page.

Maurice Ashley.

And perhaps you’d like to hear a little bit about thinking backwards from one of the greatest teachers of chess, Grandmaster Maurice Ashley. Check out his TED video here.

I hope you’ve enjoyed finding out about Nalini Joshi and the mathematics that she loves. I asked Nalini if she would do a Q&A with us, and she said yes! Do you have a question you’d like to ask her? Send it to us below and we’ll include it in the interview, which I send to Nalini in about a week.

UPDATE: We’re no longer accepting questions for Nalini, because the interview has happened! Check it out!
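The reversible cellular automata mentioned above are easy to experiment with yourself. Here is a minimal sketch of a second-order reversible automaton — my own illustrative construction in the spirit of Critters, not the actual Critters rule — where running the update backwards recovers the exact starting state, just like a retrograde chess puzzle that has only one possible past:

```python
# A toy reversible cellular automaton. The second-order XOR construction
# below is an illustrative choice, not the actual Critters rule.

def step(prev, curr):
    """One update of a second-order CA on a ring of cells.

    next cell = (XOR of the current cell's two neighbors) XOR previous cell.
    Because XOR undoes itself, the same rule applied to the swapped pair
    (curr, next) steps the system backwards in time.
    """
    n = len(curr)
    nxt = [(curr[(i - 1) % n] ^ curr[(i + 1) % n]) ^ prev[i] for i in range(n)]
    return curr, nxt  # the new (prev, curr) pair

# Start from a single live cell and run 50 steps forward...
a = [0] * 31          # state at time -1
b = [0] * 31          # state at time 0
b[15] = 1
prev, curr = a, b
for _ in range(50):
    prev, curr = step(prev, curr)

# ...then swap the pair and run the very same rule 50 more steps,
# which actually marches the system backwards in time.
back_prev, back_curr = curr, prev
for _ in range(50):
    back_prev, back_curr = step(back_prev, back_curr)

# We recover the starting configuration exactly: this system has
# only one possible past.
assert (back_curr, back_prev) == (a, b)
```

A chaotic but non-reversible rule (ordinary majority voting, say) would fail this round-trip test — information about the past gets destroyed, the refrigerator problem all over again.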
Drs. John Henrik Clarke and Yosef Ben Jochannan address the question, "What will we tell our children about our fight for freedom?" "We know something about time." :) "What will we tell our children about our role in history, how we survived to this point? When people say we were slaves, what do we tell our children about our slavery in particular and slavery in general? What do we tell our children so they can stop turning their face away and want to crawl under the table when you mention our condition in the world? When will they stop blaming themselves? You have not explained it to them because you have not explained it to yourselves. You have to know your oppressor and the nature of his oppression and what you did about it... At what time do we sit down and tell simple stories about our revolutionary heritage?... To know themselves our children must know what kind of society produced them."
A day after a 2-Judge Bench of the Supreme Court appointed Sanjay Hedge and Sadhana Ramachandran as ‘interlocutors’ to convince the Shaheen Bagh protestors to shift their demonstration to an alternate site, the protestors have said that it would weaken their movement. As per a New Indian Express report, on being asked about the traffic inconvenience caused to people, an IIT Delhi student, Asif Mujtaba, said that the road blockade was a pressure tactic. He added that the protestors could open up one side of the road while continuing their sit-in protest on the other side. Asif expressed his delight over the fact that the apex court acknowledged their ‘right to protest.’ He claimed, “There is a chance that the protesters reach a consensus with the court-appointed interlocutors”. Asif, however, hit out at the Delhi police for their disapproval of the use of women and children as shields in Shaheen Bagh. He also stated that the protestors may oblige if the Supreme Court orders them to shift to an alternative location. The report quoted another protestor, Nadeem Khan, saying that the pressure tactic would fail if they relocated to an alternative site.
The demonstration that is premised on Islamic supremacy and has encouraged blatant Hinduphobia has exceeded 2 months now. The highway connecting Faridabad- Delhi- Noida had been blocked by the protestors, leading to traffic woes and forcing locals to protest against them. A 4-month old child had died during these protests while several journalists had been manhandled while reporting on the Anti-CAA protests in the area. | Mid | [
0.5484536082474221,
33.25,
27.375
] |
This pattern includes layouts for a round table topper, a shaped long table runner, and placemats. Some of the leaves are dimensional and others are appliqued down. Make one or all to enjoy in your home. Designed by Vicky Lawrence
Some people who go looking for recyclable silver are applying this simple strategy... They go to a bank, buy rolls of coins and then sort through them, looking for coins from certain years that contain silver. If they can find just one or two old silver coins, they can recycle them and make money. And the more coin rolls they buy, the more money they make. At least, that’s the theory...
Romney surges in narrowing White House race on back of debate performance

CUYAHOGA FALLS, Ohio -- Mitt Romney supporters taunted Barack Obama with chants of "four more weeks" as their candidate surged into the lead in U.S. opinion polls, propelled by his debate win last week. With both candidates campaigning in perennial kingmaker state Ohio, top Obama aides put a brave face on the president's slide, insisting they had always known his re-election bid would be tough.

A flurry of new polls out Tuesday showed the delayed impact of Romney's debate triumph last week in Denver. For the first time since he accepted the Republican Party nomination, Romney topped the widely-read poll of polls conducted by the RealClearPolitics website, albeit by only 0.7 points. He led Obama by two points in daily tracking polls by Gallup and Investors Business Daily, but the pair were tied in another tracking poll, by Rasmussen.

"Today, there are 28 days before the election," Romney told a crowd of about 12,000 people in Cuyahoga Falls, his largest rally to date in Ohio. "I think the right chant ought to be for them: 'Four more weeks! Four more weeks!'" he added, in a play on the "Four more years" chant reserved for incumbents. The raucous supporters obliged, and Romney followed up by letting them know just how important the first debate was. "I actually think the people have heard what he had to say," Romney said of Obama, "and it's time for them to see him leave the White House and to say goodbye to him on Nov. 6."

Romney's rise in the polls in part reflected his tack toward the political center during the debate, a shift he continued on Tuesday by telling an Iowa newspaper he had no plans to introduce legislation restricting abortion. Romney did say, however, that he would restore a law ended by Obama that prohibits non-profit groups receiving federal government funds from providing abortions in other countries.
Meanwhile, at an Obama rally in Columbus, Ohio, the large crowd chanted "Four more years! Four more years!" after the president took the stage. "I need you ready to go to vote because we've got some work to do. We've got an election to win," Obama said. "Everything that we fought for in 2008 is on the line in 2012." Despite the clear bounce in Romney support, national polls are only one reflection of the race, and the campaigns are more interested in the eight or so swing states that will decide the election. Obama is up in most battlegrounds, though full data is yet to emerge on local races following the debate. And in a welcome respite for the president's camp, struggling to shake off the fallout of last week's listless debate performance, a CNN/ORC poll showed Obama holding steady in Ohio, leading 51 to 47 percent among likely voters.
A rabbit passed away this week. Some of my readers will say: Awww that’s a shame. Some of my readers will actually say: So what? It’s just a rabbit. Let me tell you about him. His name was Scooch. Scooch was a Mini Rex, like my own Miss Moo. He was also an abandoned bunny. No one knows for sure how old he was, but in 2009 he was noticed around a neighbourhood along with another rabbit. In March 2010, the other rabbit was found dead on the side of the road, hit by a car. A week later, Scooch was found on a family’s lawn, with his back legs not working. Eventually he found his way into the care of a lady named Lisa. It was determined that his back had been broken, and the little fellow was paralyzed. Most stories like this would end right here with me saying that the rabbit was put to sleep as he would never recover, and would have a poor quality of life. Interesting phrase, that… “quality of life”. Little Scooch was not put down, and the quality of his life was superb. He became the little bunny that could. Paralyzed? No problem… he grew to be strong in his front legs, and could “scooch” as fast and as good as any other bunny could hop. Being paralyzed he also had a diaper, and he never let that hold him back either. Scooch had not one but TWO girlfriends. They were as devoted to him as he was to them. When his mate Hattie became deathly ill, and could not stand up on her own, Scooch, the little bunny that could, helped nurse her back to health. He would prop her up so that she would not roll over so much, and would wash her face over and over so she would be soothed and comforted. Every year at BunFest, the annual outreach and fundraising event for Rabbit Rescue Inc, Scooch was there to show people that a disabled rabbit could be happy, and have an excellent quality of life. He inspired everyone who met him, including myself. His legacy lives on in the dozens of disabled rabbits that are being cared for due to the example he set.
He was a hero to hundreds of people, but for himself, he was just a happy boy who loved life, and reveled in what he COULD do. If he could no longer jump in the air (binky) like other bunnies, he would instead throw his head up and shake it to show his joy. Here are the remembrances of a few of Scooches human admirers. Jeanne says: It was truly special when I saw Scooch in the garden for the first time…not his first time, cause he was out as often as all the other bunnies, but rather the first time I saw him myself, in person. There he was in his well-fitting diaper, a fit designed to keep him dry, protect his fragile skin, and of course give the ultimate “glide” along the ground cover. I could see in no time, that he had not only adapted to his broken body, but he excelled at moving quickly around the property. His joy, adventure and curiosity shone through his very awkward gait. I could finally see why Lisa felt compelled to invite one more broken bunny into her home, she saw in him the capacity for joy, and knew that he deserved a chance to live his best life possible. If there was a doubt in my mind about Lisa’s choice, it disappeared that day. Scooch, you had the best life possible, and were loved by sister-bunnies, and all the humans who met you. You were such a great example of what could be, to those who may have bailed before their bunnies were ready to go. You deserved to know the kind of love, care and acceptance you got, as you now deserve to be free of your broken body, and are once again whole. Run free little one. Michelle says: I’ve seen videos of two-legged cats, two-legged dogs, pigs in wheelchairs, a goldfish in a sling (I swear!) and all of them, every single one, carried on with life. They had no self pity, no grudges, no depression, just a will to live and a love of life. For me, Scooch is the epitome of this very amazing animal ability. With no use of his back legs, that little guy lived on. 
He never let his paralysis get in the way of moving around, loving Hattie, playing, napping, lounging, eating. He never minded his condition, but we as humans grieved for him because of all of the things he couldn’t do. But animals don’t do that, and neither did Scooch. For me, he will always be a shining example of how to live life to its fullest, how to not let things beyond your control limit your dreams. His life was one of love, acceptance, perseverance and inspiration. I have included a link to my favourite video of Scooch – him binkying in the backyard on a warm summer day. There he is, a little bunny in a diaper, pulling himself through the silky grass, breathing the air, listening to all the sounds, and binkying! Shaking his little head as his only way to express his pure joy at that moment in time, blissfully unaware of the disadvantages that we felt pity for on his behalf. Wow, were we ever wasting our time ❤ http://www.youtube.com/watch?v=3k7VK5YwDJ4&sns=em Kimmy says: The one thing I remember most about Scooch is the incredible kisses he gives. I was visiting Lisa with Jeanne and we were all sitting, holding a bunny each while chatting. Scooch was nestled on my chest with my arm around him. He reached that sweet little face up and started giving me the softest little bunny kisses all up and down my cheek and chin. That little bunny, although disabled, had so much love to give. His body may not have worked for him, but his heart worked overtime with the love. 🙂 He will always have a special place in my heart. Susan says: They say lightning never strikes twice in the same place. It is a once in a lifetime event. For many people, meeting Scooch is a once in a lifetime shot of inspiration. How fitting it is that when he began his tenure as a Bruce Bunny, his name was Lightning. He is a remarkable rabbit, cared for by a remarkable woman. 
The bond that Lisa and Scooch share is enviable – rarely do you find such a true and pure example of unconditional love and dedication. Scooch was a casanova, he was a prankster, and he was happy. The Rainbow Bridge will never be the same…it will be better. The love that Scooch embodied was pure, it was deep, it was kind, and it was infectious. He cared for his girls, Hattie and Belle with the joy and unapologetic enthusiasm of a binky. He loved. He is love. He loved having his ears scratched. He loved his twig tunnel. He loved his lettuce box. He loved food. He DID NOT love Gleason’s visits from the other side of the fence. He loved treats. He loved doing half-head-shake binkies. He DID NOT love being held. He loved the hostas. He loved scooching up and down the hill. He loved farting. But most of all, he loved Lisa. Scooch loved playing tricks on people. His favourite trick was to sit in such a way that an unsuspecting person might become frantic and panicked truly believing that his leg had ACTUALLY fallen off. Just as this unsuspecting person would make a plea to Lisa to “come HERE!” He would put his leg back on and change the words in his speech bubble from “my leg fell off!” To “GOTCHA”! But I wouldn’t know anything about that. Scooch was a prankster, but by far his most common trick was stealing people’s hearts. The Rainbow Bridge will never be the same. It will be better. Scooch brings to the Bridge the love, compassion, light and energy that he picked up from the hundreds of people who have been touched by his story. But most of all, he brings to the Bridge a Scooch-heart full of love, appreciation, and peace for his soul-mate, Lisa. The world will never see another bunny just like Scooch. I guess it is true…Lightning really doesn’t strike twice in the same place. Over the past few weeks the strain of moving only on his front limbs really began to take its toll on his little body. 
He was about to lose all mobility and one of his two remaining limbs that worked had to be immobilized. More importantly, his spirit declined. Despite medication, he was in pain, and was not the joyous Scooch we all loved and admired so much. It was time. He had lived a life of SUCH high quality, but that quality had declined. The decision was made to help him painlessly to the other side. He may have been “just a bunny”, but he will be sorely missed, not only by his guardian Lisa and his mate Hattie, but also by me, and literally hundreds and hundreds of others. This was the story of Scooch, the little bunny that could. You’re in a better place now little man. We miss you.
32P-postlabeling detection of aromatic adducts in the white blood cell DNA of nonsmoking police officers. Atmosphere in urban areas may be polluted by a number of combustion sources, including industries, vehicle traffic, and residential heating. Traffic police constitute a group of workers that is highly exposed to urban pollutants, especially those from motor vehicle exhaust. We conducted a biomonitoring study to simultaneously measure in 34 nonsmoking police officers and in 36 nonsmoking office workers, as referents, the individual benzo(a)pyrene [B(a)P] exposure using personal samplers and the formation of DNA adducts in peripheral WBCs using 32P-postlabeling techniques. Our results show that the police officers were exposed to significantly higher levels of B(a)P than were referents (P < 0.0001). No seasonal variation of the atmospheric levels of B(a)P was found throughout the year. The median relative adduct labeling x 10(-8) values of the controls and exposed police officers were 0.94 (range, 0.1-3.7) and 1.3 (range, 0.1-5.5), respectively, using the nuclease P1 technique. Although the DNA adduct levels of police officers were globally higher than those of referents (P < 0.05), the difference was entirely due to the summer difference [median values 0.80 (range, 0.1-1.8) and 2.8 (range, 0.7-5.5), respectively (P < 0.001)]. In winter, the DNA adduct levels were substantially identical, and in midseason, there was only a very small increase in police officers, with respect to controls (statistically not significant). Moreover, a more significant seasonal variation of bulky aromatic DNA adduct levels was observed in WBC DNA samples of police officers (P < 0.05) compared to those of referents. The seasonal variation of bulky aromatic adduct levels could be correlated with the reported seasonal variation of aryl hydrocarbon hydroxylase inducibility in human lymphocytes. | High | [
0.6615384615384611,
32.25,
16.5
] |
The salmon run of sockeye salmon experienced a major recovery in the late 20th century, sometimes surpassing the Adams River as the greatest sockeye producer in the Fraser basin.[6] However, the river, wildlife, and nearby water sources is threatened by 10 million cubic meters of contaminated mine waste that escaped in August 2014.[7] | Mid | [
0.549494949494949,
34,
27.875
] |
IN THE COMMONWEALTH COURT OF PENNSYLVANIA

Jerome Marshall, Petitioner

v.

Workers’ Compensation Appeal Board (Easton Coach Company and Hartford Fire Insurance Company), Respondents

No. 541 C.D. 2018
Argued: March 14, 2019

BEFORE: HONORABLE RENÉE COHN JUBELIRER, Judge; HONORABLE PATRICIA A. McCULLOUGH, Judge; HONORABLE BONNIE BRIGANCE LEADBETTER, Senior Judge

OPINION NOT REPORTED

MEMORANDUM OPINION BY JUDGE COHN JUBELIRER

FILED: April 5, 2019

Jerome Marshall (Claimant) petitions for review of the Order of the Workers’ Compensation Appeal Board (Board) that affirmed the decision of a Workers’ Compensation Judge (WCJ) approving the calculations set forth by Easton Coach Company and its insurer, Hartford Fire Insurance Company, (together, Employer) in a Third-Party Settlement Agreement (Employer’s TPSA) and granting Employer’s Modification Petition based on those calculations. Claimant argues it was error to rely on Employer’s calculations because they included $153,982.45 from the settlement of his uninsured/underinsured motorist (UIM) claim that he asserts had been set aside exclusively to fund a Medicare Set-Aside Arrangement (MSA) on his behalf and, therefore, was not subject to subrogation under Section 319 of the Workers’ Compensation Act (Act), 77 P.S. § 671.[1] The MSA that Claimant asserts was established was created in contemplation of the settlement of his WC claim with Employer, as well as the settlement of the UIM claim. But, because there was no settlement of the WC claim, Employer remains primarily liable for medical treatment related to Claimant’s work injury. Therefore, the MSA, which was never completely funded, was not necessary to protect Medicare’s interest here. The Board, therefore, did not err in finding that the challenged amounts should be included in the Third-Party Recovery, and so remained subject to subrogation. However, subsequent to the Board’s Order, the Supreme Court decided Whitmoyer v.
Workers’ Compensation Appeal Board (Mountain Country Meats), 186 A.3d 947, 949 (Pa. 2018), which determined that employers cannot take a credit against a claimant’s medical benefits. We therefore vacate in part and remand for a determination of whether any recalculation is necessary.

I. Background

A. Facts

The facts in this matter are not disputed. Claimant, a bus driver, sustained numerous injuries to his lumbar and cervical spine[2] in a September 16, 2005 motor vehicle accident that occurred while driving Employer’s bus. (WCJ Decision, Findings of Fact (FOF) ¶¶ 1-3.) He has not returned to work due to those injuries, for which he continues to receive medical treatment.

[Footnote 1: Act of June 2, 1915, P.L. 736, as amended, 77 P.S. § 671.]

[Footnote 2: The accepted injuries were “[c]ervical sprain and disc herniation injuries” and “C5-6 herniation and L4-5 disc protrusion.” (WCJ Decision, Findings of Fact ¶¶ 2-3.)]

When Claimant began receiving Social Security Old Age Benefits, Employer offset his workers’ compensation (WC) wage loss benefits by his Old Age Benefits, thereby reducing his weekly benefit amount from $324.00 to $69.11. (Id. ¶ 23.) Claimant filed a third-party action against the driver of the vehicle that struck the bus, which resulted in a settlement of $35,000. (Id. ¶ 5.) Claimant also filed a UIM claim against Employer’s UIM carrier, for which he recovered $1.3 million in a July 23, 2015 settlement agreement (UIM settlement agreement). Of that amount, 33 1/3 percent, or $413,333.33, was paid to Claimant’s wife for loss of consortium. This left $886,666.67 in UIM settlement proceeds payable to Claimant, which the parties do not dispute are subject to subrogation under Section 319. In addition to these proceeds, Claimant received from the UIM carrier “$30,788.45 as seed money and $123,194.00 in the funding of an annuity to fund Claimant’s portion of a proposed . . . MSA.”[3] (Id. ¶ 6.) It is this $153,982.45 (the Disputed Amount) that is at issue in this appeal.
[Footnote 3: The UIM settlement agreement provided, relevantly, that the UIM carrier would pay

THIRTY THOUSAND SEVEN HUNDRED EIGHTY EIGHT DOLLARS AND FORTY FIVE CENTS [$30,788.45] TO A MEDICARE S[]ET-ASIDE ARRANGEMENT (“MSA”) THAT WILL BE ESTABLISHED ON BEHALF OF JEROME MARSHALL AND [TWELVE] (12) YEARLY PAYMENTS OF THIRTEEN THOUSAND NINE HUNDRED NINETY-FOUR DOLLARS AND SEVENTY FIVE CENTS ($13,994.75) EACH (totaling $167,937.00) (“Subsequent Payments”) TO BE PAID TO THE MSA as further outlined within the Addendum A below . . . payable to Jerome Marshall in trust to be in compliance with the Medicare Secondary Payer reimbursement rules and regulations . . . . (Reproduced Record (R.R.) at 68a (emphasis omitted, first alteration in the original).)

Addendum A sets forth the terms of the 12 yearly payments of $13,994.75 to Claimant from Pacific Life Insurance Company based on the annuity the UIM carrier purchased. (Id. at 73a-74a.)]

During settlement discussions, inquiries were made about what amount would be necessary to fund an MSA for Claimant’s future medical needs. On June 24, 2013, the Center for Medicare and Medicaid Services (CMS) issued a letter on a requested MSA application filed by Employer in anticipation of a potential settlement of Claimant’s WC claim. (R.R. at 116a-18a.) In that letter, CMS stated that any MSA for Claimant had to be funded in the amount of $335,874.00. (FOF ¶ 8.) If the MSA was to be funded by an annuity, CMS indicated that to reach the required amount, the seed money for the annuity had to be $68,377.00 and the annuity had to pay $19,106.00 for 14 years. (Id.) The CMS letter further indicated that “[a]pproval of this []MSA is not effective until a copy of the final executed workers’ compensation Settlement Agreement, which must include this approved []MSA amount is received by CMS . . . .” (Id. ¶ 9 (emphasis added); R.R. at 117a.)
Following CMS’s letter, Claimant requested review of the proposed MSA by Garretson Resolution Group (GRG). On July 1, 2015, GRG responded that it evaluated the need for an MSA based on the facts presented and concluded that an MSA was needed based on CMS’s guidelines, Claimant’s injuries, the damages, and the gross award. (FOF ¶ 12.) As part of its analysis, GRG stated that it understood that: Claimant’s “WC carrier has paid for injury-related care expenditures prior to the date of Settlement”; the proposed “MSA is expected to pay for injury-related care going forward of the date of Settlement”; and “the parties have resolved both the WC and [t]hird-[p]arty liability components.” (Id. ¶¶ 13-14 (emphasis added); R.R. at 104a-07a.) GRG explained that, in order to fund the MSA in accordance with CMS’s requirements based on the settlements of both the WC and third-party claims, the third-party settlement was to pay for 55 percent of the MSA, or $184,730.70, and Claimant’s WC carrier was responsible for funding 45 percent, or $151,143.30. (FOF ¶ 16.) However, both parties agree there has been no settlement of Claimant’s WC claim. After receiving GRG’s analysis, Claimant obtained a quote for an annuity in which the UIM carrier would pay Pacific Life Insurance Company (Pacific Life) $123,194.00 for an annuity that would pay $13,994.75 for 12 years, for a total payment of $167,937.00. (Id. ¶ 17.) The UIM carrier issued a check to Pacific Life for $123,194.00 to purchase the annuity. (Id. ¶ 18.) Following the settlement of the UIM claim, Employer requested that Claimant execute a TPSA, setting forth the information necessary to calculate Employer’s subrogation interest. Claimant did so (Claimant’s TPSA), setting forth the following: Total Third-Party Recovery - $881,857.98; Accrued WC Lien - $504,609.77; Expenses of Recovery - $451,844.44; and Balance of Recovery - $377,248.21. (R.R. at 140a.)
Based on those numbers, Claimant’s TPSA provided that Employer was liable for 51.24 percent of Claimant’s future benefits until the Balance of Recovery was exhausted. (Id.) Employer did not execute Claimant’s TPSA. Instead, Employer filed the Modification Petition asserting the parties were unable to agree as to the terms of a TPSA.

B. The WCJ’s Decision

The Modification Petition was assigned to a WCJ for disposition. Multiple hearings were held, at which argument, but no sworn testimony, was presented. Claimant presented documentary evidence, including Claimant’s TPSA, the CMS letter, GRG’s analysis, and the quote for the annuity purchased by the UIM carrier. Employer submitted Employer’s TPSA, which included the Disputed Amount for a total Third-Party Recovery of $1,035,840.40, a Balance of Recovery of $531,230.63, and a Reimbursement Rate of 43.62 percent. (Id. at 75a-76a.) Employer also presented copies of checks and the settlement agreements. Both parties submitted proposed findings of fact and briefs in support of their respective positions.
Because the WC settlement never occurred, the WCJ reasoned that the proposed MSA was not approved, and never came to fruition because such settlement was required before the MSA became valid. (Id.) Next, the WCJ held that because no WC settlement occurred, Employer, not Medicare, remained ultimately liable to pay for the medical treatment for Claimant’s work injury, subject to Claimant’s paying a portion of his medical treatment during the “grace period” created by the Third- 6 Party Settlement,4 and there was no preemption issue. (Id. ¶¶ 34, 36-37.) The WCJ found that the principles of equity do not apply to Section 319 liens. (Id. ¶¶ 33-35.) Moreover, the WCJ pointed out that adding these funds in the total Third- Party Recovery places Claimant in the same position as all other claimants who settle a third-party claim and have to pay for a portion of his or her own benefits during the employer’s “grace period.” (Id. ¶ 35.) The WCJ noted that Employer’s calculation was not unjust because it was not seeking the entire value of the annuity, only the annuity’s purchase price, and Claimant would realize the difference between those amounts. (Id. ¶ 33.) Finally, the WCJ held that Cullen v. Pennsylvania Property and Casualty Insurance Guaranty Association, 760 A.2d 1198 (Pa. Cmwlth. 2000), was distinguishable as it involved a situation where the employer was statutorily barred from subrogating settlement funds received by a claimant from the Pennsylvania Property and Casualty Insurance Guaranty Association (Guaranty Association). (FOF ¶ 38.) The WCJ therefore determined that Employer met its burden of proof on the Modification Petition, concluding no valid MSA was established because no WC settlement had occurred and no such agreement was sent to CMS finalizing the creation of the MSA referenced in the CMS letter. (WCJ Decision, Conclusion of Law (COL) ¶ 2.) 
The WCJ held that Employer “has, at all times, remained responsible for the payment of reasonable, necessary and causally related medical expenses in relation to the Claimant’s accepted work . . . injury . . . .” (Id. ¶ 3.) Thus, the WCJ approved Employer’s TPSA and found that Employer would be 4 Pursuant to Section 319, any third-party recovery that exceeds an employer’s accrued lien is treated “as an advance payment of the employer’s future compensation obligation, thereby providing the employer with a ‘grace period’ from making compensation payments.” Suburban Delivery v. Workers’ Comp. Appeal Bd. (Fitzgerald), 858 A.2d 219, 223 (Pa. Cmwlth. 2004). 7 liable for 43.62 percent of Claimant’s future indemnity and medical benefits until the Balance of Recovery, $531,230.63, is exhausted. (Id. ¶ 4.) The WCJ reduced Claimant’s weekly benefit to $30.15 per week and indicated that Claimant would be responsible to pay 56.38 percent of his future medical expenses. (Id. ¶¶ 5-6.) Finally, the WCJ concluded: [t]he seed money and annuity payments being paid by Pacific Life may be utilized by Claimant in any manner he sees fit, as Medicare’s interests remain protected by the [Employer’s] ultimate liability to pay Claimant’s reasonable, necessary and causally related medical expenses until such time as those benefits are terminated or resolved by the funding of a valid MSA submitted to and approved by CMS. (Id. ¶ 8.) C. The Board’s Opinion Claimant appealed to the Board, challenging the WCJ’s determinations. The Board affirmed, “find[ing] no error in the WCJ’s reasoning and . . . [the WCJ] properly included the MSA funds in the overall amount that [Employer] was entitled to with its subrogation lien.” (Board Opinion (Op.) at 6.) The Board agreed that because no formal settlement of Claimant’s WC claim was executed, no MSA was formally created per the CMS letter. 
As no MSA was created, the Board held that the challenged funds remained a part of Claimant’s total Third-Party Settlement of which Employer had an absolute right to subrogate under Section 319. (Id. (citing Thompson v. Workers’ Comp. Appeal Bd. (USF & G Co.), 781 A.2d 1146, 1151 (Pa. 2001)).) Claimant now petitions this Court for review.

II. Claimant’s Appeal to this Court

A. Claimant’s Arguments

On appeal,[5] Claimant reiterates the arguments he made before the WCJ, which he asserts the WCJ and Board erred in rejecting.[6] Claimant argues the Disputed Amount was used to establish a valid MSA under federal law that was intended to protect Medicare’s interests by providing Claimant with funds to pay the future medical bills associated with his ongoing work injury. Claimant contends there is no requirement for his WC claim to have settled or for CMS to have approved the MSA for the MSA to be valid. Because a valid MSA was established, Claimant maintains including these funds in the total Third-Party Recovery and allowing their subrogation under Section 319 is unjust, unfair, and contrary to the humanitarian purpose of the Act and federal law, which preempts state law subrogation under these circumstances. Further, according to Claimant, he needed the MSA in order to settle his UIM claim and, because Employer’s WC carrier refused to fund the MSA, the UIM carrier agreed to do so. The WCJ’s conclusion that Claimant was free to use that money as he wishes conflicts with the terms of the UIM settlement agreement.
And, by approving subrogation of these funds, Claimant argues the WCJ gave Employer the full benefit not only of the Third-Party Settlement proceeds, but also the MSA, and by doing so, relieved Employer from paying for a substantial amount of Claimant’s future indemnity benefits and medical expenses related to his work injuries. Claimant also maintains the WCJ erred in including the Disputed Amount as a lump sum since Claimant would not receive the full benefit of the annuity for 12 years, which is manifestly unfair and prejudicial because it treats Employer more favorably than Claimant. Finally, Claimant asserts the rationale for subrogation under the Act and the prohibition against allowing employers to take a double offset through subrogation support his appeal. Citing Dale Manufacturing Company v. Bressi, 421 A.2d 653, 654 (Pa. 1980), Claimant argues that excluding the Disputed Amount from subrogation: does not allow him to receive a double recovery; Employer is not being compelled to pay WC benefits due to the negligence of a third party because the MSA relieves Employer of its liability to pay for Claimant’s future medical bills for that injury; and the third party, the UIM carrier, is not escaping liability for the negligence because it is that party that funded the MSA.

[5] This Court’s “review is limited to determining whether constitutional rights were violated, whether the adjudication is in accordance with the law[,] or whether necessary findings of fact are supported by substantial evidence.” City of Philadelphia v. Workers’ Comp. Appeal Bd. (Sherlock), 934 A.2d 156, 159 n.5 (Pa. Cmwlth. 2007).

[6] Claimant raises eight separate issues in his brief to this Court; however, his arguments on many of these issues overlap significantly. Accordingly, we have consolidated them into those discussed herein.
Claimant also argues, citing Cullen, that subrogation cannot be used by an employer to obtain a double offset, as doing so contradicts the Act’s humanitarian purpose. Under the WCJ’s erroneous interpretation of Cullen, Claimant contends, Employer benefits from the MSA, “since the total amount of the third[]party settlement agreement is increased, . . . [its] portion of the lien reimbursement [is] greater . . . .” (Claimant’s Brief (Br.) at 44.) Claimant argues that Employer also benefits because it now is “only liable to pay substantially lower percentages of every future indemnity benefit and every future medical benefit.” (Id. at 45.)

B. Employer’s Arguments

Employer argues its entitlement to subrogate Claimant’s recoveries from the third-party suit (against the driver) and the UIM claim is absolute under Section 319, and not subject to exceptions, equitable or otherwise. (Employer’s Br. at 6 (citing Thompson, 781 A.2d at 1151).) According to Employer, Claimant’s arguments that a valid MSA was established are incorrect. Because this matter involves a WC claim that has not been settled, Employer asserts it remains obligated to pay for the medical treatment that is reasonable, necessary, and causally related to Claimant’s work injury. This means, Employer argues, that no liability shifted to Medicare and, therefore, no MSA was needed to protect Medicare’s interests because those interests are protected by the ongoing medical coverage Employer provides to Claimant. With no need for an MSA, Employer contends the funds purportedly designated to create an MSA should be treated “merely [as] deposits of funds which can be freely used by Claimant for any purpose . . . ,” which are subject to subrogation under Section 319. (Id. at 8-9.) As for the particular arguments Claimant reiterates on appeal, Employer adopts the WCJ’s analysis rejecting each of those arguments as its own. (Id. at 9-16 (quoting FOF ¶¶ 32-38).)

C.
Discussion

When considering issues of subrogation, we are guided by the statutory language of Section 319 and the mandatory nature of subrogation reflected by that language. In relevant part, Section 319 provides:

Where the compensable injury is caused in whole or in part by the act or omission of a third party, the employer shall be subrogated to the right of the employe, his personal representative, his estate or his dependents, against such third party to the extent of the compensation payable under this article by the employer; reasonable attorney’s fees and other proper disbursements incurred in obtaining a recovery or in effecting a compromise settlement shall be prorated between the employer and employe, his personal representative, his estate or his dependents. The employer shall pay that proportion of the attorney’s fees and other proper disbursements that the amount of compensation paid or payable at the time of recovery or settlement bears to the total recovery or settlement. Any recovery against such third person in excess of the compensation theretofore paid by the employer shall be paid forthwith to the employe, his personal representative, his estate or his dependents, and shall be treated as an advance payment by the employer on account of any future instalments of compensation.

77 P.S. § 671. Our Supreme Court has explained that this language “is clear and unambiguous” and “written in mandatory terms” that “admit[] no express exceptions, equitable or otherwise.” Thompson, 781 A.2d at 1151. Those terms do “more than confer a ‘right’ of subrogation upon the employer; rather, subrogation is automatic.” Id. The purpose of Section 319 is threefold and is intended to: prevent a claimant from receiving a double recovery for the same injury; ensure that an employer is not required to make compensation payments due to the negligence of a third party; and prevent the third party from escaping liability for his or her negligence. Poole v. Workers’ Comp.
Appeal Bd. (Warehouse Club, Inc.), 810 A.2d 1182, 1184 (Pa. 2002) (citing Dale Mfg. Co., 421 A.2d at 654). When determining whether those funds are subject to subrogation under Section 319, the manner in which the parties to a third-party settlement characterize the settlement funds is not conclusive. Serrano v. Workers’ Comp. Appeal Bd. (Ametek, Inc.), 154 A.3d 445, 451 n.10 (Pa. Cmwlth. 2017) (“[A] claimant is not entitled to craft a third-party settlement award in a manner that limits an employer’s subrogation rights.”); Bumbarger v. Bumbarger, 155 A.2d 216, 218-19 (Pa. Super. 1959) (employee and third party cannot interfere with an employer’s right to subrogation by designating part of the recovery as damages for pain and suffering).

1. Whitmoyer

After briefing, the Supreme Court decided Whitmoyer, which held that “when a [claimant] recovers proceeds from a third-party settlement . . . the employer . . . is limited to drawing down against that recovery only to the extent that future disability benefits [(and not medical expenses)] are payable to the claimant.” Whitmoyer, 186 A.3d at 949 (emphasis added). In other words, an employer cannot take a credit against ongoing medical benefits. The WCJ’s Decision here, affirmed by the Board, neither of which had the benefit of the Supreme Court’s decision, allowed Employer to take a credit against Claimant’s ongoing medical benefits. We, therefore, by order, directed the parties to address the impact of Whitmoyer on this case. As recognized by Employer at oral argument, only indemnity benefits are now correctly subrogable, and Employer remains liable for Claimant’s future reasonable and necessary medical treatment. We will therefore consider Claimant’s and Employer’s arguments with regard to whether the MSA removed the Disputed Amount from the Third-Party Settlement and Employer’s right to subrogation with Whitmoyer in mind.
We will also vacate the Board’s Order affirming the WCJ’s Decision with regard to the subrogation calculations, and remand the matter for a determination of whether any recalculation is necessary.

2. Subrogation of the challenged amounts

Claimant’s arguments regarding why Employer cannot subrogate the Disputed Amount are based on his contentions that a valid MSA was created to protect Medicare’s interests, and that the monies contained therein are available only for Claimant to use for the future medical treatment of his work injury. However, after reviewing the purpose of MSAs, as well as the settled legal principles regarding an employer’s ongoing liability for a work injury absent settlement of the WC claim, we discern no error in the WCJ’s conclusions. In WC cases, Medicare payment is secondary to the employer’s payment of a claimant’s future medical expenses. 42 U.S.C. § 1395y(b)(2)(A)(ii) (Medicare is “secondary payer;” “[p]ayment [by Medicare] may not be made . . . with respect to any item or service to the extent that -- . . . (ii) payment has been made or can reasonably be expected to be made under a [WC] law or plan . . . .”); see Miller v. Workers’ Comp. Appeal Bd. (Electrolux), 940 A.2d 603, 608 (Pa. Cmwlth. 2008); Weinstein v. Sebelius, No. 12-154, 2013 WL 1187052, at *3 (E.D. Pa. Feb. 13, 2013) (“the Medicare Secondary Payer statute” “makes Medicare a ‘secondary’ source of payment for health care services”). It is not until there is a settlement in which the employer is released from paying future medical benefits that the parties are required, or need, to protect Medicare’s interests in remaining the secondary payer. Id. When Medicare does need to be protected, the recommended method is an MSA, “a financial agreement that allocates a portion of a [WC] settlement to pay for future medical services related to the work injury, illness, or disease.” Sheaffer v. Workers’ Comp. Appeal Bd. (Standard Steel, LLC) (Pa. Cmwlth., No. 783 C.D. 2016, filed Feb.
14, 2017), slip op. at 2 n.2[7] (citations omitted) (emphasis added).[8] When there is no WC settlement there is no need to submit a WC MSA.

Claimant argues that his receipt of the CMS letter and GRG’s analysis supports that, even without a settlement of his WC claim, he still had a valid MSA. However, those documents do not do so because their analyses were predicated on the assumptions that: (1) Claimant’s WC claim was being settled; and (2) the MSA would be funded in the amount projected to cover Claimant’s future work-related medical expenses, $335,874.00, thereby protecting Medicare’s interests in remaining the secondary payer. (R.R. at 102a-07a,[9] 116a-17a.[10]) Because the WC claim never settled and Employer never paid the amount anticipated to be its contribution to the MSA, resulting in the asserted MSA never being fully funded, Claimant’s reliance on these documents is misplaced. Claimant also argues that subrogating the Disputed Amount is erroneous because, following the Third-Party Settlement, Employer is no longer liable for his medical treatment and, therefore, an MSA was required to protect Medicare’s interests. However, this is not the case. It is well settled WC law that once an employer becomes liable for a work injury, it remains so “in the absence of a final receipt, an agreement, a supersedeas[,] or any other order of the WCJ” ending that liability. McLaughlin v. Workers’ Comp. Appeal Bd. (St. Francis Country House), 808 A.2d 285, 288 (Pa. Cmwlth. 2002).

[7] Sheaffer, an unreported opinion, is cited for its persuasive authority in accordance with Section 414(a) of the Commonwealth Court’s Internal Operating Procedures, 210 Pa. Code § 69.414(a).

[8] See also Workers’ Compensation Medicare Set-Aside Arrangement (WCMSA) Reference Guide, Version 2.9, available at https://www.cms.gov/Medicare/Coordination-of-Benefits-and-Recovery/Workers-Compensation-Medicare-Set-Aside-Arrangements/Downloads/ (footnote continued below).
There has been no settlement of Claimant’s WC claim and, thus, Employer has not been relieved of its liability for Claimant’s WC injury. Because Claimant’s WC claim remains open and Employer remains primarily liable for the medical treatment related to Claimant’s work injury, Miller, 940 A.2d at 608, Medicare’s interests are adequately protected without the need for an MSA.[11] Moreover, the way the parties to a third-party action fashion the settlement is not determinative to whether the settlement is subject to subrogation. Serrano, 154 A.3d at 451 n.10; Bumbarger, 155 A.2d at 218-19. Thus, the fact the UIM settlement agreement designates those monies as funding an MSA to pay for Claimant’s future medical treatment does not remove those funds from the Third-Party Settlement amount available for Employer’s subrogation, in the absence of a valid MSA and WC Settlement. We recognize Claimant contends the WCJ erred in finding that the Disputed Amount did not have to be used to pay Claimant’s medical treatment. However, we believe that the Board and WCJ did not err. Claimant next asserts the WCJ erred by including the Disputed Amount, as a lump sum, in the total Third-Party Recovery.

[8, continued] WCMSA-Reference-Guide-Version-2_9.pdf (describing the purpose and manner in which MSAs should be established and administered) (last visited April 3, 2019).

[9] GRG’s analysis referenced the settlement of both claims and provided that both Employer (through a WC settlement) and Claimant (through the UIM Settlement) would fund the MSA in the amount of $335,874.00.

[10] CMS’s letter explained that any MSA created had to be funded $335,874.00 and “[a]pproval of this []MSA [was] not effective until a copy of the final executed [WC] settlement agreement, which must include th[e] approved . . . amount, [was] received by CMS.” (R.R. at 116a-17a.)
However, where a third-party settlement results in an annuity, it is the cost, or present value, of the annuity that is subject to subrogation under Section 319. Suburban Delivery v. Workers’ Comp. Appeal Bd. (Fitzgerald), 858 A.2d 219, 226-27 (Pa. Cmwlth. 2004); A.C. & S. v. Workmen’s Comp. Appeal Bd. (Dubil), 616 A.2d 1085, 1087-88 (Pa. Cmwlth. 1992). Here, the WCJ included the cost of the annuity, $123,194.00, and the $30,788.45 seed money, totaling $153,982.45, in Claimant’s total Third-Party Recovery. There was no error in the WCJ doing so. Claimant finally argues that the Act’s humanitarian purposes and Cullen require a different result because, as a result of subrogation, Employer receives a double offset and is relieved from its full liability to pay Claimant’s WC benefits, which is unjust and inequitable. Claimant’s arguments are premised on his view that the MSA remains valid, and that the Disputed Amount, which was designated for inclusion in the MSA, should therefore not be also subject to subrogation. However, there is no valid MSA and therefore the Disputed Amount cannot be removed from the Third-Party Settlement. Because there is no valid MSA, and no WC Settlement, there is no double offset, and Employer continues to be primarily liable for Claimant’s medical benefits. Therefore, as the WCJ cogently explained in his opinion, the inclusion of the Disputed Amount in Claimant’s total recovery is not unjust because it places Claimant in the same position as other claimants who settle third-party actions and whose receipt of WC benefits is reduced during the grace period created by the amount of the settlement that exceeded the accrued WC lien.

[11] Because there is no need to protect Medicare’s interests as required by federal law through the creation of an MSA, we will not address Claimant’s contention that subrogation under Section 319 is preempted by federal law.
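The rule that the annuity's purchase price, rather than its payout stream, enters the recovery tracks the ordinary time-value-of-money relationship: discounted future payments are worth less than their face total. For illustration only (the payment, rate, and term below are hypothetical, not drawn from this record), the present value of a level annuity follows the usual discounting formula:

```python
def annuity_present_value(payment, rate, periods):
    """Present value of a level annuity-immediate:
    PV = P * (1 - (1 + r)**-n) / r   (for r > 0; P * n when r == 0).
    """
    if rate == 0:
        return payment * periods
    return payment * (1 - (1 + rate) ** -periods) / rate

# Hypothetical example: $1,000/month for 12 years at 0.3% per month.
pv = annuity_present_value(1_000.0, 0.003, 144)
```

Because of discounting, the present value here is well below the $144,000 sum of payments, which is why a purchase price can be much smaller than the benefit a claimant ultimately receives.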
(FOF ¶ 35); see Suburban Delivery, 858 A.2d at 223 (explaining that an employer receives a grace period from paying the full amount of a claimant’s benefits where a third-party settlement exceeds the employer’s accrued lien). Cullen is inapplicable because here, unlike in Cullen, the amounts that Claimant recovered from the Third-Party Settlement have not been reduced. As previously explained, while Claimant and the UIM carrier here designated certain funds from that settlement for an MSA, such designation is not dispositive in determining Employer’s subrogation rights. Serrano, 154 A.3d at 451 n.10; Bumbarger, 155 A.2d at 218-19. Accordingly, there is no double offset that would be contrary to the humanitarian purpose of the Act.

III. Conclusion

For the foregoing reasons, the WCJ did not err in including the Disputed Amount in Claimant’s Third-Party Recovery making those funds subject to Employer’s subrogation under Section 319, and the Board’s Order upholding that determination is affirmed. However, to the extent the Board’s Order affirmed the WCJ’s Decision allowing Employer to take a credit against Claimant’s ongoing medical benefits, the Order is vacated in part, and we remand for a determination of whether any recalculation is necessary.

_____________________________________
RENÉE COHN JUBELIRER, Judge

IN THE COMMONWEALTH COURT OF PENNSYLVANIA

Jerome Marshall, Petitioner
v.
Workers’ Compensation Appeal Board (Easton Coach Company and Hartford Fire Insurance Company), Respondents
No. 541 C.D. 2018

ORDER

NOW, April 5, 2019, the Order of the Workers’ Compensation Appeal Board (Board), entered in the above-captioned matter, is AFFIRMED to the extent it affirmed the Workers’ Compensation Judge’s (WCJ) determination that the funds designated in a third-party settlement agreement to create a Medicare Set-Aside Arrangement for Jerome Marshall (Claimant) were subject to subrogation.
The Order is VACATED to the extent it affirmed the WCJ’s determination that Easton Coach Company and Hartford Fire Insurance Company could take a credit against Claimant’s ongoing medical benefits, and we REMAND the matter to the Board to remand to the WCJ for a determination of whether any recalculation is necessary. Jurisdiction relinquished.

_____________________________________
RENÉE COHN JUBELIRER, Judge
Hi everyone. My name is Max Holliday, and I'm Peter Knapp, and I'm Andrew Dianetti. We are part of the 2013 NASA Aeronautics Academy at Glenn Research Center, where we are characterizing and investigating the failure modes of nickel-based superalloys in jet turbine disks. This exciting research can help improve the thermal capabilities of these alloys, which can directly increase the fuel efficiency of commercial aircraft by lowering the amount of fuel used during operation.

Turbine engines are used in virtually all commercial aircraft. In these engines, air is compressed into the combustion chamber, where fuel is mixed and ignited. A turbine is used to extract energy from this combustion to power the compressor, and the remaining energy is used for thrust. The turbine disk is one of the most critical components in engine design, as it is subjected to a high-velocity stream of hot gas. The maximum temperature that can be withstood by the turbine disk limits the amount of energy that can be extracted from the fuel, and thus impacts the engine's performance. Materials that can withstand higher temperatures under the harsh operating conditions of the engine are sought to improve engine performance.

The advanced powder metallurgy disk alloy ME3 was developed in the NASA High Speed Research/Enabling Propulsion Materials program in cooperation with GE Aviation and Pratt & Whitney Aircraft Engines. This alloy was designed to have extended durability at temperatures up to 700 °C in large disks. The higher temperature capability of ME3 significantly improved fuel efficiency in jet turbine engines and is considered a major advancement in disk alloys. The team we are working with at NASA, along with GE and Pratt & Whitney, won a 2004 R&D 100 award for the development of ME3. Superalloys such as these are being utilized in compressor and turbine disks in current and emerging aircraft such as the Boeing 787 and Airbus A380.
A turbine disk is a fracture-critical structural engine component. Failures are typically uncontained and can result in the loss of an engine, considerable airframe damage, or loss of the entire aircraft. These advanced alloys are susceptible to surface processing defects that have been known to cause failures. The FAA frequently requires enhanced inspections to detect disk cracking, in order to ensure no uncontainable failures occur. The durability of these material systems was assessed during material and engine development; however, issues can emerge as these new components spend more time in service.

It would be too expensive and time-consuming to produce hundreds of disks to test, so we use tensile specimens to observe corrosion effects on alloy life and predict fracture initiation. The specimens are corroded using different techniques to help simulate engine-like conditions and analyzed on the Alicona 3D imaging microscope to look at corrosion pit depth, width, and overlap. We use these metrics to predict exactly where the specimen will fail. The specimens are then fatigue tested until failure. Then we use a scanning electron microscope to observe the fracture surface and determine the points of failure, in a process known as fractography. The data collected during fractography can be used to determine what type of pit initiated crack growth and which type led to failure. So far we are 85% successful at predicting which pit will cause fatigue failure, but we are improving with every new set of data. These facets of failure analysis not only help characterize the ME3 superalloy, but also allow materials scientists to evaluate failed disks and determine exactly where the failure occurred and why.

A key goal of this project is to understand how physical and microstructural factors control the physical properties of nickel-based superalloys.
Specifically, the machining of disk features can profoundly affect the fatigue life of these alloys by imparting cold work or surface defects. Additionally, changes in the distribution of phases within these alloys can cause order-of-magnitude reductions in fatigue life. It is possible to quantify the effects of these processes using a technique called Vickers microhardness testing. In Vickers microhardness testing, a square pyramidal diamond indenter is pressed into a surface at a given load; the hardness is then determined from the dimensions of the remaining indent. We can use this testing procedure to create a map of surface hardness to determine the effects of machining. This summer we are examining broached specimens of NASA's Low Solvus High Refractory alloy that have been cycled to the point of failure. Following hardness testing, we will etch these samples to expose the microstructure, i.e., grain and phase distribution, in order to correlate changes in hardness with changes in phase.

One method that can be used to resist the growth of fatigue cracks is to create a residual stress in the surface of the disk. A process known as shot peening, where small pellets are fired at the material surface, is used to create a compressive stress layer near the surface that resists external tension and suppresses the growth of fatigue cracks. Residual stress can be measured using a technique known as X-ray diffraction (XRD). In XRD, the strain of the lattice is measured to determine the stresses present. By removing different amounts of surface material using a process known as electropolishing, the residual stress profile, as a function of depth, can be determined. Understanding these stresses will allow us to better understand how to resist crack growth.
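The Vickers hardness number follows directly from the applied load and the indent's measured diagonals. A small sketch of that calculation (the load and diagonal values below are illustrative, not measurements from this work):

```python
import math

def vickers_hardness(load_kgf, d1_mm, d2_mm):
    """Vickers hardness from the two measured indent diagonals:
    HV = 2 * F * sin(136°/2) / d²  ≈  1.8544 * F / d²,
    with F in kgf and d the mean diagonal in mm (standard HV units).
    """
    d = (d1_mm + d2_mm) / 2  # mean diagonal of the square indent
    return 2 * load_kgf * math.sin(math.radians(136 / 2)) / d ** 2

# Illustrative: a 0.5 kgf load leaving a 52 µm mean diagonal
hv = vickers_hardness(0.5, 0.052, 0.052)  # roughly HV 343
```

Mapping surface hardness then amounts to repeating this measurement over a grid of indents and plotting HV against position or depth.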
This summer, we are working to characterize the effect of different shot peening conditions on the residual stresses present in these alloys, as well as to characterize the effect of cyclic loading and elevated temperature conditions on the residual stresses throughout the life of a component.

Our projects have involved characterizing aspects related to the failure modes of turbine disks. Current and future work involving these superalloys will help to create more efficient, safer aircraft engines. We would like to thank NASA for the opportunity to work on such an important and exciting program, and we encourage any and all prospective engineers to consider NASA as an outlet for conducting meaningful research. Thank you!
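The XRD residual-stress measurement described above is commonly reduced with the sin²ψ method: lattice spacing varies approximately linearly with sin²ψ across tilt angles, and the slope of that line yields the stress. A simplified single-axis sketch (the elastic constants and spacings are hypothetical, and a full analysis would use X-ray elastic constants and handle shear terms and ψ-splitting, which this sketch ignores):

```python
import math

def residual_stress_sin2psi(d_values, psi_degrees, E, nu, d0=None):
    """Estimate residual stress from the d-vs-sin²ψ slope.

    Assumes d(ψ) ≈ d0 * (1 + (1+ν)/E * σ * sin²ψ), so a least-squares
    slope m gives σ = m * E / ((1 + ν) * d0). If no strain-free
    spacing d0 is supplied, the mean measured spacing is used as an
    approximation.
    """
    x = [math.sin(math.radians(p)) ** 2 for p in psi_degrees]
    xbar = sum(x) / len(x)
    ybar = sum(d_values) / len(d_values)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, d_values)) \
        / sum((xi - xbar) ** 2 for xi in x)
    if d0 is None:
        d0 = ybar  # strain-free spacing approximated by the mean
    return slope * E / ((1 + nu) * d0)
```

A negative result indicates compressive stress, which is what shot peening is intended to leave near the surface.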
'use strict'

const tape = require('tape')
const disassemble = require('../src/code/disassembler').disassemble

tape('Disassembler', function (t) {
  t.test('empty', function (st) {
    st.plan(1)
    st.equal(disassemble(''), '')
  })
  t.test('add', function (st) {
    st.plan(1)
    st.equal(disassemble('0x01'), 'add')
  })
  t.test('push', function (st) {
    st.plan(1)
    st.equal(disassemble('0x640203'), '0x0203000000')
  })
  t.test('complexcode', function (st) {
    st.plan(1)
    const code = '60606040526009600060005055607e8060186000396000f360606040526000357c0100000000000000000000000000000000000000000000000000000000900480630dbe671f146039576035565b6002565b3460025760486004805050604a565b005b6000600090505b600a811015607a5760006000818150548092919060010191905055505b80806001019150506051565b5b5056'
    const asm = `mstore(0x40, 0x60)
0x09
0x00
pop(0x00)
sstore
0x7e
dup1
0x18
0x00
codecopy
0x00
return
mstore(0x40, 0x60)
calldataload(0x00)
0x0100000000000000000000000000000000000000000000000000000000
swap1
div
dup1
0x0dbe671f
eq
0x39
jumpi
jump(0x35)
label1:
jump(0x02)
label2:
jumpi(0x02, callvalue())
0x48
0x04
dup1
pop
pop
jump(0x4a)
label3:
stop()
label4:
0x00
0x00
swap1
pop
label5:
0x0a
dup2
lt
iszero
0x7a
jumpi
0x00
0x00
dup2
dup2
pop
sload
dup1
swap3
swap2
swap1
0x01
add
swap2
swap1
pop
sstore
pop
label6:
dup1
dup1
0x01
add
swap2
pop
pop
jump(0x51)
label7:
label8:
pop
jump`
    st.equal(disassemble(code), asm)
  })
})
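The suite above exercises an EVM bytecode disassembler. The core decoding rule it tests is that opcodes are single bytes, and PUSH1 through PUSH32 (0x60 through 0x7f) consume the following 1 to 32 bytes as immediate data, zero-padded when the code ends early, which is exactly what the 'push' case expects of '0x640203'. A minimal sketch of that walk, independent of the project's own implementation (the opcode-name table here is deliberately tiny):

```python
# Minimal sketch of EVM-style opcode walking: PUSHn (0x60..0x7f)
# carries n immediate bytes; everything else is a bare one-byte op.
OPCODES = {0x00: "stop", 0x01: "add", 0x02: "mul", 0x56: "jump", 0x5b: "jumpdest"}

def disassemble_min(code_hex):
    data = bytes.fromhex(code_hex.removeprefix("0x"))
    out, i = [], 0
    while i < len(data):
        op = data[i]
        if 0x60 <= op <= 0x7f:          # PUSH1..PUSH32
            n = op - 0x5f               # number of immediate bytes
            imm = data[i + 1:i + 1 + n]
            # Zero-pad truncated immediates, matching the behavior tested above.
            out.append("0x" + imm.hex().ljust(2 * n, "0"))
            i += 1 + n
        else:
            out.append(OPCODES.get(op, f"op_{op:#04x}"))
            i += 1
    return out
```

For example, `disassemble_min("0x640203")` yields `["0x0203000000"]`: 0x64 is PUSH5, only two data bytes follow, and the missing three bytes are padded with zeros.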
The seaplane that crashed near Sydney on New Year’s Eve killing five Britons and the pilot was off course, according to a report. A preliminary report by the Australian Transport Safety Bureau says the aircraft was “away from the expected and standard flight path”.

Richard Cousins, the 58-year-old chief executive of FTSE 100 company Compass Group, died alongside his sons Will and Edward, aged 25 and 23, his fiancee Emma Bowden, 48, and her 11-year-old daughter Heather.

Image: Richard Cousins, Will Cousins, Ed Cousins, Emma and Heather Bowden

The aircraft’s Australian pilot, Gareth Morgan, 44, also died. The Cousins family had gone for lunch and taken the flight at about 3pm to return to Rose Bay, near Sydney Harbour. Mr Cousins was due to step down from his position at Compass in March.

The de Havilland Canada DHC-2 Beaver collided with water in Jerusalem Bay, 25 miles north of Sydney city centre, in a “near-vertical position”, according to witnesses.

The Australian Transport Safety Bureau report said: “The operator reported that the aircraft’s expected and standard flight path after departing Cottage Point was to climb initially to the north then turn right along Cowan Creek toward the main body of the Hawkesbury River, until sufficient altitude was gained to fly above terrain and return to Rose Bay.

“While the exact take-off path from Cottage Point has yet to be established, the aircraft was observed by witnesses to enter Jerusalem Bay.

“The aircraft was observed to enter the bay at an altitude below the height of the surrounding terrain.

“Several witnesses also reported hearing the aircraft’s engine and stated that the sound was constant and appeared normal.
“Shortly after entering Jerusalem Bay, numerous witnesses reported seeing the aircraft suddenly enter a steep right turn and the aircraft’s nose suddenly drop before the aircraft collided with the water in a near-vertical position.”

Image: The seaplane crashed in Jerusalem Bay, 25 miles north of Sydney

The incident has similarities to a crash involving another DHC-2 Beaver plane in Canada in August 2015. A British family of four – Fiona Hewitt, 52, her husband Richard, 50, and children 14-year-old Harry and 17-year-old Felicity, all from Milton Keynes – died in the crash. The small aircraft had crashed into the side of a mountain in Quebec, killing the family along with a French passenger and the pilot.
NOT PRECEDENTIAL

UNITED STATES COURT OF APPEALS FOR THE THIRD CIRCUIT

No. 19-1085

JOHN SLOAN, Appellant
v.
PENNSYLVANIA DEPARTMENT OF CORRECTIONS; MERCER SCI; SUPERINTENDENT THOMPSON; CAPT. SULLENBERGER; MR. BOGGS, Maint. Mgr; MR. BROMLEY, Facility Safety Manager; MR. DELLORSO, Maintenance #24; MS. BOAL, Medical Dept. Supervisor; MR. WOODS, OSCS; MS. ENGSTROM, Inmate Capt. Spc.; CORRECT CARE SOLUTIONS, (CCS); MR. RICHARD ELLERS, V.P.; DR. SCOTT MORGAN; MS. KAREN FEATHER, C.C.S. Site Administrator

On Appeal from the United States District Court for the Western District of Pennsylvania (D.C. Civil Action No. 2:16-cv-01182)
District Judge: Honorable Nora B. Fischer

Submitted Pursuant to Third Circuit LAR 34.1(a) February 7, 2020

Before: KRAUSE, MATEY and COWEN, Circuit Judges

(Opinion filed: February 10, 2020)

OPINION*

[*] This disposition is not an opinion of the full Court and pursuant to I.O.P. 5.7 does not constitute binding precedent.

PER CURIAM

Sloan, a Pennsylvania prisoner who is proceeding pro se, appeals from an order of the United States District Court for the Western District of Pennsylvania granting the defendants’ motions for summary judgment. We will affirm.

I.

Sloan sustained a fractured toe while on a work assignment as part of SCI-Mercer’s maintenance crew. He later filed a civil rights complaint under 42 U.S.C. § 1983, which he later amended, alleging, inter alia, that he was not adequately treated for the toe injury, that the working conditions in the prison were dangerous, and that he was fired from his position in the prison’s maintenance department in retaliation for initiating a civil suit in state court. He named as defendants the Pennsylvania Department of Corrections, SCI-Mercer, and several SCI-Mercer employees (“DOC defendants”).
He also named Correct Care Solutions, a healthcare company that provides medical services to inmates, and three of its employees, Richard Ellers, Karen Feathers, and Dr. Morgan (“medical defendants”). The medical defendants and the DOC defendants filed separate motions for summary judgment.[1] The medical defendants asserted that they provided Sloan with appropriate treatment; the DOC defendants argued that Sloan failed to exhaust his administrative remedies and that his claims lacked merit. A Magistrate Judge recommended granting both motions for summary judgment. The Magistrate Judge concluded that the summary judgment record failed to support Sloan’s claims against the medical defendants, and recommended that the District Court decline to exercise jurisdiction over any state law claims. In a separate Report and Recommendation, the Magistrate Judge stated that Sloan had failed to exhaust administrative remedies as to his claims against the DOC defendants. The District Court agreed with the Magistrate Judge’s conclusions and granted both motions for summary judgment. Sloan timely appealed.

[1] Earlier, the District Court granted in part those defendants’ motions to dismiss under Federal Rule of Civil Procedure 12(b)(6). Sloan has not challenged that determination in his brief. See United States v. Pelullo, 399 F.3d 197, 222 (3d Cir. 2005) (stating that “[i]t is well settled that an appellant’s failure to identify or argue an issue in his opening brief constitutes waiver of that issue on appeal.”). In any event, we conclude that, for the reasons provided in the Magistrate Judge’s Report and Recommendation, the District Court properly granted in part the motions to dismiss. (ECF No. 54.)

II.

We have jurisdiction under 28 U.S.C. § 1291. We exercise plenary review over the District Court’s order granting summary judgment. See DeHart v. Horn, 390 F.3d 262, 267 (3d Cir. 2004).
Summary judgment is proper where, viewing the evidence in the light most favorable to the nonmoving party and drawing all inferences in favor of that party, there is no genuine dispute as to any material fact and the moving party is entitled to judgment as a matter of law. Fed. R. Civ. P. 56(a); Kaucher v. County of Bucks, 455 F.3d 418, 422-23 (3d Cir. 2006). We may affirm on any basis supported by the record. See Fairview Twp. v. EPA, 773 F.2d 517, 525 n.15 (3d Cir. 1985). 3 III. To succeed on an Eighth Amendment claim for the denial or delay of medical care, Sloan is required to demonstrate that the medical defendants were deliberately indifferent to his serious medical needs. See Estelle v. Gamble, 429 U.S. 97, 103-05 (1976). “To act with deliberate indifference to serious medical needs is to recklessly disregard a substantial risk of serious harm.” Giles v. Kearney, 571 F.3d 318, 330 (3d Cir. 2009). Deliberate indifference can be shown by a prison official’s “intentionally denying or delaying access to medical care or intentionally interfering with the treatment once prescribed.” Estelle, 429 U.S. at 104-05. Allegations of medical malpractice are not sufficient to establish a constitutional violation. White v. Napoleon, 897 F.2d 103, 108-09 (3d Cir. 1990). Furthermore, “mere disagreement as to the proper medical treatment” does not support a claim of an Eighth Amendment violation. Monmouth Cty. Corr. Inst. Inmates v. Lanzaro, 834 F.2d 326, 346 (3d Cir. 1987). It is clear from the record that Sloan received timely and adequate medical care for his injured toe. That injury occurred on August 8, 2014, when Sloan dropped a manhole cover on his left big toe while on a work assignment as part of SCI-Mercer’s maintenance crew. Initially, Sloan was treated at the prison infirmary, where he was diagnosed with a “contusion laceration, left great toe.” The same day, Sloan was transported to the hospital for X-rays, which revealed a fracture. 
Upon return to SCI-Mercer that evening, Sloan was placed under observation in the infirmary. Medical staff provided Sloan with medication when he complained of pain and changed the bloody bandages. The next day, Sloan was examined by Dr. Morgan, who noted that Sloan had been issued crutches, that 4 he should use a wheelchair “for distances,” and that he should not engage in work or sports. On August 14, 2014, Sloan was examined by an orthopedic surgeon, who recommended that Sloan be given a “post-op shoe.” At a follow-up appointment one month later, the surgeon noted that Sloan “can weight bare as tolerated and transition to normal shoe wear.” Over the next several months, Sloan had additional X-rays taken and was reevaluated by a physician’s assistant. On November 4, 2014, Dr. Morgan ordered orthopedic shoes for Sloan, but discontinued that order the next month because Sloan did not meet the criteria for issuance of such shoes. Instead, Dr. Morgan ordered gel insoles for Sloan and advised Sloan that he should use a stiff-soled shoe. Dr. Morgan also ordered that Sloan could return to work. Sloan requested orthopedic shoes for “comfort,” but his requests were denied. Sloan continued to use the gel insoles through 2016. The undisputed record demonstrates that the medical defendants, pursuant to their professional judgment, adequately treated Sloan’s injured toe. Sloan did not present any evidence from which a reasonable juror could conclude that the medical defendants intentionally refused to provide needed treatment, delayed necessary treatment for a non- medical reason, prevented Sloan from receiving required treatment, or persisted in a particular course of treatment “in the face of resultant pain and risk of permanent injury.” Rouse v. Plantier, 182 F.3d 192, 197 (3d Cir. 1999) (quoting White v. Napoleon, 897 F.2d 103, 109-11 (3d Cir. 1990)). In his complaint, Sloan emphasized that he should have been provided with orthopedic shoes instead of gel insoles. 
But that assertion is simply a disagreement with a course of treatment, which is insufficient to establish deliberate indifference. See Spruill v. Gillis, 372 F.3d 218, 235 (3d Cir. 2004). Notably, 5 the orthopedic surgeon had concluded that Sloan could “transition to normal shoe wear” and Dr. Morgan advised Sloan to use a stiff-soled shoe. Under these circumstances, the District Court properly granted the medical defendants’ motion for summary judgment. See Norfleet v. Webster, 439 F.3d 392, 393, 396-97 (7th Cir. 2006) (holding that doctor’s refusal to prescribe soft-soled shoes, which was “undisputably based on medical records, some of which support the challenged determination, cannot support an inference of deliberate indifference”). IV. The District Court also properly granted summary judgment with respect to the unsafe working conditions and retaliation claims brought against the DOC defendants.2 Sloan alleged that the DOC defendants failed to protect him from unsafe working conditions on the prison maintenance crew. To establish an Eighth Amendment violation with respect to conditions of confinement, a prisoner must show that he has been deprived of “the minimal civilized measure of life’s necessities,” such as food, clothing, shelter, sanitation, medical care, or personal safety. Farmer v. Brennan, 511 U.S. 825, 2 The District Court granted the DOC defendants’ motion for summary judgment on the basis that Sloan failed to exhaust his administrative remedies as required by the Prison Litigation Reform Act (PLRA). 42 U.S.C. § 1997e(a); Jones v. Unknown D.O.C. Bus Driver & Transp. Crew, 944 F.3d 478, 480 (3d Cir. 2019) (“If a prisoner wants to file a § 1983 suit, he must exhaust the prison’s internal administrative remedies first.”). But because it appears that Sloan may have substantially complied with the exhaustion requirements, see Small v. Camden County, 728 F.3d 265, 272 (3d Cir. 2013), or have been thwarted from doing so, see Rinaldi v. 
United States, 904 F.3d 257, 266-67 (3d Cir. 2018), we will address the substance of Sloan’s claims. See Glover v. FDIC, 698 F.3d 139, 143 n.4 (3d Cir. 2012) (holding that a district court can be affirmed on any basis supported by the record); Nyhuis v. Reno, 204 F.3d 65, 69 n.4 (3d Cir. 2000) (stating that exhaustion under the PLRA is not a jurisdictional requirement). 6 832, 834 (1994) (citations omitted). In addition, the prisoner must demonstrate that the deprivation was sufficiently serious and that the defendants acted with deliberate indifference, i.e., that prison officials knew of and disregarded a substantial risk of serious harm. See id. at 837. “[C]laims of negligence, . . . without some more culpable state of mind, do not constitute ‘deliberate indifference.’” See Singletary v. Pa. Dep’t of Corr., 266 F.3d 186, 192 n.2 (3d Cir. 2001). On the day that he was injured, Sloan and two other inmates were directed to remove a manhole cover with a two-foot prybar. According to Sloan’s amended complaint, “[a]s the manhole cover was being removed using the prybar, the prybar slipped and the cover fell and contacted Sloan’s left foot.” Sloan alleged that his supervisor “did not secure the proper manhole removal tool” and that he and the inmates who assisted him were not “given any personal protective gear or equipment nor were they properly instructed in the safe removal of manhole covers.” Sloan proffered no evidence, however, from which a reasonable jury could infer that the DOC defendants knew of and disregarded the risk that caused the accident. Notably, before starting his job on the maintenance crew, Sloan participated in an “Inmate Worksite Orientation,” which included education on “safety guards and practices.” (ECF Nos. 95-3; 95-4.) Sloan did not voice any concerns about a lack of training or the absence of proper equipment. (ECF No. 95-6, ¶ 11-12.) 
And Sloan’s supervisor stated that he would not ask an inmate to perform a task that he did not feel safe doing himself. (Id. at ¶ 9.) At most, the DOC defendants were negligent for allowing Sloan to work in conditions in which his accident was possible, but negligence will not support a claim of deliberate 7 indifference. See Durmer v. O’Carroll, 991 F.2d 64, 67 (3d Cir. 1993) (explaining that deliberate indifference requires something “more than negligence”). Sloan also alleged that the DOC defendants transferred him from his maintenance job in retaliation for initiating a civil suit in state court. “A prisoner alleging retaliation must show (1) constitutionally protected conduct, (2) an adverse action by prison officials sufficient to deter a person of ordinary firmness from exercising his constitutional rights, and (3) a causal link between the exercise of his constitutional rights and the adverse action taken against him.” Mitchell v. Horn, 318 F.3d 523, 530 (3d Cir. 2003) (internal quotation marks omitted) (quoting Rauser v. Horn, 241 F.3d 330, 333 (3d Cir. 2001)). Filing a state court lawsuit is constitutionally protected conduct. See Milhouse v. Carlson, 652 F.2d 371, 373 (3d Cir. 1981). And loss of a prison work assignment can support a retaliation claim. See Wisniewski v. Fisher, 857 F.3d 152, 157 (3d Cir. 2017). Here, however, the transfer was not an adverse action. Sloan was removed from his maintenance job on July 3, 2015. (ECF No. 95-5, ¶ 3.) He resumed work as a library aide on August 16, 2015. (Id. at ¶ 8.) During the approximately six-week period between assignments, Sloan was compensated as if he were still working. (Id. at ¶ 9.) In addition, his work as a library aide was compensated at the same hourly rate that he was paid in his maintenance job. (Id. at ¶ 10.) 
Sloan did not allege that his duties as a library aide job were somehow more unpleasant than those as a maintenance worker, nor did he otherwise meaningfully claim that there were negative consequences to the job transfer. Under these circumstances, we conclude that no reasonable trier of fact could conclude that Sloan’s reassignment, which had de minimis consequences, would deter a prisoner of 8 ordinary firmness from exercising his rights. Cf. Watson v. Rozum, 834 F.3d 417, 423 (3d Cir. 2016) (“An adverse consequence ‘need not be great in order to be actionable[;]’ rather, it need only be ‘more than de minimis.’” (citation omitted)). V. For the foregoing reasons, we will affirm the District Court’s judgment granting the defendants’ motions for summary judgment.3 3 In granting the motions for summary judgment, the District Court dismissed Sloan’s state law claims. A District Court has discretion to decline to exercise supplemental jurisdiction over state law claims if the court “has dismissed all claims over which it has original jurisdiction.” 28 U.S.C. § 1367(c)(3). Because the District Court properly dismissed Sloan’s claims under federal law, it plainly acted within its discretion in declining to hear his claims under state law. See Maio v. Aetna, Inc., 221 F.3d 472, 480 n.6 (3d Cir. 2000). 9 | Low | [
0.536992840095465, 28.125, 24.25 ]
Short-course therapy for catheter-associated Staphylococcus aureus bacteremia. To determine the efficacy of "short-course" therapy (less than 17 days) for Staphylococcus aureus catheter-associated bacteremia, 13 patients were prospectively followed up for at least three months after completion of therapy. A single patient relapsed after 28 days with endocarditis. No clinical or microbiological predictors of relapse could be identified, and coexistent medical conditions associated with some degree of immunosuppression did not appear to predispose to relapse. The results of this study and a review of the literature indicate that short-course therapy for uncomplicated S aureus catheter-associated bacteremia has a relapse rate of only 5% to 10% and, therefore, is reasonable therapy for this condition. The majority of relapses are endocarditis and occur within ten weeks after completion of therapy. Close follow-up during this period is essential. | Mid | [
0.6327944572748261, 34.25, 19.875 ]
[Influence of disturbance intensity on nitrogen, phosphorus and permanganate index release of Potamogeton crispus during soaking in water]. The influence of various disturbance intensities on the release of nitrogen, phosphorus and permanganate index (PI) from Potamogeton crispus was investigated while the plant soaked in water; the plant materials were collected in an urban lake of Beijing. Results showed that disturbance caused more rapid release of TP and PI from Potamogeton crispus, and NH4(+) -N and TN in water were significantly increased (ANOVA, p < 0.05) under the condition of high disturbance (120 r/min) for 240 h. However, PI and TP were significantly decreased (ANOVA, p < 0.05) after 240 h of disturbance in all treatments. When the release equilibrium of 2.13 g dry mass of Potamogeton crispus in one liter of water was reached, the PI, TN, and TP released from unit mass of Potamogeton crispus were 35, 5.1 and 4.1 mg x g(-1), respectively. The release ratio of TP from Potamogeton crispus was the highest, while the release ratio of PI was the lowest. The simulated experiment showed that, among the three nutrients, TP released from Potamogeton crispus imposed the highest pollution load on the water. Phosphorus is one of the key factors causing water eutrophication in lakes; thus, after submerged plants decline in a lake, field research on the release and migration of phosphorus from submerged plants into the water is the focus of future research work. | Mid | [
0.6268656716417911, 36.75, 21.875 ]
Q: How to count mean of a column 2 if column 1 has a specific value? R

How can I compute the mean of Column 2 if Column 1 has a value of "UK"?

Column1 | Column2
-------------------
USA     | 4.5
UK      | 4.3
UK      | 2.4
UK      | 1.3
GERMANY | 4.4
FRANCE  | 2.3

So I want to get the mean of Column 2 for UK.

A: We can subset 'Column2' based on the 'Column1' value of 'UK' and get the mean:

with(df1, mean(Column2[Column1 == 'UK']))

Or if we need to get the mean of 'Column2' for all unique elements of 'Column1':

aggregate(Column2 ~ Column1, df1, mean)

Or with dplyr:

library(dplyr)
df1 %>% group_by(Column1) %>% summarise(Column2 = mean(Column2, na.rm = TRUE))

| High | [
0.6701030927835051, 32.5, 16 ]
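The same grouped mean can also be sketched outside R; below is a plain-Python version for comparison (the data literal mirrors the question's table, and `mean_where` / `group_means` are hypothetical helper names, not part of any library):

```python
from collections import defaultdict

# Rows mirroring the data frame in the question.
rows = [
    ("USA", 4.5), ("UK", 4.3), ("UK", 2.4),
    ("UK", 1.3), ("GERMANY", 4.4), ("FRANCE", 2.3),
]

def mean_where(rows, key):
    """Mean of Column2 over rows whose Column1 equals `key`."""
    vals = [v for k, v in rows if k == key]
    return sum(vals) / len(vals)

def group_means(rows):
    """Mean of Column2 for every distinct Column1 value."""
    groups = defaultdict(list)
    for k, v in rows:
        groups[k].append(v)
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(mean_where(rows, "UK"))   # mean of 4.3, 2.4, 1.3
print(group_means(rows))
```

`mean_where` corresponds to the subsetting approach, while `group_means` plays the role of `aggregate` / `group_by` + `summarise`.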
Q: Double Integral $\int\limits_0^b\int\limits_0^x\sqrt{a^2-x^2-y^2}\,dy\,dx$

What is the best method for evaluating the following double integral?
$$ \int_{0}^{b}\int_{0}^{x}\,\sqrt{\,a^{2} - x^{2} - y^{2}\,}\,\,{\rm d}y\,{\rm d}x\,, \qquad a > \sqrt{\,2\,}\,\,b $$
Does an easy method exist?

My try:
$$\int_0^b\int_0^x\sqrt{a^2-x^2-y^2}dy\,dx=\int_0^{\frac{\pi}{4}}\int_0^{b\sec(\theta)}r\sqrt{a^2-r^2}dr\,d\theta$$
$$=\int_0^{\frac{\pi}{4}}\frac{-1}{3}\left[(a^2-r^2)\sqrt{a^2-r^2}\right]_0^{b\sec(\theta)}d\theta$$
$$=\frac{1}{3}\int_0^{\frac{\pi}{4}}\left[a^3-(a^2-b^2\sec^2(\theta))\sqrt{a^2-b^2\sec^2(\theta)}\right]d\theta$$
but evaluating the above integral is very difficult and the antiderivative is very complicated! see here.

A: Of course this is an exercise in integration. But nevertheless it can be done using elementary geometry alone. We are told to compute the volume of a body $K$ which is bounded by several planar faces and a piece of a spherical surface. The following figure shows the situation as seen from the tip of the $x$-axis. I have put $a=1$ and written $p$ instead of $b$. The body $K$ contains (a) the pyramid $P$ with base the triangle with vertices $(0,0,0)$, $(p,0,0)$, $(p,p,0)$, and of height $\sqrt{1-2p^2}$. The volume of this pyramid is
$${\rm vol}(P)={1\over6}p^2\sqrt{1-2p^2}\ .$$
Furthermore $K$ contains (b) part of a sector $S$ of central angle
$$\alpha:=\arctan{p\over\sqrt{1-2p^2}}$$
of a spherical segment. The two radii of this segment are $1$ and $\sqrt{1-p^2}$, and its thickness is $p$. The volume of the full sector $S$ is therefore given by
$${\rm vol}(S)={\alpha\over 2\pi}{\pi\over6} p\bigl(3+3(1-p^2) +p^2\bigr)={\alpha\over6}p(3-p^2)\ .$$
From the volume of $S$ we (c) have to deduct the volume of a triangular spherical sector $T$, whereby the angles of the spherical triangle in question (shaded in the figure) are ${\pi\over2}$, ${\pi\over4}$ and a certain $\beta$.
One leg of this triangle is $\alpha$, and a standard formula for right spherical triangles then tells us that $$\cos\beta=\sin{\pi\over4}\cos\alpha={1\over\sqrt{2}}\cos\alpha\ .$$ The spherical area of the triangle is then $\beta-{\pi\over 4}$, so that $${\rm vol}(T)={1\over3}\bigl(\beta-{\pi\over 4}\bigr)\ .$$ Finally $${\rm vol}(K)={\rm vol}(P)+{\rm vol}(S)-{\rm vol}(T)\ ,$$ which maybe can be simplified somewhat. A: Suppose $$I=\int_0^b\int_0^x\sqrt{a^2-x^2-y^2}dy\,dx$$ Let $y=\sqrt{a^2-x^2}\sin \theta$ then $dy=\sqrt{a^2-x^2}\cos \theta\,d\theta$, so $$I=\int_0^b\int_0^{\arcsin\frac{x}{\sqrt{a^2-x^2}}}(a^2-x^2)\cos^2 \theta\,d\theta\,dx$$ $$=\frac{1}{2}\int_0^b(a^2-x^2)\left[\theta+\frac{1}{2}\sin 2\theta\right]_0^{\arcsin\frac{x}{\sqrt{a^2-x^2}}}\,dx$$ and now note that $\sin \theta=\frac{x}{\sqrt{a^2-x^2}}$, so $\cos \theta=\sqrt{\frac{a^2-2x^2}{a^2-x^2}}$ and $\frac{1}{2}\sin 2\theta=\sin \theta\cos \theta=\frac{x\sqrt{a^2-2x^2}}{a^2-x^2}$. therefore $$I=\frac{1}{2}\left(\int_0^b(a^2-x^2)\arcsin\frac{x}{\sqrt{a^2-x^2}}\,dx+\int_0^bx\sqrt{a^2-2x^2}\,dx\right)=\frac{1}{2}(I_1+I_2).$$ for evaluating $I_1$ use integrating by parts. If you let $u=\arcsin\frac{x}{\sqrt{a^2-x^2}}$ and $dv=(a^2-x^2)dx$, then $$du=\frac{a^2}{(a^2-x^2)\sqrt{a^2-2x^2}}dx,v=a^2x-\frac{x^3}{3}$$ therefore $$I_1=\frac{3a^2b-b^3}{3}\arcsin\frac{b}{\sqrt{a^2-b^2}}+I_3$$ and $$I_3=-\int_0^b\frac{a^2(a^2-\frac{x^2}{3})x}{(a^2-x^2)\sqrt{a^2-2x^2}}\,dx$$ now let $u=\sqrt{a^2-2x^2}$, then $du=\frac{-2x}{\sqrt{a^2-2x^2}}dx$ and $x^2=\frac{a^2-u^2}{2}$. 
so $$I_3=\frac{a^2}{2}\int_a^{\sqrt{a^2-2b^2}}\frac{a^2-\frac{a^2-u^2}{6}}{a^2-\frac{a^2-u^2}{2}}\,du=\frac{a^2}{6}\int_a^{\sqrt{a^2-2b^2}}\frac{u^2+5a^2}{u^2+a^2}\,du$$ $$=\frac{a^2}{6}\int_a^{\sqrt{a^2-2b^2}}\left(1+\frac{a^2}{u^2+a^2}\right)\,du=\frac{a^2}{6}(\sqrt{a^2-2b^2}-a)+\frac{2a^3}{3}\left(\arctan\frac{\sqrt{a^2-2b^2}}{a}-\frac{\pi}{4}\right)$$ for $I_2$ we have $$I_2=\int_0^bx\sqrt{a^2-2x^2}\,dx=\frac{-1}{6}\left[(a^2-2x^2)\sqrt{a^2-2x^2}\right]_0^b=\frac{1}{6}(a^3-(a^2-2b^2)\sqrt{a^2-2b^2})$$ Hence, $$I=\frac{3a^2b-b^3}{6}\arcsin\frac{b}{\sqrt{a^2-b^2}}+\frac{a^3}{3}\arctan\frac{\sqrt{a^2-2b^2}-a}{\sqrt{a^2-2b^2}+a}+\frac{b^2}{6}\sqrt{a^2-2b^2}.$$ A: $\int_0^b\int_0^x\sqrt{a^2-x^2-y^2}~dy~dx$ $=\int_0^b\left[\dfrac{y\sqrt{a^2-x^2-y^2}}{2}+\dfrac{a^2-x^2}{2}\sin^{-1}\dfrac{y}{\sqrt{a^2-x^2}}\right]_0^x~dx$ (according to http://en.wikipedia.org/wiki/List_of_integrals_of_irrational_functions) $=\int_0^b\dfrac{x\sqrt{a^2-2x^2}}{2}dx+\int_0^b\dfrac{a^2-x^2}{2}\sin^{-1}\dfrac{x}{\sqrt{a^2-x^2}}dx$ Can you take it from here? | Mid | [
0.618834080717488, 34.5, 21.25 ]
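As a numerical cross-check of the final closed form above (my own addition, not part of the original thread; the midpoint-rule integrator and the test values, e.g. a = 2, b = 1, are chosen to satisfy a > √2·b):

```python
import math

def closed_form(a, b):
    """I = (3a^2 b - b^3)/6 * asin(b / sqrt(a^2 - b^2))
           + a^3/3 * atan((s - a) / (s + a)) + b^2 s / 6,
       where s = sqrt(a^2 - 2 b^2)."""
    s = math.sqrt(a * a - 2 * b * b)
    return ((3 * a * a * b - b ** 3) / 6 * math.asin(b / math.sqrt(a * a - b * b))
            + a ** 3 / 3 * math.atan((s - a) / (s + a))
            + b * b * s / 6)

def numeric(a, b, n=400):
    """Composite midpoint rule for int_0^b int_0^x sqrt(a^2 - x^2 - y^2) dy dx."""
    hx = b / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx          # midpoint in x
        hy = x / n                  # inner integral runs from 0 to x
        inner = sum(math.sqrt(a * a - x * x - ((j + 0.5) * hy) ** 2)
                    for j in range(n))
        total += inner * hy
    return total * hx

a, b = 2.0, 1.0
print(closed_form(a, b), numeric(a, b))  # the two values should agree closely
```

The integrand stays real on the whole triangle because a^2 - x^2 - y^2 >= a^2 - 2b^2 > 0 under the stated condition.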
As a signatory to UNCLOS, the PRC occasionally implies that its interpretations should trump those of the United States, which has yet to ratify the convention; Washington nevertheless employs UNCLOS as a bludgeon against Beijing's claims that the convention permits coastal states to limit foreign military activities in the EEZ. | Low | [
0.5238095238095231, 33, 30 ]
Efficient Computational Procedure for Fatigue Life Assessment Using DOE/DSA Techniques

Publishing Venue: The IP.com Prior Art Database

Abstract: During the early product design stage, it is very important to explore the design space for the optimal and robust design. The current analytical design of experiment (DOE) procedure is not efficient for durability fatigue applications in automobile industries. In the fatigue analysis, the unit-load stress coefficients are needed to compute the stress history and fatigue life. The usual practice is to run finite element analysis (FEA) to obtain those stresses. The FEA is performed using any commercial FEA software and, depending on the size of the model, it can be computationally very expensive. The DOE process works by initially laying out the design points in the space defined by the design parameters. For each design point, the finite element analysis must be performed.
The serious drawback with the current DOE procedure is that hundreds of FEA evaluations may be needed to obtain a reasonably accurate fatigue life distribution, which makes the current DOE process undesirable to many engineers in the automobile industry. Since automotive engineers are faced with vehicle Computer-Aided Engineering (CAE) models that have up to half a million degrees of freedom, the current DOE process is impractical.

Country: United States
Language: English (United States)

The clear advantage of the proposed DOE/DSA procedure is that it reduces the number of FEA runs to only two. In other words, even if the engineer decides to perform a 100-run DOE, only two FEA evaluations are needed in the entire process. (In the current DOE process, the engineer would need to evaluate 100 finite element analyses.) This is possible by utilizing the design sensitivities from the FEA code, such as MSC/NASTRAN, and updating the unit-load stress coefficients by using these sensitivities. Therefore, by integrating the sensitivity information into the DOE process, the proposed DOE/DSA process is much more efficient and desirable for automotive engineers to use.

Procedure: When performing an analytical design of experiment (DOE) for fatigue life assessment, the following steps are currently taken (see Figure 1). The accuracy of fatigue life assessment rests on how well, and how many, the design points are spread across the design space. Ideally, the engineer would prefer to spread many design points as evenly as possible to cover the entire space. But due to the lengthy nature of the DOE procedure, the engineer cannot afford to evaluate too many design point... | Mid | [
0.6162790697674411, 26.5, 16.5 ]
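The core idea in the record, replacing repeated expensive analyses with one baseline run plus design sensitivities, can be illustrated with a toy first-order model. Everything below is a made-up stand-in: `expensive_stress` plays the role of the FEA solve and the quadratic response is not from the original disclosure.

```python
def expensive_stress(x1, x2):
    """Stand-in for an FEA solve: some smooth stress response of two design variables."""
    return 100.0 + 8.0 * x1 - 5.0 * x2 + 0.6 * x1 * x2

def sensitivities(x1, x2, h=1e-5):
    """Baseline value plus design sensitivities d(stress)/d(x) via finite
    differences, playing the role of the DSA output from the FEA code."""
    s0 = expensive_stress(x1, x2)
    d1 = (expensive_stress(x1 + h, x2) - s0) / h
    d2 = (expensive_stress(x1, x2 + h) - s0) / h
    return s0, d1, d2

# One baseline evaluation with sensitivities; afterwards every DOE point is a
# cheap first-order update instead of a new solve.
x1_0, x2_0 = 1.0, 2.0
s0, d1, d2 = sensitivities(x1_0, x2_0)

doe_points = [(1.1, 2.0), (0.9, 2.1), (1.05, 1.95)]
for x1, x2 in doe_points:
    approx = s0 + d1 * (x1 - x1_0) + d2 * (x2 - x2_0)
    exact = expensive_stress(x1, x2)
    print(f"({x1}, {x2}): approx={approx:.3f} exact={exact:.3f}")
```

For DOE points close to the baseline the first-order update tracks the "expensive" response closely, which is the efficiency argument the record makes for updating unit-load stress coefficients with sensitivities.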
Antiquing with Pamela: Antique furniture including lighting and period American, English and Continental furniture. Decorative arts include silver, ceramics and estate jewelry. Paintings and sculpture grace our Fine Art.

Requiem Antique: Requiem Antique Jewelry and Curios, where you will find unique and rare materials. We bring you Victorian to Midcentury selections as well as Native American Jewelry.

Vintages: Vintages of Los Gatos specializes in antique estate jewelry and sterling silver, as well as sewing items, perfume bottles and vanity accessories. We offer collections, including Hawaiiana and desk accessories. | Mid | [
0.6450116009280741, 34.75, 19.125 ]
\oldKey la \minor \newKey re \minor \time 3/4 s2.*8 \time 4/4 s1*4 \time 3/4 s2.*2 \time 4/4 s1*3 \time 3/4 s2. \time 4/4 s1*2 \time 3/4 s2.*3 \time 2/2 s1 \time 4/4 s1*3 \time 3/4 s2.*2 \time 2/2 s1*2 \time 4/4 s1 \time 2/2 s1*2 \time 4/4 s1*5 \time 3/4 s2.*54 \time 2/2 s1 \time 4/4 s1 \time 3/4 s2.*3 \time 2/2 s1 \bar "|." | Low | [
0.49777777777777704, 28, 28.25 ]
Casting Shadows - Chapter Three

The forest was still dark and dismal. Raen imagined the sun would soon begin to creep into the sky. Even so, it made for a difficult feat, treading over unearthed roots, rocks, and ice. The darkness that shrouded her vision was only becoming increasingly agitating as her search went on. "I told you I'm less useful than a candle," she said under her breath. After a slew of curses and battling a wall of dead branches, Raen found herself in a small clearing. The young elf drew in a long, even breath and surveyed what she had found. While the area was spacious and barren, the tall hanging branches above provided some shelter with shade. The fading moonlight filtered through the trees to make chimerical patterns on the ground below, and as the wind blew snow in, it glinted under the light before disappearing again. The quiet she found here was calming, yet she couldn't will her pulse to slow. Her mind still raced around thoughts of the boy. Raen decided that chasing shadows through a dark forest was not only futile but also idiotic. Instead she rested her weary body against a tree trunk and stared up, awaiting the sun's return to the sky. As she did so, her eyes began to close until she was asleep beneath the canopy of trees. She remained like that for a while, undisturbed and peaceful, if only from exhaustion. It didn't last for very long though. A snapping sound had her head jerking upright and her body tensing. Her hands searched for some sort of weapon as she forced herself to stand, but she came up with nothing. The young elf heard another sound. It was closer this time, and Raen proceeded to back away from it, cautiously eyeing the area she believed it was coming from.
Another noise echoed, and it had her heart in her throat as she continued to back up. In a moment, something tumbled forward. It broke the branches and rustled the bushes as it did so landing in a heap. Raen yelped and jumped backwards, tripping over the uneven ground. The boy sat, blinking wildly for a moment and shook his head as if to clear it. The pair stared at each other, wide eyed, from across the clearing. As their minds processed the events that had just taken place, they both just sat statuesque on the ground until one finally made an effort to break the silence. The little boy snickered. “You have a stick in your hair.” He giggled. Raen blinked furrowing her brows and shuffled to her feet. “Yeah, well, you should see what you look like!” She said defensively and pulled at the wood that had knotted itself into her braid. She looked to the little boy who sat in a ball, resting his chin on his knees. She could finally get a good look at him. He was a peculiar looking snow elf child with wide features and a goofy smile. His hair was lighter than Raen’s but cut to his jaw line which only exaggerated the shape of the little boy’s face. His eyes were a redeeming feature, she decided. They were very large and full of the naive wonderment people seemed to be infatuated with in small children. Raen thought this had to be the child she had seen spared at the Nord’s fire, yet he didn’t seem distraught. Raen only became aware she was staring when the young boy – who had apparently been babbling- spoke again. “What?” She asked and shook herself mentally as the boy repeated his question. “What’s your name?” His head cocked and his eyes fixated on the girl. “Why?” Raen asked slowly. He giggled again, which only made Raen sneer. She didn’t like to be laughed at, especially by some strange little kid. “It’s only a name. Don’t you know a name is the first thing you ever learn about someone?” the boy asked, finally moving to stand up. 
Raen watched curiously as the little boy stood up and dusted his hands off on his tattered shirt. He strode up to the girl confidently and stuck out a tiny, pale hand. “Mine’s Ayric.” She stared at the hand a while, then realized how stupid she must have seemed. “Raen. My name is Raen.” She finally realized that the shadows had been chased away from the sky. In lieu of the dreary darkness, a bright new light had begun to shine. While relief washed over her, memories of the night before came flooding back. The path to Heirsuun was a three day journey and even though the Nord’s were slow and intoxicated half the time, they were taking the direct paths. She looked down at Ayric who was studying her quizzically. It was as if she was thinking aloud. Raen knew he would be a liability, extra carrying weight she couldn’t afford. Yet, she couldn’t leave him. There was enough humanity in her to know she couldn’t turn her back on a child, especially when he had lost everything, just like her. “Do you know where any of your family is?” Raen finally asked. The boy shook his head. “I don’t have a family. Mrs. Gwyn took care of me at Frost Haven. She was a nice lady, I always got to go outside and play when the weather was nice. She taught me how to build a snow fort, and we played in it a whole day once.” For a moment, Raen felt a twinge of something deep within herself, but it was gone like a whisper in the wind before she could register what it was. Frost Haven was the orphanage, and she had met Mrs. Gwyn a time or two. She gathered that the woman she saw the previous night must have been her. Raen could feel her legs grow weaker so she moved to lean against a tree. Ayric seemed indifferent to the tragedy that had been bestowed upon him. He had lost interest in his chat about snow forts with Raen so he had begun to spin circles in the middle of the clearing, staring up at the trees. He was babbling on to her again about something incoherent. 
Raen turned her attention away and closed her eyes. She had seen her share of death and animosity, and she wondered what he had been subjected to. None of that was truly important right now, though. Instead, she mustered up her wavering strength and took a step away from the tree, and ensured she was steady enough to stand on her own. She looked to the child who had fallen and was lying on the ground. He turned his head and smiled at her as his hair brushed over his eyes. She didn’t smile back, as she wasn’t here to comfort him. Rather, she was here to make it to Heirsuun, and that’s what she intended to do. “Ayric, let’s go. You’re coming with me.” She barked at him. “Are we going on a hike? I’ve never been on a hike. I’ve never been in a forest either. Why do they get so dark and scary at night? If I was an animal I wouldn’t live in a forest.” Ayric continued to ramble until Raen finally interrupted. “No, alright? We aren’t going on a hike. Just keep quiet.” She ordered, and set off with her little companion a few steps behind who ignored her order entirely. The forest, in all its twisting and menacing confusion, was familiar to Raen. Her father had often taken her out to hunt so she knew the land rather well. However, once they got close to Heirsuun, that advantage diminished. She’d never been nor had any idea on what to expect, she had only seen the outlines over the horizon, carved into the side of Mount Kaalreach. As a little girl, she had hoped to visit one day, but she never had she imagined it would be on these terms. From behind her, she heard Ayric as they trudged, gabbing on about something that was entirely lost in his digressive speech. She rolled her eyes and continued on, ignoring the little pest behind her. While his nature was baffling, it didn’t change the fact that he pushed every one of the girl’s limited buttons. This, she thought, would prove to be rather interesting. 
They walked for a while and not one moment was silent between Ayric’s constant badgering and her own thoughts. They finally came across a rather steep incline. There were jagged rocks and uneven ground riddled over the hill, but if they were to scale it, it would save an incredible amount of time. Raen made the decision and began to climb. She expected protest from the inexperienced little boy, but he made no sounds of impatience or displeasure. He only continued babbling, this time about how rocks sometimes looked like animals from a distance. The climb was challenging, for the girl hadn't done so in a very long time. Memory had her hands moving to grab at sturdy jut-outs and her feet propelling her forward. She focused on her next move, until she found her way to the top. Once she heaved her body over, she reached out to Ayric, who was surprisingly not far behind. “And sometimes if the light hits it right, you would swear it’s a frost troll,” she heard him say as she hoisted his small frame up onto the even ground. They both turned at the same time to reveal the impressive view. A sea of trees rolled over hills like waves, breaking in small clusters at parts and becoming incredibly dense in others. Mount Kaalreach sat proudly, bordered by greenery and crowned by the clouds. In its center they saw Heirsuun. The city was large even from so far away, and it was tucked into the mountain like a bird huddled away from the wind. Amidst all of the beauty, Raen made a silent promise. Heirsuun would remain unharmed, for her, her people, and her father; they would stand ready to fight. | Low | [
0.520576131687242, 31.625, 29.125 ] |
Atomic Interlamellar Ion Path in High Sulfur Content Lithium-Montmorillonite Host Enables High-Rate and Stable Lithium-Sulfur Battery. Fast lithium ion transport with a high current density is critical for thick sulfur cathodes, stemming mainly from the difficulties in creating effective lithium ion pathways in high sulfur content electrodes. To develop a high-rate cathode for lithium-sulfur (Li-S) batteries, extenuation of the lithium ion diffusion barrier in thick electrodes is potentially straightforward. Here, a phyllosilicate material with a large interlamellar distance is demonstrated in high-rate cathodes as high sulfur loading. The interlayer space (≈1.396 nm) incorporated into a low lithium ion diffusion barrier (0.155 eV) significantly facilitates lithium ion diffusion within the entire sulfur cathode, and gives rise to remarkable nearly sulfur loading-independent cell performances. When combined with 80% sulfur contents, the electrodes achieve a high capacity of 865 mAh g-1 at 1 mA cm-2 and a retention of 345 mAh g-1 at a high discharging/charging rate of 15 mA cm-2 , with a sulfur loading up to 4 mg. This strategy represents a major advance in high-rate Li-S batteries via the construction of fast ions transfer paths toward real-life applications, and contributes to the research community for the fundamental mechanism study of loading-independent electrode systems. | High | [
0.68075117370892, 36.25, 17 ] |
Villa keeper Given ready for Ireland U-turn He is keen to talk to Ireland boss Giovanni Trapattoni about a return to the Republic of Ireland fold. A source close to the player told the Irish Sun: "It is something on Shay's mind and he is open to the idea of an Ireland comeback. He has always wondered if it was right to go after the Euros and he would be interested in talking to Giovanni Trapattoni. "If the manager wants him back, then it is something that could happen." | Low | [
0.518796992481203, 34.5, 32 ] |
Breaking: Google Spends $3.1 Billion To Acquire DoubleClick About 20 minutes ago Google announced that they have agreed to acquire DoubleClick for $3.1 billion in cash (nearly double the size of their YouTube acquisition). Microsoft was reportedly in a bidding war with Google for the company. Google gets access to DoubleClick’s advertising software and, perhaps more importantly, their customers and network. DoubleClick was founded in 1996. DoubleClick was taken private in 2005 by Hellman & Friedman and JMI Equity for $1.1 billion. The New York Times is reporting that DoubleClick’s revenues are about $300 million/year. 10x revenue for a mature company is a…healthy…valuation. At least part of the acquisition price appears to be due to a desire by Google to keep this asset out of Microsoft’s hands. | Mid | [
0.5900900900900901, 32.75, 22.75 ] |
/*************************************************************************** qgmaptoolellipseextent.cpp - map tool for adding ellipse from extent --------------------- begin : July 2017 copyright : (C) 2017 by Loïc Bartoletti email : lbartoletti at tuxfamily dot org *************************************************************************** * * * This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * * ***************************************************************************/ #include "qgsmaptoolellipseextent.h" #include "qgsgeometryrubberband.h" #include "qgsmapcanvas.h" #include "qgspoint.h" #include "qgsgeometryutils.h" #include "qgslinestring.h" #include "qgsmapmouseevent.h" #include "qgssnapindicator.h" QgsMapToolEllipseExtent::QgsMapToolEllipseExtent( QgsMapToolCapture *parentTool, QgsMapCanvas *canvas, CaptureMode mode ) : QgsMapToolAddEllipse( parentTool, canvas, mode ) { } void QgsMapToolEllipseExtent::cadCanvasReleaseEvent( QgsMapMouseEvent *e ) { QgsPoint point = mapPoint( *e ); if ( !currentVectorLayer() ) { notifyNotVectorLayer(); clean(); stopCapturing(); e->ignore(); return; } if ( e->button() == Qt::LeftButton ) { if ( mPoints.empty() ) mPoints.append( point ); if ( !mTempRubberBand ) { mTempRubberBand = createGeometryRubberBand( mLayerType, true ); mTempRubberBand->show(); } } else if ( e->button() == Qt::RightButton ) { release( e ); } } void QgsMapToolEllipseExtent::cadCanvasMoveEvent( QgsMapMouseEvent *e ) { QgsPoint point = mapPoint( *e ); mSnapIndicator->setMatch( e->mapPointMatch() ); if ( mTempRubberBand ) { switch ( mPoints.size() ) { case 1: { if ( qgsDoubleNear( mCanvas->rotation(), 0.0 ) ) { mEllipse = QgsEllipse().fromExtent( mPoints.at( 0 ), point ); mTempRubberBand->setGeometry( mEllipse.toPolygon( segments() ) ); } else { double dist 
= mPoints.at( 0 ).distance( point ); double angle = mPoints.at( 0 ).azimuth( point ); mEllipse = QgsEllipse().fromExtent( mPoints.at( 0 ), mPoints.at( 0 ).project( dist, angle ) ); mTempRubberBand->setGeometry( mEllipse.toPolygon( segments() ) ); } } break; default: break; } } } | Low | [
0.518939393939393, 34.25, 31.75 ] |
Counselling@DIA Languages@DIA Secondary School Talent Show Learning Support Policy and Procedures (Secondary School) The Learning Support Department at DIA ensures that students with learning difficulties achieve sufficient proficiency in literacy and numeracy before leaving primary school. The Learning Support students are assisted with strategies to facilitate their learning in order for them to achieve their potential. The students are assisted on areas of specific learning difficulties in skill areas of literacy and numeracy. The Learning Support Students that have an Assessment Report from an external agency will have an Individual Educational Plan (IEP) with specific targets that are tracked and modified. Psychological Assessment Reports need to be repeated and updated every two years. Where a student is attending individual therapy such as Speech and Language Therapy (SLT), or Occupational Therapy (OT) termly progress reports must be submitted to the Learning Support Coordinator. In addition, updated reports can be requested annually. All students with special learning needs or in need of remediation will require a formal external assessment before they become part of the SEN (Special Educational Need) student list. Identification Primary School Learning Support Specialists meet with the SS SENCO to discuss upcoming year 7 SEN students. Teachers flag students that require support in one or more subject areas (behavioral, learning, physical). The Learning Support Specialist observes student in the classroom. Learning Support Specialist and subject teacher meet to discuss the observation. Outcome of observation is discussed with parents and further assessment is requested. IEP is created based on recommendations from assessment report(s) and input from parents and teachers. Teachers differentiate instruction for SEN students and are supported by regular Professional Development. 
Teachers and the Student Development Coordinator/SENCO meet to review the student’s IEP. Request external educational assessment Role of teacher: To refer students with continued and consistent learning difficulties for further evaluation and assessment. To provide instructional recommendations that will assist students in their learning and development. Role of learning support: To review with teacher and make joint decision on areas of concern, meeting with parent and write referrals. To provide an individualized Intervention program for the student. Discuss assessment results with the concerned specialist and parents. To assist in the implementation of recommended strategies in the context of the classroom. To monitor and facilitate the effectiveness of the recommendations and their implementation. Supporting SEN students The subject specific teacher will differentiate instruction and make accommodations based on the student’s IEP. SEN students will have a report and IEP (Individual Education Plan) with clear strategies and accommodations. All teachers to view current strategies that are working with the student and/or note any focus that may need adjustment via live, regularly updated SEN Tracking Forms. Teachers are supported through Professional Development seminars that are need-based and held regularly. SENCO to meet with students for additional support needed outside of the classroom. SEN Audits are carried out to ensure accommodations/strategies are being made by the classroom teacher. Review meetings with teachers Morning SEN meetings are held each Monday to Thursday. The goal of these meetings is to: Review the student’s current progress and consider their CAT data. To update or make additional accommodations based on classroom strategies and observations. To document meeting outcome. To review and feedback on IEP and proposed strategies. To discuss ‘next steps’ and develop a focus for each, individual student. 
Review meeting/updates with parent SENCO meets with parents to update them on accommodations being made for their child. Semester Parent Updates are sent via email, informing them of accommodations and inviting them in to meet and discuss classroom strategies. Annual reviews are held in which IEP modifications are made based on the needs of the SEN student (with reference to SEN tracking forms). | High | [
0.684736091298145, 30, 13.8125 ] |
According to nail color brand OPI, if you took a road trip across America and translated your experiences into nail polish colors, the result would be the Touring America collection. That’s right, dear readers, the Touring America collection, which will launch this fall, is a 12-piece lineup of nail polishes inspired by some of America’s most famous cities—including New Orleans, New York, Philadelphia, Seattle, Memphis and more. As for the colors, you’ll find everything from I Eat Mainely Lobsters, a coral-pink, to Road House Blues, a deep navy blue. The entire Touring America collection will be available at select salons and stores on August 3. Each bottle will retail for $8.50. | Mid | [
0.600961538461538, 31.25, 20.75 ] |
Fred Cooke (footballer) Frederick Robert Cooke (5 July 1896 – 1976) was an English professional footballer who played as a forward for Sunderland. References Category:1896 births Category:1976 deaths Category:People from Kirkby-in-Ashfield Category:English footballers Category:Association football forwards Category:Sunderland A.F.C. players Category:Swindon Town F.C. players Category:Accrington Stanley F.C. players Category:Bangor City F.C. players Category:English Football League players | Mid | [
0.567264573991031, 31.625, 24.125 ] |
Incorporation of nickel into ruminal factor F430 as affected by monensin and formate. A mixed culture of ruminal microorganisms was used to demonstrate that nickel (Ni) is incorporated into factor F430 and to determine the effects of monensin and formate on incorporation of Ni into factor F430. Ruminal microorganisms obtained from a semicontinuous culture were grown for 24 h in the presence of 63Ni and a 2 x 2 factorial arrangement of monensin (0 to 5 micrograms/ml) and formate (0 to 20 mM) treatments. Factor F430 was isolated and purified from the cultures by QAE-Sephadex A-25 column chromatography. The purified preparation contained 63Ni and exhibited a peak in absorbance at 430 nm. Methane production was decreased (P less than .01) 45% by monensin but was increased (P less than .01) 1.8-fold by formate. However, incorporation of 63Ni into factor F430, which is ubiquitous in methanogens and not found in other bacteria, did not parallel changes in methane production. Incorporation of 63Ni into factor F430 was decreased (P less than .01) 55% by monensin but was not affected (P greater than .05) by formate. In addition to its use for synthesis of urease and hydrogenase, Ni is involved in ruminal fermentation as a component of factor F430. | Mid | [
0.612836438923395, 37, 23.375 ] |
Industries Municipalities, Counties & Government Entities Overview Friday, Eldredge & Clark counsels counties, municipalities, and special-purpose entities throughout the state on legal and operational needs. As a leader in shaping the Arkansas governmental and regulatory environment, we have strengths in counseling governmental entities that no other law firm in the state surpasses. Our lawyers have a leading bond counsel practice focused on the financing of public and nonprofit projects on behalf of local and state governmental agencies and special-purpose entities throughout Arkansas. Our tax law practice is second to none in the state, with lawyers who have done much to shape the Arkansas tax structure. And with the state’s leading employee benefits practice, we represent educational institutions and governmental entities with their specialized needs. Client Focus State of Arkansas Regional Entities School Districts and Higher Education Institutions Municipalities Counties Services to Government Entities Bond Counsel Local and state governmental agencies and special-purpose entities in Arkansas and nationwide have made Friday, Eldredge & Clark a leading bond counsel firm. We provide counsel on bond insurance and other types of credit enhancement, interest rate swaps, and derivative products. In addition to our work as issuer’s and underwriter’s counsel, we serve as counsel to private purchasers of bonds, credit providers, and trustees. As a result, we have the ability to analyze public finance transactions from various perspectives, which enhances our ability to represent our public finance clients effectively in the issuance of state and local bonds, general obligation financing, school district financing, industrial and economic development bonds, and special-purpose financing, among others. 
Economic Development We play a central role in facilitating economic development bond financing for businesses and government throughout Arkansas, having worked on hundreds of millions of dollars in economic development financing transactions. That includes work as issuer’s and underwriter’s counsel on such financing vehicles as industrial development revenue bonds, special assessment bonds, and special source revenue credits and bonds. Education Our firm is among the leaders in bond financing for Arkansas school districts, representing underwriters and issuers as fiscal agents on behalf of the districts themselves. We also offer district administrators and boards of education a wide range of other services from employee benefits advice to courtroom representation in employment, desegregation, and other litigated matters. Employee Benefits A wide range of Arkansas municipalities and other governmental entities rely on us for guidance on the best ways to structure and administer their benefits and pension plans, including non-ERISA qualified, 403(b) and Section 457 plans. | High | [
0.713872832369942, 30.875, 12.375 ] |
Motion Capture The term “Motion Capture” means different things to people using it for unique purposes. At the UM3D Lab: Motion Capture is the recording of movement in 3D space. Many people have heard of motion capture from its use in creating animated characters for movies and video games. At the 3D lab, we are applying this technology to projects in other fields such as kinesiology, anthropology, and even aerospace. If you want to capture and study motion in 3D space, we can provide you with the guidance and expertise to see if motion capture can benefit your project and help you through the steps. Types of Motion Capture How to Get Started Available Technologies Other Resources Types of Motion Capture The data captured during a motion capture session can be used for many different purposes including: Computer Body Interaction Using motion capture to understand the movements made by the human body. This can be applied to medicine, sports science, cultural preservation, or motion research purposes. Engineering This field applies to man-made objects that are tracked to test their function. The 3D Lab has captured the movement of vehicles and robots, including Taubman College’s Kuka Robot. Entertainment The use of motion capture to create life-like animated characters in the entertainment industry has revolutionized this field. From video game characters to realistic CGI effects in blockbusters like “Avatar”, the 3D Lab makes use of the same technology utilized in Hollywood. These categories show some of the ways motion capture can be applied to various fields. If your project does not fall into any of these categories it does not mean that it cannot be captured. Create an outline of your project while keeping these key questions in mind. What is the motion you are trying to capture? What range does this motion require? What data/information do you need to get from the motion? How will the data be presented or shared? 
We can help you work through these questions regardless of where you are in your project’s life-cycle and what skills you have available to you. How to Get Started As there are many different kinds of motion capture, you can learn more about it and get comfortable with it in a few ways. Consultation Session After you have decided who or what you want to motion capture and what you want out of it, it is time to schedule a consultation with our group. Steffen Heise, our Motion Capture Specialist, can help you figure out the project details, describe the technology used in our lab, and set up a time schedule to get started. Additionally, you may want to also think about how the data you collect will be presented at the end of the capturing. Workshops Starting in the Fall, students will be able to acquire digital “badges” in Motion Capture by taking hands-on motion capture workshops. Whether you are trying to earn the digital badge or just want to learn more about it, you may attend one of the workshops to learn more about capabilities of motion capture and the software used. Experiment There are a few ways to experiment on your own without involving the 3D lab. One option is renting out the Leap Motion system from the information desk located on the second floor of the Duderstadt. Another is trying out a Kinect system on your own. These systems require less expertise and preparation to use but are also less accurate. Schedule a Session Once you have met with an expert and decided on your course of action, it’s time to get started and capture data. The setup is done by our staff so all you need to do is bring your subject and come prepared to start learning the Blade software, which is used to polish the resulting data. Available Technologies Different projects require various kinds of capture systems and software. Below we have listed the those available to use at our lab. 
Software Blade If the motion is captured by the Vicon system, there are often a few mistakes in each take where the camera did not have direct view of one of the markers or mixed up two of them. We then use Blade software to manually edit the clips and gather the correct data. This is usually the longest part of the project since it takes time to become familiar with the software and correct all of the blunders. Motionbuilder After the data is initially edited on Blade, it can be further refined on Motionbuilder. This software uses the data provided from Blade to create a 3D animation which can be fit to a certain character or kept as a neural plasticman. Separate takes can also be combined into one clip on Motionbuilder. This is where the data actually becomes an animation. Hardware Kinect The Kinect was created by Microsoft to use with the Xbox gaming system. It is easy to use and set up, widely available and affordable. However, the Kinect can only track humans, and only when they are facing the camera. Since it only has one camera, it is not very accurate. In our lab we applied the Kinect to create 3D spaces that can easily be manipulated by anyone with simple instructions. Vicon The Vicon system uses a ring of eight cameras that track the location of reflective markers in space. If the motion of a human is being recorded, the person has to wear a black suit that then has markers attached to it. Although more prior preparation is required, this system is highly accurate and gives the subject freedom to move in any direction. It can record more than one person at a time, a person with an object, or any moving object alone. As long as it has a marker on it, its movement can be recorded. Leap If your project only needs to capture hand and finger movements, then the Leap system is for you. As it only tracks hand movements, this hardware is usually used to make interaction with the computer easier and seamless. | High | [
0.6716417910447761, 33.75, 16.5 ] |
BIRMINGHAM, Ala. (WIAT) — Some neighbors in Five Points South are concerned after waking up to some startling vandalism Monday. “Drug Dealer” was plastered in bright pink letters along the side of a parked vehicle. A photo of the damage was posted and shared to social media. The pink mess was also splattered on a home and was visible on the public street. “You’re trying to build a community, and you have people that come in with just blatant disregard for someone else’s property, and not just that but public areas, you know they spray painted the street as well,” said Stephen Foster, president of the Five Points South Neighborhood Association. Foster said he and possibly other volunteers will begin looking at ways to clean the public area. “Five Points is on the rise and we are selling properties and everything else, the last thing we want is for this to be happening in the neighborhood that might deter a family from coming in and setting roots down in 5 points to raise their family or create a business,” Foster said. Neighbors were surprised about what happened overnight. Sara Carlos lives in the area and says people stopped to take pictures of the vehicle near her home. “Made me feel insecure, like it is not a safe place, and people will stop with the car and ask me questions,” Carlos said. Carlos said she does not know her neighbors. She hopes the mess is cleaned up soon. “I am upset, because it is in front of my house too, but sad what happened to the house and the car,” she said. Residents aren’t sure what led up to vandalism, but leaders hope offenders think twice before allowing private problems to become a public issue. “Really at the end of the day, it’s just unnecessary,” Foster said. CBS 42 reached out to Birmingham Police regarding the incident. A spokesperson said that a BPD officer was in the process of filing a report. | Low | [
0.45454545454545403, 26.875, 32.25 ] |
Transverse incisions for resection of ileocolic Crohn's disease. Laparoscopic ileocecectomy is advocated as the ideal surgical approach for ileocecal Crohn's disease. Our experience suggests that equivalent outcomes are accomplished through a small right lower quadrant (RLQ) transverse incision in this patient population. We conducted a retrospective chart review of 39 patients undergoing ileocecectomy for Crohn's disease using an RLQ transverse incision between 1991 and 2009. The mean operative time was 99 minutes with a mean length of hospital stay of 4.2 days and mean duration until return of bowel function of 2.9 days. There were no deaths or major complications. Long-term follow-up revealed four patients (13%) who required hospitalization for small bowel obstructions, one patient (3%) developed an incisional hernia, and no patients required an ileostomy. Ileocecectomy performed for Crohn's disease using an RLQ transverse incision yielded similar hospital lengths of stay and time to return of bowel function as those published for laparoscopic resection. This approach may result in shorter operative times when compared with the inexperienced surgeon performing a laparoscopic resection. Long-term follow-up revealed the risk for future RLQ ileostomy is low and the development of hernias or bowel obstruction is unlikely. | Mid | [
0.6540284360189571, 34.5, 18.25 ] |
Director says casting lends “gravitas to roles that are rooted in naiveté and youth.” Gary Briggle and Wendy Lehr Ben Krywosz In these days of non-traditional casting, Nautilus Music-Theatre in St. Paul, MN, is taking non-traditional in a whole new direction. In its current production of the classic mini musical The Fantasticks, the director has cast 64-year-old Gary Briggle as Matt (The Boy) and 72-year-old Wendy Lehr as Luisa (The Girl), characters described as 20 and 16, respectively, in the script. They sing the duets “Soon It's Gonna Rain” and “They Were You.” The two actors are married in offstage life. With musical direction by Jerry Rubino, the production is in the middle of a sold-out limited run that concludes April 19. Also in the cast: Brian Sostek, Christina Baldwin, Jennifer Baldwin Peden and William Gilness. “This casting came about for a variety of reasons," said director Ben Krywosz. “Our group of artists have been talking recently about advancing years, artistic adventures, and the innate stylization of music-theater. We wondered how we might use the non-naturalism of music-theatre to offer our artistic elders an opportunity to focus on direct storytelling. Gary and Wendy (each of whom did the roles ages ago) bring a sense of maturity and gravitas to roles that are rooted in naiveté and youth. And Jen and Christina, both parents of young children and daughters of elderly parents, resonate deeply with their roles, transcending gender and dealing simply with the urge to protect their offspring.” Krywosz added, “When an influential piece like The Fantasticks becomes part of the canon, it becomes harder for audiences to see (to paraphrase El Gallo), ‘not with their eyes, for they are wise, but with their ears, and hear it with the inside of their hand’— in other words, beginner’s mind. How can interpretive artists recreate the kind of impact the piece had on audiences in 1960, 56 years later? 
“Our casting requires audiences to use theater’s most powerful tool — their own imagination — to help us create a climate in which the power of the words and music engage. And by using beloved members of the Twin Cities’ theatrical community, we were guaranteed of an audience connection through extraordinary performances.” Based on Les Romanesques by Edmond Rostand, The Fantasticks tells the story of two young lovers who live in adjacent houses. Their fathers have built a wall, ostensibly to keep them apart, but really to bring them together. Among other things, it's a story about growing up. The show has music by Harvey Schmidt and book and lyrics by Tom Jones. It opened Off-Broadway May 3, 1960, and, after a bumpy start, ran 17,162 performances, still the longest continuous run in American theatre history for a show on a full performance schedule. The original run closed in 2002, but a 2006 revival is still running Off-Broadway, and has piled up its own run of more than 3,800 performances. The Fantasticks is performed at the Nautilus Music-Theater studio at 308 Prince St #190 in Lowertown St. Paul, MN. | Mid | [
0.635983263598326, 38, 21.75 ] |
General Question My dog is making a weird noise... I gave my dog a piece of bread and she must’ve swallowed it wrong. She’s making a sound like a duck, sort of like sniffling. What can I do to make her either cough up the bread or swallow it down all the way? My dog does that occasionally – it is a horrible sound. I took him to the vet, thinking it might be esophageal collapse, which small dogs can be prone to. He said it was reverse sneezing, and as long as it passed within a minute or two (and it was obvious the dog was breathing) it was ok. | Low | [
0.514227642276422, 31.625, 29.875 ] |
This is an excellent rebuttal of sedevacantism. Sedevacantists maintain that there has been no legitimate pope since Pius XII. Every pope after him, they say, was a heretic; and if you’re a heretic you are outside the Church. Therefore, since the past four popes were heretics (and the present pope is a heretic), then they were not members of the Church and one must be a Catholic to be a valid pope. This aberration is infecting the minds of too many good Catholics every day. The movement’s leading promoters are as tenacious as they are unrelenting in their determination to win converts, not to Roman Catholicism, but to sedevacantism. Indeed, in their minds, if you reject the sedevacantist position you are not a true Roman Catholic. Traditional Catholics are easy prey, especially those who have more zeal than knowledge. They have become so discouraged by the scandalous mess caused by the Church’s hierarchy, beginning with the pope, that they are tempted to believe that God would never allow the Barque of Peter to have such liberal men at the helm. Add to that temptation some apparently convincing arguments from clever, sedevacantist theologians and you have the recipe for schism. If you have harbored such thoughts, or know people who have, and you want to help them persevere in the Faith while remaining within the Church as it endures the humiliation of Christ’s passion, then this talk is for you. Br. André provides clear answers to all the subtleties of the sedevacantist arguments. He reminds us that “the sedevacantist position is a problem, NOT an answer.” It is built on the Protestant idea of private inspiration. Each of us becomes our own pope in this camp of “rugged individualists.” Brother André does not make excuses for the scandal of this papally assisted “auto-demolition” of the Church (I quote Paul VI). He sticks to his theme: other than the pope himself freely resigning, no human authority can unseat a validly elected pope. 
We cannot take ecclesial authority into our own hands and start tossing around disciplinary papal legislation from past centuries and declaring ipso facto excommunications. Nor do we have any authority to determine that a papally approved sacramental rite is invalid. In effect, as Brother explains, the sedevacantists have established their own church. And what a strange church it is! — fractured into how many factions, full of ceremony, full of anathema hurlers, and bereft of charity. Imploring the Holy Spirit to guide his words, Brother André began and ended his lucid presentation with a prayer for our then Holy Father, Pope John Paul II. | High | [
0.6980198019801981, 35.25, 15.25 ] |
Social class inequalities in the utilization of health care and preventive services in Spain, a country with a national health system. In Spain, despite the existence of a National Health System (NHS), the utilization of some curative health services is related to social class. This study assesses (1) whether these inequalities are also observed for preventive health services and (2) the role of additional private health insurance for people of advantaged social classes. Using data from the Spanish National Health Survey of 2006, the authors analyze the relationships between social class and use of health services by means of Poisson regression models with robust variance, controlling for self-assessed health. Similar analyses were performed for waiting times for visits to a general practitioner (GP) and specialist. After controlling for self-perceived health, men and women from social classes IV-V had a higher probability of visiting the GP than other social classes, but a lower probability of visiting a specialist or dentist. No large class differences were observed in frequency of hospitalization or emergency services use, or in breast cancer screening or influenza vaccination; cervical cancer screening frequency was lower among women from social classes IV-V. The inequalities in specialist visits, dentist visits, and cervical cancer screening were larger among people with only NHS insurance than those with double health insurance. Social class differences in waiting times were observed for specialist visits, but not for GP visits. Men and women from social classes IV-V had longer waits for a specialist; this was most marked among people with only NHS insurance. Clearly, within the NHS, social class inequalities are still evident for some curative and preventive services. Further research is needed to identify the factors driving these inequalities and to tackle these factors from within the NHS. 
Priority areas include specialist services, dental care, and cervical cancer screening. | High | [
0.6634615384615381, 34.5, 17.5 ] |
I Needed You,But Where Were You Below is the poem entitled I Needed You,But Where Were You which was written by poet Jasmine Whipps. Please feel free to comment on this poem. However, please remember, PoetrySoup is a place of encouragement and growth. I Needed You,But Where Were You I wanted your love, Needed your love, But you couldn't pull through, Of all the promises you made, And I just wonder if the love you say that you had, Was true from the beginning. And yes, I loved you, I wanted what was best, But the walls built around your heart Wouldn't allow you to see it, And I tried to make it clear, If it had been any clearer The sight would have blinded you. I wanted to be able to lie in your chest And cry if I needed to But you weren't there And you couldn't see how I was hurting, You couldn't tell that I didn't feel the love, That you so claimed to have. But I realize now, That there's someone out there who's better. | Mid | [
0.540755467196819, 34, 28.875 ] |
/*
* Copyright 2011-2014 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package pl.com.bottega.ecommerce.sales.domain.productscatalog;
import pl.com.bottega.ecommerce.canonicalmodel.publishedlanguage.AggregateId;
import pl.com.bottega.ecommerce.sharedkernel.Money;
/** Object Mother supplying pre-configured {@link Product} fixtures for tests. */
public class ProductObjectMother {

    public static Product someProduct() {
        return new Product(AggregateId.generate(), new Money(10.0), "product 1", ProductType.STANDARD);
    }
}
| Low | [
0.526923076923076, 34.25, 30.75 ] |
---
author:
- 'Weiming Dong, Fuzhang Wu, Yan Kong, Xing Mei, Tong-Yee Lee, and Xiaopeng Zhang [^1]'
bibliography:
- 'Photo.bib'
title: 'Image Retargeting by Content-Aware Synthesis'
---

Introduction
============

Image retargeting has remained at the forefront of widely-used digital media processing techniques for a long time. To adapt raw image material for a specific use, there is often a need to reach a target resolution by reducing or inserting image content. To protect certain important areas, some methods [@Avidan:07; @Wang:08; @Panozzo:2012; @Lin:2013:PBI] use significance maps based on local low-level features such as gradient, dominant colors, and entropy. However, high-level semantics also play an important role in human image perception, so it is usually necessary to better understand the content of an image in order to choose a feasible retargeting scheme. Moreover, as found in [@Rubinstein:2010], viewers are more sensitive to deformation than to loss of image area. For some examples it is therefore better to summarize the content than to distort/warp or crop the original image [@Simakov:08; @Wu:2010; @Dong:2014:SBI].

Although many retargeting methods have been proposed, a few noticeable and critically influential issues endure. They are mostly related to the complexity of textural patterns in many natural images. Previous retargeting techniques attempt to modify the image without accounting for the properties of textural regions, and may easily produce apparent visual artifacts, such as over-smoothing (Figs. \[fig:tulip04\](c), \[fig:tire\_bd\], \[fig:field23\_bd\]), local boundary discontinuity (Figs. \[fig:tulip04\](c), \[fig:tire\_sm\]), content spatial structure mismatch (Figs. \[fig:tulip04\](c), \[fig:tire\_bd\], \[fig:field23\_bd\], \[fig:field23\_sm\]), uneven distortion (Figs. 
\[fig:tulip04\](d), \[fig:tire\_aad\], \[fig:tire\_mod\], \[fig:field23\_mod\]), over-squeezing/over-stretching (Figs. \[fig:tulip04\](e), \[fig:tire\_aad\], \[fig:field23\_pbw\]), and damage of scene layout (Figs. \[fig:tulip04\](f), \[fig:field23\_sm\]). The examples in Figs. \[fig:tulip04\]-\[fig:field23\] are not special; they exhibit one common problem - that is, *when images contain large textural patterns, retargeting quality is generally affected by their complexity.* Since regularity is an important high-level feature for human texture perception [@Rao:1993:CGM] and texture exists in many natural images, this problem cannot be ignored.

We propose a novel content-aware synthesis algorithm to address the challenge of handling textural patterns in image retargeting. In preprocessing, the textural regions (T-regions) of the input image are automatically detected based on local variation measures, and each pixel in a T-region is assigned a significance value. In the retargeting process, the input image is first retargeted to the target size by fast multi-operators (F-MultiOp). Then, the T-regions are regenerated by *synthesis*, which arranges sample patches with respect to the neighborhood metric and patch position information (Figs. \[fig:tulip04\](b), \[fig:tire\_texd\], \[fig:field23\_texd\]). Patches with higher significance values have a higher probability of appearing in the result. With the content-based information and the texture synthesis technique, the proposed approach can better protect both the local shape of the texture elements (texels) and the global visual appearance of the T-regions than previous image retargeting methods.

Compared with recent studies on image retargeting, the major contributions of the proposed approach are as follows:

- A fast and automatic method to detect the T-regions in an image. This makes it possible for the retargeting operation to treat T-regions and NT-regions (non-textural regions) with different strategies.
- A novel texture saliency detection method to generate the significance map in a T-region, based on both color and texture features.

- A synthesis-enhanced image retargeting approach that eases the unpleasant visual distortions caused by applying seam carving, warping, or scaling to all texels in T-regions. Our approach can thus yield better results in terms of texel shape and preservation of globally varying effects compared with related approaches.

To compare with state-of-the-art image retargeting methods, we construct a new benchmark image set and conduct a user study to demonstrate the effectiveness of our framework.

\[fig:tulip04\]

\[fig:tire\]

\[fig:field23\]

Related Works {#sec:related}
=============

Image Retargeting
-----------------

Numerous content-aware image retargeting techniques have been proposed in recent years. Cropping has been widely used to eliminate unimportant information from the image periphery or to improve the overall composition [@Zhang:2013:PGT; @Yan:2013:LCA; @Zhang:2014:WSP]. Seam carving methods iteratively remove seams in the input image to preserve visually salient content [@Avidan:07; @Rubinstein:08]. A seam is a continuous path with minimum significance. Multi-operator algorithms combine seam carving, homogeneous scaling, and cropping to optimally resize images [@Rubinstein:09; @Dong:2009c; @Dong:2012]. Pritch et al. [@Pritch:09] introduced Shift-Map, which removes or adds band regions instead of scaling or stretching the image. In many cases these *discrete* approaches can generate pleasing results; however, seam removal may cause discontinuity artifacts, and cropping is unsuitable when visually salient content lies near the image borders.
*Continuous* retargeting methods have been realized through image warping or mapping using several deformation and smoothness constraints [@Gal:06; @Wolf:07; @Wang:08; @Zhang:08; @Krahenbuhl:2009; @Guo:2009; @Liang:2013:OSF]. A finite element method has also been used to formulate image warping [@Kaufmann:2013:FEI]. Recent continuous retargeting methods focus on preserving local structures. Panozzo et al. [@Panozzo:2012] minimize warping energy in the space of axis-aligned deformations to avoid harmful distortions. Chang et al. [@Chang:2012:LAI] couple mesh deformations with similarity transforms for line features to preserve line structure properties. Lin et al. [@Lin:2013:PBI] present a patch-based scheme with an extended significance measurement to preserve the shapes of both visually salient objects and structural lines. These approaches perform well on shape preservation of salient objects but often over-squeeze or over-stretch T-regions, distorting all of their texels, since T-regions are usually not salient within the whole image.

\[fig:examples\]

\[fig:workflow\]

*Summarization*-based retargeting approaches eliminate repetitive patches instead of individual pixels and preserve patch coherence between the source and target image during retargeting [@Simakov:08; @Cho:08; @Barnes:09]. These techniques measure patch similarity and select patch arrangements that fit together well to change the size of an image. However, lacking sufficient content information, such methods may discard the global visual effect and over-smooth some regions when the target size is small. High-level image content information is analyzed and integrated in some recent summarization approaches. For example, Wu et al. [@Wu:2010] detect the corresponding lattice of a symmetric image region and retarget it by trimming the lattice. Basha et al. 
[@Basha:2013:SSC] employ depth information to maintain geometric consistency when retargeting stereo images by seam carving. Lin et al. [@Lin:2014:OCW] utilize the object correspondences in the left and right images of a stereoscopic image in retargeting, which allows the generation of an object-based significance map and the consistent preservation of objects during warping. Dong et al. [@Dong:2014:SBI] detect similar objects in the input image and then use object carving to achieve natural retargeting effects with minimum damage to object saliency.

There also exist a few efforts to deal with textures for better retargeting. Kim and Kim [@Kim:2011:TAS] exploit the higher-order statistics of the diffusion space to define a reliable image importance map, which can better preserve the salient object when it is located in front of a textural background. This approach does not consider how to preserve the visual effects of textural regions. Zhang and Kuo [@Zhang:2012:RAT] resize the salient and irregular regions by warping and re-synthesize the regular regions. However, the authors of [@Zhang:2012:RAT] did not address in which situations their regularity detection algorithm works, which calls its robustness into question. Moreover, the synthesis algorithm they used can only deal with isotropic textures, which is unsuitable for most natural images with vivid anisotropic texture regions.

Texture Detection and Synthesis
-------------------------------

The adaptive integration of color and texture attributes in image segmentation is one of the most investigated research topics in computer vision (surveyed in [@Ilea:2011]). However, most image segmentation algorithms do not explicitly indicate the type of each region (textural or non-textural) in the result. Targhi et al. [@Targhi:2006] present a fast texture descriptor based on the LU transform, but do not discuss how to determine whether a pixel is texture or non-texture from the feature values. Bergman et al. 
[@Bergman:2007] present an intuitive texture detection method based on contrast and disorganization measurements of image blocks. The method is not effective on noisy images, which tend to have decreased contrast, and it often generates many disjoint areas. Todorovic and Ahuja [@Todorovic:2009] formulate the detection of texture subimages as identifying modes of the pdf of region descriptors. However, the method is not efficient enough (5 minutes for a $512 \times 512$ image) for practical image retargeting applications.

Texture synthesis is a general example-based methodology for synthesizing similar phenomena [@Wei:2009]. However, the basic MRF-based scheme in most existing texture synthesis methods cannot adequately handle the global visual variation of texels, such as perspective appearance and semantic content distribution. Dong et al. [@Dong:2008] present a perspective-aware texture synthesis algorithm by analyzing the size variation of texels, but verbatim-copying artifacts often appear in their results. Moreover, common texture synthesis algorithms are designed for enlargement and cannot be directly used for image retargeting. Wei [@Wei:2008] presents an inverse texture synthesis approach to generate a smaller exemplar from a large input texture. However, for globally varying textures, the output quality of this approach usually depends on the accuracy of the original map. Therefore, if applied to normal T-region retargeting, it will easily lose the global visual variation or damage the local content continuity of the original image.

Retargeting by Synthesis {#sec:retarget}
========================

System Overview
---------------

Some standard examples studied in our work are shown in Fig. \[fig:exp\]. Different from most examples in the RetargetMe benchmark, our images all contain one or more large textural regions, which bring new challenges to image retargeting. 
Previous methods can well preserve the shape of one or more salient objects in the retargeting results but often neglect the “background” textures, which also play important roles in most natural images. The shapes of texels and some global visual effects of T-regions are easily damaged in their results. Our method addresses these problems.

Fig. \[fig:workflow\] illustrates the framework of the proposed method. In the preprocessing step, the input image is segmented into one or more T-regions and one NT-region (disjoint NT-regions are also treated as one region) by texture detection (Sec. \[sec:texdet\]). A hierarchical saliency detection for texture is then performed to generate a significance map for each T-region (Sec. \[sec:texsal\]). The significance map of the whole image is also adaptively adjusted according to the percentage of area covered by T-regions. Afterwards, the input image is filtered by structure-preserving image smoothing. In the image retargeting step, the fast multi-operator (F-MultiOp) method [@Dong:2012] is first used to resize the filtered input image to the target size (Sec. \[sec:initret\]). Retargeting the smoothed image guides the resizing of the original image, which eliminates the effect of textural details (Sec. \[sec:initret\]). We then re-generate the T-regions of the resulting image via the proposed content-aware synthesis operator, in order to maintain the perspective variation and content diversity, as well as the texel shapes (Sec. \[sec:syn\]). Finally, we refine the boundaries between T- and NT-regions by re-synthesizing the pixels of the boundary areas (Sec. \[sec:merge\]).

\[fig:field13\_tex\]

Automatic Texture Detection {#sec:texdet}
---------------------------

The first step of our image retargeting system is to locate the T-regions. Recently, local variation measures have been used to smooth texture and extract structures from an image [@Buades:2010:FCT; @Xu:2012:SET; @Karacan:2013:SIS]. 
However, these approaches cannot provide the positional information of textures, especially for natural images that contain both T- and NT-regions. We develop a fast texture detection method based on the measure of relative total variation (RTV). Given an input image, we first calculate the *windowed total variations* $\mathcal{D}_x(p)$ (in the $x$ direction) and $\mathcal{D}_y(p)$ (in the $y$ direction) for pixel $p$, as well as the *windowed inherent variations* $\mathcal{L}_x(p)$ and $\mathcal{L}_y(p)$. Details of calculating the windowed variations are described in [@Xu:2012:SET]. We then calculate the reliability of pixel $p$ being a texture pixel as: $$R(p) = \frac{\mathcal{D}_x(p)}{\mathcal{L}_x(p) + \epsilon} + \frac{\mathcal{D}_y(p)}{\mathcal{L}_y(p) + \epsilon}, \label{equ:tex_r}$$ where the division is an element-wise operation and $\epsilon = 10^{-5}$ is used to avoid division by zero.

After calculating the reliability of each pixel, we use an iterative algorithm to set a threshold $R_T$ that determines the textural pixels. We first calculate the average reliability $R_A$ of all the pixels and use $R_T = R_A$ to separate the pixels into two parts. Pixels with $R(p) \geqslant R_T$ are set as textural pixels (T-pixels) and those with $R(p) < R_T$ as non-textural pixels (NT-pixels). We then calculate the average reliability $R_A^T$ of the T-pixels and $R_A^{NT}$ of the NT-pixels. After that, we set the new threshold as $R_T' = \alpha \cdot R_A^T + (1.0 - \alpha) \cdot R_A^{NT}$, where $\alpha = 0.5$ in all our experiments. We update $R_T = R_T'$ and repeat the above steps until $|R_T' - R_T| < \epsilon$. We obtain a noisy texture mask (see Fig. \[fig:field13\_tex\_noisy\]) after segmenting the original image into T-pixels and NT-pixels. To improve the quality of the mask, we over-segment the input image into super-pixels by SLIC [@Achanta:2012:SSC] (see Fig. \[fig:field13\_tex\_slic\]). 
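As an illustrative aside, the iterative threshold selection described above amounts to an isodata-style averaging scheme. The sketch below is a minimal Python rendering of that loop, not the authors' implementation; the `reliability` array stands in for the per-pixel $R(p)$ values, and the function name and defaults are our own assumptions:

```python
import numpy as np

def select_threshold(reliability, alpha=0.5, eps=1e-5, max_iter=100):
    """Iteratively split reliabilities R(p) into texture / non-texture sets.

    Starts from the global mean and repeatedly sets the threshold to a
    weighted mean of the two class averages until the change is below eps.
    """
    r = np.asarray(reliability, dtype=float).ravel()
    threshold = r.mean()  # initial R_T = R_A, the global average
    for _ in range(max_iter):
        texture = r[r >= threshold]       # candidate T-pixels
        non_texture = r[r < threshold]    # candidate NT-pixels
        if texture.size == 0 or non_texture.size == 0:
            break
        # R_T' = alpha * R_A^T + (1 - alpha) * R_A^NT
        new_threshold = alpha * texture.mean() + (1.0 - alpha) * non_texture.mean()
        converged = abs(new_threshold - threshold) < eps
        threshold = new_threshold
        if converged:
            break
    return threshold
```

A pixel $p$ is then labeled as a T-pixel whenever $R(p) \geqslant$ the returned threshold.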
A super-pixel is labeled as texture if more than half of its pixels are labeled as texture. The smoothed texture mask (see Fig. \[fig:field13\_tex\_mask\]) is further improved by graph cut in order to obtain more accurate boundaries for the T-regions. As shown in Fig. \[fig:field13\_tex\_final\], our algorithm accurately detects the grassland as a T-region. Please see more texture detection results and an analysis of the accuracy of the algorithm in the supplemental material.

Texture-Based Significance Map Generation {#sec:texsal}
-----------------------------------------

Pixel significance measurements are commonly used in image retargeting approaches. Usually a saliency map is employed to help generate the significance map of the input image. However, almost all current saliency detection algorithms are designed to detect and segment distinct salient objects. *To the best of our knowledge, a saliency detection algorithm aiming at marking the visually important areas (not objects) of a texture has not been proposed.*

We present a hierarchical framework to deal with the saliency detection of a texture. We first segment the image into $M$ patches using the SLIC method [@Achanta:2012:SSC]. Since a texture usually does not contain a distinct salient object, saliency detection becomes a matter of determining which patches are visually unique from the others. Previous approaches usually use color or contrast information to evaluate the visual difference between pixels or patches [@Cheng:2011:GCB; @Margolin:2013:WMP], but this is not effective enough for texture images, as shown in Fig. \[fig:field01\]. 
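Returning briefly to the texture mask of Sec. \[sec:texdet\], the super-pixel majority vote used to smooth it can be sketched as follows. This is an illustrative sketch under our own naming assumptions (`labels` holds SLIC segment ids; the function name is hypothetical), not the authors' code:

```python
import numpy as np

def smooth_texture_mask(pixel_is_texture, labels):
    """Relabel each super-pixel as texture iff more than half of its
    member pixels are texture in the noisy per-pixel mask."""
    pixel_is_texture = np.asarray(pixel_is_texture, dtype=bool)
    labels = np.asarray(labels)
    smoothed = np.zeros_like(pixel_is_texture)
    for seg in np.unique(labels):
        members = labels == seg
        # Majority vote over the super-pixel's member pixels.
        smoothed[members] = pixel_is_texture[members].mean() > 0.5
    return smoothed
```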
In our approach, in order to better evaluate the saliency of a texture, we employ 2D Gabor filters [@Pang:2013:FGT] with 4 frequencies and 6 directions to extract the texture features of the T-regions. For each SLIC patch $A_i$, we calculate the average and variance of the Gabor values of all its pixels and thus obtain a 48D texture feature. We then define the visual uniqueness saliency cue of $A_i$ as a weighted sum of color and texture differences from the other patches: $$\begin{aligned} U_i = \sum_{j = 1}^{M}({w(A_i) \cdot \exp(\frac{-D_s(A_i, A_j)}{\sigma_s^2})} \\ \nonumber \cdot (\|\mathbf{C}_i - \mathbf{C}_j\|^2 + \|\mathbf{G}_i - \mathbf{G}_j\|^2)),\end{aligned}$$ where $\mathbf{C}_i$/$\mathbf{C}_j$ are the average colors of the patches and $\mathbf{G}_i$/$\mathbf{G}_j$ are their texture features. The color and texture features are both normalized to $[0,1]$. $w(A_i)$ counts the number of T-pixels in $A_i$; patches with more T-pixels contribute higher visual uniqueness weights than those containing fewer T-pixels. $D_s(A_i, A_j)$ is the square of the Euclidean distance between the patch centroids of $A_i$ and $A_j$, and $\sigma_s$ controls the strength of the spatial weighting. In our implementation, we set $\sigma_s^2 = 0.5$ with pixel coordinates normalized to $[0, 1]$. Similar to [@Yan:2013:HSD], we also add the location heuristic that pixels close to a natural image center are often salient: $$H_i = \frac{1}{w(A_i)}\sum_{x_j \in A_i}{\exp(-\lambda\|\mathbf{x}_j - \mathbf{x}_c\|^2)},$$ where $\mathbf{x}_j$ is the coordinate of a pixel in patch $A_i$, and $\mathbf{x}_c$ is the coordinate of the image center. In our experiments, we set $\lambda = 9$ to balance the visual uniqueness and location cues. We combine $H_i$ with $U_i$ to get the saliency of patch $A_i$: $$S_i = U_i \cdot H_i.$$ For further robustness, we compute patch-based saliency at three scales, $M = 100, 500, 1000$, and average them pixel by pixel. As shown in Fig. 
\[fig:clover01\_sal\_texcol\], we can get a coarse saliency map by using the above patch-based hierarchical method. Finally, we adopt an image up-sampling method [@Criminisi:2010:GIV] to refine the coarse saliency map and assign a saliency value to each image pixel. We define the saliency $\tilde{S}_i$ of a pixel as a Gaussian-weighted linear combination of the saliencies of its $N$ neighborhoods: $$\tilde{S}_i = \frac{1}{Z_i}\sum_{j = 1}^{N}{\exp(-\frac{\|\mathbf{c}_i - \mathbf{c}_j\|^2 + \|\mathbf{g}_i - \mathbf{g}_j\|^2 + \|\mathbf{x}_i - \mathbf{x}_j\|^2}{2\sigma})S_j},$$ where $\mathbf{c}_i$ is the pixel color, $\mathbf{g}_i$ is the Gabor texture feature, $\mathbf{x}_i$ is the pixel coordinate, and $Z_i$ is a normalization factor. We set $\sigma = 30$ in all our experiments. The result is shown in Fig. \[fig:clover01\_sal\]. A similar refinement method is used in [@Perazzi:2012:SFC], but it considers only the color and position features.

\[fig:flowers13\]

As discussed above, previous saliency detection approaches are usually designed to highlight salient object(s). As shown in Fig. \[fig:clover01\], for an image in our dataset, previous methods either over-darken or over-highlight most parts of a T-region. They also have difficulty accurately detecting the visually important areas of T-regions due to the lack of texture features. On the other hand, the content balance is easily damaged during retargeting if the saliency values of T-regions are much smaller or much larger than those of NT-regions, especially when the sizes of the T- and NT-regions are similar. Therefore, to address these problems, in the saliency map of the whole image we replace the parts corresponding to T-regions with the saliency maps generated by our method. For the generation of the initial saliency map, we use the method in [@Yan:2013:HSD] if the area of the NT-region is less than $30\%$ of the image, since this method is good at distinguishing salient objects from complex background patterns. 
Otherwise, we use the HSD method [@Yang:2013:SDG] to generate a more balanced initial saliency map. We use the saliency map as the significance map for the retargeting operation. In Fig. \[fig:flowers21\] we show an example of using different saliency maps to retarget an image. We can see that our method highlights more visually unique content in the saliency map than HSD.

Initial Retargeting {#sec:initret}
-------------------

As the initial retargeting operation, we first smooth the original image by structure extraction [@Xu:2012:SET]. We then use the F-MultiOp method [@Dong:2012] to resize the smoothed image to the target size. The significance map is utilized to preserve the important areas of both T- and NT-regions. The operation details are recorded, including the numbers of the three operators (i.e., seam carving, homogeneous scaling, and cropping) and the paths of the pixels used by seam carving. Finally, the original image is retargeted by replaying these operations. This scheme efficiently eliminates the unexpected effects that the large-magnitude gradients of complex texture details have on seam carving. After initial retargeting, the resized NT-region is directly used in the final result, but we re-generate the T-regions by content-aware synthesis.

Content-Aware Synthesis for T-regions {#sec:syn}
-------------------------------------

We synthesize a T-region of the resized image by using the original T-region as the exemplar. However, for most images, directly synthesizing the content with normal texture synthesis algorithms cannot generate satisfying results; it may even change the semantics of the image and introduce obvious boundary discontinuity. The global visual appearance may be damaged when the resizing ratio is large. As shown in Figs. \[fig:flowers13\_its\]-\[fig:flowers13\_ats\], the perspective characteristic no longer exists and the spatial structure of the content is also damaged. 
For our content-aware image resizing application, the synthesis algorithm should preserve the global visual appearance of the original T-regions as well as local continuity.

**Initialization** We employ a patch-based synthesis framework, which is effective for image textures, to synthesize the resized T-regions. In our experiments, we find that a *good* initialization increases the quality of the resized results. Therefore, we use the resized T-regions generated by F-MultiOp in the initial retargeting as the initial guess. With the help of the significance map during F-MultiOp, this effectively preserves the global visual appearance and the visually salient areas in the result.

**Neighborhood metric** The neighborhood similarity metric is the core component of example-based texture synthesis algorithms [@Wei:2009]. We denote by $Z_p$ the spatial neighborhood around a sample $p$, which is constructed by taking the union of all pixels within the spatial extent defined by a user-specified neighborhood size. We formulate the distance metric between the neighborhoods of two samples $p$ and $q$ as: $$M(Z_p; Z_q) = \mu_p \cdot \sum_{p' \in Z_p}{(\|\mathbf{c}_{p'} - \mathbf{c}_{q'}\|^2 + \omega \cdot \|\mathbf{x}_{p'} - \mathbf{x}_{q'}\|^2)}, \label{equ:nmetric}$$ where $p'$ runs through all pixels $\in Z_p$, $q' \in Z_q$ is the spatially corresponding sample of $p'$, $\mathbf{c}$ represents the pixel color in RGB space, and $\mathbf{x}$ is the local coordinate of a sample pixel. Different from traditional texture synthesis, which usually defines the neighborhood metric as a simple sum of squared pixel attributes (such as colors and edges), we add spatial information to the neighborhood metric. The spatial term preserves the global appearance without causing over-smoothing or generating obvious partial/broken objects (discussed in detail in Sec. \[sec:results\]). 
In Equation (\[equ:nmetric\]), $\mu_p$ is a penalty coefficient used to avoid overusing the same patches in the resulting image: $$\mu_p = 1 + \beta \cdot t_p,$$ where $t_p$ is the number of times that patch $Z_p$ has been used in the resulting image and $\beta = 10$ is a constant. In Fig. \[fig:car\], we can see the importance of adding $\mu_p$ to the neighborhood metric for avoiding unexpected repeated patterns. Note that for Fig. \[fig:car\_texd\_rep\] we also did not integrate the significance map during the initial retargeting process, so the salient yellow trees in the middle of the original image are lost in the result.

**Optimization** Given an original exemplar T-region $\mathcal{I}$, our goal is to synthesize an output $\mathcal{O}$ that exhibits visual appearances similar to $\mathcal{I}$. We formulate this as an optimization problem via the following energy function: $$E(\mathcal{I}; \mathcal{O}) = \sum_{p \in \mathcal{O}}{M(Z_p; Z_q)}, \label{equ:energy}$$ which measures the similarity between the input exemplar $\mathcal{I}$ and $\mathcal{O}$ via our local neighborhood metric as defined in Equation (\[equ:nmetric\]). Specifically, for each output sample $q \in \mathcal{O}$, we find the corresponding input sample $p \in \mathcal{I}$ with the most similar neighborhood (according to Equation (\[equ:nmetric\])) and sum their squared neighborhood differences. Our goal is to find an output $\mathcal{O}$ with a low energy value. We follow the EM-like methodology in [@Kwatra:2005] to optimize Equation (\[equ:energy\]) because of its high quality and generality under different boundary conditions. We perform our synthesis in multiple resolutions through an iterative optimization solver. For Equation (\[equ:nmetric\]), we use a larger $\omega$ at lower resolutions to strengthen the spatial constraint. 
This scheme helps to preserve the global appearance during synthesis; we then decrease $\omega$ at higher resolutions to avoid local texel repetition. In all our experiments, we use a 3-level pyramid and, within the levels from lower to higher, fix $\omega = 0.65, 0.25, 0.1$.

**Adaptive neighborhood matching** In each iteration, we search for the most similar input neighborhood for each output sample and assign the exemplar patch from the matched neighborhood to the output. This gradually improves the synthesis quality. During the search step, exhaustively examining every input sample to minimize the energy value in Equation (\[equ:nmetric\]) can be computationally expensive. Previous works use $K$-means [@Kwatra:2005] or $K$-coherence [@Han:2006] to find approximate nearest neighborhoods (ANNs). These strategies efficiently accelerate the search process. However, when the texel diversity increases, the ANNs may not be accurate enough to improve the neighborhood quality, which causes unsatisfactory results (Fig. \[fig:field01\_texd\_k\]). Therefore, we instead search for the exact nearest neighborhoods by brute force over the exemplar image. Since the nearest-neighborhood searches are independent of each other, we implement our EM-based synthesis algorithm fully on the GPU, with the search performed in a parallel framework, which dramatically accelerates it. Specifically, in each thread, we calculate the similarity of two neighborhoods in the M-step and perform the averaging operation for each pixel in the E-step. Moreover, to further accelerate neighborhood matching, our hierarchical framework of three layers uses an adaptive scheme to gradually narrow the searching domain in the finer layers. 
In layer 1, where the images are processed at the lowest resolution, for each patch in the resulting image we search for the best match over the whole exemplar. Then, in layer 2, for each patch in the resulting image, we narrow the searching domain to the $40\%$ of exemplar pixels around its corresponding patch. We narrow the searching domain further, to $20\%$, in the finest layer. Note that in the two finer layers, we still perform a full search in the first matching operation and narrow the domain in the later steps. **Synthesis as a whole** We synthesize the image as a whole when most of the scene content is texture (usually more than $70\%$). The advantage of this strategy is that it better preserves the global visual appearance and effectively reduces broken-object artifacts. Figs. \[fig:tire\], \[fig:car\], \[fig:flowers22\], and \[fig:village\] show 4 typical examples of this class. These results are directly generated by the synthesis operator.

Merge of T-Regions and NT-Regions {#sec:merge}
---------------------------------

Since we resize the T- and NT-regions with different strategies, discontinuities of image content may exist between them. As demonstrated in Fig. \[fig:field18\], the image content on the boundary between T- and NT-regions may change after synthesis. To reduce this discontinuity artifact, we grow the boundary by expanding it 4 pixels on both the inward and outward sides. We then get an overlapping area between the T- and NT-regions in the resized image. Afterwards, we re-synthesize those boundary pixels using the original image as the input exemplar. The inclusion of NT-region pixels on the boundary in the synthesis process helps to maintain content consistency. Figs. \[fig:field18\_texd\_nf\] and \[fig:field18\_texd\] compare the results without and with fixing the discontinuity, respectively.
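The boundary band used for this re-synthesis can be sketched as follows. This is a NumPy illustration with a naive dilation, where `radius` corresponds to the 4-pixel expansion; wrap-around at the image border (an artifact of `np.roll`) is ignored for simplicity, and the function name is hypothetical.

```python
import numpy as np

def grow_boundary_mask(t_mask, radius=4):
    """Mark the pixels within `radius` of the T/NT boundary (sketch).

    t_mask : boolean array, True inside the T-region.
    Returns a mask of the overlapping band that gets re-synthesized
    for a seamless merge between T- and NT-regions.
    """
    band = np.zeros_like(t_mask)
    # a pixel lies on the boundary if any 4-neighbor has a different label
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(t_mask, (dy, dx), axis=(0, 1))
        band |= t_mask != shifted
    # naive dilation of the boundary band by `radius` (square window);
    # np.roll wraps at the border, which a real implementation would clamp
    grown = np.zeros_like(band)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            grown |= np.roll(band, (dy, dx), axis=(0, 1))
    return grown
```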
Results and Discussion {#sec:results}
======================

We have implemented our method on a PC with an Intel Core(TM) i7 950 CPU at 3.06 GHz, 8GB RAM, and an nVidia GeForce GTX 770 GPU with 2048MB of video memory. Our T-region synthesis algorithm is fully implemented on the GPU with CUDA. The texture detection and saliency detection are both performed in real time. The timing of the resizing examples shown in this paper ranges from 10 seconds to 40 seconds, depending on the sizes of the output T-regions. Figs. \[fig:tulip04\], \[fig:tire\], \[fig:field01\]-\[fig:flowers13\], and \[fig:field18\]-\[fig:village\] show our image retargeting results. We performed a user study for visual comparison (described in detail below). For each figure, we show our result together with the competing results that received relatively higher votes; our result was generally preferred. We can see that our content-aware synthesis method preserves the overall texture features in terms of texel shape, perspective, boundary continuity, content completeness, and clarity. The perspective appearance is retained. The shapes of texels are reasonably preserved, without over-squeezing/over-stretching or uneven distortion of texels within the regions. The boundary between T- and NT-regions is continuous. All the prominent contents of the textures appear in the result.

Evaluation on Textural Scene Retargeting Dataset
------------------------------------------------

Although images from the RetargetMe benchmark [@Rubinstein:2010] have a large variety in their content, most of their textural regions are simple and smooth. To represent the more general situations that real-world images fall into, we construct a Textural Scene Retargeting Dataset (TSRD) with 61 images. They all contain diversified and large textural patterns (occupying more than $50\%$ of the area of the whole image). These images are collected from RetargetMe (9 images), CSSD [@Yan:2013:HSD] (Fig. \[fig:flowers21\_o\]) and the Internet.
Some images in the RetargetMe benchmark that also contain textures are not included in TSRD, because either their T-regions are small or their textures are relatively smooth without obvious texels (such as a still water surface, a smooth snowfield, or a manicured lawn). The images in the new dataset can be roughly divided into three types: pure textures with vivid global visual effects (Type 1), images with textures around one or more salient objects (Type 2), and images with distinct T- and NT-regions (Type 3). For the exemplars in this paper, Figs. \[fig:clover01\_o\], \[fig:flowers21\_o\], \[fig:flowers13\_o\], and \[fig:flowers22\_o\] belong to Type 1. Figs. \[fig:tire\_o\], \[fig:car\_o\], \[fig:child\_o\], and \[fig:girl02\_o\] belong to Type 2. Figs. \[fig:tulip04\](a), \[fig:field13\_tex\_o\], \[fig:field01\_o\], \[fig:car\_o\], \[fig:field18\_o\], \[fig:lavender\_o\], \[fig:field19\_o\], \[fig:field14\_o\], \[fig:bicycle1\_o\] and \[fig:village\_o\] belong to Type 3. The whole dataset and the comparisons with previous state-of-the-art methods are all shown in the supplemental material.

\[fig:lavender\]

Comparison with previous methods
--------------------------------

For quantitative evaluation, we compare our method with six state-of-the-art image retargeting approaches, i.e., Axis-Aligned Deformation (AAD) [@Panozzo:2012], Bi-Directional Similarity (BDS) [@Simakov:08], Cropping, Multi-Operator (F-MultiOp [@Dong:2012] and MultiOp [@Rubinstein:09]), Patch-Based Warping (PBW) [@Lin:2013:PBI] and Shift-Map [@Pritch:09]. The experiments are performed on our dataset.
***For AAD and PBW***, we choose them for comparison since they are two typical continuous image warping approaches, which were presented recently and have been shown to be among the best warping methods. SV [@Krahenbuhl:2009] is also a good warping method, as demonstrated by tests on the RetargetMe benchmark. However, the user study in [@Panozzo:2012] demonstrates that AAD is better than SV, so we only compare our method with AAD and PBW. The AAD results are generated with the authors’ program using the default parameters. The PBW results are provided by the original author. When dealing with images in TSRD, compared to our method, the main problem of AAD and PBW is that in many cases they over-squeeze some contents (e.g., Figs. \[fig:tulip04\](e), \[fig:field23\_pbw\], \[fig:lavender\_aad\] and \[fig:village\_aad\]) or the salient objects (e.g., Figs. \[fig:tire\_aad\], \[fig:child\_pbw\] and \[fig:bicycle1\_pbw\]), while over-stretching the background (Figs. \[fig:field19\_pbw\] and \[fig:girl02\_pbw\]), which makes some visually important regions too small in the resulting images. In many results the content structure of the scene is obviously imbalanced. The main reason is that warping tends to retain as much content as possible while preserving the aspect ratios of the areas with large energy or significance values. In most images of TSRD, these areas are usually the T-regions (Type 3) or the salient objects (Type 2). Therefore, to maintain the shape of those “important” areas, the T-regions in the results generated by AAD or PBW are either overstretched (e.g., Fig. \[fig:girl02\_pbw\]) or over-squeezed (e.g., Fig. \[fig:village\_aad\]).
Uneven distortion of the salient objects may also appear if their significance values are low, as in Figs. \[fig:tire\_aad\] and \[fig:field14\_aad\]. Specifically, as shown in Fig. \[fig:flowers22\_aad\], when the scene is constructed almost entirely of textures (Type 1), all the contents may be distorted if we use warping-based methods. ***For BDS***, we choose it for comparison since it is a synthesis-based image summarization method. The results are generated by the ImageStack program (<http://code.google.com/p/imagestack/>). For each exemplar, we use different parameters to generate four images and manually choose the best one as the final result. At each gradual resizing step, we set the number of EM iterations to 50 and the number of refinement iterations for each intermediate target to 20. When dealing with the images in TSRD, compared to our method, the main problem of BDS is obvious boundary discontinuity, such as the mountain in Fig. \[fig:tulip04\](c), the beach in Fig. \[fig:field18\_bd\], the sky in Fig. \[fig:lavender\_bd\], and the grassland in Fig. \[fig:field14\_bd\]. The reason is that BDS only uses color distance for neighborhood matching, while the integration of spatial information in our algorithm ensures content continuity. A second problem that often appears in BDS results is the over-smoothing of some areas, such as the left-bottom tulips in Fig. \[fig:tulip04\](c), the middle of the bough and the bottom of the trunk in Fig. \[fig:tire\_bd\], and the small yellow flowers in Fig. \[fig:field19\_bd\]. We consider that this is due to the bidirectional similarity strategy: sometimes one area in the resulting image is “obliged” to be similar to multiple areas of the original image. Our single-directional framework avoids this problem. In fact, for the image retargeting application, content loss is allowed: most users will be satisfied if the important contents are preserved.
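For reference, the bidirectional similarity of [@Simakov:08] sums a completeness term (every source patch should appear in the target) and a coherence term (every target patch should come from the source), whereas our method keeps only the single, coherence-like direction. A patch-level sketch of the assumed form, with illustrative names:

```python
import numpy as np

def bidirectional_similarity(S, T):
    """Patch-level sketch of the BDS energy (assumed form).

    S, T : (n, d) and (m, d) arrays of flattened patches from the
    source and target images, respectively.
    """
    # pairwise squared distances between all source/target patches: (n, m)
    d2 = (
        np.sum(S ** 2, axis=1)[:, None]
        - 2.0 * S @ T.T
        + np.sum(T ** 2, axis=1)[None, :]
    )
    completeness = d2.min(axis=1).mean()  # source patches covered by target
    coherence = d2.min(axis=0).mean()     # target patches explained by source
    return completeness + coherence
```

Dropping the completeness term removes the pressure for one output area to resemble multiple input areas, which is the over-smoothing source discussed above.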
Another common problem of BDS is spatial structure mismatch of content. That is, some patches may appear in wrong places, such as the mountain patches in the sky of Fig. \[fig:tulip04\](c), the flower patches in the sky of Fig. \[fig:field23\_bd\], and the house patches in the sky of Fig. \[fig:village\_bds\]. The other symptom of this problem is that the spatial relationship of some contents may be wrong in the result, such as the child and the seabirds in Fig. \[fig:child\_bds\], and the farm cattle and the farmer in Fig. \[fig:bdstexdcomp\](c) (the farmer should be above the blue line). We consider that this is also due to the lack of a spatial constraint in the synthesis algorithm. We use Fig. \[fig:bdstexdcomp\] to show the main problems of BDS; the two exemplars are picked from the original paper [@Simakov:08]. Moreover, the lack of a good significance map also causes the loss of visually important contents in the results, such as the missing salient red flowers in Fig. \[fig:girl02\_bd\] and the yellow flowers in Fig. \[fig:flowers22\_bd\]. Our saliency map also makes it sufficient for our optimization process to use only single-directional neighborhood matching, since the important areas are preserved in the initial retargeting operation. This also efficiently accelerates the synthesis process. In our experiments, we find that BDS usually takes more than 20 minutes to generate a good result, which limits its practical use in many applications. On the other hand, the ***PatchMatch*** method [@Barnes:09] can also perform image retargeting by synthesis. We do not compare with PatchMatch since it shares the same framework as BDS, so it can be treated as a parallel method for image retargeting.

\[fig:bdstexdcomp\]

***For Cropping***, we choose it for comparison since in most cases it is the first choice of users in the comparative study of [@Rubinstein:2010].
On the other hand, a texture usually exhibits a certain self-similarity, so simple cropping may be enough to summarize its content well. The results are created by an expert photographer. When dealing with images in TSRD, compared to our method, the main problem of cropping is that some important contents are unavoidably lost if multiple important contents are located near different sides of the input image, such as the largest sheep in Fig. \[fig:field14\_cr\], and the trees and mountain in Fig. \[fig:bicycle1\_cr\]. Our synthesis strategy can narrow the distance between the important contents and make them appear together in the result. Moreover, as discussed in [@Rubinstein:2010], cropping should be considered a reference, not a proper retargeting algorithm. We still compare with cropping only because it can sometimes benefit from the self-similarity characteristic of some textures and generate good retargeting results. ***For F-MultiOp and MultiOp***, we choose them for comparison since the MultiOp framework outperforms most algorithms according to the comparative study [@Rubinstein:2010]. F-MultiOp has been demonstrated in [@Dong:2012] to generate results of similar quality to MultiOp, so we consider these two methods as the same in our comparison. The MultiOp results of the six images collected from the RetargetMe benchmark are directly downloaded from the AAD website (<http://igl.ethz.ch/projects/retargeting/aa-retargeting/aa-comparisons/dataset/index.html>), including the AAD results of those six images. The other results are generated by using F-MultiOp, and are all provided by the original author. When dealing with images in TSRD, compared to our method, the main problem of multi-operator methods is uneven distortion of objects or texels, such as the tulips in Fig. \[fig:tulip04\](d), the tire in Fig.
\[fig:tire\_mod\], the girl and flowers in \[fig:girl02\_mod\], the flowers in \[fig:flowers22\_mod\], and the sportsman in Fig. \[fig:bicycle1\_mod\]. The main reason is that although the integration of the cropping operator can somewhat avoid overall distortion, the unavoidable use of the seam carving and homogeneous scaling operators (to protect the similarity between the original image and the resulting image) may still cause uneven distortion of objects or texels, especially when the T-regions are distributed throughout one dimension of the original image, as in Figs. \[fig:tulip04\](a), \[fig:field19\_o\], and \[fig:flowers22\_o\]. This problem can only be solved by using a synthesis-based strategy. On the other hand, as shown in Fig. \[fig:lavender\_mod\], some important contents may be over-squeezed due to the lack of a good significance map. ***For Shift-Map***, we choose it for comparison since it can sometimes generate a synthesis-like result which selectively stitches some contents together to construct a resized image. The results are partly provided by the original author, partly generated with the authors’ online system, and partly generated with a public implementation (<https://sites.google.com/site/shimisalant/home/image-retargeting>) after the online system was taken down. When dealing with images in TSRD, compared to our method, the main problem of Shift-Map is that in many cases it unpredictably loses some important contents (e.g., the left cherry tree in Fig. \[fig:field23\_sm\], the river in Fig. \[fig:lavender\_sm\], the child in Fig. \[fig:child\_sm\], and the largest sheep in Fig. \[fig:field14\_sm\]) or degenerates to cropping, which damages the composition of the original image (e.g., the girl’s location is too far left in Fig. \[fig:girl02\_sm\], and the sportsman’s location is too far right in Fig. \[fig:bicycle1\_sm\]). The main reason is that stitching is minimized due to the global smoothness term.
On the other hand, to get a good retargeting result using Shift-Map, sometimes the user needs to gradually resize the image by manually setting the number of removed columns/rows. This strategy is useful for preserving salient objects in the resulting image, but it is ineffective for our images, because a texture usually does not contain the distinct long boundaries that help to penalize the removal of a large area. We consider this to be the reason that Shift-Map degenerates to cropping in some cases when dealing with our images. Another problem of the Shift-Map method is that it may also cause boundary discontinuity artifacts, such as the string of the tire in Fig. \[fig:tire\_sm\], the grassland boundary in Fig. \[fig:field14\_sm\], and the mountain boundary in Fig. \[fig:village\_sm\]. ***For example-based texture synthesis***, normal texture synthesis algorithms such as texture optimization [@Kwatra:2005] and appearance-space texture synthesis [@Lefebvre:2006:ATS] are clearly not suited to image retargeting, since they are originally designed for enlargement and have no effective scheme for size reduction. Inverse texture synthesis (ITS) [@Wei:2008] can produce a small texture compaction that summarizes the original. Its framework is very similar to that of the BDS method, so it would suffer the same problems as BDS if used for image retargeting. On the other hand, the textural contents in most of our images are not standard textures, so using a pure texture synthesis framework easily causes content discontinuity or damages the globally varying effects. In Figs. \[fig:setangle\] and \[fig:child02\], we show two examples of using only our synthesis operator to retarget a general image that does not contain obvious textural patterns. The results show that our synthesis operator can also work well for some general images.
However, since our synthesis operator is specifically designed for dealing with textural patterns, we cannot guarantee satisfactory results for arbitrary non-textural images. In fact, in our framework the NT-region is retargeted by F-MultiOp instead of the synthesis operator.

User Study
----------

To evaluate our method further, we perform a user study to compare the results from different methods. All the stimuli are shown in the supplemental material. A total of 55 participants (24 males, 21 females, age range 20-45) from different backgrounds attended the comparison of 61 sets of resized images. Each participant was paid \$10 for their participation. All the participants sat in front of a 22-inch monitor at $1680 \times 1050$ px in a semi-dark room. In the experiment, we showed the original image, our result, and the images of the competitors, and then asked which images the participant prefers. For each group, the original image is shown separately in the first row, while the results are randomly displayed in two additional rows within the same page. We allow the participant to choose at most two favorite images from the results. We did not impose a constraint on the decision time; however, we recommended that the participants finish the tests within 30 minutes. We allow the participants to move back and forth across the different pages by clicking the mouse. The average finishing time was 26 min 53 sec. A total of 4903 votes were reported. Fig. \[fig:user\] shows the statistics of how many times the results of each method were chosen as favorite retargeting results. Based on these statistics, our method outperforms all competitors in general. For each test exemplar in TSRD, we show the percentages of times our method and the competitors were chosen by the participants in the supplemental material.

\[fig:user\]

Limitations
-----------

The main limitation of our algorithm is its speed.
Although we implement our synthesis operator fully on the GPU, we still cannot reach the real-time performance of most warping-based methods, especially when the T-regions are large. Our method may also generate unsatisfactory results when the texels are very large (like an object) and have different attributes (color, shape, orientation, etc.). Fig. \[fig:candy\] shows one such example: the texels (candies) are large and visually different from each other, so there are obvious object discontinuities in our result. In this case, one possible way to improve the retargeting quality is to use object carving [@Dong:2014:SBI] to entirely remove some objects.

\[fig:candy\]

Conclusion and Future Work {#sec:con}
==========================

Scenes containing textural regions are very common in natural images. However, as shown in our paper, most of them cannot be well handled by current general image resizing algorithms due to the lack of high-level semantic information. We introduce a novel concept and a robust method to solve this problem. An automatic methodology is proposed to detect the textures and adjust the saliency information. We then use a synthesis-based image resizing system to achieve natural resizing effects with minimal damage to the texels' visual appearance. The integration of spatial information ensures content consistency between the original image and the result image. Our spatial-aware strategy can be integrated into most existing general resizing frameworks to enhance their robustness. Experiments show that our system can handle a great variety of input scenes, especially non-standard textural regions (for example, Fig. \[fig:car\_o\] combines many separate textural objects). For future work, extending the example-based synthesis operator to 3D scene resizing is an interesting direction.

[^1]:
Wainwright relishes chance to close out NLDS

10/7/12: Adam Wainwright collects 10 strikeouts and limits the Nationals to just one run over 5 2/3 impressive frames in Game 1 of the NLDS

By Paul Hagen / MLB.com | WASHINGTON -- Adam Wainwright would have preferred kicking back a little Friday night, relaxing, thinking about how he planned to pitch against the Giants in the National League Championship Series opener. Instead, Wainwright will be on the mound at Nationals Park in Game 5 of the NL Division Series with the Cardinals' season on the line. And he's not afraid to admit that a part of him doesn't mind that St. Louis failed to clinch Thursday night (7:30 p.m. CT on TBS). "Sure, I wish we had won. But this is every pitcher's dream," Wainwright said after Jayson Werth's walk-off homer in Game 4. "It's every competitor's dream to go into huge moments like this. So I look forward to the challenge." It means even more to Wainwright since he was little more than a cheerleader a year ago, rehabbing from reconstructive elbow surgery, when the Cards won the World Series. "The postseason is so special anyway," Wainwright explained. "And this team has battled through so much this year and fought so hard just to get into the postseason. It really can't be understated how special that is to our team, and me in particular. I feel very blessed that I get to go out and compete tomorrow."

Pitching matchup:
- Wainwright -- Pitcher beware: went 4-7 in 15 regular-season road starts this year. Bottom line: postseason tested and looked sharp in Game 1.
- Gonzalez -- Why he'll win: arguably the best pitcher in the National League this season. Pitcher beware: struggled with command in Game 1. Bottom line: needs to regain his regular-season form after a shaky postseason debut.

Before he started the NLDS opener, Wainwright talked like a man who was just happy to be there. That was then. Now it all comes down to one game.
If the Cardinals win, they'll advance to the NLCS against the Giants. If they lose, they'll be left to wonder what went wrong after two blowout wins gave them a lead of 2-1 in the best-of-five series. In Game 1, Wainwright became the first Cards pitcher to strike out 10 batters in a postseason game since Hall of Famer Bob Gibson in 1968. Wainwright held the Nationals to one run on six hits and walked three. Wainwright also needed 100 pitches to get through his 5 2/3 innings in a game the Nats came back to win with a rally in the eighth. That game was played in late-afternoon shadows that confounded hitters on both teams. Also, nine of Wainwright's 10 strikeouts came on curveballs, so it will be interesting to see if the Washington lineup can make an adjustment. "It was just a good pitch for me," Wainwright said after the game. "I wouldn't say the other ones were all bad, [but] that's kind of who I am. I have a good curveball, and there are times when I overuse it. I felt like my fastball command was not all there, so you make adjustments." "That's the best I've seen Adam since before his [surgery]," said Nationals third baseman Ryan Zimmerman, who struck out twice. "Not that he wasn't good the last couple times. We faced him two times late in the year, so a lot of people were kind of saying he was getting tired. But that right there, that was the curveball he had before." Wainwright left the game with runners on first and second and two outs in the sixth inning and the Cardinals up by one. Lance Lynn came in to strike out Werth for what was a crucial out. "I thought he fought and had a little bit of trouble early on making his pitches with his fastball, seemed like he wasn't as fine as when he's having his best game," manager Mike Matheny said. "His breaking ball really bailed him out and kept them off balance. He made pitches when he had to and kept us in the game."
Cards starters Chris Carpenter and Kyle Lohse have largely shut down the Nats in the past two games, but Wainwright laughed when asked if there was anything he could learn from that. "Here's what I take from that: Hitting a baseball is real hard," Wainwright said. "It's the hardest thing in sports. And as a pitcher, if you go out there and make your pitches, throw it where you want to and keep them off balance, you're going to probably be in pretty good shape. And that's what I'm going to try to do." Wainwright did it before. The Cardinals are counting on him to do it again. Paul Hagen is a reporter for MLB.com. This story was not subject to the approval of Major League Baseball or its clubs.
Bank of America stops making private student loans

Bank of America joins Citigroup Inc., SLM Corp. and other lenders who are tightening loan criteria. Private student lending is reported to be unprofitable due to rising borrowing costs, cuts in government subsidies for loans and a lack of investors. The Charlotte, North Carolina-based bank will go on offering government-backed loans, which constituted more than 85 percent of its $6 billion in student lending last year.
import torch
from torch import nn
from torch.autograd import Function
from torch.autograd.function import once_differentiable
from torch.nn.modules.utils import _pair

from models.ops import _C
from apex import amp


class _ROIAlign(Function):
    """Autograd function dispatching to the compiled roi_align kernels."""

    @staticmethod
    def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio, aligned):
        # stash everything the backward pass needs to compute the gradient
        ctx.save_for_backward(roi)
        ctx.output_size = _pair(output_size)
        ctx.spatial_scale = spatial_scale
        ctx.sampling_ratio = sampling_ratio
        ctx.input_shape = input.size()
        ctx.aligned = aligned
        output = _C.roi_align_forward(
            input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio, aligned
        )
        return output

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_output):
        rois, = ctx.saved_tensors
        output_size = ctx.output_size
        spatial_scale = ctx.spatial_scale
        sampling_ratio = ctx.sampling_ratio
        bs, ch, h, w = ctx.input_shape
        grad_input = _C.roi_align_backward(
            grad_output,
            rois,
            spatial_scale,
            output_size[0],
            output_size[1],
            bs,
            ch,
            h,
            w,
            sampling_ratio,
            ctx.aligned,
        )
        # only the feature map receives a gradient; the remaining forward
        # arguments are hyperparameters, so return None for them
        return grad_input, None, None, None, None, None


roi_align = _ROIAlign.apply


class ROIAlign(nn.Module):
    """nn.Module wrapper that stores the pooling configuration."""

    def __init__(self, output_size, spatial_scale, sampling_ratio, aligned):
        super(ROIAlign, self).__init__()
        self.output_size = output_size
        self.spatial_scale = spatial_scale
        self.sampling_ratio = sampling_ratio
        self.aligned = aligned

    @amp.float_function  # run in fp32 even under mixed-precision training
    def forward(self, input, rois):
        return roi_align(
            input, rois, self.output_size, self.spatial_scale, self.sampling_ratio, self.aligned
        )

    def __repr__(self):
        tmpstr = self.__class__.__name__ + "("
        tmpstr += "output_size=" + str(self.output_size)
        tmpstr += ", spatial_scale=" + str(self.spatial_scale)
        tmpstr += ", sampling_ratio=" + str(self.sampling_ratio)
        tmpstr += ")"
        return tmpstr
Petrus Albertus van der Parra

Petrus Albertus van der Parra (29 September 1714 – 28 December 1775) was Governor-General of the Dutch East Indies from 1761 to 1775.

Biography

Petrus Albertus van der Parra was born in Colombo, the son of a Secretary to the government of Ceylon. His great-grandfather had come to India and the family had lived there ever since. In 1728, he began his career at fourteen years old. As everyone had to start as a soldier, he began as a "soldaat van de penne", then became an "assistent" in 1731, and "boekhouder" (bookkeeper) in 1732. He had to move house in 1736 to take up a new job as "onderkoopman" (underbuyer/undermerchant), and at the same time "collectionist" (collector) and "boekhouder" to the General Secretary at Batavia/Jakarta. He became "koopman" (buyer/merchant) and "geheimschrijver" (secrets secretary) in 1739. He became Second Secretary to the High Government (Hoge Regering), becoming First Secretary in 1747. He became Counsellor-extraordinary of the Indies later that year (November) and in 1751 became a regular Counsellor. In 1752 he became President of the College van Heemraden (in charge of estate boundaries, roads, etc.). He was later a member of the "Schepenbank" (the local government and court in Batavia) and a Regent (a board member) of the hospital, and in 1755 he became First Counsellor and Director-General (Eerste Raad en Directeur-Generaal).

On 15 May 1761, following the death of Jacob Mossel, he became Governor-General of the Dutch East Indies. Confirmation of his appointment by the Heren XVII (the Seventeen Lords, who controlled the Dutch East India Company) came in 1762. He held a lavish inauguration on his birthday on 29 September. Subsequently, his birthday was a national holiday in the Indies. During his time as Governor-General, he overthrew the Prince of Kandy, in Ceylon, though with difficulty, and he conquered the sultanate of Siak in Sumatra.
Contracts were entered into with various regional leaders in Bima, Soembawa, Dompo, Tambora, Sangar and Papekat. Van der Parra favoured his friends and gave out well-paid posts if he could get anything in return for them. It was said he was a typical colonial ruler: idle and grumpy, but generous to those who fawned upon him and recognised his greatness. It was a golden time for the preachers in Batavia, who got gifts, translations of the New Testament and scholarships from Van der Parra. They worshipped and eulogised him. Although the Heren XVII knew about his behaviour, as five Counsellors had written to them about his pretensions to kingly behaviour, they did nothing about it. In 1770, Captain James Cook had to ask for his help to proceed on his journeys on HMS Endeavour (see s:Captain Cook's Journal, First Voyage/Chapter 9). At the end of the 19th century, a steamship trading to the Indies was named after him. After over fourteen years in power, he died on 28 December 1775 in Weltevreden, the imposing palace built for him outside Batavia. He apparently left a great deal of his fortune to the widows of Colombo and a smaller part to the poor of Batavia. He was followed as governor by Jeremias van Riemsdijk. He was married to Adriana Johanna Bake.

References

Sources
- Comprehensive Dutch website on the history of the Dutch East Indies
- Biographical Dictionary (in Dutch)
- L. P. van Putten, Ambitie en Onvermogen: Gouverneurs-generaal van Nederlands-Indië, Rotterdam, 2002
- Jan N. Bremmer and Lourens van den Bosch, Between Poverty and the Pyre: Moments in the History of Widowhood, Routledge, 1995

Category:1714 births
Category:1775 deaths
Category:Dutch nobility
Category:Governors-General of the Dutch East Indies
Category:People from Colombo
Category:Sri Lankan people of Dutch descent
If elected president and vice president, Prabowo-Sandi will raise teachers' salaries. Teachers' salaries would be raised to as much as Rp 20 million per month. The Deputy Chairman of the Prabowo-Sandi National Campaign Body (Badan Pemenangan Nasional), Mardani Ali Sera, said that teachers are one of the main factors in building Indonesia. Raising teachers' salaries would revive enthusiasm and the quality of teaching. "That is why the figure of 20 million is meant as a shock effect, to show that teachers must receive attention, and that figure of 20 million is first and foremost for professional teachers who have passed the various requirements," Mardani said in a conversation on Wednesday (21/11/2018). Mardani explained that teachers receiving a salary of Rp 20 million would have to meet qualifications to be determined later. He said such teachers must be professional and able to master at least two languages, including English. "So there would be grade A, grade B, grade C; grade A would certainly be qualified, with good English, able to speak two languages; grade A professional teachers could perhaps earn even more than that," he added. According to the Prosperous Justice Party (PKS) politician, teacher welfare can raise quality. Ultimately, he said, this would have an impact on the curriculum and produce high-quality human resources. "It matters a great deal. We are aware of two things: teachers and the curriculum, not air-conditioned school buildings. Teachers and the curriculum, quality teachers and a quality curriculum, will boost Indonesia's human resources," he said.
The Belz Hasidic movement celebrated the marriage of its rebbe's grandson in a series of major events last week. Guests wishing to watch the senior ultra-Orthodox leadership sitting around the table of honor alongside the 18-year-old groom were asked to bid farewell to at least NIS 1,000 (about $260).

Belz sources explained that the money was used as a donation to fund the different events and cover the costs of the wedding, which were estimated at millions of shekels.

A huge tent was built in the Hasidic movement's institutions complex, and the roads leading to the neighborhood were closed to traffic from the early afternoon hours until the small hours of the night.

Last week's wedding, a particularly expensive event (Photo: Meir Alfasi)

In the evening, the married Hasidim left the tent in order to watch the "mitzvah tantz," in which the bride dances while holding one end of a long gartel while the rebbe holds the other end.

Prime Minister Benjamin Netanyahu telephoned the Belz movement's representative in the Knesset, MK Israel Eichler (United Torah Judaism), and asked him to pass on his congratulations to the groom's family.

Meanwhile, a dramatic reconciliation took place between the Hasidic movement's leader and his cousin, the Machnovka rebbe, whom he had not met for nearly 30 years due to an inheritance battle and a battle for control over the "Belz" brand. Violent conflicts had been recorded between the two movements, which separated families from the different camps.
[Modern principles of the treatment of testicular tumors]. The modern principles of treatment of patients with germinative tumors of the testis are discussed. Essential are precise morphologic diagnosis and exact determination of the stage of the disease. Preference is given to so-called surgical-pathological staging, because it furnishes the most objective information. Among the noninvasive methods of examination, the most valuable are computed axial tomography, measurement of serum alpha-fetoprotein, human chorionic gonadotropin and SB1 levels, lymphography and pulmonary tomography. The role of tumor markers as criteria for the early diagnosis of relapses and the objective assessment of the effect of treatment is emphasized. The place of chemotherapy, depending on the histologic variant and the stage of the process, is defined. The results of modern chemotherapeutic schemes that have proved statistically most effective are discussed. The place of supra- and infrahilar lymph dissection as a method of diagnosis and treatment remains debatable. Some randomized protocols for the treatment of germinative tumors of the testis, which may serve as practical guidance, are presented.
The keystore was generated with:

keytool -genkey -alias sitename -keyalg RSA -keystore keystore.jks -keysize 2048

The password is 123456.

You can customize/replace the keystore: just re-run keytool with a different password, and then set that password in the properties file.
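As a sketch, regenerating the keystore with your own password might look like the commands below. The distinguished-name fields and the property names are illustrative assumptions, since the actual properties file is not shown here; substitute whatever keys your application reads.

```shell
# Regenerate the keystore with a new password (all values here are
# example placeholders -- pick your own alias, password, and DN fields).
keytool -genkey -alias sitename -keyalg RSA -keysize 2048 \
        -keystore keystore.jks \
        -storepass 'NewSecret123' -keypass 'NewSecret123' \
        -dname "CN=localhost, OU=Dev, O=Example, L=Town, ST=State, C=US"

# Hypothetical properties file entries -- use the property names your
# application actually defines:
#   keystore.file=keystore.jks
#   keystore.password=NewSecret123
```

Passing `-storepass`/`-keypass` on the command line avoids the interactive prompts, which is convenient for scripting but leaves the password in your shell history.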
PROJECT SUMMARY/ABSTRACT The calcium-sensing receptor, CaSR, is a promiscuous G protein-coupled receptor with many functions throughout the body. It is present in taste cells, but its function there is unclear. There are hypotheses suggesting that it detects the taste of (a) calcium, (b) kokumi compounds, and (c) certain amino acids. We propose to test these hypotheses, and to characterize the gustatory anatomy of CaSR. To do this, we will make and phenotype mice with a taste-specific, tamoxifen-conditional knockout (cKO) of CaSR. Some of these mice will be treated using a novel procedure, the lingual application of tamoxifen, which will ablate CaSR only in the mouth. We will then compare the taste responses of these CaSR-cKO mice with vehicle-treated littermate controls, using (a) brief-access gustometry, (b) two-bottle choice tests, and (c) gustatory electrophysiology. Taste substances surveyed will be calcium salts, kokumi compounds, amino acids, and representatives of the five basic tastes. Histological analyses of taste tissue will (a) confirm that CaSR is successfully ablated in the treated mice, (b) determine whether the cKO produces nonspecific changes in taste tissue morphology, and (c) resolve conflicts in the literature about which taste cell types express CaSR. Innovative aspects of this project include (a) the production of a taste-specific, tamoxifen-conditional knockout mouse, (b) the focus on CaSR, a probable novel taste receptor, and (c) the investigation of certain tastes (calcium, kokumi, and amino acids) that are often ignored because, without an identified receptor mechanism, they are not considered "basic." The findings arising from this R21 project will be a foundation for future work to elucidate CaSR's taste transduction pathway(s), its hormonal control, and its genotype-phenotype associations in humans.

Characterizing the involvement of CaSR in taste perception may lead to the development of methods to improve food acceptance and thus diet, which is a critical component of good health.
Q: Index multidimensional torch tensor by another multidimensional tensor

I have a tensor x in pytorch, let's say of shape (5,3,2,6), and another tensor idx of shape (5,3,2,1) which contains indices for every element in the first tensor. I want a slicing of the first tensor with the indices of the second tensor. I tried x = x[idx] but I get a weird dimensionality, when I really want it to be of shape (5,3,2) or (5,3,2,1).

I'll try to give an easier example. Let's say

x = torch.Tensor([[10, 20, 30],
                  [8, 4, 43]])
idx = torch.Tensor([[0],
                    [2]])

I want something like y = x[idx] such that y outputs [[10], [43]] or similar. The indices represent the position of the wanted elements in the last dimension. For the example above, where x.shape = (2,3), the last dimension is the columns, so the indices in idx are column indices. I want this, but for more than 2 dimensions.

A: From what I understand from the comments, you need idx to be the index in the last dimension, and each index in idx corresponds to a similar index in x (except for the last dimension). In that case (this is the numpy version, you can convert it to torch):

ind = np.indices(idx.shape)
ind[-1] = idx
x[tuple(ind)]

output:

[[10]
 [43]]
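As a runnable check of the index-grid approach above, the sketch below also uses NumPy's built-in `np.take_along_axis`, which gathers along an axis the same way; the PyTorch counterpart would be `torch.gather(x, -1, idx)`, assuming `idx` has an integer dtype.

```python
import numpy as np

x = np.array([[10, 20, 30],
              [8,  4,  43]])
idx = np.array([[0],
                [2]])  # per-row index into the last dimension

# Index-grid approach from the answer: build full index arrays for
# every dimension, then overwrite the last one with idx.
ind = np.indices(idx.shape)
ind[-1] = idx
y_grid = x[tuple(ind)]

# Equivalent built-in: gather values along the last axis.
y_gather = np.take_along_axis(x, idx, axis=-1)

print(y_grid.tolist())    # [[10], [43]]
print(y_gather.tolist())  # [[10], [43]]
```

Both approaches generalize unchanged to the (5,3,2,6)-shaped case in the question, producing a (5,3,2,1) result.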
Abdominal sacral colpopexy and abdominal enterocele repair in the management of vaginal vault prolapse. Vaginal vault prolapse and enterocele represent challenging forms of female pelvic organ relaxation. These conditions are most commonly associated with other pelvic organ defects. Proper diagnosis and management is essential to achieve long-term successful outcomes. Physical examination should be carried out in the lithotomy and standing positions (if necessary) in order to detect a loss of vaginal vault support. With proper identification of the vaginal cuff, one should assess the degree of mobility of the vaginal cuff with a Valsalva maneuver. If there is significant descent of the vaginal cuff, vaginal vault prolapse is present, and correction should be considered. The abdominal sacral colpopexy is an excellent means to provide vaginal vault suspension. This procedure entails suspension of the vaginal cuff to the sacrum with fascia or synthetic mesh. This procedure should always be accompanied by an abdominal enterocele repair and cul-de-sac obliteration. In addition, many patients require surgical procedures to correct stress urinary incontinence, which is either symptomatic or latent (occurs postoperatively after prolapse correction). Complications include: mesh infection, mesh erosion, bowel obstruction, ileus, and bleeding from the presacral venous complex. If the procedure is carried out using meticulous technique, few complications occur and excellent long-term reduction of vaginal vault prolapse and enterocele are achieved. The purpose of this article is to review the preoperative evaluation of women with pelvic organ prolapse, and provide a detailed description of the surgical technique of an abdominal sacral colpopexy.
#ifndef SIMIT_LLVM_FUNCTION_H
#define SIMIT_LLVM_FUNCTION_H

#include <string>
#include <vector>
#include <map>
#include <memory>

#include "llvm/IR/Module.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"

#include "backend/backend_function.h"
#include "ir.h"
#include "storage.h"
#include "tensor_data.h"

namespace llvm {
class ExecutionEngine;
}

namespace simit {

namespace pe {
class PathExpression;
class PathIndex;
class PathIndexBuilder;
}

namespace backend {
class Actual;

/// A Simit function that has been compiled with LLVM.
class LLVMFunction : public backend::Function {
 public:
  LLVMFunction(ir::Func func, const ir::Storage &storage,
               llvm::Function* llvmFunc, llvm::Module* module,
               std::shared_ptr<llvm::EngineBuilder> engineBuilder,
               bool skipEEInit = false);

  virtual ~LLVMFunction();

  virtual void bind(const std::string& name, simit::Set* set);
  virtual void bind(const std::string& name, void* data);
  virtual void bind(const std::string& name, TensorData& data);

  virtual FuncType init();

  virtual bool isInitialized() { return initialized; }

  virtual void print(std::ostream &os) const;
  virtual void printMachine(std::ostream &os) const;

 protected:
  /// Get the number of elements in the index domains.
  size_t size(const ir::IndexDomain &dimension);

  void initIndices(pe::PathIndexBuilder& piBuilder,
                   const ir::Environment& environment);

  bool initialized;

  llvm::Function* llvmFunc;
  llvm::Module* module;
  llvm::Module* harnessModule;

  ir::Storage storage;

  /// Function actual storage
  std::map<std::string, std::unique_ptr<Actual>> arguments;
  std::map<std::string, std::unique_ptr<Actual>> globals;

  /// Externs
  std::map<std::string, std::vector<void**>> externPtrs;

  /// TensorIndices
  std::map<pe::PathExpression,
           std::pair<const uint32_t**, const uint32_t**>> tensorIndexPtrs;
  std::map<pe::PathExpression, pe::PathIndex> pathIndices;

  /// Temporaries
  std::map<std::string, void**> temporaryPtrs;

 private:
  std::shared_ptr<llvm::EngineBuilder> engineBuilder;
  std::shared_ptr<llvm::ExecutionEngine> executionEngine;

  std::unique_ptr<llvm::EngineBuilder> harnessEngineBuilder;
  std::unique_ptr<llvm::ExecutionEngine> harnessExecEngine;

  FuncType deinit;

  // MCJIT does not allow module modification after code generation. Instead,
  // create all harness functions in the harness module first, then fetch
  // generated addresses using getHarnessFunctionAddress.
  llvm::Function* createHarness(const std::string& name,
                                const llvm::SmallVector<llvm::Value*,8>& args,
                                llvm::Function** harnessPrototype);

  llvm::Function* getInitFunc() const;
  llvm::Function* getDeinitFunc() const;
};

}}
#endif
Monza: Ocean Racing Tech weekend summary

Ocean Racing Technology among the fastest in Monza; Chilton secures points in both races

For the first time in a while in the GP2 Main Series championship, Ocean Racing Technology succeeded in showing their potential on the Monza circuit, the fastest on the calendar, by seeing one of their single-seaters into the points in both Italian races. Max Chilton finished the second race in 5th position, one day after having crossed the finish line of the first round in eighth place. However, Monza qualifying didn't go too well, with both Fabio Leimer and Max Chilton unable to make it into the top ten on the starting grid. The two drivers from the Portuguese team then gained many positions in the first race thanks to a consistent race pace. Despite starting 18th on the grid for race one, Fabio Leimer was up into 6th place before a hydraulic problem forced him to retire. Max Chilton then carried the torch and not only secured a point for 8th place, but as a result secured pole position for race two. Finishing fifth on Sunday, Max Chilton is satisfied with his weekend. "It was the best weekend of the year for me," he said. "The start did not go well and we know that this determines the rest. But we succeeded in finding a good pace and it was only in the last five laps that things got complicated because the tyres started to suffer. All the same, I succeeded in defending my place. In the first race, the opening laps were crucial to gain places, and then it was essential not to be involved in a mishap. Overall, I am very satisfied with this weekend."

Overpowered by bad luck, Fabio Leimer was unable to take the start of the second race because of a new problem with the hydraulic system, which was discovered on the installation lap on his way to the grid. Nevertheless the Swiss driver showed that his race pace was among the best on the grid, something his temporary 6th place testifies to. This also means that he more than likely would have been able to make it into the top three of the starting grid for race two if he had not been stopped by a mechanical problem. The two last races of the GP2 Main Series will take place in Abu Dhabi, on the 13th and 14th of November.
Solidarity is seeking the court’s intervention to overturn Eskom’s decision to reappoint Molefe as CEO. Further, the trade union wants the court to review decisions taken by the board to approve Molefe’s pension payout as well as his application to take early retirement last year, explained Deon Reyneke, deputy general secretary of energy, defence and aerospace. James Selfe, chairperson of the DA’s Federal Council, told Fin24 on Thursday that following a meeting with Gauteng Deputy Judge President Aubrey Ledwaba on Wednesday, the DA "anticipates" that the applications by the Democratic Alliance, the Economic Freedom Fighters and Solidarity will be consolidated. “We are seeking different reliefs and these can be separated in the court process," he said, adding that the matter will be heard on June 6 and 7. However, Reyneke told Fin24 that he is not aware of the move to consolidate the cases and still expects Solidarity's case to be heard on 21 June. Regarding their application, Reyneke said the trade union wants former Public Protector Thuli Madonsela’s proposal of a judicial inquiry into Gupta links with the power utility to be considered. Solidarity wants the board to be dismissed and to be held accountable in their personal capacity for the costs of the application. “We dispute the fact of the reappointment of Molefe. We feel that correct processes were not followed,” he told Fin24. Solidarity’s affidavit revealed that, contrary to Brown’s statements that she was unaware of Molefe’s early retirement agreement, a letter had been written to her by Ngubane detailing the terms of the package. In an affidavit filed by Brown on Monday, she explained that she only became aware of the pension package at a meeting held with the board on April 19. This followed reports in the Sunday Times that Molefe was to be paid a R30m pension package. 
But on November 25, 2015, two weeks after Molefe had signed his appointment letter and two months after his employment at Eskom commenced, Ngubane wrote to the minister requesting her approval of the agreement. The board explained that because Molefe had served in various government organisations at executive level, he was unable to accrue the benefits from a single pension fund due to the short-term nature of the contracts. The board suggested several contractual stipulations, among these that if Molefe was to take an early retirement (prior to the age of 63), penalties prescribed by the Eskom Pension and Provident Fund (EPPF) would be waived. Eskom would then cover the cost of these waived penalties. “The minister’s assertion that she was not aware of the arrangement is surprising to say the least: the Molefe pension carve-out was the subject of correspondence referred to above date 25 November 2015,” said Reyneke. He added that Molefe had not reached early retirement age, set between 55 and 63. At the time Molefe resigned in November 2016, he was only 50 years old. According to the standing EPPF rules, members are to retire at age 65, ceasing to be members of the fund. As for rules pertaining to early retirement, a member may retire after age 55 and is entitled to a pension in respect of his or her pensionable service. Additionally various penalties apply to taking early retirement, but these can be waived on the employer’s discretion to cover the liability. Further, Solidarity poked holes in a presentation regarding Molefe’s contract and pension benefits, to the board as well as the People and Governance Committee. According to Solidarity, the minutes of the meeting show that Molefe was present at this meeting, where his employment contract was discussed and voting was cast. 
“[He] did not absent himself from that portion of meeting nor did he declare that he had any interest in the subject matter of the resolution or recuse himself from the voting,” said Reyneke. Solidarity indicated that the conditions of Molefe’s early retirement were misrepresented, indicating that pension fund rules allow those aged 50 with at least 10 years' service to retire. Reyneke said that “in order to qualify for early retirement, a member must have been in service for 10 years and must be over 55 years of age (not 50)”.
namespace LOIC
{
partial class frmEULA
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.txtEULA = new System.Windows.Forms.RichTextBox();
this.btnAccept = new System.Windows.Forms.Button();
this.btnDecline = new System.Windows.Forms.Button();
this.chkEULA = new System.Windows.Forms.CheckBox();
this.SuspendLayout();
//
// txtEULA
//
this.txtEULA.Anchor = ((System.Windows.Forms.AnchorStyles)((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom)
| System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.txtEULA.BorderStyle = System.Windows.Forms.BorderStyle.None;
this.txtEULA.Location = new System.Drawing.Point(1, 0);
this.txtEULA.Name = "txtEULA";
this.txtEULA.Size = new System.Drawing.Size(563, 510);
this.txtEULA.TabIndex = 0;
this.txtEULA.Text = "";
//
// btnAccept
//
this.btnAccept.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
this.btnAccept.DialogResult = System.Windows.Forms.DialogResult.OK;
this.btnAccept.Enabled = false;
this.btnAccept.Location = new System.Drawing.Point(400, 515);
this.btnAccept.Name = "btnAccept";
this.btnAccept.Size = new System.Drawing.Size(75, 23);
this.btnAccept.TabIndex = 1;
this.btnAccept.Text = "&Accept";
this.btnAccept.UseVisualStyleBackColor = true;
//
// btnDecline
//
this.btnDecline.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Right)));
this.btnDecline.DialogResult = System.Windows.Forms.DialogResult.Cancel;
this.btnDecline.Location = new System.Drawing.Point(481, 515);
this.btnDecline.Name = "btnDecline";
this.btnDecline.Size = new System.Drawing.Size(75, 23);
this.btnDecline.TabIndex = 2;
this.btnDecline.Text = "&Decline";
this.btnDecline.UseVisualStyleBackColor = true;
//
// chkEULA
//
this.chkEULA.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Bottom | System.Windows.Forms.AnchorStyles.Left)));
this.chkEULA.AutoSize = true;
this.chkEULA.Location = new System.Drawing.Point(12, 519);
this.chkEULA.Name = "chkEULA";
this.chkEULA.Size = new System.Drawing.Size(287, 17);
this.chkEULA.TabIndex = 3;
this.chkEULA.Text = "I have read and &understood the terms of this agreement";
this.chkEULA.UseVisualStyleBackColor = true;
this.chkEULA.CheckedChanged += new System.EventHandler(this.chkEULA_CheckedChanged);
//
// frmEULA
//
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.ClientSize = new System.Drawing.Size(564, 542);
this.Controls.Add(this.chkEULA);
this.Controls.Add(this.btnDecline);
this.Controls.Add(this.btnAccept);
this.Controls.Add(this.txtEULA);
this.Icon = global::LOIC.Properties.Resources.LOIC_ICO;
this.MinimumSize = new System.Drawing.Size(500, 480);
this.Name = "frmEULA";
this.StartPosition = System.Windows.Forms.FormStartPosition.CenterScreen;
this.Text = "Newfag\'s Low Orbit Ion Cannon End User License Agreement (EULA)";
this.ResumeLayout(false);
this.PerformLayout();
}
#endregion
private System.Windows.Forms.RichTextBox txtEULA;
private System.Windows.Forms.Button btnAccept;
private System.Windows.Forms.Button btnDecline;
private System.Windows.Forms.CheckBox chkEULA;
}
}
The United Arab Emirates' first female pilot led a mission this week against ISIS in Syria, according to the UAE ambassador to the United States. Major Mariam Al Mansouri, 35, was the first woman to join the Emirati Air Force. The UAE this week joined the United States, Saudi Arabia, Jordan, Qatar and Bahrain in carrying out airstrikes against ISIS.

Mariam Al Mansouri, the first Emirati female fighter jet pilot, prepares to take off on June 13, 2013 in the United Arab Emirates. (PHOTO: ABC News)

Born in Abu Dhabi, Mansouri is one of eight children. She told Deraa Al Watan magazine recently that her family supported her career goals, but she had to overcome gender stereotypes along the way. It likely helped that she graduated from high school with a 93 percent grade average. She attended UAE University, the first of the three government-sponsored universities in the country, and received a degree in English literature. She graduated from Zayed Air College in 2007 and is an operations pilot who works with F-16 Block 60 aircraft, according to Abu Dhabi's English-language news outlet, The National.

The UAE ambassador to the U.S., Yousef Al Otaiba, confirmed on the talk show "Morning Joe" today that Mansouri led the airstrike. "I can officially confirm that the UAE strike mission on Monday night was led by female fighter pilot Mariam Al Mansouri," said Al Otaiba. "She is a fully qualified, highly trained, combat-ready pilot, and she led the mission."

This June 13, 2013 photo provided by the Emirates News Agency, WAM, shows Mariam Al Mansouri, the first Emirati female fighter jet pilot, right, walking with other pilots at an undisclosed location in the United Arab Emirates. (PHOTO: ABC News)

As she underwent training, Mansouri previously said, there was no difference in assignments for men and women, and she tried not to focus on her male colleagues.
Characterization of a new functional TCR J delta segment in humans. Evidence for a marked conservation of J delta sequences between humans, mice, and sheep. Through analysis of TCR delta-chain cDNA derived from human gamma delta T cell clones and polyclonal gamma delta T cell lines, we isolated a novel functional J delta gene segment (termed J delta 4) whose genomic fragment has been mapped within the TCR-delta locus between J delta 2 and J delta 1. Frequency of J delta 4 use was estimated among adult gamma delta PBL by using V delta 1, V delta 2, and V delta 5 genes. In all cases, this new J element was used at a low, albeit significant frequency, close to that of J delta 2. Finally, like human J delta 1 and J delta 2, which show a high degree of homology with their counterparts in the mouse and sheep, but unlike other J gamma, J beta, or J alpha elements, J delta 4 turned out to be highly homologous to a recently described ovine J delta. These results suggest the existence of strong selective pressures, possibly linked to an Ag-driven process, leading to specific conservation of J delta sequences among these three species.
Analysis of clinical characteristics, rationale, and management of critically ill obstetric patients transferred to ICU. To evaluate the clinical and demographic characteristics, rationale for transfer of critically ill obstetric patients to intensive care unit and their management therein. The observational retrospective case series study was conducted at Shaheed Mohtarma Benazir Bhutto Medical University, Larkana, Pakistan, and comprised critically ill female patients transferred to intensive care unit from the department of Obstetrics and Gynaecology between August 2011 and June 2013. The data was collected on pre-designed proforma which included demographic characteristics of patients, their symptomatology and initial diagnosis, intervention in the department, continuing or subsequent complications/reasons for admission to intensive care unit, management and stay there and, finally, outcome. Data was analysed using SPSS 21. The mean age of 150 patients in the study was 30.3±5.047 years, and mean parity was 2.49±2.207. The most common condition affecting women and leading to their transfer to intensive care was eclampsia/pre-eclampsia in 80 (53.33%), followed by bleeding disorders in 25 (16.65%) and septic shock in 24 (16%). The mean stay in intensive care was 4.47±2.53 days, and 38 (25.3%) patients required ventilator support, while 112 (74.7%) were managed with oxygen and inotropic support. The overall maternal mortality rate was 41 (27.3%), which included 19 (16.9%) patients managed without ventilator, and 22 (57.8%) managed with ventilator (p<0.05). Hypertensive and bleeding disorders were the main reasons for transfer of obstetric patients to intensive care unit, and maternal mortality was high among patients treated on ventilator support.
/***************************************************************************
 *                                                                         *
 *   This program is free software; you can redistribute it and/or modify  *
 *   it under the terms of the GNU General Public License as published by  *
 *   the Free Software Foundation; either version 3 of the License, or     *
 *   (at your option) any later version.                                   *
 *                                                                         *
 ***************************************************************************/

#include "HashManagerScript.h"
#include "WulforUtil.h"

#include "dcpp/MerkleTree.h"

HashManagerScript::HashManagerScript(QObject *parent) :
    QObject(parent)
{
    HM = dcpp::HashManager::getInstance();
    HM->addListener(this);
}

HashManagerScript::HashManagerScript(const HashManagerScript &) {
    HM = dcpp::HashManager::getInstance();
    HM->addListener(this);
}

HashManagerScript::~HashManagerScript() {
    HM->removeListener(this);
}

HashManagerScript &HashManagerScript::operator=(const HashManagerScript &) {
    HM = dcpp::HashManager::getInstance();
    HM->addListener(this);

    return *this;
}

void HashManagerScript::stopHashing(const QString &baseDir) {
    HM->stopHashing(_tq(baseDir));
}

QString HashManagerScript::getTTH(const QString &aFileName, quint64 size) const {
    dcpp::TTHValue val = HM->getTTH(_tq(aFileName), size);

    return _q(val.toBase32());
}

QString HashManagerScript::getTTH(const QString &aFileName) const {
    const dcpp::TTHValue *v = HM->getFileTTHif(_tq(aFileName));

    if (v)
        return _q(v->toBase32());
    else
        return "";
}

void HashManagerScript::rebuild() {
    HM->rebuild();
}

void HashManagerScript::startup() {
    HM->startup();
}

void HashManagerScript::shutdown() {
    HM->shutdown();
}

bool HashManagerScript::pauseHashing() const {
    return HM->pauseHashing();
}

void HashManagerScript::resumeHashing() {
    return HM->resumeHashing();
}

bool HashManagerScript::isHashingPaused() const {
    return HM->isHashingPaused();
}

void HashManagerScript::on(TTHDone, const dcpp::string &file, const dcpp::TTHValue &val) throw() {
    emit done(_q(file), _q(val.toBase32()));
}
A few months ago, I stopped by Larry Bartels’s office at Vanderbilt University. Bartels, alongside Christopher Achen, is the author of Democracy for Realists, which I’d become a bit obsessed with. The book argues that decades of social science evidence has shattered the idealistic case made for how voters in democracies act, and the reality is that “even the most informed voters typically make choices not on the basis of policy preferences or ideology, but on the basis of who they are — their social identities.” I sat down with Bartels shortly after the 2016 election, and I had a dozen ideas for how his book helped explain the unusual results. But he wasn’t buying my premise. To him, the election looked pretty typical. The Democratic candidate won 89 percent of Democratic voters, and the Republican candidate won 90 percent of Republican voters. The Democrat won minorities, women, and the young; the Republican won whites, men, and the old. The Democrat won a few percentage points more of the two-party vote than the Republican, just as had happened four years before, and four years before that. If you had known nothing about the candidates or conditions in the 2016 election but had been asked to predict the results, these might well have been the results you’d predicted. So what was there to explain? Bartels doesn’t deny that there were interesting oddities to this election. The small but crucial number of Obama-to-Trump voters are worth studying, for instance. The interventions of Russia and then-FBI Director James Comey may well have delivered Republicans the presidency. And surely it’s meaningful that Hillary Clinton and Donald Trump were the least popular nominees in history. But his point is that we’re so obsessed by what was different in 2016 that we’re missing the big story — how much stayed the same. 
For all the weirdness of the campaign, Trump and Clinton still got about 95 percent of the vote, and they did so by consolidating their own bases in ways that looked extremely similar to 2012. It’s easy to come up with counterfactuals where Clinton is in the White House today, or where Marco Rubio won the popular vote by 4 percentage points, but the basic similarity between Trump-Clinton and Obama-Romney deserves more attention than it’s gotten. If democracy is a contest of ideas, if it’s really about voters judging candidates and policies and making fresh judgments about the world around them, how come a wild-card nominee wielding a brand new ideology didn’t do more to shuffle the deck? How did American politics become so stable that even an election as weird as 2016 ended up looking normal? And if it’s not a contest of ideas, then what the hell is it? In Bartels’s view, there is an answer. He thinks this election told us something we already knew. The problem is that it’s something nobody wants to hear. The problems with democracy Bartels and Achen describe Democracy for Realists as “a kind of intellectual conversion experience for us.” It had long been obvious to them, as it is to everyone, that the practice of democracy is often a grubbier, shabbier thing than the glittering rhetoric that surrounds it. But that’s no great crime. “We believed that if the realities failed to match the ideals, we (and others seeking to vindicate contemporary democracy) still had intellectually powerful back-up defenses to bolster our convictions,” they write. And then, as they surveyed the evidence, it turned out they didn’t. “Our view is that conventional thinking about democracy has collapsed in the face of modern social-scientific research.” Most of Democracy for Realists is Achen and Bartels systematically, ruthlessly demolishing traditional theories of democracy. Neither their arguments nor their evidence is surprising, at least not exactly. 
What’s unusual is their willingness to admit what the research actually reveals. This book is the political science equivalent of being told Santa doesn’t exist: It makes sense once you think about it, but everyone has spent years telling you otherwise, and so the revelation comes as a trauma.

Achen and Bartels present the main argument for democracy, the one we all learned in grade school, like this: “Ordinary people have preferences about what their government should do. They choose leaders who will do those things, or they enact their preferences directly in referendums. In either case, what the majority wants becomes government policy.” The authors call this the “folk theory” of democracy, and it is a persuasive, inspiring vision of how government should work.

But political scientists rejected that argument decades ago, and much of Democracy for Realists consists of Achen and Bartels running through the depressing studies that explain why. Voters hold weak and contradictory views about the government. They pay little attention to political news save in the months directly before an election. Their positions on key issues can be changed by small tweaks to how questions are worded or ordered. They often assume the leaders they like share their views, or if they don’t, they change their views to accord with the politicians they support. Look no further than the sudden rise in Republican estimations of Vladimir Putin to see this phenomenon in action:

Here is Republicans and Democrats on Vladimir Putin since July 2014. pic.twitter.com/s4I6FY5cbt

— Will Jordan (@williamjordann) December 14, 2016

None of this means voters are stupid. Quite the opposite, actually. The US government deals with a dizzying range of questions, and developing informed and current opinions on all of them would take more hours than any of us have in a day.
Meanwhile, most people have jobs to do, families to feed, friends to see, lives to live, and very little actual political power to wield — so they sensibly don’t spend their scarce free time developing detailed views about appropriations bills and trade deals and conservation policy and military spending and corporate tax reform and Medicaid funding and the proper path of interest rates.

But if voters aren’t judging candidates based on their preferences about what government should do, then how are they making decisions? How, to put it bluntly, does democracy actually work?

In recent decades, the most academically influential model has been the “retrospective theory of voting.” This theory holds that voters use assessments of current conditions — like looking at whether the economy is growing, or if Americans are dying in an overseas war — to decide whether the party in power is doing a good job. In this telling, voters are taking a simple but powerful shortcut to the place that deeply informed opinions about public policy would get them: a country that runs well, an economy that grows quickly, a world that is peaceful.

There is nothing new about this theory of how voters actually act. “To support the Ins when things are going well; to support the Outs when they seem to be going badly,” wrote Walter Lippmann in 1925, “this, in spite of all that has been said about tweedledum and tweedledee, is the essence of popular government.”

The retrospective theory of voting has the advantage of being true. Economic growth really does drive election results, for instance. But it’s true in a haphazard, problematic way. Studies find that the only economic growth that actually matters is the growth that happens in an election year, and there’s no evidence that voters can separate a recession a president had nothing to do with, and perhaps even managed well, from a recession a president’s policies caused or worsened.
“If jobs have been lost in a recession, something is wrong, but is that the president’s fault?” write Achen and Bartels. “If it is not, then voting on the basis of economic conditions may be no more sensible than kicking the dog after a hard day at work.”

A yet clearer example comes from the well-established finding that voters punish incumbents for bad weather and natural disaster. Here, Achen and Bartels are at their acidic best:

The fact that American voters throughout the 20th century punished incumbent presidents at the polls for droughts and floods seems to us to rule out the possibility that they were reacting to subpar handling of misfortunes rather than to the misfortunes themselves. After all, it is hard to see how incumbent presidents’ handling of droughts and floods could have been substantially worse than average over the course of an entire century.

One problem with the retrospective theory of voting, in other words, is that the shortcuts get us lost. “Voters consistently and systematically punish incumbents for conditions beyond their control,” conclude Achen and Bartels. But perhaps a bigger problem for the theory — at least as an explanation of voter behavior — is that most voters don’t use it at all. Particularly in modern elections, the swingable vote is a tiny fraction of the population. In 2008, for instance, the Republican Party was presiding over an economic meltdown and an unpopular war, but John McCain still got 46 percent of the vote. Most voters support their party’s nominee for the presidency no matter the condition of the country.

Let’s recap. The classic argument for democracy is that people have clear preferences for what government should do and they vote in accordance with those preferences. That’s not true.
The more modern, and more academically influential, argument for democracy is that voters make judgments about how incumbents perform, and those judgments are broadly accurate and work as a shortcut for getting them the kinds of leaders they want. That is also not true. “All the conventional defenses of democratic government are at odds with demonstrable, centrally important facts of political life,” write Achen and Bartels. “One has to believe six impossible things before breakfast to take real comfort in any of them.”

So how, then, do voters make decisions in a democracy?

It’s the identity, stupid

During the worst of Northern Ireland’s “Troubles,” when tensions between Catholics and Protestants were at their height, the Irish poet Seamus Heaney told of a visitor to the region who was asked whether he was Protestant or Catholic. The man replied that he was an atheist. “Yes, yes, we understand,” his hosts said. “But are you a Protestant atheist or a Catholic atheist?”

Donald Trump might have been, in political if not religious terms, an atheist, but he was clearly a Republican atheist. He knew which side he was on, even if he didn’t believe what they believed. And that was enough. That was more than enough. Tribes exist to fight common enemies.

Achen and Bartels believe we’ve spent so much time listening to what voters say that we’ve lost sight of what they actually do, and why they do it. “In thinking about politics,” they write, “it makes no sense to start from issue positions — they are generally derivative from something else. And that something else is identity.”

This is a profound statement, and the authors don’t shy away from its scope. “A realistic theory of democracy must be built, not on the French Enlightenment, on British liberalism, or on American Progressivism, with their devotion to human rationality and monadic individualism, but instead on the insights of the critics of these traditions, who recognized that human life is group life,” they write.
In this telling, democracy is less a contest between ideas than a contest between identities, with both parties constantly trying to activate the basket of identities that will lead to a vote for their tribe. Are you a Republican or a Democrat? Urban or rural? Catholic or Jewish? White or black? Male or female? Liberal or conservative? Rich or poor?

Even gentle, isolated reminders of identity can swing people’s voting behavior. Take a 2014 study by psychologists Maureen Craig and Jennifer Richeson. They asked one group of white political independents if they knew that California would soon have more nonwhite residents than white residents. Then they asked another group of white political independents if they had heard another highly racialized, but less threatening (at least to white political power), fact — “that Hispanics had become roughly equal in number to Blacks nationally.” And then they compared the two groups’ political opinions.

Even now, the results, as described by Achen and Bartels, stun. “The people who had been informed (or simply reminded) of the potentially threatening demographic shift in California were significantly more likely to lean Republican. This effect was twice as strong in the West as in the nation as a whole, producing a substantial 11-point increase in Republican leaning (and a 15-point decrease in Democratic leaning).”

In a follow-up study, Craig and Richeson handed some white subjects a press release about “projections that racial minorities will constitute a majority of the U.S. populace by 2042.” The group that read the release “produced more conservative views not only on plausibly relevant issues like immigration and affirmative action, but also on seemingly unrelated issues like defense spending and health care reform.”

It’s not just white voters whose political opinions are easily swayed by simply contemplating threats to their racial group’s status and identity.
A 2016 study by Alexander Kuo, Neil Malhotra, and Cecilia Hyunjung Mo split a sample of Asian-American college students into two groups. One group was subjected to a staged racial humiliation during the study — their US citizenship was doubted by the researcher managing the experiment. “This minor but socially charged interaction boosted Democratic partisanship by 13 percentage points,” report Achen and Bartels.

It is worth dwelling on what isn’t happening here: any persuasion about policies or ideas. These experimental cues trigger complex, multilayered identities. They do not make arguments about ideal marginal tax rates or America’s role in the world. Results like these do not make sense under the traditional or retrospective theories of democracy; they make perfect sense under the group identity theory of democracy.

When we talk about group identities in American politics, we mainly mean nonpolitical identities: race, class, gender, sexuality. But there’s one identity that elections activate above all, and that identity is becoming more and more powerful: partisan identity.

One identity to rule them all

There is one overwhelming fact that structures American politics, and it is this: People who vote for Republicans vote for Republicans, and people who vote for Democrats vote for Democrats.

It might sound tautological, but it isn’t. A few decades ago, people who voted for Republicans often voted for Democrats, and vice versa. Split-ticket voting was common, and even hardcore, self-described partisans were often persuadable. Not anymore.

There are a few findings that rocked my understanding of politics, and one of them came from political scientist Corwin Smidt. Looking at decades of election data, he found that self-described independent voters today are more loyal to a single party than voters who described themselves as “strong partisans” were in the 1970s.
This bears repeating: The people who say they’re free from either party today are more partisan in their voting habits than the people who said they were strong loyalists of a single party in the ’70s.

Smidt argues that the change is in our parties, not in our voters. The difference between the Democratic and Republican parties has become so clear, so unmistakable, that pretty much every kind of voter reliably votes for one party or the other. In 1964, Medicare — a single-payer health care system for the elderly — received 70 Republican votes in the House as well as 13 Republican votes in the Senate. There was no anti-abortion plank in the GOP platform until 1980. It's easy to see how a voter in the 1970s might think Republicans were open to something like Medicare or reproductive choice — particularly if they lived in a liberal area represented by a liberal Republican who actually was open to those policies.

Today, however, the choice between the two parties is much, much clearer. You may not like Donald Trump, but you fear Hillary Clinton. As the parties diverge from each other ideologically and culturally, the other side becomes more of a threat — and that makes it easier to justify voting for your side, no matter who the nominee is.

Polling backs this up. Since 1964, the American National Election Studies have been asking Republicans and Democrats to describe their feelings toward the other party on a scale that runs from cold and negative to warm and positive. In 1964, 31 percent of Republicans had cold, negative feelings toward the Democratic Party, and 32 percent of Democrats had cold, negative feelings toward the Republican Party. By 2012, that had risen to 77 percent of Republicans and 78 percent of Democrats. Today, fully 45 percent of Republicans, and 41 percent of Democrats, believe the other party’s policies “threaten the nation’s well-being.” Fear of the other is a powerful force for keeping unity in a tribe.
But policy does not account for the entirety of the rise in political tribalism. In the New York Times, Amanda Taub ran through new research suggesting that “party affiliation has become an all-encompassing identity that outweighs the details of specific policies.” The key idea here is what we might call identity amplification, which has become more prevalent in recent years as people have clustered themselves economically, ideologically, religiously, and geographically:

Everyone has multiple identities: racial, religious, professional, ideological and more. But while those multiple identities might once have pushed people in different partisan directions — think of the conservative Democrats of old in the South or all the liberal Republicans in the Northeast — today it’s more common to line up behind one party. A white conservative who lives in a rural area and is an evangelical Christian is likely to feel that the Republican Party is the best representative of all of those separate identities, for instance. An African-American liberal who lives in a city and works in a professional job is likely to feel the same way about the Democratic Party.

Identity amplification makes Republican and Democratic identities overwhelmingly powerful, and perhaps even inescapable, in modern politics — and modern life. In 1960, Americans were asked whether they would be pleased, displeased, or unmoved if their son or daughter married a member of the other political party. Then, only 5 percent of Republicans, and only 4 percent of Democrats, said they would be upset by the cross-party union.

Fast-forward to 2008. The polling firm YouGov asked Democrats and Republicans the same question — and got very different results. This time, 27 percent of Republicans, and 20 percent of Democrats, said they would be upset if their son or daughter married a member of the opposite party.
In 2010, YouGov asked the question again; this time, 49 percent of Republicans, and 33 percent of Democrats, professed concern at interparty marriage. Today, studies find that people are more willing to discriminate against someone of the other party than of another race, at least in experimental settings.

"Political identity is fair game for hatred," Shanto Iyengar, director of Stanford's political communications lab, told me. "Racial identity is not. Gender identity is not. You cannot express negative sentiments about social groups in this day and age. But political identities are not protected by these constraints. A Republican is someone who chooses to be Republican, so I can say whatever I want about them."

This, Achen and Bartels say, is what elections are really about; “political campaigns consist in large part of reminding voters of their partisan identities.” All those ads, those speeches, those conventions, those endorsements, those photo ops — they’re not about persuading voters of whose tax plan is better so much as they’re about reminding voters which tribe they’re from.

Trump understood all this, even if he didn’t embody it — he is a political independent, a lifelong Manhattanite, and a billionaire libertine who spent the campaign affirming evangelical, rural, working-class identities. He routinely contradicted or dismissed longtime Republican ideas, but what he never, ever did was disrespect Republican identities. Liberals laughed when Trump said, "I love the poorly educated," but his tribe knew what he meant — unlike those smug liberals, he was on their side and thought they deserved more respect.

Viewed this way, Trump’s communication style makes more sense. His extemporaneous, rambling, quasi-factual speeches confuse pundits — including me — who are used to hearing politicians make careful arguments. But his speeches do work to establish which side he’s on, which groups he admires, and which enemies he’s going to humiliate.
Perhaps more effectively, his tweets and insults drive his opponents into frenzies, and make the battle lines clear — you may not like Trump, but if you’ve spent years hating the Democrats and the media and condescending professors and rich cultural elites, at least he’s pissing them off and, in doing so, proving he’s on your side.

And even if you didn’t like Trump — and many Republicans didn’t — fear and mistrust of the other side was a perfectly rational reason to vote for him. There was a Supreme Court seat up for grabs, after all, and conservatives reasonably feared whom Clinton would appoint. This is one way polarization amplifies identity: The two parties are now so far apart ideologically that even an unusual nominee like Trump is a safer bet than the Democrat. Many Trump voters appear to have made this calculation. According to exit polls, Trump won among voters who said their vote was motivated by dislike for the other candidates in the race.

This is the answer to the normalcy of the 2016 election: As abnormal and unqualified and erratic as Trump showed himself to be, with Clinton’s help, he still activated pro-Republican and anti-Democratic identities, and that left him a stone’s throw, or a Jim Comey letter, away from the presidency. Yes, there were a few Obama-to-Trump voters on the margins, but overwhelmingly, even an election this weird was forced into normalcy by the power of our partisan identities.
23 February 2007

Wowie. I leave you people without adult supervision for a couple of days and see what happens in the comments? (I was away, and unexpectedly without internet access -- oh the pain!) Well, I'm back, and first of all I do want to thank all of those who shared their personal experiences, and also faithful commenter/blogger Scalpel, for his determined responses.

Despite our contrasting viewpoints, I suspect we have more shared points of agreement than differences. That may be a by-product of working in the same trenches, and the same frustrations with the particular patient population that tends to show up in the ED. Specifically, I think we agree that the lack of personal responsibility demonstrated by (some of) the patients we see is maddening, and that a carte blanche system such as Medicaid encourages irresponsible utilization. Any policy solution which addresses the uninsured crisis should do so in a manner which ensures that patients do have a direct personal financial incentive to ensure that their health care dollars are spent in a rational manner.

Scalpel makes a couple of other points which I would like to address:

"Plenty of people with insurance don't take care of themselves either."

I have heard this many times from many sources, not just Scalpel. It's not totally off-topic, dovetailing with the "personal responsibility" theme, but it's also, well, irrelevant. There's a kind of implication that good health is a result of moral rectitude, and illness is somehow a deserved consequence of the patient's choices. It's a tempting conclusion to draw, given the self-destructive crap both Scalpel and I see every single day. But the counterpoint, a universally observed phenomenon among ER docs, is that the more upstanding a person is, the more likely they are to have some bizarre and lethal diagnosis (I recall the time the Chief of Police came in for a sore throat and I diagnosed him with metastatic laryngeal Ca -- a nonsmoker, of course).
The other bizarre corollary is that the "scumbags" are just impossible to kill, it seems. They get drunk and fall off bridges and walk away. But I digress, which should surprise nobody. The point, I think, is that how well one takes care of oneself should have no bearing on whether an individual is allowed to access the health care system.

"Health care is not a right."

This has been gnawed over to a great degree at Gruntdoc and KevinMD with no shortage of passion, and I will not even try to delve into the complicated depths of this argument in an off-the-cuff post like this one. I repeat that there is a place for personal responsibility, and my preference as a physician and entrepreneur is to retain elements of the system which allow me to set/negotiate prices and retain a profit incentive. However, I am not willing to embrace any system which simply excludes 20% of the population with a shrug of the shoulders.

I like the train of thought proposed by JimII -- the right to life is a fundamental right; therefore health care necessary to sustain life should be considered a fundamental need inseparable from the right to live. Whether you call it a right or an entitlement or just "universal" is a matter of semantics which means nothing to me. Consider it a moral imperative if you will -- design a system which will cover everybody. That is all I care about.

Scalpel does offer one common false choice: "if we try to provide healthcare for all, they may have to settle for 'good' healthcare. Not super-excellent top-of-the-line healthcare." This is right up there with the whole "rationing" argument. Yes, it is possible that systems might be implemented which would ration or diminish the quality of care (see: Canada, UK/NHS). However, these are not obligatory consequences of universal coverage. Many of the ailments which plague the NHS and Canada's Medicare derive from underfunding.
Since we're already paying top-dollar for our system, a system which simply preserves expenditures at current relative levels (with appropriate flexibility) should prevent the woes of rationing. As long as we are willing to pay for it -- through payroll, or taxes, or premiums -- there is no reason we cannot continue to enjoy the current quality of care that exists. By the way, I am even open to a two-tiered system, in which the wealthy can purchase better or preferential access to care, if it is the cost of entry into universal coverage.

"Uninsured care can be handled at the state level with federal assistance. Further federalization of the healthcare system is not the only answer."

Certainly not the only answer. But if you buy the notion that there is some sort of compelling moral obligation to provide care to all Americans, it bothers me that in some places (like Scalpel's home, it would seem) there are no obstacles to care for the poor and there is comprehensive care for all, yet in other localities, like mine, the uninsured are SOL and on their own. The availability of such a fundamental need should not vary by location.

I've no problem with using states as laboratories to come up with innovative solutions for intractable problems. But this doesn't require a whole lot of clever thinking -- we know the problem and the solution. I don't care who administers it, but there should be a national standard and national funding. Realistically, this means a federal program. Also, from my politically liberal point of view, I am suspicious of local control, since local programs are more easily subverted and nullified. Reminds me of when Newt tried to break up welfare into "block grants" for the states to disburse as they saw fit. But that is another topic.

Boy, that's a lot for now. More to come as I have time to think & digest. Thanks again to all who have read and contributed.
19 comments:

"As long as we are willing to pay for it -- through payroll, or taxes, or premiums -- there is no reason we cannot continue to enjoy the current quality of care that exists."

Well, there is already a differential in quality amongst hospitals, at least from the patient's perspective. As ERC said, she would "rather have hubby seen by one of the best, than go to another hospital and not get as good healthcare." The trouble is that not everyone can go to UAB or the Mayo Clinic or Johns Hopkins et al. So if we're nationalizing healthcare, many of us are going to have to settle for less prestigious arrangements. Rationing is inevitable when one cuts the pie into 300 million pieces.

"The right to life is a fundamental right; therefore health care necessary to sustain life should be considered a fundamental need inseparable from the right to live."

Sorry, but I don't see the logical extension that you see here. The right to life has always meant that another person is not allowed to take your life. In other words, murder is a crime. The right to life doesn't inherently mean that other people must actively sustain your life. Of course, one can define "right to life" that way but that is not the inevitable logical extension. In a way, the whole idea of defining human rights in the negative only, i.e. what other people are forbidden from doing to you, as opposed to in the positive (what other people are obligated to do for you), is uniquely American. Of course, we can all decide that we as a society should provide healthcare for all, and I am not necessarily opposed to the idea. But chalking up the distinction between a right and an entitlement to semantics doesn't sit well with me.

first of all, thank you for your thoughtful discussion about this. i really appreciate it. it's very emotional, and i've said things on previous entries that were probably tactless towards certain people.
it's a really emotional issue for me right now because my girlfriend has stage four hodgkins with liver involvement at the moment, and of course does not have insurance. and it's enough for me to lose faith completely when i start to get the perception people in the oncologist's office (and the oncologist in particular) aren't maybe giving it their all in my mind. and of course, when i get this perception, it's hard not to immediately interpret it as something that's happening because of their resentment towards the fact that they know they're not getting paid.

the thing that i find most offensive, the thing that indicates to me that there are a lot of people who do basically think you're SOL and should just get out of their face when you're sick and uninsured... is the complete lack of interest or even thought towards experimenting with any type of national healthcare system down the road. i mean, i think a lot of people really genuinely believe that Single Payor would be a disaster (not on political grounds but on common sense grounds), and yet, what would be in the way of agreeing to set up a certain agreed upon system... and if it gets going and it's clear that it's crap, pivot to option B... or put it on hold and think it over again. people against universal healthcare (especially those in DC) really don't give off the impression that something like this would ever even enter their beautiful minds.

"i start to get the perception people in the oncologist's office (and the oncologist in particular) aren't maybe giving it their all in my mind. and of course, when i get this perception, it's hard not to immediately interpret it as something that's happening because of their resentment towards the fact that they know they're not getting paid."

In all human interactions, if someone is willing and able to pay more, they generally get better service. Why should healthcare be any different? Wouldn't it bother you to not get paid for your work?
Many of us are hesitant to embrace a(nother) national health system because we are intimately aware of the pittance paid by Medicare and Medicaid for our services, the difficulties we face in complying with their endless regulations, and the difficulties many patients face in receiving care through those programs. Why would we expect any other federal universal coverage program to be any different?

I always assumed that a "right to life" *also* included a right to basic life-saving measures.

I'm no historian, but I do believe that the rights to life, liberty, and property in the Bill of Rights and the Declaration of Independence derive from John Locke's "natural rights". A quick google search for John Locke brings up the following quote: "The state of nature has a law of nature to govern it, which obliges everyone: and reason which is that law, teaches all mankind who will but consult it, that being all equal and independent, no one ought to harm another in his life, health, liberty or possessions..." It is very much a stretch in my mind to think that John Locke and those after him had in mind a right to life-saving measures. Who would be responsible for delivering those measures? How would this right be enforced? As Thomas Hobbes said, 'Covenants, without the sword, are but words and of no strength to secure a man at all.' This is why the American Bill of Rights was created, to put serious muscle into protecting said rights.

There have been countries that declared "the right to health" for their citizens. The Soviet Union was one of them. It was actually in the Soviet criminal code that a physician would be held criminally liable if he/she did not deliver health measures. I don't know how the law was enforced and I don't know if it's on the books any longer.
Basically it meant that the State (the Party) could call up a physician in the middle of the night and say: 'Be ready in 5 minutes, there is a black Marusya coming to pick you up, comrade Stalin just threw up.'

i actually completely agree with you. if i were in the shoes of a doctor who knew a patient wasn't paying their bills, and that was directly affecting my pocketbook, it would be hard not to have at least a minimal amount of resentment -- despite how well i liked the person, wanted to treat them, and felt sympathetic towards their financial predicament. what i was saying was that i resent that this is the case in the first place. it just shouldn't be this way, period. believe me, it's enough to handle cancer in the first place, without having to feel self-conscious about the fact that doctors and their staff feel like you're imposing on them unduly.

in the case of my girlfriend, she's worked hard all her adult life, she's smart, she's good hearted, she's humble, she gives way more than she asks for, and she got hodgkins. my problem is, what part of that equation says that she deserves to be treated as a second class citizen when she's trying to beat her disease? is it that she wasn't willing to dedicate 1/4th of her monthly income before the disease struck to paying for a sufficient personal insurance plan? when she was trying hard enough to afford paying rent and bills in the first place? or was it that she didn't try hard enough to find a job where there was health insurance, despite that she's been at the point, multiple times, where she didn't have the financial luxury to sit back and wait for that, because she had to find a job immediately? and what makes all the other people who have been in her situation and will in the future deserve it?
i'm saying that as of right now, to call what this country has in terms of health care 'ideal' is what, in my mind, someone would say who hasn't had to go through something like what my girlfriend is going through right now. and with regards to preventative health care being a bonus to universal healthcare, she had all the classic symptoms of hodgkins for a year or more before the lump above her collar bone became so large that she found it. sweating, severe itching, fatigue, etc. i'd really like to think if she had had health insurance, a doctor's visit or routine xray would have prevented it from going to stage four. is that a supposition that opponents of universal health care would protest?

anon - Hm. Loosely interpreted, would the refusal to provide life-saving treatment amount to someone harming the person? even if the someone constitutes "the state" or whatever? Also, blahblah, the era in which both Locke and Jefferson wrote was completely different from the era that we live in. Why assume a basic right to [life saving] medical treatment when it just didn't exist at that time?

I don't think that a right to healthcare springs full grown from the right to life, liberty, and property. I think that there are certain things that cannot be based on how good a capitalist you are. But first, there are lots of things that should be based on how good a capitalist you are. You should be able to eat in nice restaurants. You should be able to drive nice cars and have cool computers. You should get to go on vacations. If you are a good capitalist, that is, you do a job that allows you to acquire money within the confines of our legal system, then you deserve those things. I don't care how hard you work; I don't care how socially valuable your work is; I don't care how much training it took to do the work you do. If you are a good capitalist, it is moral that you should enjoy the fruits of capitalism. However, being a good capitalist should not entitle you to vote twice.
You should not be allowed to break the law. You should not be allowed to live longer. Why? Well, I can stomach preachers, teachers, social workers, etc. driving less nice cars than lawyers, doctors, and corporate executives. But I cannot stomach them being allowed to suffer from sickness longer. I think it's wrong. We aren't doing very well in any of these three areas. Attorneys provide pro bono assistance to the indigent, but not nearly enough to make our justice system equal. Equal access to politicians, please? Frankly, medicine is definitely the closest. This isn't a legal argument. It is a moral one. Just because we believe that the wealthy--like me, btw--deserve to have nicer things, does not mean that we have to believe the wealthy deserve better medicine, protection under the law, or political representation. being a good capitalist should not entitle you to vote twice. You should not be allowed to break the law. You should not be allowed to live longer You should not be allowed to live longer? Once you turn 70 (65, anyone?), and your tax return shows 500K (100K, anyone?) of income, let’s just kill you off. It must feel great, jimii, to be as moral as you are. I don't necessarily agree with a blanket statement like that. As I have said before, just because a technology is available doesn't mean everyone is equally entitled to it. Those who try to claim the moral high ground never seem to want to discuss the concept of lines being drawn, but in a single payer system rest assured that lines will be drawn, budgets will be kept, and care will be denied. I concede it's an oversimplification. Poor people will always die earlier, from a variety of other causes, of course. And I've already said that I would be OK with a system which was tiered to some degree, so long as everybody was guaranteed some basic level of benefits. And remember that of the plans being discussed for universal health, "Single Payor" is just one and not necessarily my favorite. 
Wyden's plan seems to have some distinct advantages and also guarantees universal coverage, without rationing. Shadowfax About me: I am an ER physician and administrator living in the Pacific Northwest. I live with my wife and four kids. Various other interests include Shorin-ryu karate, general aviation, Irish music, Apple computers, and progressive politics. My kids do their best to ensure that I have little time to pursue these hobbies. Disclaimer This blog is for general discussion, education, entertainment and amusement. Nothing written here constitutes medical advice nor are any hypothetical cases discussed intended to be construed as medical advice. Please do not contact me with specific medical questions or concerns. All clinical cases on this blog are presented for educational or general interest purposes and every attempt has been made to ensure that patient confidentiality and HIPAA are respected. All cases are fictionalized, either in part or in whole, depending on how much I needed to embellish to make it a good story to protect patient privacy. All Content is Copyright of the author, and reproduction is prohibited without permission. | Mid | [
0.584821428571428,
32.75,
23.25
] |
hi I'm new to mac, bought a 2nd hand mac a few weeks ago, imac 24 with 8gb ram, 1tb hdd, radeon 130. anyway, been having a few issues with freezing recently. had a read through a few forums and help pages, tried a few things but to no avail. i don't have any discs with it, and it looks like the recovery hd is not intact, so I can't reinstall from that, and as a result i can't run the hardware diagnostics. read somewhere that holding down the option key whilst clicking the buy button for lion (app store) will give you the option to get the downloader free to help make a recovery hd. however it did not work for me. well, it did do so, but I'm now downloading lion and i guess I've been charged for it. i got time machine running since day one of buying it, so when lion downloads can i just reinstall it, or is it not as easy as that? right, downloaded lion and managed to create a bootable pen drive, although couldn't do it with the link you posted me. anyway found another prog that when run created it for me, very easy to use. anyway i booted from the pen drive and followed the instructions. after about 30 mins installing, the machine booted back up. the strange thing was, the home screen was the default one, making me think that lion had been reinstalled, but all my progs were still there. my bookmarks were all intact. 
the only thing different was the desktop wallpaper, and some progs when started came on like they were fresh installs. is this normal, or has lion reinstalled from time machine? Yes, just running the Lion installer does not erase the hard drive, leaving all your files and applications in place. You will find the version installed may well be OS X.7 and you will need to update via software update. If this presents any problems go to the Apple web site and download the OS X.7.4 Combo Updater. been ok for a week or so since reinstalling lion. last couple of days I've had lock ups for no apparent reason, either surfing the web or moving files or opening documents etc., so can't pin it down to any one event. i can't test hardware coz i ain't got the apple aht. I've run memory check and all seems ok. I'm running temp monitors, memory usage check and smc fan control. any way i can get the apple hardware test from anywhere? is it possible to do a clean install of lion? when i had issues on windows i used to format and reinstall, can i do the same on mac? been reading up on it but its confusing. i can get all the installed software again so don't really need to clone the drive. i do have time machine from the day i bought the machine about a month ago. thanks for the help but i read the article and i don't see how i can get the aht to work. i don't have an original disc to boot off and its not installed on the hard drive anywhere. i CAN get hold of a retail copy of snow leopard or a retail copy of lion, will i be able to get it off one of them? I've been monitoring activity monitor a lot along with temperature and memory usage. i noticed a process called cfbackd was using a lot of cpu; after a read up on google it appears it was left over from when disk drill was installed, so i deleted all its traces. i also noticed when time machine was left on, it pushed temp and cpu up a lot. 
so i disabled it, and just switch it on and back up say every couple of hours, then switch it off again. as i mentioned in an earlier post, i would like to do a clean install of lion. is this wise? it was never recommended in windows to upgrade the os, but with mac it seems to be normal | Low | [
0.49790794979079506,
29.75,
30
] |
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 * <p/>
 * http://www.apache.org/licenses/LICENSE-2.0
 * <p/>
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.hadoop.mapred.gridmix;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.gridmix.Statistics.ClusterStats;
import org.apache.hadoop.mapred.gridmix.Statistics.JobStats;
import org.apache.hadoop.mapreduce.JobID;
import org.apache.hadoop.mapreduce.JobStatus;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.tools.rumen.JobStory;
import org.apache.hadoop.tools.rumen.JobStoryProducer;

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class StressJobFactory extends JobFactory<Statistics.ClusterStats> {
  public static final Log LOG = LogFactory.getLog(StressJobFactory.class);

  private final LoadStatus loadStatus = new LoadStatus();

  /**
   * The minimum ratio between pending+running map tasks (aka. incomplete map
   * tasks) and cluster map slot capacity for us to consider the cluster is
   * overloaded. For running maps, we only count them partially. Namely, a 40%
   * completed map is counted as 0.6 map tasks in our calculation.
   */
  private static final float OVERLOAD_MAPTASK_MAPSLOT_RATIO = 2.0f;
  public static final String CONF_OVERLOAD_MAPTASK_MAPSLOT_RATIO =
      "gridmix.throttle.maps.task-to-slot-ratio";
  final float overloadMapTaskMapSlotRatio;

  /**
   * The minimum ratio between pending+running reduce tasks (aka. incomplete
   * reduce tasks) and cluster reduce slot capacity for us to consider the
   * cluster is overloaded. For running reduces, we only count them partially.
   * Namely, a 40% completed reduce is counted as 0.6 reduce tasks in our
   * calculation.
   */
  private static final float OVERLOAD_REDUCETASK_REDUCESLOT_RATIO = 2.5f;
  public static final String CONF_OVERLOAD_REDUCETASK_REDUCESLOT_RATIO =
      "gridmix.throttle.reduces.task-to-slot-ratio";
  final float overloadReduceTaskReduceSlotRatio;

  /**
   * The maximum share of the cluster's mapslot capacity that can be counted
   * toward a job's incomplete map tasks in overload calculation.
   */
  private static final float MAX_MAPSLOT_SHARE_PER_JOB = 0.1f;
  public static final String CONF_MAX_MAPSLOT_SHARE_PER_JOB =
      "gridmix.throttle.maps.max-slot-share-per-job";
  final float maxMapSlotSharePerJob;

  /**
   * The maximum share of the cluster's reduceslot capacity that can be counted
   * toward a job's incomplete reduce tasks in overload calculation.
   */
  private static final float MAX_REDUCESLOT_SHARE_PER_JOB = 0.1f;
  public static final String CONF_MAX_REDUCESLOT_SHARE_PER_JOB =
      "gridmix.throttle.reducess.max-slot-share-per-job";
  final float maxReduceSlotSharePerJob;

  /**
   * The ratio of the maximum number of pending+running jobs over the number of
   * task trackers.
   */
  private static final float MAX_JOB_TRACKER_RATIO = 1.0f;
  public static final String CONF_MAX_JOB_TRACKER_RATIO =
      "gridmix.throttle.jobs-to-tracker-ratio";
  final float maxJobTrackerRatio;

  /**
   * Represents a list of blacklisted jobs. Jobs are blacklisted when either
   * they are complete or their status cannot be obtained. Stress mode will
   * ignore blacklisted jobs from its overload computation.
   */
  private Set<JobID> blacklistedJobs = new HashSet<JobID>();

  /**
   * Creating a new instance does not start the thread.
   *
   * @param submitter   Component to which deserialized jobs are passed
   * @param jobProducer Stream of job traces with which to construct a
   *                    {@link org.apache.hadoop.tools.rumen.ZombieJobProducer}
   * @param scratch     Directory into which to write output from simulated jobs
   * @param conf        Config passed to all jobs to be submitted
   * @param startFlag   Latch released from main to start pipeline
   * @throws java.io.IOException
   */
  public StressJobFactory(
      JobSubmitter submitter, JobStoryProducer jobProducer, Path scratch,
      Configuration conf, CountDownLatch startFlag, UserResolver resolver)
      throws IOException {
    super(submitter, jobProducer, scratch, conf, startFlag, resolver);

    overloadMapTaskMapSlotRatio = conf.getFloat(
        CONF_OVERLOAD_MAPTASK_MAPSLOT_RATIO, OVERLOAD_MAPTASK_MAPSLOT_RATIO);
    overloadReduceTaskReduceSlotRatio = conf.getFloat(
        CONF_OVERLOAD_REDUCETASK_REDUCESLOT_RATIO,
        OVERLOAD_REDUCETASK_REDUCESLOT_RATIO);
    maxMapSlotSharePerJob = conf.getFloat(
        CONF_MAX_MAPSLOT_SHARE_PER_JOB, MAX_MAPSLOT_SHARE_PER_JOB);
    maxReduceSlotSharePerJob = conf.getFloat(
        CONF_MAX_REDUCESLOT_SHARE_PER_JOB, MAX_REDUCESLOT_SHARE_PER_JOB);
    maxJobTrackerRatio = conf.getFloat(
        CONF_MAX_JOB_TRACKER_RATIO, MAX_JOB_TRACKER_RATIO);
  }

  public Thread createReaderThread() {
    return new StressReaderThread("StressJobFactory");
  }

  /*
   * Worker thread responsible for reading descriptions, assigning sequence
   * numbers, and normalizing time.
   */
  private class StressReaderThread extends Thread {

    public StressReaderThread(String name) {
      super(name);
    }

    /**
     * STRESS: Submits the job in STRESS mode.
     * while(JT is overloaded) {
     *   wait();
     * }
     * If not overloaded, get number of slots available.
     * Keep submitting the jobs till total jobs is sufficient to
     * load the JT.
     * That is submit (Sigma(no of maps/Job)) > (2 * no of slots available)
     */
    public void run() {
      try {
        startFlag.await();
        if (Thread.currentThread().isInterrupted()) {
          LOG.warn("[STRESS] Interrupted before start!. Exiting..");
          return;
        }
        LOG.info("START STRESS @ " + System.currentTimeMillis());
        while (!Thread.currentThread().isInterrupted()) {
          try {
            while (loadStatus.overloaded()) {
              // update the overload status
              if (LOG.isDebugEnabled()) {
                LOG.debug("Updating the overload status.");
              }
              try {
                checkLoadAndGetSlotsToBackfill();
              } catch (IOException ioe) {
                LOG.warn("[STRESS] Check failed!", ioe);
                return;
              }

              // if the cluster is still overloaded, then sleep
              if (loadStatus.overloaded()) {
                if (LOG.isDebugEnabled()) {
                  LOG.debug("[STRESS] Cluster overloaded in run! Sleeping...");
                }
                // sleep
                try {
                  Thread.sleep(1000);
                } catch (InterruptedException ie) {
                  LOG.warn("[STRESS] Interrupted while sleeping! Exiting.", ie);
                  return;
                }
              }
            }

            while (!loadStatus.overloaded()) {
              if (LOG.isDebugEnabled()) {
                LOG.debug("[STRESS] Cluster underloaded in run! Stressing...");
              }
              try {
                //TODO This in-line read can block submission for large jobs.
                final JobStory job = getNextJobFiltered();
                if (null == job) {
                  LOG.warn("[STRESS] Finished consuming the input trace. "
                           + "Exiting..");
                  return;
                }
                if (LOG.isDebugEnabled()) {
                  LOG.debug("Job Selected: " + job.getJobID());
                }

                UserGroupInformation ugi =
                    UserGroupInformation.createRemoteUser(job.getUser());
                UserGroupInformation tgtUgi = userResolver.getTargetUgi(ugi);
                GridmixJob tJob =
                    jobCreator.createGridmixJob(conf, 0L, job, scratch,
                        tgtUgi, sequence.getAndIncrement());

                // submit the job
                submitter.add(tJob);

                // TODO: We need to take care of scenario when one map/reduce
                // takes more than 1 slot.

                // Lock the loadjob as we are making updates
                int incompleteMapTasks =
                    (int) calcEffectiveIncompleteMapTasks(
                        loadStatus.getMapCapacity(), job.getNumberMaps(), 0.0f);
                loadStatus.decrementMapLoad(incompleteMapTasks);

                int incompleteReduceTasks =
                    (int) calcEffectiveIncompleteReduceTasks(
                        loadStatus.getReduceCapacity(), job.getNumberReduces(),
                        0.0f);
                loadStatus.decrementReduceLoad(incompleteReduceTasks);

                loadStatus.decrementJobLoad(1);
              } catch (IOException e) {
                LOG.error("[STRESS] Error while submitting the job ", e);
                error = e;
                return;
              }
            }
          } finally {
            // do nothing
          }
        }
      } catch (InterruptedException e) {
        LOG.error("[STRESS] Interrupted in the main block!", e);
        return;
      } finally {
        IOUtils.cleanup(null, jobProducer);
      }
    }
  }

  /**
   * STRESS Once you get the notification from StatsCollector.Collect the
   * clustermetrics. Update current loadStatus with new load status of JT.
   *
   * @param item
   */
  @Override
  public void update(Statistics.ClusterStats item) {
    ClusterStatus clusterStatus = item.getStatus();
    try {
      // update the max cluster map/reduce task capacity
      loadStatus.updateMapCapacity(clusterStatus.getMaxMapTasks());
      loadStatus.updateReduceCapacity(clusterStatus.getMaxReduceTasks());

      int numTrackers = clusterStatus.getTaskTrackers();
      int jobLoad =
          (int) (maxJobTrackerRatio * numTrackers) - item.getNumRunningJob();
      loadStatus.updateJobLoad(jobLoad);
    } catch (Exception e) {
      LOG.error("Couldn't get the new Status", e);
    }
  }

  float calcEffectiveIncompleteMapTasks(int mapSlotCapacity, int numMaps,
                                        float mapProgress) {
    float maxEffIncompleteMapTasks =
        Math.max(1.0f, mapSlotCapacity * maxMapSlotSharePerJob);
    float mapProgressAdjusted = Math.max(Math.min(mapProgress, 1.0f), 0.0f);
    return Math.min(maxEffIncompleteMapTasks,
                    numMaps * (1.0f - mapProgressAdjusted));
  }

  float calcEffectiveIncompleteReduceTasks(int reduceSlotCapacity,
                                           int numReduces,
                                           float reduceProgress) {
    float maxEffIncompleteReduceTasks =
        Math.max(1.0f, reduceSlotCapacity * maxReduceSlotSharePerJob);
    float reduceProgressAdjusted =
        Math.max(Math.min(reduceProgress, 1.0f), 0.0f);
    return Math.min(maxEffIncompleteReduceTasks,
                    numReduces * (1.0f - reduceProgressAdjusted));
  }

  /**
   * We try to use some light-weight mechanism to determine cluster load.
   *
   * @throws java.io.IOException
   */
  protected void checkLoadAndGetSlotsToBackfill()
      throws IOException, InterruptedException {
    if (loadStatus.getJobLoad() <= 0) {
      if (LOG.isDebugEnabled()) {
        LOG.debug(System.currentTimeMillis() + " [JobLoad] Overloaded is "
                  + Boolean.TRUE.toString() + " NumJobsBackfill is "
                  + loadStatus.getJobLoad());
      }
      return; // stop calculation because we know it is overloaded.
    }

    int mapCapacity = loadStatus.getMapCapacity();
    int reduceCapacity = loadStatus.getReduceCapacity();

    // return if the cluster status is not set
    if (mapCapacity < 0 || reduceCapacity < 0) {
      // note that, by default, the overload status is true
      // missing cluster status will result into blocking of job submission
      return;
    }

    // Determine the max permissible map & reduce task load
    int maxMapLoad = (int) (overloadMapTaskMapSlotRatio * mapCapacity);
    int maxReduceLoad =
        (int) (overloadReduceTaskReduceSlotRatio * reduceCapacity);

    // compute the total number of map & reduce tasks submitted
    int totalMapTasks = ClusterStats.getSubmittedMapTasks();
    int totalReduceTasks = ClusterStats.getSubmittedReduceTasks();

    if (LOG.isDebugEnabled()) {
      LOG.debug("Total submitted map tasks: " + totalMapTasks);
      LOG.debug("Total submitted reduce tasks: " + totalReduceTasks);
      LOG.debug("Max map load: " + maxMapLoad);
      LOG.debug("Max reduce load: " + maxReduceLoad);
    }

    // generate a pessimistic bound on the max running+pending map tasks
    // this check is to avoid the heavy-duty actual map load calculation
    int mapSlotsBackFill = (int) (maxMapLoad - totalMapTasks);

    // generate a pessimistic bound on the max running+pending reduce tasks
    // this check is to avoid the heavy-duty actual reduce load calculation
    int reduceSlotsBackFill = (int) (maxReduceLoad - totalReduceTasks);

    // maintain a list of seen job ids
    Set<JobID> seenJobIDs = new HashSet<JobID>();

    // check if the total number of submitted map/reduce tasks exceeds the
    // permissible limit
    if (totalMapTasks > maxMapLoad || totalReduceTasks > maxReduceLoad) {
      // if yes, calculate the real load
      float incompleteMapTasks = 0;    // include pending & running map tasks.
      float incompleteReduceTasks = 0; // include pending & running reduce tasks

      for (JobStats job : ClusterStats.getRunningJobStats()) {
        JobID id = job.getJob().getJobID();
        seenJobIDs.add(id);

        // Note that this is a hack! Ideally, ClusterStats.getRunningJobStats()
        // should be smart enough to take care of completed jobs.
        if (blacklistedJobs.contains(id)) {
          LOG.warn("Ignoring blacklisted job: " + id);
          continue;
        }

        int noOfMaps = job.getNoOfMaps();
        int noOfReduces = job.getNoOfReds();

        // consider polling for jobs where maps>0 and reds>0
        // TODO: What about setup/cleanup tasks for cases where m=0 and r=0
        //       What otherwise?
        if (noOfMaps > 0 || noOfReduces > 0) {
          // get the job's status
          JobStatus status = job.getJobStatus();

          // blacklist completed jobs and continue
          if (status != null && status.isJobComplete()) {
            LOG.warn("Blacklisting completed job: " + id);
            blacklistedJobs.add(id);
            continue;
          }

          // get the map and reduce tasks' progress
          float mapProgress = 0f;
          float reduceProgress = 0f;

          // check if the status is missing (this can happen for unpolled jobs)
          if (status != null) {
            mapProgress = status.getMapProgress();
            reduceProgress = status.getReduceProgress();
          }

          incompleteMapTasks +=
              calcEffectiveIncompleteMapTasks(mapCapacity, noOfMaps,
                                              mapProgress);

          // bail out early
          int currentMapSlotsBackFill =
              (int) (maxMapLoad - incompleteMapTasks);
          if (currentMapSlotsBackFill <= 0) {
            // reset the reduce task load since we are bailing out
            incompleteReduceTasks = totalReduceTasks;
            if (LOG.isDebugEnabled()) {
              LOG.debug("Terminating overload check due to high map load.");
            }
            break;
          }

          // compute the real reduce load
          if (noOfReduces > 0) {
            incompleteReduceTasks +=
                calcEffectiveIncompleteReduceTasks(reduceCapacity, noOfReduces,
                                                   reduceProgress);
          }

          // bail out early
          int currentReduceSlotsBackFill =
              (int) (maxReduceLoad - incompleteReduceTasks);
          if (currentReduceSlotsBackFill <= 0) {
            // reset the map task load since we are bailing out
            incompleteMapTasks = totalMapTasks;
            if (LOG.isDebugEnabled()) {
              LOG.debug("Terminating overload check due to high reduce load.");
            }
            break;
          }
        } else {
          LOG.warn("Blacklisting empty job: " + id);
          blacklistedJobs.add(id);
        }
      }

      // calculate the real map load on the cluster
      mapSlotsBackFill = (int) (maxMapLoad - incompleteMapTasks);

      // calculate the real reduce load on the cluster
      reduceSlotsBackFill = (int) (maxReduceLoad - incompleteReduceTasks);

      // clean up the backlisted set to keep the memory footprint minimal
      // retain only the jobs that are seen in this cycle
      blacklistedJobs.retainAll(seenJobIDs);
      if (LOG.isDebugEnabled() && blacklistedJobs.size() > 0) {
        LOG.debug("Blacklisted jobs count: " + blacklistedJobs.size());
      }
    }

    // update
    loadStatus.updateMapLoad(mapSlotsBackFill);
    loadStatus.updateReduceLoad(reduceSlotsBackFill);

    if (loadStatus.getMapLoad() <= 0) {
      if (LOG.isDebugEnabled()) {
        LOG.debug(System.currentTimeMillis() + " [MAP-LOAD] Overloaded is "
                  + Boolean.TRUE.toString() + " MapSlotsBackfill is "
                  + loadStatus.getMapLoad());
      }
      return; // stop calculation because we know it is overloaded.
    }

    if (loadStatus.getReduceLoad() <= 0) {
      if (LOG.isDebugEnabled()) {
        LOG.debug(System.currentTimeMillis() + " [REDUCE-LOAD] Overloaded is "
                  + Boolean.TRUE.toString() + " ReduceSlotsBackfill is "
                  + loadStatus.getReduceLoad());
      }
      return; // stop calculation because we know it is overloaded.
    }

    if (LOG.isDebugEnabled()) {
      LOG.debug(System.currentTimeMillis() + " [OVERALL] Overloaded is "
                + Boolean.FALSE.toString() + "Current load Status is "
                + loadStatus);
    }
  }

  static class LoadStatus {

    /**
     * Additional number of map slots that can be requested before
     * declaring (by Gridmix STRESS mode) the cluster as overloaded.
     */
    private volatile int mapSlotsBackfill;

    /**
     * Determines the total map slot capacity of the cluster.
     */
    private volatile int mapSlotCapacity;

    /**
     * Additional number of reduce slots that can be requested before
     * declaring (by Gridmix STRESS mode) the cluster as overloaded.
     */
    private volatile int reduceSlotsBackfill;

    /**
     * Determines the total reduce slot capacity of the cluster.
     */
    private volatile int reduceSlotCapacity;

    /**
     * Determines the max count of running jobs in the cluster.
     */
    private volatile int numJobsBackfill;

    // set the default to true
    private AtomicBoolean overloaded = new AtomicBoolean(true);

    /**
     * Construct the LoadStatus in an unknown state - assuming the cluster is
     * overloaded by setting numSlotsBackfill=0.
     */
    LoadStatus() {
      mapSlotsBackfill = 0;
      reduceSlotsBackfill = 0;
      numJobsBackfill = 0;

      mapSlotCapacity = -1;
      reduceSlotCapacity = -1;
    }

    public synchronized int getMapLoad() {
      return mapSlotsBackfill;
    }

    public synchronized int getMapCapacity() {
      return mapSlotCapacity;
    }

    public synchronized int getReduceLoad() {
      return reduceSlotsBackfill;
    }

    public synchronized int getReduceCapacity() {
      return reduceSlotCapacity;
    }

    public synchronized int getJobLoad() {
      return numJobsBackfill;
    }

    public synchronized void decrementMapLoad(int mapSlotsConsumed) {
      this.mapSlotsBackfill -= mapSlotsConsumed;
      updateOverloadStatus();
    }

    public synchronized void decrementReduceLoad(int reduceSlotsConsumed) {
      this.reduceSlotsBackfill -= reduceSlotsConsumed;
      updateOverloadStatus();
    }

    public synchronized void decrementJobLoad(int numJobsConsumed) {
      this.numJobsBackfill -= numJobsConsumed;
      updateOverloadStatus();
    }

    public synchronized void updateMapCapacity(int mapSlotsCapacity) {
      this.mapSlotCapacity = mapSlotsCapacity;
      updateOverloadStatus();
    }

    public synchronized void updateReduceCapacity(int reduceSlotsCapacity) {
      this.reduceSlotCapacity = reduceSlotsCapacity;
      updateOverloadStatus();
    }

    public synchronized void updateMapLoad(int mapSlotsBackfill) {
      this.mapSlotsBackfill = mapSlotsBackfill;
      updateOverloadStatus();
    }

    public synchronized void updateReduceLoad(int reduceSlotsBackfill) {
      this.reduceSlotsBackfill = reduceSlotsBackfill;
      updateOverloadStatus();
    }

    public synchronized void updateJobLoad(int numJobsBackfill) {
      this.numJobsBackfill = numJobsBackfill;
      updateOverloadStatus();
    }

    private synchronized void updateOverloadStatus() {
      overloaded.set((mapSlotsBackfill <= 0) || (reduceSlotsBackfill <= 0)
                     || (numJobsBackfill <= 0));
    }

    public boolean overloaded() {
      return overloaded.get();
    }

    public synchronized String toString() {
      // TODO Use StringBuilder instead
      return " Overloaded = " + overloaded()
             + ", MapSlotBackfill = " + mapSlotsBackfill
             + ", MapSlotCapacity = " + mapSlotCapacity
             + ", ReduceSlotBackfill = " + reduceSlotsBackfill
             + ", ReduceSlotCapacity = " + reduceSlotCapacity
             + ", NumJobsBackfill = " + numJobsBackfill;
    }
  }

  /**
   * Start the reader thread, wait for latch if necessary.
   */
  @Override
  public void start() {
    LOG.info(" Starting Stress submission ");
    this.rThread.start();
  }
}
| Low | [
0.49292929292929205,
30.5,
31.375
] |
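The throttling arithmetic above can be exercised in isolation: a job's contribution to cluster load is its unfinished tasks, capped at a fixed share of slot capacity, and the cluster counts as overloaded once that total exceeds a ratio of capacity. The following sketch uses the class's default constants; the standalone class name and `main` harness are ours, not part of Gridmix:

```java
// Standalone sketch of the Gridmix STRESS-mode overload arithmetic, using the
// default ratios from StressJobFactory (class name here is hypothetical).
public class OverloadSketch {
    static final float OVERLOAD_MAPTASK_MAPSLOT_RATIO = 2.0f;
    static final float MAX_MAPSLOT_SHARE_PER_JOB = 0.1f;

    // A job's effective incomplete map tasks: unfinished maps, capped at a
    // fixed share of the cluster's map slot capacity (never below one slot).
    static float effectiveIncompleteMapTasks(int mapSlotCapacity, int numMaps,
                                             float mapProgress) {
        float cap = Math.max(1.0f, mapSlotCapacity * MAX_MAPSLOT_SHARE_PER_JOB);
        float p = Math.max(Math.min(mapProgress, 1.0f), 0.0f);
        return Math.min(cap, numMaps * (1.0f - p));
    }

    // The cluster counts as map-overloaded once the incomplete map task total
    // exceeds ratio * capacity (200 tasks for 100 slots at the default 2.0).
    static boolean mapOverloaded(int mapSlotCapacity, float incompleteMapTasks) {
        return incompleteMapTasks
               > OVERLOAD_MAPTASK_MAPSLOT_RATIO * mapSlotCapacity;
    }

    public static void main(String[] args) {
        // 100 map slots; a 5-map job at 40% progress contributes
        // min(10, 5 * 0.6) = 3 effective incomplete tasks.
        System.out.println(effectiveIncompleteMapTasks(100, 5, 0.4f));   // 3.0
        // A huge job is capped at 10% of capacity:
        System.out.println(effectiveIncompleteMapTasks(100, 500, 0.0f)); // 10.0
        System.out.println(mapOverloaded(100, 150f)); // false (threshold 200)
    }
}
```

This is why one enormous trace job cannot single-handedly flip the cluster to "overloaded": its contribution is clamped to `maxMapSlotSharePerJob` of capacity, so several jobs must be in flight before submission throttles.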
Guntupalli, Krishna district Guntupalli (IAST: Gunṭupalli) is a census town in Krishna district of the Indian state of Andhra Pradesh. It is located in Ibrahimpatnam mandal of Vijayawada revenue division. It is a suburb of Vijayawada. Demographics As per the census, the town had a population of 11,187. The total population constitutes 5,573 males and 5,614 females —a sex ratio of 1007 females per 1000 males. 993 children are in the age group of 0–6 years, of which 521 are boys and 472 are girls —a ratio of 906 per 1000. The average literacy rate stands at 85.18% with 8,683 literates, significantly higher than the state average of 67.41%. Jainism was once practiced in the area, as evidenced by the Jain remnants at the site. Transport APSRTC operates city buses from Vijayawada. Rayanapadu railway station is the nearest railway station to the town. The town is located 10 km to the west of Vijayawada on NH 65. It has the wagon workshop of Indian Railways. Education The primary and secondary school education is imparted by government, aided and private schools, under the School Education Department of the state. See also Villages in Ibrahimpatnam mandal List of census towns in Andhra Pradesh References Category:Census towns in Andhra Pradesh | High | [
0.6575963718820861,
36.25,
18.875
] |
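The ratios quoted above follow directly from the raw counts. A quick check (class and method names are ours; rounding follows the usual census conventions, with literacy computed over the population aged 7 and above):

```java
// Recomputing the Guntupalli census ratios quoted above.
public class CensusCheck {
    // Females per 1000 males, rounded to the nearest integer.
    static long sexRatio(long females, long males) {
        return Math.round(females * 1000.0 / males);
    }

    // Literacy rate as a percentage of the population aged 7+,
    // i.e. total population minus children in the 0-6 age group.
    static double literacyRate(long literates, long population,
                               long childrenUnder7) {
        double pct = literates * 100.0 / (population - childrenUnder7);
        return Math.round(pct * 100.0) / 100.0; // two decimal places
    }

    public static void main(String[] args) {
        System.out.println(sexRatio(5614, 5573));           // 1007
        System.out.println(sexRatio(472, 521));             // 906 (child ratio)
        System.out.println(literacyRate(8683, 11187, 993)); // 85.18
    }
}
```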
Postoperative management for PIP joint pyrocarbon arthroplasty. Although protocols provide therapists with the scaffolding with which to build a treatment program, it is the judgment, knowledge, and skills of the therapist, and how one uses such information, that allows for modification of a protocol when deemed necessary. This therapist outlines how she modified a postsurgical protocol by using anatomy, biomechanics, the literature, and clinical judgment. This article describes the methodical approach used to successfully modify a standard postsurgical protocol after a PIP joint arthroplasty. | High | [
0.716927453769559,
31.5,
12.4375
] |
Natural occurrence of acetylated derivatives of deoxynivalenol and nivalenol in wheat and barley in Japan. Thirty-four samples of domestic wheat and barley grains, collected from eight prefectures of different locations in Japan and previously determined to be positive for deoxynivalenol (DON), nivalenol (NIV) and/or zearalenone (ZEA), were analysed for acetylated derivatives of DON and NIV by gas chromatography-mass spectrometry. In addition to DON and NIV, 3-acetyldeoxynivalenol (3-ADON), 15-acetyldeoxynivalenol (15-ADON) and 4-acetylnivalenol (4-ANIV) were found in 25, 4 and 14 samples, respectively. A regional difference in the DON and NIV contamination of Japanese wheat and barley was suggested: DON was the major trichothecene in the northern district and NIV in the central districts, whereas in the southern districts the DON level was similar to or slightly higher than the NIV level. 3-ADON occurred together with DON in almost all prefectures examined, whereas 15-ADON was found only in samples from northern districts. In addition, a high correlation (r = 0.974, n = 23) between levels of DON and its acetates (3-ADON and 15-ADON) was noted. These results may also suggest the possibility of a geographic difference in the distribution of different chemotypes of Fusarium species producing these trichothecenes in Japan. | Mid | [
0.6118721461187211,
33.5,
21.25
] |
921 Earthquake 九二一大地震 The catastrophic 921 Earthquake struck Taiwan on September 21st, 1999, killing and injuring thousands, destroying innumerable buildings, and reshaping entire landscapes. Emanating from an epicenter on the Chelungpu Fault 車籠埔斷層 in Jíjí 集集, Nántóu 南投, its effects were felt all around central Taiwan. Numerous memorials and reminders of the disaster remain, several of which have been documented here. | High | [
0.743434343434343,
23,
7.9375
] |
Q: Disable focus cues on a SplitContainer

How can I disable the focus cues on a SplitContainer? I ask because I'd rather draw them myself using OnPaint in order to make it look somewhat smoother. I tried this:

protected override bool ShowFocusCues
{
    get { return false; }
}

And this is my control:

public class cSplitContainer : SplitContainer
{
    private bool IsDragging;

    protected override void OnMouseDown(MouseEventArgs e)
    {
        base.OnMouseDown(e);
        if (!IsSplitterFixed) IsDragging = true;
        Invalidate();
    }

    protected override void OnMouseUp(MouseEventArgs e)
    {
        base.OnMouseUp(e);
        if (IsDragging)
        {
            IsDragging = false;
            IsSplitterFixed = false;
        }
    }

    protected override void OnMouseMove(MouseEventArgs e)
    {
        base.OnMouseMove(e);
        if (IsDragging)
        {
            IsSplitterFixed = true;
            if (e.Button == MouseButtons.Left)
            {
                if (Orientation == Orientation.Vertical)
                {
                    if (e.X > 0 && e.X < Width) SplitterDistance = e.X;
                }
                else
                {
                    if (e.Y > 0 && e.Y < Height) SplitterDistance = e.Y;
                }
            }
            else
            {
                IsDragging = false;
                IsSplitterFixed = false;
            }
        }
    }

    protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
    {
        base.OnPaint(e);
        if (IsDragging)
        {
            e.Graphics.FillRectangle(
                new SolidBrush(Color.FromArgb(127, 0, 0, 0)),
                Orientation == Orientation.Horizontal
                    ? new Rectangle(0, SplitterDistance, Width, SplitterWidth)
                    : new Rectangle(SplitterDistance, 0, SplitterWidth, Height));
        }
    }
}

but it didn't work. I also tried some other methods mentioned before, but I'm still getting focus cues.

A: I don't think what you are seeing is the FocusCue so much as a floating window that is used to move the slider. If keyboard access isn't important, you can try making it unselectable:

public class MySplit : SplitContainer
{
    public MySplit()
    {
        this.SetStyle(ControlStyles.Selectable, false);
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        e.Graphics.Clear(Color.Red);
    }
}

This prevents the SplitContainer from getting focus, but your mouse can still interact with it.
A: The code of SplitContainer is like:

protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    if (Focused)
    {
        DrawFocus(e.Graphics, SplitterRectangle);
    }
}

DrawFocus is not virtual, so you can't override it. Focused is virtual. Maybe you can set it to false while calling base.OnPaint(...) in your OnPaint override. So you could add the following code (I did not test if it works):

private bool _painting;

public override bool Focused
{
    get { return _painting ? false : base.Focused; }
}

protected override void OnPaint(PaintEventArgs e)
{
    _painting = true;
    try
    {
        base.OnPaint(e);
    }
    finally
    {
        _painting = false;
    }
}

That is more a hack than a clean solution.

A: I was googling for this issue and this question came up on the top. There is a solution and interesting discussion on a Microsoft forum regarding the splitter stealing focus for no good reason. The following comment is spot on:

The focus issue you mentioned is by design, however to get the performance you want, you can use the following workaround: ....

It may be "by design", but it is not a very good one. What splitters have you ever seen in any Microsoft production application that even temporarily take the focus from the panes they split? I also added the code you suggest, and it does keep me from permanently losing the focus to the splitter, but I still don't like the fact that my panes hide and show their selections during splitter manipulation. This distracting selection flash just is not present in most professional applications. It is just good enough that it probably won't be worth my time to fix for a while, but not what most people really want. If you respected the TabStop property or even added an AcceptsFocus property, most people would want this off. I think you should add this option to the design in a future version. --Brendan | Low | [
0.514285714285714, 27, 25.5]
For years to come, Sheriff Joe Arpaio can expect to have a federal judge looking over his shoulder, watching almost his every move and those of his deputies. A federal judge Wednesday approved a plan to place an independent monitor inside the Maricopa County, Ariz., Sheriff’s Office to ensure the department is not racial profiling. Maricopa County deputies - once described by a Justice Department expert as conducting “the most egregious racial profiling in the United States” - will have every traffic stop monitored statistically and with video cameras, with strict orders to ignore suspects’ race. U.S. District Judge G. Murray Snow’s ruling caps a class-action anti-discrimination lawsuit against Arpaio and the sheriff’s office. In May, Snow ruled that Arpaio’s office was using unconstitutional racial profiling to target and detain Latinos suspected to have entered the U.S. illegally. The latest ruling orders the sheriff’s office to promote an internal policy “that unauthorized presence in the United States is not a crime and does not itself constitute reasonable suspicion or probable cause to believe that a person has committed or is committing any crime.” Deputies making stops will have to make a radio call specifying what the stop is about. They are barred from asking someone about immigration status unless the person is suspected of a crime. Snow ruled that the court would maintain the monitorship until the sheriff’s office had achieved “full and effective compliance” for at least three years. The sheriff’s office and the plaintiffs - Latinos who said they were profiled by deputies - were expected to agree on the monitor, who will be appointed by the judge to oversee the department. The American Civil Liberties Union of Arizona, which filed suit against the sheriff’s office, will have access to department data and materials and will receive reports about the department’s progress on implementing new training and anti-discrimination policies. 
“Every person in Maricopa County deserves better than a sheriff’s department that commits pervasive civil rights violations at the expense of public safety,” Cecillia Wang, director of the ACLU Immigrants’ Rights Project, said in a statement. “The court’s order will make sure the agency actually enforces the law and will no longer go on wild goose chases based on racial stereotypes.” The sheriff’s office’s attorney, Tim Casey, told the Arizona Republic that the department expected to appeal Snow’s decision but would abide by the order. “The sheriff still remains in exclusive charge of the [sheriff’s office], he still sets the policy, still sets the programs and makes the sole decision on discretionary items,” Casey told the Republic. “The monitor has absolutely no veto power whatsoever on any law enforcement decision.” ALSO: No new trial for Jerry Sandusky Extreme motorcycle road rage caught on tape Government shutdown: no panda cam, no parks Follow L.A. Times National on Twitter | Mid | [
0.6538461538461531, 34, 18]
ATP5J and ATP5H Proactive Expression Correlates with Cardiomyocyte Mitochondrial Dysfunction Induced by Fluoride. To investigate the effect of excessive fluoride on the mitochondrial function of cardiomyocytes, 20 healthy male mice were randomly divided into 2 groups of 10, as follows: a control group (animals were provided with distilled water) and a fluoride group (animals were provided with 150 mg/L F- drinking water). Ultrastructural and pathological morphological changes of myocardial tissue were observed under the transmission electron and light microscopes, respectively. The content of ATP-hydrolyzing enzyme was observed by ATPase staining. The expression levels of ATP5J and ATP5H were measured by Western blot and quantitative real-time PCR. The morphology and ultrastructure of cardiomyocyte mitochondria were seriously damaged by fluoride, including the following: concentration of cardiomyocytes and inflammatory infiltration, vague myofilaments, and indistinct mitochondrial ridges. The damage to mitochondrial structure was accompanied by a significant decrease in the content of ATP-hydrolyzing enzyme in the fluoride group. ATP5J and ATP5H expressions were significantly increased in the fluoride group. Thus, fluoride induced mitochondrial dysfunction in cardiomyocytes by damaging mitochondrial structure and interfering with the synthesis of ATP. The proactive ATP5J and ATP5H expression levels were a good indicator of mitochondrial dysfunction in cardiomyocytes. | High | [
0.691612903225806, 33.5, 14.9375]
[Octreoscan scintigraphy in the management of head and neck paragangliomas]. To assess the usefulness of somatostatin receptor scintigraphy [Octreoscan] in a series of 18 patients referred for a suspicion of paraganglioma of the head and neck between July 2001 and February 2002. Sixteen patients had one or several paragangliomas of the head and neck diagnosed on conclusive conventional imaging including CT and MR scan. In two patients, radiological data were not conclusive. Planar images were obtained 4 and 24 hr after the IV injection of 148-185 MBq [Octreoscan]. Twenty-two hot spot lesions were detected. Twenty of these lesions corresponded to the twenty known paragangliomas. The volume of the smallest tumor was 0.2 cm(3). In one patient, intense thyroid nodule uptake led to the surgical diagnosis of oncocytoma. In two lesions, where conventional imaging was not conclusive, arteriography showed a typical aspect of meningioma; one patient was operated on and histology confirmed this diagnosis. No evidence of abnormal uptake was seen in sites previously operated on (3 patients). Octreotide scintigraphy is a very sensitive method for detection of paraganglioma of the head and neck. It provides information on potential tumor sites in the whole body after a single injection. It could be used as a screening test in patients at risk (familial or known paraganglioma) in order to detect paraganglioma at an early stage and thus reduce the surgical morbidity, as well as in the follow-up after surgery to detect recurrences. | High | [
0.6876640419947501, 32.75, 14.875]
It’s clear that the indictment of Dennis Hastert has raised more questions than it’s provided answers. But I suspect a lot of people are asking the wrong ones. Hastert’s “misconduct” may turn out to be of a sexually predatory nature, in which case talk of how much his reputation is worth is picayune compared to the nature of the crime. But there are questions about what he did that are applicable to the entire industry he represents. The most obvious question, which is also the least relevant for most Americans: What is the “misconduct” that Hastert is alleged to have been trying to cover up? This is an important question, to be sure, but indicting Hastert on the financial charges and lying to investigators rather than on whatever misconduct occurred seems to indicate that those charges were the best investigators could come up with. Presumably, if the misconduct was illegal, they’d have mentioned that—and indicted him for it. If the conduct was sexual abuse, as sources are saying, then the statute of limitations has run out. It follows that Hastert wasn’t paying hush money to stay out of jail; he was protecting his reputation. A better question, and one that many Washington watchdogs leapt on quickly: How did Hastert happen to have enough money lying around that paying out $3.5 million was even within the realm of possibility? Hastert’s ability to participate in the blackmail is, after all, itself a general indictment of D.C.’s “revolving door” money culture, in which former lawmakers move easily from government into lobbying. In Hastert’s case, the ability to profit off of one’s legislative position is especially galling: While in office, Hastert used the earmarking process to turn his investment in some Illinois farmland into a profit of 140 percent when a federal highway project just happened to make its way through those very fields. Indeed, it was this instance of a completely legal form of insider trading that helped prompt Congress to end earmarks.
And, of course, Hastert made even more money once he was out of office. One study found that, on average—and when the information is publicly available—former lawmakers get a 1,425 percent raise when they make the jump from Capitol Hill to K Street. Hastert, who was worth between $4 million and $17 million when he left Congress, was making $175,000 as a representative. His K Street bump would be to almost $2.5 million a year. Okay, he made his money as a lobbyist, doing presumably sneaky lobbyist things. That raises the next question: How can Hastert’s reputation even be worth $3.5 million? Hastert is a former member of Congress known to have profited off of a shady land deal and he’s a registered lobbyist—these are already the two professions that Americans regard as the most disreputable careers available. They are literally last (lobbyist) and second-to-last (congressman) on Gallup’s list of what jobs Americans regard as “honest” and “ethical.” What would one have to do to be thought even less of? Given the ickiness of what has been reported, it might not be good to think about that question too hard, so let’s turn that question on its head: What kind of reputation could be worth spending $3.5 million to protect? To consider $3.5 million a reasonable sum to spend on protecting one’s reputation, presumably it has to be worth a lot more than that. And, indeed, in the context of the lobbying world, $3.5 million just isn’t that much money. Especially considering that Hastert was apparently making payoffs over time. Special interest groups spent almost 1,000 times that—$3.2 billion—in 2015 alone. If Hastert viewed protecting his reputation as a kind of investment in future earnings, $3.5 million is on the scale of buying an alarm system for your home, not buying a whole other house. 
And, it’s important to remember, what Hastert was covering up with that hush money was not a “reputation” as an average citizen might conceive of it: something akin to honor or trustworthiness or fidelity. A lobbyist’s reputation, after all, actually hinges on his or her established lack of principles. A lobbying client for someone who is a former member of Congress is paying a premium for that person’s willingness to engage in barely-legal favor-trading. A lobbyist’s prices go up the more corrupt he is. Who wants to hire an honest one? | Mid | [
0.575107296137339, 33.5, 24.75]
Q: SQLite Table-Level Encryption

We need encryption in SQLite to secure our data against vulnerabilities. Is there any way to do table-level encryption in SQLite? We tried to find out but have not yet found any solution. Thanks in advance.

A: The SQLite Encryption Extension [1] encrypts an entire database. There is no option to encrypt some tables and not others. This is because SEE operates at the pager layer and the pager does not know what table a particular page belongs to.

But what you can do is separate your data into two separate database files. One file is encrypted and the other is not. Then open your database connection on the unencrypted file and ATTACH the encrypted file, or vice versa. SQLite does allow connections where there are multiple attached databases, some of which are encrypted and others not. | Mid | [
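The split-database pattern the answer describes can be sketched with Python's built-in sqlite3 module. SEE is a commercial extension and is not available in the stdlib, so both files below are unencrypted and the table names and the `secure` alias are illustrative only; with SEE you would key the sensitive file before attaching it.

```python
import os
import sqlite3
import tempfile

# Two separate database files: one public, one holding sensitive tables
# (the latter is the file you would encrypt with SEE).
workdir = tempfile.mkdtemp()
public_db = os.path.join(workdir, "public.db")
secret_db = os.path.join(workdir, "secret.db")

# Populate the sensitive database on its own connection.
with sqlite3.connect(secret_db) as sec:
    sec.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
    sec.execute("INSERT INTO patients (name) VALUES ('alice')")

# Open the public database and attach the sensitive one under an alias.
conn = sqlite3.connect(public_db)
conn.execute("CREATE TABLE visits (id INTEGER PRIMARY KEY, patient_id INTEGER)")
conn.execute("INSERT INTO visits (patient_id) VALUES (1)")
conn.execute("ATTACH DATABASE ? AS secure", (secret_db,))

# Queries join across both files transparently via the alias.
row = conn.execute(
    "SELECT p.name FROM visits v JOIN secure.patients p ON p.id = v.patient_id"
).fetchone()
conn.close()
```

The `secure.` prefix scopes table lookups to the attached file, and `DETACH DATABASE secure` removes access again when the sensitive data is no longer needed on that connection.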
0.6105527638190951, 30.375, 19.375]
! wilson_facet.scl
!
Wilson study in 'conjunct facets', Hexany based
 22
!
28/27
21/20
10/9
9/8
7/6
6/5
5/4
35/27
4/3
27/20
7/5
40/27
3/2
14/9
63/40
5/3
140/81
7/4
9/5
28/15
35/18
2/1 | Mid | [
0.6259351620947631, 31.375, 18.75]
Netflix's Hastings: Online steering TV's future The future of television will be driven by Internet networks, and traditional cable TV companies will inevitably morph into Web-based content providers, Netflix CEO Reed Hastings predicted Tuesday. The flexibility of Internet programming is already attracting content creators, and even online video providers are getting into the production game themselves, Hastings said during a meeting at POLITICO as part of a D.C. swing to promote his company’s Washington-based TV series, “House of Cards.” “What we’ll see in the Internet is most cable networks will become Internet networks — we’ll still call ESPN a cable network, but it’ll be mostly delivered over the Internet in 10 or 20 years. The fundamental advantage of the Internet is individualization, control, being able to watch on any screen. It’s just a much better technology substrate for video.” Personalized advertising streams will help drive the move to the Web, Hastings said, as will the roll-out of very high-speed broadband networks. But just as cable networks grew from running others' content to also commissioning their own, the TV and movie distributor is following the same path. “We’re embarking on this really big phase of pioneering original content,” Hastings said. “It’s different — you’ve got creative risks, operational risks, your financing — it’s much more ambitious. But it’s the natural thing that you grow into,” he added, touting this week’s release of the 13-episode production starring Kevin Spacey. Much of Netflix’s growth has come in fairly unregulated spaces like DVD-by-mail and online streaming. But that hasn’t kept the company from retaining a D.C. presence. Netflix dropped slightly more than $1 million in D.C. lobbying in 2012 — a pittance compared with many top tech companies but double the $500,000 it spent the year before. Much of that regulatory focus congealed around a just-passed update to the Video Privacy Protection Act.
Known colloquially as the Netflix bill, the law now allows users to give online video providers blanket consent for up to two years to share their rental history with social networks like Facebook. Some reports criticized the legislation as anti-privacy, but Hastings said that’ll change once people have a chance to use the feature. “It’s in a vacuum, because no one can use the Netflix social aspect yet. Once people get to see it, they are going to not see it as an invasion of privacy,” he said. More broadly, the company also has a dog in the industry fight over net neutrality and data caps. But there’s not a huge war to wage — at least right now, Hastings said. “It’s ambiguous right now because there aren’t big problems. … There’s big threats, but, in general, the Internet is working really well,” he said. “Uncapped is better for sure, like Google Fiber. But we’re not at some big crisis point. So it behooves us at Netflix to stay active in it and keep people aware of it, and it may be that if there’s enough talk about what’s important then the companies will be thoughtful and we really won’t have to get into a heavy regulatory scheme, which is hard to figure out anyway.” This article first appeared on POLITICO Pro at 4:42 p.m. on January 29, 2013. | Mid | [
0.608910891089108, 30.75, 19.75]
Ben Masel Bennett A. "Ben" Masel (October 17, 1954 – April 30, 2011) was an American writer, publisher, cannabis rights and free speech activist, expert witness for marijuana defendants, and frequent candidate for public office. A skilled chess player, Masel was director of Wisconsin NORML, and organizer of Weedstock and the annual Great Midwest Marijuana Harvest Festival which has been held in front of the Wisconsin State Capitol every autumn since 1971. Masel, who was known for his Yippie theatrics and anti-war and pro-labor activism, was born in the Bronx, grew up in New Jersey, and in 1971 relocated to Madison, where he became a fixture of the Wisconsin political scene for 40 years. He died after battling cancer, in 2011. Life and activism Education and early activism Masel, who was born in New York in 1954 and grew up in New Jersey, became involved with the Youth International Party when he was a teenager, earning him the distinction of being the youngest person on Nixon's Enemies List. Masel was arrested during the Yippie protests at the 1968 Democratic National Convention in Chicago. The arrest embarked Masel on a lifelong career of First Amendment litigation and activism. In 1971, Masel moved to Madison. He attended the University of Wisconsin briefly before being expelled for his involvement in demonstrations. Masel, a Yippie "street theatre" Vietnam War and personal freedom protester, made national headlines in 1976 for heckling segregationist Alabama Governor George Wallace from a wheelchair. Over his lifetime, Masel was arrested 137 times. Harvestfest and Weedstock The Great Midwest Marijuana Harvest Festival in Madison, the oldest and longest running cannabis rights festival in the United States, was first held in 1971 following a series of marijuana-trafficking arrests. Marchers carried signs reading "Free Dana Beal." Masel organized the demonstration to support Beal and after that it became an annual event. 
Masel's roving Weedstock "protestival" was held for fourteen years, from 1988 to 2001. Political writing and publishing Masel was a reporter for The Yipster Times, a newspaper of the Youth International Party, or Yippies. In 1985, Masel co-authored Blacklisted News: Secret Histories from Chicago, '68 to 1984, a comprehensive history of the Youth International Party. Masel published the underground newspaper Zenger from 1987 to 1993. Until his death, Masel maintained online political action blogs and petitions at Myspace, Facebook, and alternative media sites. Election campaigns and advocacy According to Steve DeAngelo, the first Hemp Tour in 1989 was Masel's idea. Jack Herer visited fourteen American cities, promoting the revised edition of his book The Emperor Wears No Clothes, in 1989. In 1990, Masel ran against Wisconsin Governor Tommy Thompson in the Republican primary. When the 1990 Midwest Marijuana Harvest Festival at the Wisconsin Capitol drew the criticism of Attorney General Don Hanaway, Masel challenged Hanaway to a chess match in order to prove that cannabis does not diminish intelligence. Hanaway declined Masel's chess game challenge. During a 1992 write-in bid for Dane County sheriff, Masel's campaign poster pictured him naked with the slogan: "Nothing to Hide, Masel for Sheriff." He got more than 7,000 votes. And when he made the ballot in 1994 as the Democratic candidate for sheriff, Masel received more than 39,000 votes. Masel challenged US Senator Herb Kohl in the Democratic primary in 2006 and got over 50,000 votes (about fifteen percent). At the time he was diagnosed with cancer in 2011, Masel was again seeking the Wisconsin Democratic Party endorsement for US senator. Civil rights career Masel was a professional protester. 
He got a $95,000 settlement from Sauk County, Wisconsin, after police officers wearing body armor arrested about a dozen Weedstock festival-goers, including Masel, who refused an order to vacate the grounds after being told the festival could not be held on a private field there, in 2000. On June 29, 2006, while lawfully gathering signatures during an election campaign, Masel was confronted by two University of Wisconsin–Madison police officers who threw him to the ground, pinned him with a knee on his back and then pepper-sprayed him in the face. Masel's federal civil rights suit against police officers John McCaughtry and Michael Mansavage which was heard at trial in 2009 before a hung jury and was to be reheard at a second trial in 2010, was settled out of court. Masel agreed to accept $7,500 from the state of Wisconsin to dismiss the appeal. A longtime friend of Masel, Amy Gros-Louis, told a Wisconsin State Journal reporter that "Ben knew the laws better than the police did." Masel fought limitations to free speech and the right to assemble. Whenever police tried to stop him, he would sue. And he usually won, according to Jeff Scott Olson, Masel's lawyer. Legacy and recognition In April, 2011, Masel was recognized by the National Organization for the Reform of Marijuana Laws board of directors with an award of special appreciation for "A Lifetime of Outstanding Work in Advancing the Cause of Legalizing Marijuana." The 420 Chess Club held an online competition called The Ben Masel Memorial 420 Chess Tournament from May, 2011, through February, 2012. Masel was named High Times' Freedom Fighter of the Month in August, 2011. On May 17, 2011, the City of Madison Common Council declared April 20 to be Ben Masel Day. Illness and death In January, 2011, Masel was diagnosed with lung cancer. He underwent a series of radiation treatments and was given steroids to aid his breathing, but became too weak to undergo chemotherapy. 
Nevertheless, Masel remained upbeat and never stopped demonstrating, even defying his doctor's advice in order to join a month-long labor rights protest being held at the Wisconsin Capitol during April. Masel, who did not have health insurance, died in a hospice, surrounded by friends and family, on April 30, 2011. He is survived by his daughter, Semilla Anderson, and granddaughter, Anandi. | Mid | [
0.6544622425629291, 35.75, 18.875]
167 Ariz. 155 (1990) 805 P.2d 388 James ARNOLD, Kenneth M. Clayton, David O. Williams, and Derrell Doyal, Plaintiffs/Appellants, v. ARIZONA BOARD OF PARDONS AND PAROLES, Patricia Veliz Gilbert; Ray R. Florez; Ron Johnson; Darwin H. Aycock; Arter L. Johnson; Richard M. Ortiz; Frank R. Startzell, Defendants/Appellees. No. 2 CA-CV 90-0009. Court of Appeals of Arizona, Division 2, Department B. July 10, 1990. Review Denied February 20, 1991. *156 Stompoly & Stroud, P.C. by Don Awerkamp, Tucson, for plaintiffs/appellants. Robert K. Corbin, Atty. Gen. by Bruce L. Skolnik, Tucson, for defendants/appellees. OPINION FERNANDEZ, Chief Judge. Appellants challenge two trial court rulings on appeal: its ruling that a Board of Pardons and Paroles regulation does not violate the ex post facto clause of either the state or federal constitution, and its denial of attorney's fees to appellants. We find no error and affirm. Appellants James Arnold, Kenneth Clayton, David Williams, and Derrell Doyal are inmates of the Arizona State Prison Complex in Tucson. All are serving life sentences for murder. Williams committed his crime in 1969, Clayton and Doyal committed theirs in 1970, and Arnold committed his in 1975. Williams, Clayton, and Doyal have previously applied for and been denied *157 commutation of their sentences. In 1988, all four applied for commutation of their sentences. All were notified that Phase I[1] hearings would be held September 2, 1988. On August 29, they discovered that the hearings had been changed to September 1. None was ever officially notified of the change. All were subsequently informed that the board had determined not to hold a Phase II hearing on their applications. 
In October 1988, appellants filed this action, combining a special action to compel the board to rehear their applications because of the lack of due process with a complaint for declaratory relief seeking a judgment that the administrative regulation governing commutation applications is unenforceable. The parties settled the special action issue when the state agreed to afford appellants another hearing.[2] They agreed that the issue of attorney's fees would be determined by the trial court. Appellants then moved for summary judgment on the ex post facto issue, arguing that the prohibition against their reapplying for commutation until 24 months after the denial of their previous applications violates the ex post facto clause. They appeal from the trial court's ruling that the regulation does not violate the ex post facto clause. EX POST FACTO VIOLATION The governor has the power to commute prisoners' sentences in accordance with conditions, restrictions, and limitations provided by law. Ariz. Const. art. 5, § 5; A.R.S. § 31-443. The Board of Pardons and Paroles is granted exclusive power to recommend commutations, and the governor may not commute a sentence without the board's recommendation. A.R.S. § 31-402. Subject to that restriction, however, the governor has the sole power to determine whether to commute a prisoner's sentence. State ex rel. Arizona State Board of Pardons & Paroles v. Superior Court, 12 Ariz. App. 77, 467 P.2d 917 (1970). At the time Clayton, Williams, and Doyal committed their crimes, the board regulation permitted a prisoner to reapply six months after his application was denied if he had not previously served time in a juvenile or adult penal institution and a year after denial if he had previously served time. At the time Arnold committed his crime in 1975, all prisoners were required to wait 12 months to reapply. In 1980, the current regulation was adopted. 
Prisoners are now required to wait 24 months after a denial to reapply for commutation. A.C.R.R. R5-4-602. Appellants contend that the application of the current regulation to them instead of the regulation in existence when they committed their crimes is an ex post facto violation. Because there are no cases directly on point, appellants argue that the holding in Weaver v. Graham, 450 U.S. 24, 101 S.Ct. 960, 67 L.Ed.2d 17 (1981) supports them. In that case, the Court held that an adverse change in a Florida statute governing prisoners' earning of good time credits could not be applied to prisoners whose crimes were committed before the change. As the Court explained in that case, The ex post facto prohibition forbids the Congress and the States to enact any law `which imposes a punishment for an act which was not punishable at the time it was committed; or imposes additional punishment to that then prescribed.' ... Through this prohibition, the Framers sought to assure that legislative Acts give fair warning of their effect and permit *158 individuals to rely on their meaning until explicitly changed.... The ban also restricts governmental power by restraining arbitrary and potentially vindictive legislation. 450 U.S. at 28-29, 101 S.Ct. at 964, 67 L.Ed.2d at 22-23, quoting Cummings v. Missouri, 71 U.S. (4 Wall.) 277, 325-26, 18 L.Ed. 356, 363-64 (1867). For a law to be ex post facto, "it must be retrospective, that is, it must apply to events occurring before its enactment, and it must disadvantage the offender affected by it." Weaver, 450 U.S. at 29, 101 S.Ct. at 964, 67 L.Ed.2d at 23. Because the regulation adopted in 1980 is being applied to appellants, whose crimes were committed prior to 1980, it is retrospective. Appellants argue that the ex post facto clause applies to administrative regulations that are adopted pursuant to statute, in this case, A.R.S. § 31-401(F). The board does not dispute that fact, and neither do we. 
We must next determine whether appellants are "disadvantaged" by the retroactive application of the regulation. In doing so, we first examine the nature of commutation. An application for commutation of a prisoner's sentence to a lesser term is "nothing more than an appeal for clemency." Connecticut Board of Pardons v. Dumschat, 452 U.S. 458, 465, 101 S.Ct. 2460, 2464-65, 69 L.Ed.2d 158, 165 (1981). "[T]he Arizona courts have held that commutation is a matter of grace, not of right." Banks v. Arizona State Board of Pardons & Paroles, 129 Ariz. 199, 201, 629 P.2d 1035, 1037 (App. 1981). In both Banks and Dumschat, the courts held that a prisoner's right to commutation of his sentence is not a constitutionally protectible interest such that parole boards are required to state their reasons for denying commutation. Appellants rely on two cases involving retrospective application of laws that increase the time limits when a prisoner may reapply for parole. One case they cite, Watson v. Estelle, 859 F.2d 105 (9th Cir.1988), was later vacated, the court instead determining that no retroactive application had occurred. Watson v. Estelle, 886 F.2d 1093 (9th Cir.1989). In the other, the court held that a legislative change increasing the time between parole hearings was an ex post facto violation because it effectively deprived that prisoner, whose sentence was a short one, of any meaningful opportunity for parole. Rodriguez v. United States Parole Commission, 594 F.2d 170 (7th Cir.1979). Parole, however, is a different creature than commutation. The legislature has adopted criteria for the board to apply in determining whether to grant parole. A.R.S. § 31-412. A prisoner denied parole must be given a written statement specifying the reasons for the denial. A.R.S. § 31-411(F). 
For commutation, on the other hand, the prisoner must obtain a recommendation from the board, but the legislature has enumerated no criteria the board must apply in determining whether to recommend commutation. A.R.S. § 31-402(A). Likewise, the governor has sole power to act on any application recommended by the board however he or she sees fit. A.R.S. § 31-443. As Division One of this court noted in Banks, "[t]here is less basis for an Arizona prisoner to anticipate commutation than parole, given that some limited expectancy of parole may be created by the statutory and regulatory scheme establishing parole eligibility." 129 Ariz. at 201, 629 P.2d at 1037. The distinction between parole and commutation is even more significant within the context of ex post facto principles. As our earlier quote from Weaver indicates, the ex post facto clause applies to legislative acts. Although the regulation of which appellants complain was adopted by the board pursuant to the direction of the legislature, we cannot ignore the fact that the act of granting commutation is strictly a matter of executive grace. Therefore, changes in the procedure by which that grace is sought do not violate the ex post facto clause. *159 DENIAL OF ATTORNEY'S FEES Appellants next complain because the trial court rejected their request for attorney's fees incurred in their special action that sought a new hearing because of due process violations. They contend they are entitled to fees under one of three theories. Their first two theories are statutory. They claim a right to fees under either A.R.S. § 12-2030 or § 12-348. Section 12-2030 provides for an award of fees to a party other than the state or any of its political subdivisions "which prevails by an adjudication on the merits" in a mandamus action. 
Likewise, § 12-348(A)(5) permits a fee award in a special action proceeding brought to challenge a state action "to any party other than this state or a city, town or county which prevails by an adjudication on the merits." Appellants argue that they are entitled to fees because they successfully obtained new hearings from the board. That argument, however, ignores the fact that the new hearings were the result of a settlement, not an adjudication on the merits. Appellants contend that they were successful parties because they were able to obtain new hearings and that they are thus entitled to fees, analogizing to statutes such as A.R.S. § 12-341.01 and 42 U.S.C. § 1988. Section 12-341.01, however, requires that a party be "successful" in order to be awarded fees, and 42 U.S.C. § 1988 requires that the party be "the prevailing party." Appellants' attempt to impose the language of those statutes on the applicable statutes here would render the phrase "adjudication on the merits" meaningless. The parties' stipulation on the special action states that appellants were granted new Phase I hearings and that "[t]his settlement disposes of all issues pertaining to the Phase I Commutation Hearings that are the subject of the instant lawsuit." The judgment entered in this case states, "THE COURT FINDS that the issue which sought the Court to order the administrative hearing has been rendered moot by the agreement." By no stretch of the imagination can it be said that an adjudication on the merits occurred on that issue. Appellants' arguments that the denial of fees violates public policy because the state is not punished for having violated their due process rights and because indigent parties are thereby discouraged from seeking redress of their rights, are ones that must be addressed to the legislature. We cannot ignore the plain language of §§ 12-348 and 12-2030 which requires an adjudication on the merits for a fee award. 
Nor do we find any merit to appellants' third theory for the recovery of fees, the private attorney general rule. That doctrine permits a fee award to "a party who has vindicated a right that: (1) benefits a large number of people; (2) requires private enforcement; and (3) is of societal importance." Arnold v. Arizona Department of Health Services, 160 Ariz. 593, 609, 775 P.2d 521, 537 (1989). It is obvious in this case that appellants' special action was brought on behalf of themselves solely to obtain a new hearing only for appellants. Affirmed. LIVERMORE, P.J., and LACAGNINA, J., concur. NOTES [1] The board regulation provides for a Phase I review by the board of the documents and information on the prisoner to determine if further investigation is warranted. Only if the board decides to proceed further is a Phase II comprehensive investigation undertaken. During the second phase, witnesses may appear before the board. A.C.R.R. Rule R5-4-602(D). [2] Although neither party has mentioned it on appeal, the record reflects that Arnold, Williams, and Clayton were granted Phase II hearings after the stipulated new hearing. In May 1989, the board recommended to the governor that Clayton's minimum sentence be reduced to 55 1/2 years and that Williams's minimum sentence be reduced to 63 years. The maximum sentence for both was recommended to remain at life. Arnold's application was still pending in May 1989. Doyal's application was denied after his Phase I hearing. | Low | [
0.528688524590163, 32.25, 28.75 ] |
Manatee Memorial Hospital Manatee Memorial Hospital (MMH) is a private 319-bed health care facility located in Bradenton, Florida. History The hospital opened as Manatee Veterans Memorial Hospital on February 23, 1953 with a capacity of 100 beds. It was given its current name on July 1, 1963, to prevent people from mistaking it for a Veterans Administration facility due to "Veterans" in its name. References Category:Hospitals in Florida Category:Buildings and structures in Manatee County, Florida Category:Hospital buildings completed in 1953 | Mid | [
0.629370629370629, 33.75, 19.875 ] |
Jesse James (1939 film) Jesse James is a 1939 American western film directed by Henry King and starring Tyrone Power, Henry Fonda, Nancy Kelly and Randolph Scott. Written by Nunnally Johnson, the film is loosely based on the life of Jesse James, the notorious outlaw from whom the film derives its name. It is "notorious for its historical inaccuracy." The supporting cast features Henry Hull, John Carradine, Brian Donlevy, Jane Darwell and Lon Chaney, Jr.. The American Humane Association began to oversee filmmaking after a horse died when it was driven off a cliff on set. Plot A railroad representative named Barshee (Brian Donlevy) forces farmers to give up the land the railroad is going to go through, giving them $1 per acre (much less than fair price) for it. When they come to Jesse's home, Barshee is told by Jesse (Tyrone Power) that his mother Mrs Samuels (Jane Darwell) is the farm's owner. Barshee repeatedly tries to force her into selling, until her other son Frank James (Henry Fonda) gets involved. Frank fights and easily beats Barshee, but Jesse shoots Barshee in the hand, in self-defence. When arrest warrants are issued for Frank and Jesse, Major A. Rufus Cobb (Henry Hull), an editor in nearby Liberty, Missouri and uncle of Zerelda (Zee) Cobb (Nancy Kelly), Jesse's lover, quickly comes to tell them to leave. Frank and Jesse learn that Barshee is responsible for the death of their mother and Jesse kills him in revenge. This begins Frank and Jesse's career as outlaws. They are pursued relentlessly by the unscrupulous railway boss, McCoy (Donald Meek). Three years later, with a $5,000 reward on his head, Jesse marries Zee and turns himself in, at her insistence, having been promised a light sentence by Marshall Will Wright (Randolph Scott). 
But McCoy manages to manipulate the situation through his connections, by having the judge dismissed pre-trial, and installing a new judge, who is likely to favour McCoy's recommendation of imposing the death penalty for Jesse. Frank breaks Jesse out of jail, and the James gang continue their life of crime. Eventually Zee leaves him, taking their son Jesse Jr. with her. Years later, following an unsuccessful robbery, a wounded Jesse returns home and Zee joins him in the belief that they will escape to California. Meanwhile, Bob Ford (John Carradine), an old member of the James gang, together with his brother Charlie Ford (Charles Tannen), contact Jesse, claiming that Frank sent them to ask Jesse to participate in their next robbery. They assert that the job will earn them all a large sum of money for very little risk. Jesse nevertheless refuses the Ford brothers' offer, and the brothers exit the house. However, sensing an opportunity to claim the generous reward for Jesse's death, Bob Ford sneaks back inside, and shoots Jesse in the back, thereby killing him. Cast Tyrone Power as Jesse James Henry Fonda as Frank James Nancy Kelly as Zerelda (Zee) Randolph Scott as Will Wright Henry Hull as Major Rufus Cobb Slim Summerville as Jailer J. Edward Bromberg as Mr. Runyan Brian Donlevy as Barshee John Carradine as Bob Ford Donald Meek as McCoy John Russell as Jesse James, Jr. Jane Darwell as Ma James Charles Tannen as Charles Ford Claire Du Brey as Mrs. Bob Ford Willard Robertson as Clarke Harold Goodwin as Bill Ernest Whitman as Pinkie Eddy Waller as Deputy Paul Burns as Hank Spencer Charters as Minister Arthur Aylesworth as Tom Colson Charles Middleton as Doctor Charles Halton as Heywood George Chandler as Roy Harry Tyler as Farmer Virginia Brissac as Boy's Mother Ed Le Saint as Judge Rankin John Elliott as Judge Mathews Erville Alderson as Old Marshall George Breakston as Farmer Boy Lon Chaney, Jr. 
as One of James' Gang Tom London as Soldier (uncredited) Reception Jesse James was a smash hit and the fourth largest-grossing film of 1939, behind Gone with the Wind, The Wizard of Oz, and The Hunchback of Notre Dame, and in front of Mr. Smith Goes to Washington. A sequel, The Return of Frank James, directed by Fritz Lang and with Henry Fonda reprising his role as Frank James along with a variety of other actors playing the same characters as they had in Jesse James, was released in 1940. A remake was directed by Nicholas Ray in 1957, The True Story of Jesse James. Animal cruelty The film gained a measure of notoriety for a scene in which a horse falls to its death down a rocky slope toward the end of the film. This scene was one of many cited by the American Humane Association against Hollywood's abuse of animals, and led to the association's monitoring of filmmaking. However, according to Leonard Mosley's biography Zanuck: The Rise and Fall of Hollywood's last Tycoon, none of "the horses [had] been injured. Under Zanuck's direction, a short distance down the cliff, on a conveniently broad platform, the unit roper had arranged a soft landing for the horses." Production Much of the filming for Jesse James took place around the town of Pineville, Missouri in McDonald County, Missouri, because at the time the town and surrounding area looked much the same as it would have in the 1880s and 1890s. The town's historic Old McDonald County Courthouse, a National Register of Historic Places site, was featured in the film serving as a stand in for the Liberty, Missouri courthouse. Pineville still celebrates Jesse James Days annually in homage to the film and the movie stars who descended on the small town to make it. In their off time from filming, the films' stars and crew, including Tyrone Power, Henry Fonda and Randolph Scott, would seek out relaxation at the Shadow Lake resort in Noel, Missouri, on the shores of Elk River (Oklahoma). 
See also List of American films of 1939 References External links Category:1939 films Category:1930s historical films Category:1930s color films Category:1930s Western (genre) films Category:1930s biographical films Category:American films Category:American historical films Category:American Western (genre) films Category:American biographical films Category:English-language films Category:Biographical films about Jesse James Category:Films set in the 1870s Category:Films set in the 1880s Category:20th Century Fox films Category:Films produced by Darryl F. Zanuck Category:Films directed by Henry King Category:Films with screenplays by Nunnally Johnson Category:James–Younger Gang Category:Animal cruelty incidents Category:Films scored by Louis Silvers Category:Bank robbery in fiction Category:Cultural depictions of Jesse James | Mid | [
0.5539568345323741, 28.875, 23.25 ] |
Incidence of Pap test abnormalities within 3 years of a normal Pap test--United States, 1991-1998. Declines in cervical cancer incidence and mortality reported in the United States since the 1950s have been attributed to early detection and treatment of precancerous and cancerous lesions through the use of the Papanicolaou (Pap) test (1). More than 50 million Pap tests are performed each year (2); however, guidelines about the frequency of testing in women with a history of normal test results are inconsistent (3-5). To determine the incidence of cervical cytologic abnormalities following a normal Pap test, 1991-1998 data from the National Breast and Cervical Cancer Early Detection Program (NBCCEDP) were analyzed for this report (6). The findings indicated that within 3 years of a normal Pap test result, severe cytologic abnormalities were uncommon, and incidence rates were similar among women screened 1, 2, and 3 years following a normal Pap test. | Mid | [
0.627802690582959, 35, 20.75 ] |
A case of ARDS associated with influenza A - H1N1 infection treated with extracorporeal respiratory support. After the first outbreak identified in Mexico in late March 2009, influenza A sustained by a modified H1N1 virus ("swine flu") rapidly spread to all continents. This article describes the first Italian case of life-threatening ARDS associated with H1N1 infection, treated with extracorporeal respiratory assistance (venovenous extracorporeal membrane oxygenation [ECMO]). A 24-year-old, previously healthy man was admitted to the Intensive Care Unit (ICU) of the local hospital for rapidly progressive respiratory failure with refractory impairment of gas exchange unresponsive to rescue therapies (recruitment manoeuvres, pronation and nitric oxide inhalation). Extracorporeal respiratory assistance (venovenous ECMO) was initiated. It allowed a correction of the respiratory acidosis and made possible the transportation of the patient to the ICU (approximately 150 km from the first hospital). A nasal swab tested positive for H1N1 infection and treatment with oseltamivir was started. The chest computed tomography scan showed bilateral massive, patchy consolidation of lung parenchyma; lab tests showed leukopenia, elevated CPK levels and renal failure. The patient required high dosages of norepinephrine for septic shock and continuous renal replacement therapy. The clinical course was complicated by Pseudomonas aeruginosa superinfection, treated with intravenous and aerosolised colistin. ECMO was withdrawn after 15 days, while recovery of renal and respiratory function was slower. The patient was discharged from the ICU 34 days after admission. | High | [
0.7117241379310341, 32.25, 13.0625 ] |
990 F.2d 626 Hurman v. Port of Houston Authority* NO. 92-2038 United States Court of Appeals, Fifth Circuit. Mar 26, 1993 1 Appeal From: S.D.Tex. 2 REVERSED. * Fed.R.App.P. 34(a); 5th Cir.R. 34.2 | Mid | [
0.613107822410147, 36.25, 22.875 ] |
--- !ruby/object:RI::ClassDescription attributes: - !ruby/object:RI::Attribute comment: - !ruby/struct:SM::Flow::P body: Allow TLS negotiation? Defaults to true name: allow_tls rw: RW - !ruby/object:RI::Attribute comment: - !ruby/struct:SM::Flow::P body: How many seconds to wait for <stream:features/> before proceeding name: features_timeout rw: RW - !ruby/object:RI::Attribute comment: name: host rw: R - !ruby/object:RI::Attribute comment: - !ruby/struct:SM::Flow::P body: Keep-alive interval in seconds, defaults to 60 (see private method keepalive_loop for implementation details) name: keepalive_interval rw: RW - !ruby/object:RI::Attribute comment: name: port rw: R - !ruby/object:RI::Attribute comment: - !ruby/struct:SM::Flow::P body: Optional CA-Path for TLS-handshake name: ssl_capath rw: RW - !ruby/object:RI::Attribute comment: - !ruby/struct:SM::Flow::P body: Optional callback for verification of SSL peer name: ssl_verifycb rw: RW - !ruby/object:RI::Attribute comment: - !ruby/struct:SM::Flow::P body: whether to use the old and deprecated SSL protocol Defaults to false name: use_ssl rw: RW class_methods: - !ruby/object:RI::MethodSummary name: new comment: - !ruby/struct:SM::Flow::P body: The connection class manages the TCP connection to the Jabber server constants: [] full_name: Jabber::Connection includes: [] instance_methods: - !ruby/object:RI::MethodSummary name: accept_features - !ruby/object:RI::MethodSummary name: close! - !ruby/object:RI::MethodSummary name: connect - !ruby/object:RI::MethodSummary name: is_tls? - !ruby/object:RI::MethodSummary name: start - !ruby/object:RI::MethodSummary name: starttls name: Connection superclass: Stream | High | [
0.681265206812652, 35, 16.375 ] |
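The RI entry in the row above documents xmpp4r's Jabber::Connection class. A minimal usage sketch, assuming the xmpp4r gem: the attribute names and defaults (allow_tls, keepalive_interval, features_timeout, connect) follow the summary above, while the helper name, host, and port are invented for illustration.

```ruby
# Minimal sketch, assuming the xmpp4r gem documented by the RI entry above.
# The helper name, host and port are invented for illustration.
begin
  require 'xmpp4r'
rescue LoadError
  warn 'xmpp4r gem not installed -- sketch only'
end

# Open a connection with a few of the documented knobs tuned.
def open_jabber_connection(host, port = 5222)
  conn = Jabber::Connection.new
  conn.allow_tls = true         # negotiate TLS when offered (documented default)
  conn.keepalive_interval = 30  # shorten the 60-second keep-alive default
  conn.features_timeout = 10    # seconds to wait for <stream:features/>
  conn.connect(host, port)
  conn                          # caller is responsible for conn.close!
end
```

If the gem is installed, `open_jabber_connection('xmpp.example.org')` would return a connected object; whether the installed xmpp4r version accepts these exact calls should be verified against its own documentation.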
HKS Premium Day Part 2: A Day of "Go" We left Part 1 of the HKS Premium Day coverage with a nice collection of cars scattered around the paddocks and parking areas of Fuji Speedway, but what is the point of taking a fast car to a world-renowned Grand Prix circuit and not seeing what it can really do? This is where the time attack event at HKS Premium Day comes in and turns "show" into "go". This is a perfect opportunity for all participating tuners and private owners to see whether their efforts were worth the time and investment. Throughout the day there were several competitive outings, as well as a pure exhibition event where a few lucky automotive fans could observe the action from a very special point of view. There was plenty of action and, as usual, the scenery tends to leave everyone speechless. But this is not what we are here for; we are here only for the cars, and with such vast variety the event was broken into several classes, as well as a special allocation for a race car vs. tuner car battle, which featured this Endless Z4 E89 GT3 car, which in the capable hands of Kyosuke Mineo managed to lap Fuji Speedway in 1.40.543. While impressive for any type of vehicle, this time was nowhere close to the flagship machine of HKS Technical Factory driven by the mighty Nobuteru Taniguchi, who set a new record for an R35 running on slick racing tires: 1.37.773. The Option Super Lap battle featured a great variety of cars built by various tuners from across Japan. With the cars featuring completely different approaches to tuning and performance, this was by far the most exciting event to watch. We start with the Car Station Marsche WRX Sti, which we had a close encounter with during our last update. With a mostly stock motor, modified exhaust, ECU and aftermarket suspension, the car managed a respectable 1.58.132 while running on road-legal radial tires - one of the qualifying conditions for this class.
Of course, on the other end of the spectrum is the Garage G Force EVO9, which showed how much potential these tuned cars actually have. With power rated at 665HP, the CT9A managed to put a remarkable 1.42.154 on the board, while sounding absolutely mean in the process. Speaking of the variety of cars, the BMW E46 by Cockpit Tatebayashi could not go unnoticed, as it was running a 400HP NA tune and managed to lap Fuji at 1.56.081. Japanese sports cars were at the main stage, however, as this PAN SPEED FD3S RX-7 managed a 1.45.146. When we speak about classic Japanese sports cars, however, there is only one which was truly revolutionary on a global scale: the Honda NSX. With a full aluminum frame, lightweight body, a powerful V6 motor, and tuning and testing done by Ayrton Senna himself, it was the NSX that placed JDM performance on the map. It was therefore a given that we would see at least one at this event, and it just happened to be one of the fastest ever built. The 690HP NSX NA1 built by Esprit and driven by Tarzan Yamada clocked an impressive 1.42.466. But it was this Top Fuel AP1 in the capable hands of Nobuteru Taniguchi that reminded us why the classic S2000 was so awesome. Alright, not many factory components are left on this insanely quick "what used to be a 2-seater now converted to race monster" machine, but an incredible time of 1.39.131 shows the sheer potential of classic Japanese sports cars. While the NSX was the most revolutionary Japanese car, the Skyline GTR is definitely the most recognizable and sought after - partially because of its Japan-only production, but mostly for its absolutely limitless tuning potential. The field ranged from the more moderately tuned Revolfe S.A R33, which scored a time of 1.47.115, to some of the most incredible machines known in the JDM world. As usual a great variety of R34's were present, as various shops are still exploring methods of extracting power out of what could possibly be referred to as the best engine ever made.
Garage Ito GTR's RB26 has been fitted with a stroker kit and aftermarket turbo on top of various other updates, producing a mean 650 horses, which allowed it to lap Fuji Speedway in just 1.41.508. Art tech Hanatsuta's R34 takes a simpler approach to tuning by attempting to create a street-tuned GTR suitable for daily driving. With a respectable time of 1.47.102 it proved to be a very capable track car as well. With all the tuned cars done with their runs, it was time for the big boys to step up to the plate. The R35 battle, featuring some of the most powerful machines seen in Part 1 and Tokyo Auto Salon, was about to start. We begin with the Power House Amuse R35, which is configured for street performance, featuring Amuse's own titanium exhaust as well as cooling and intake manifold upgrades. Even with such a light tune the car managed to achieve 1.51.334 around Fuji Speedway. With most other tuners presenting their 1000+ HP GTR's, the battle was heating up. Wing Takeo featured their demo car from Tokyo Auto Salon, which has an extensive list of upgrades highlighted by a Wing Takeo original ECU tune, Trust 4.2L stroker kit, RX1200 turbo and high-capacity injectors, with an RH9 titanium exhaust system to relieve back pressure. The machine in the hands of Tarzan Yamada managed a very quick 1.45.361 while running on radial tires. With competitors' times being so quick, the pressure inside Top Secret's pit was building up. With their car experiencing failures during previous runs, everything was at stake today. The runs looked very promising, as the 1200HP R35 GTR looked stable and incredibly quick in the hands of experienced Super GT driver Yasushi Kikuchi. But bad luck seems to follow the Top Secret super GTR, as one of the tires failed in the high-speed section and caused extensive damage to the front fender and surrounding components. The time recorded does not do the car any justice, as only 1.46.430 was showing on the clock.
Following this disappointment, Smokey Nagata decided to go back to the drawing board and essentially rebuild the car. We had a quick encounter with the Kansai Service R35 at Tokyo Auto Salon, where I specifically highlighted the track-oriented build. Much to my delight the car was in the capable hands of Nobuteru Taniguchi, who scored an impressive 1.42.752 and was a runner-up... to himself. Yes, Nob Taniguchi made the most of his track time as he spent the entire day jumping from one car to the next, obliterating lap records in the process. It was his last run of the day that turned the most heads, as many people had the privilege of witnessing the Varis-kitted 4.1L 1200HP Kamikaze R going out on track for the first time and absolutely killing it. Many have said that such performance would be impossible on radial tires, but the perfectly executed tuning and setup by HKS Technical Factory and the incredible driving skills of Nob Taniguchi proved everyone wrong. 1.41.743 was displayed at the top of the board at the end of the day, and that is the best result ever for an R35 on radial tires. On this note we wrap up the time attack event and track-side coverage of the HKS Premium Day. We have seen some amazing cars, stationary on display at the paddocks and being pushed to the limits and beyond on the track, combining for a magnificent event defining what Japanese car culture is all about. I am grateful that aftermarket and OEM parts manufacturers like HKS take the time and invest in what started off as a pure marketing showcase and has grown into one phenomenal automotive event. This concludes our coverage of HKS Premium Day at Fuji Speedway, but make sure to check back soon for more events, crazy cars, and test drives here on motorflair.com. As always, thank you for your comments, shares and follows. | Mid | [
0.5770171149144251, 29.5, 21.625 ] |
Video: The robbery Joe Francis was at the epicenter of events in Hollywood that dark night of January 22, 2004. Francis, 33 years old, got obscenely rich by inventing the video series called "Girls Gone Wild." Francis has sold enough videos to have a state-of-the-art production office in Santa Monica, a mansion in Bel Air, a home in the Caribbean, two private jets, a Ferrari and a Bentley. On that day in January, Francis was robbed at his home by an intruder with a gun. Francis says he was frightened for his life. “I had seen these happen before. Guys who came in your house and telling these victims that they weren't gonna do anything if they cooperated. But then at the end, they always killed them. So all these things were playing out in my mind. That, oh my God, ‘this is how it all ends.’” The intruder also ordered Francis to lie face down on the bed, pulled down his pants, and revealed what he'd brought with him to Francis' house: a video camera and a sex toy -- which he posed suggestively on Joe's posterior. According to L.A. county prosecutor Hoon Chun, “The purpose of the video was quite clearly extortion— [the perpetrator] demanding various sums of money in order to not publish the videotape that he had forced Mr. Francis at gunpoint to make.” Joe Francis, who'd made a fortune getting girls to strip for the camera, became the star in his own nasty video – the only difference is that he was forced to do it at the point of a gun. Is it karma? Some people feel that what happens on "Girls Gone Wild" videos is also obscene -- Francis and his cameramen persuading young, often inebriated women to expose themselves and even have sex with each other. (Just last week, Joe and his company pleaded guilty to violating a federal law designed to prevent pornographers from working with people under 18.) What the extortionist didn't count on was that Francis would not be threatened by the release of the video tape.
“That just is ridiculous to me… that somebody's gonna think the owner of "Girls Gone Wild" is gay,” says Francis. | Low | [
0.46236559139784905, 26.875, 31.25 ] |
Endogenous and exogenous protection of the BBB in stroke. Blood-brain barrier (BBB) dysfunction occurs in a wide variety of neurological diseases and injuries (e.g. stroke). Such dysfunction may participate in those states by enhancing the influx of leukocytes into the brain, allowing the entry of potentially neurotoxic blood components and causing vasogenic edema. In addition, it may affect disease treatment (e.g. hemorrhagic transformation is a major limiting factor for the use of tissue plasminogen activator-induced reperfusion therapy for ischemic stroke). There is, therefore, a great need for methods to protect the BBB. Therapeutic targets may potentially be identified by examining which endogenous mechanisms are altered in disease states. We have shown that preconditioning stimuli can protect the BBB and cerebral endothelial cells in vivo and in vitro. We have also shown that stroke-related factors cause a marked increase in the expression of the cystine/glutamate exchanger (system xc-), a regulator of intracellular glutathione, in cerebral endothelial cells. This exchanger is regulated by nrf2 (an anti-oxidant transcription factor) and xc- can be markedly upregulated by exposure to sulforaphane, an activator of Nrf2 and a component of cruciferous vegetables. These results have led us to hypothesize that: Nrf2 and the proteins it regulates (e.g. xCT, heme oxygenase 1 and ferritin) may be a target for protecting the BBB. As Nrf2 regulation of these proteins requires protein synthesis, we also hypothesize that the function of this system is to protect against delayed BBB disruption, particularly due to migrating leukocytes in ischemia. These hypotheses will be examined in five specific aims: 1+2) Determine whether upregulation of system xc- by stroke-related factors, inflammatory mediators or sulforaphane is protective. 
3+4) Determine whether Nrf2 is activated in the cerebral endothelium after stroke or inflammation and whether its activation and the upregulation of downstream proteins will protect the cerebral endothelium. 5) Examine whether treatment with sulforaphane can protect the BBB in vivo. These specific aims will be examined in vitro, to allow elucidation of molecular mechanisms, and in vivo, to determine pathophysiological relevance. The results should highlight endogenous BBB protective mechanisms and the potential exogenous compounds to activate or inhibit those mechanisms. PUBLIC HEALTH RELEVANCE: Brain blood vessels have very specialized functions, forming a blood-brain barrier. Disruption of that barrier occurs in many neurological disorders and injuries, contributing to brain dysfunction. This proposal examines natural defense mechanisms that may protect the blood-brain barrier, and how to activate those mechanisms or prevent their inactivation therapeutically. | High | [
0.724386724386724, 31.375, 11.9375 ] |
Paul Bastock: How does record-breaking goalkeeper stack up against the legends? Wisbech goalkeeper Paul Bastock broke Peter Shilton’s record when he made the 1,250th appearance of his career at the weekend. But how does that tally stack up against some sporting legends? Take our quiz to find out. | High | [
0.661577608142493, 32.5, 16.625 ] |
STATE OF WEST VIRGINIA FILED SUPREME COURT OF APPEALS March 22, 2016 RORY L. PERRY II, CLERK SUPREME COURT OF APPEALS MARTHA D. NEELY, OF WEST VIRGINIA Claimant Below, Petitioner vs.) No. 15-1052 (BOR Appeal Nos. 2050350, 2050351, 2050354, & 2050360) (Claim No. 2013022944) WEST VIRGINIA UNITED HEALTH SYSTEM, Employer Below, Respondent MEMORANDUM DECISION Petitioner Martha D. Neely, by M. Jane Glauser, her attorney, appeals the decision of the West Virginia Workers’ Compensation Board of Review. This appeal arises from the Board of Review’s Final Order dated October 2, 2015, in which the Board affirmed a March 9, 2015, Order of the Workers’ Compensation Office of Judges. In its March 9, 2015, Order, the Office of Judges affirmed the claims administrator’s July 17, 2014, and July 28, 2014, decisions denying requests to add the diagnoses of herniated cervical disc and right shoulder strain as compensable components of Ms. Neely’s claim for workers’ compensation benefits.1 The Board affirmed a second Order of the Workers’ Compensation Office of Judges dated March 9, 2015. In its second Order dated March 9, 2015, the Office of Judges affirmed the claims administrator’s July 11, 2014, decision denying Ms. Neely’s request to add depressive disorder as a compensable diagnosis. Additionally, the Board affirmed a March 11, 2015, Order of the Workers’ Compensation Office of Judges. In its March 11, 2015, Order, the Office of Judges affirmed the claims administrator’s October 7, 2014, decision denying Ms. Neely’s request for payment of services rendered for the treatment of depressive disorder. Finally the Board affirmed a March 19, 2015, Order of the Workers’ Compensation Office of Judges. In its March 19, 2015, Order, the Office of Judges affirmed the claims administrator’s January 7, 2014; January 13, 2014; and February 11, 2014, decisions denying Ms. Neely’s request for payment of services rendered by David Lynch, M.D. The Court 1 Ms. 
This Court has carefully reviewed the records, written arguments, and appendices contained in the briefs, and the case is mature for consideration. This Court has considered the parties’ briefs and the record on appeal. The facts and legal arguments are adequately presented, and the decisional process would not be significantly aided by oral argument. Upon consideration of the standard of review, the briefs, and the record presented, the Court finds no substantial question of law and no prejudicial error. For these reasons, a memorandum decision is appropriate under Rule 21 of the Rules of Appellate Procedure.

Ms. Neely was injured on February 25, 2013, while transporting a very large patient. On March 7, 2013, her claim for workers’ compensation benefits was held compensable for cervical, thoracic, and lumbar sprains. A cervical disc protrusion/herniation was later added as a compensable component of the claim.[1] In the instant appeal, Ms. Neely is requesting authorization for services rendered by Dr. Lynch on multiple occasions, the addition of a right shoulder sprain as a compensable component of the claim, the addition of depressive disorder as a compensable component of the claim, and authorization for payment of services rendered for the treatment of depressive disorder.

On January 7, 2014, the claims administrator denied a request for authorization of payment for services rendered by Dr. Lynch on October 21, 2013. On January 13, 2014, the claims administrator denied a request for authorization of payment for services rendered by Dr. Lynch on November 19, 2013. On February 11, 2014, the claims administrator denied a request for authorization for payment of services rendered by Dr. Lynch on December 17, 2013. On July 11, 2014, the claims administrator denied a request to add depressive disorder as a compensable component of Ms. Neely’s claim for workers’ compensation benefits. On July 17, 2014, the claims administrator denied a request to add a right shoulder sprain as a compensable component of Ms. Neely’s claim for workers’ compensation benefits. On July 28, 2014, the claims administrator denied a repeat request to add the right shoulder as a compensable diagnosis. Finally, on October 7, 2014, the claims administrator denied a request for payment of services rendered on August 21, 2014, in correlation with the treatment of depressive disorder.

On March 9, 2015, the Office of Judges affirmed the July 17, 2014, and July 28, 2014, claims administrator’s decisions. In a separate Order dated March 9, 2015, the Office of Judges affirmed the July 11, 2014, claims administrator’s decision. On March 11, 2015, the Office of Judges affirmed the October 7, 2014, claims administrator’s decision. Finally, on March 19, 2015, the Office of Judges affirmed the January 7, 2014; January 13, 2014; and February 11, 2014, claims administrator’s decisions. The Board of Review affirmed the reasoning and conclusions contained in all four Orders of the Office of Judges in its decision dated October 2, 2015.

Regarding the request for authorization of services rendered by Dr. Lynch, the Office of Judges found that Ms. Neely failed to introduce any evidence relating to the services rendered on October 21, 2013. Regarding the services rendered on November 19, 2013, and December 17, 2013, the Office of Judges found that Dr. Lynch’s treatment notes indicate that he was treating Ms. Neely for various sprains/strains of the spinal column.[2] In that regard, the Office of Judges found that on September 12, 2013, Bill Hennessey, M.D., performed an independent medical evaluation and opined that Ms. Neely was in no further need of treatment in relation to the February 25, 2013, injury. Further, the Office of Judges found that on December 5, 2013, Bruce Guberman, M.D., performed an independent medical evaluation and opined that no further treatment or diagnostic testing was necessary, aside from the continuation of medications and follow-up visits with her physician. Despite the fact that the evidence of record indicates that Dr. Lynch was rendering treatment for sprains/strains, on appeal Ms. Neely asserts that she was actually receiving treatment for a herniated nucleus pulposus at C5-6, which was previously added as a compensable component of the claim. As was noted by the Office of Judges, Ms. Neely has not introduced any evidence indicating that Dr. Lynch was rendering treatment on the dates at issue for anything other than a cervical sprain which, according to the medical evidence of record, should have resolved long ago.

Regarding the request to add a right shoulder sprain as a compensable component of the claim, the Office of Judges noted that Ms. Neely has repeatedly litigated this very issue, with her continued submission of virtually identical evidence in each repeated request to add a right shoulder sprain as a compensable diagnosis. In Martha D. Neely v. West Virginia United Health System, No. 15-0477 (memorandum decision), we affirmed the most recent rejection of Ms. Neely’s request to add the right shoulder as a compensable body part.

Regarding the request to add depressive disorder as a compensable diagnosis and authorize treatment for such, the Office of Judges found that Ms. Neely failed to introduce any evidence indicating that she complied with the requirements outlined in West Virginia Code of State Rules § 85-20-12.4 (2006). West Virginia Code of State Rules § 85-20-12.4 provides:

    Services may be approved to treat psychiatric problems only if they are a direct result of a compensable injury. As a prerequisite to coverage, the treating physician of record must send the injured worker for a consultation with a psychiatrist who shall examine the injured worker to determine 1) if a psychiatric problem exists; 2) whether the problem is directly related to the compensable condition; and 3) if so, the specific facts, circumstances, and other authorities relied upon to determine the causal relationship. The psychiatrist shall provide this information, and all other information required in section 8.1 of this Rule in his or her report. Failure to provide this information shall result in the denial of the additional psychiatric diagnosis. Based on that report, the Commission, Insurance Commissioner, private carrier, or self-insured employer, whichever is applicable, will make a determination, in its sole discretion, whether the psychiatric condition is a consequence that flows directly from the compensable injury.

Moreover, Ms. Neely has failed to introduce any evidence whatsoever relating to a diagnosis of depressive disorder on appeal to this Court. As was noted by the Office of Judges, because depressive disorder is not currently a compensable component of the claim, Ms. Neely is not entitled to authorization of treatment for this condition.

For the foregoing reasons, we find that the decision of the Board of Review is not in clear violation of any constitutional or statutory provision, nor is it clearly the result of erroneous conclusions of law, nor is it based upon a material misstatement or mischaracterization of the evidentiary record. Therefore, the decision of the Board of Review is affirmed.

Affirmed.

ISSUED: March 22, 2016

CONCURRED IN BY:
Chief Justice Menis E. Ketchum
Justice Robin J. Davis
Justice Brent D. Benjamin
Justice Margaret L. Workman
Justice Allen H. Loughry II

[1] Ms. Neely’s request to add a herniated cervical disc as a compensable component of the claim was rejected based upon a finding that a cervical disc protrusion has already been added as a compensable diagnosis. The denial of Ms. Neely’s request to add a herniated cervical disc as a compensable component of the claim was not appealed to this Court.

[2] None of Dr. Lynch’s treatment notes were provided for review on appeal to this Court.