Columns: text (string, lengths 8 to 5.74M), label (3 classes), educational_prob (sequence of length 3)
Knock knock, who’s there? Orange you glad to see us! The Industrial Steel Backlit Bar (Orange Baroque) is a fantastic, untraditional backlit bar for your next special occasion. With bonus storage, the Industrial Steel Backlit Bar (Orange Baroque) is also mounted on locking casters for ease of mobility. The orange vinyl pattern is illuminated by the fluorescent lighting built into the Industrial Steel Backlit Bar (Orange Baroque). Rent an Industrial Steel Backlit Bar (Orange Baroque) from FormDecor Furniture Rental. Our inventory of bars may be rented for any type of event. FormDecor delivers in Los Angeles as well as the rest of Southern California, and we ship nationwide via trusted carriers.
Low
[ 0.38481012658227803, 19, 30.375 ]
Jessica Lutz will skate for Switzerland in the Sochi Olympics next month. For most of the past year, she worked at a Washington coffee shop in the mornings and trained in the afternoons. Jessica Lutz

Originally published on January 31, 2014 8:59 am

Jessica Lutz is on her way from making arty designs in coffee cups to carving Olympic ice in Sochi. And although she grew up in the U.S., Lutz will compete for the Swiss hockey team. Her story is an example of the sacrifices and strategies many athletes rely on to get to the games. For most of the past year, Lutz, 24, crafted latte art as a barista in Washington, D.C. Born and raised in the D.C. suburb of Rockville, Md., Lutz had a chance to compete for Switzerland because of her father's nationality (she's a dual citizen). Her path to Sochi hasn't always been certain, or direct. Lutz moved to Bern, Switzerland, in the summer of 2010 after graduating from the University of Connecticut, where she played Division I hockey. Lutz planned to play after college, but she was realistic about her ability to make the U.S. national team. An overwhelming number of athletes were competing for only a few spots. "I wanted to play at the highest level that I could," Lutz said. "It was always a goal to go to the Olympics, and so I figured out that my chances in Switzerland would be a lot better." The most competitive countries in women's ice hockey are the U.S. and Canada, who often take the top two spots in competitions. But the Swiss team is hoping for a medal. "I think we have a little bit of a chance for a bronze," Lutz says. "That's our goal." In January, Lutz was officially selected as one of 21 women to represent Switzerland in the 2014 Winter Olympics. She spent the past three years playing for EV Bomo Thun and one other team in the Swiss women's hockey league. Players who don't live in the country must play in the league for two seasons before they're eligible for the national team. Over time, Lutz realized that ice hockey was the only aspect of her life abroad that seemed to work out. It was difficult to find a job in Switzerland because her bachelor's degree was not accepted, and she missed her friends. "I didn't really see myself staying long-term," she said. So with the Olympics on the horizon and an invitation to play for the Swiss in a December tournament in Austria, Lutz moved back to Washington, D.C., last March. In D.C., she landed a job as a barista at The Coffee Bar, where she perfected the coffee art skills she'd learned working at an Italian cafe in Switzerland. Her morning shifts also allowed her time to focus on her Olympic goal. "I was looking for a job that would be flexible so I could continue to train," she says, "but also something that I enjoyed doing." To stay in shape for a potential spot on the national team, Lutz worked out and played pick-up hockey games at local ice rinks. She was almost always the only female player on the ice, which she said was a good challenge. "It's definitely hard to be motivated when you're training by yourself all the time," she said. As unique as Lutz's story might seem, it's not unusual for Swiss hockey players to disperse and play in other countries. Two of her teammates played in Sweden last year; others play for U.S. colleges. Their only chance to be evaluated by the Swiss coaches is at a handful of tournaments. Aside from those tournaments, Lutz hadn't seen any of her Swiss coaches or teammates since last April's World Championships in Canada. 
But that doesn't mean they were all taking a break: All athletes are responsible for staying in shape, she says. "You have to show up and really perform well to prove that you've been working hard," Lutz said, "and you've been doing what you're supposed to be doing." That determination is what secured her a spot on the Swiss team. Lutz is now reunited with her hockey team in Switzerland. As she prepares to travel to Sochi, she's thinking about teamwork — whether skating on the ice or dealing with a line of customers on a Saturday morning. "Everyone needs to be on, doing their job and working well together. I think that's the same for a hockey game," she says. "Everyone has to be doing their individual job for the team to succeed." Lauren Katz is an intern on NPR's Social Media desk. We'll have an update on Lutz's experience in Sochi as the games progress.
Mid
[ 0.62200956937799, 32.5, 19.75 ]
Q: How does one upgrade garage electrical from an In-House main? tl;dr Want to know the easiest way to upgrade electrical in garage from 15A to 20A. Actual Post Garage is on a 15 amp circuit, doesn't work out too well for a 15A air compressor. The main is in the house, ~20 yards away give or take. I can do the wiring myself, but I don't know how to go about finding the existing wiring's path; will standard fishing line work? I saw this related question, so if there are multiple 90s in my path then probably not. What I know: I'll probably be adding a subpanel in there for convenience. I can calculate Amp/Gauge requirements for cabling and breakers. This is probably going to be a pain. What I don't know: What codes will apply in this situation. I know there are certain requirements for buried cables though, just not what. How/Where the cable should enter the house. If I can or even if I should trace the existing wiring from the garage and use that path, or start from scratch. Any best practices or pitfalls to avoid for this type of job. My question differs from this one in that I want to know what's involved with running new cable from the garage to the house. I know it can be done, I just want to know how it's done, essentially. Info about the area: There's a standard city backyard between the garage and house (a big tree, lawn, back porch w/ walkway), and this is MI in case there are any differences in code.

A: tl;dr - if you are going to all that work, and adding a subpanel, you presumably want a bit more than 20 amps (I think it needs to be 30 amps minimum for code these days, and 60 amps is probably better.) You'll have to dig a ditch. At that point, my opinionated opinion is that you should go ahead and put in conduit, and an additional conduit for any current or future possibility that you might want cable, phone, network, etc. out there. Ditches are expensive and a lot of work - conduit is cheap, once you have the ditch. Often cheaper (with wire) than "direct burial" cable, and far more resistant to damage in the future - plus it does offer you the potential of pulling out the wire and pulling in new wire if there ever was a problem - but that's low odds of you making any use of it on the electrical side. Network, quite possible. Before digging a ditch in a city (especially) backyard, call Dig-Safe and have all (officially known about) services (gas, phone, electric, water, sewer & things you may not know about) located. Turn off the circuit to the garage - I would not worry too much about where it is (unless it runs in a conduit that you might be able to re-use - which is not too likely), but odds are that you'll find it when digging, and it's less exciting if it's turned off when you do. Any wire used must be rated for wet locations - not difficult, just be sure it is. Any exterior conduit is assumed by code to be wet (and that's generally true.) If the portion of the backyard you are crossing is not travelled by cars and trucks (not crossing the driveway) depth is sufficient if the TOP of the conduit is 18" below finished grade. Be sure to lay "buried electric line below" tape in the top 6" of the trench fill. If you are not digging below frost line (4 feet or more where you are, probably) you definitely need to bring the ends of the conduit up vertically at buildings, and provide a slip (expansion) joint, as the conduit will move with frost. That's generally needed even if the conduit is buried below frost-line as well, unless it's going straight into a basement below frost-line, but it's especially critical when the conduit is above frost line. If you don't find the cost prohibitive, a layer of XPS foam over the top of the conduit provides one more indication that there is something there (when someone else is digging, later) and can reduce frost movement a little bit (or a lot if it's wide.) Alternatively, 2" of concrete over the conduit provides some serious protection on top of the conduit, and reduces the required depth to 6" in Rigid or IMC metallic conduit (which may be well worth it in your situation to save on digging) or 12" in PVC conduit. If you happen to want a walkway that would happen to run where the electric service would, a 4" thick concrete slab extending 6" beyond the conduit reduces the required burial depth to 4" (ie, right under the slab.) Look for NEC table 300.5 for more detail.
Mid
[ 0.6091370558375631, 30, 19.25 ]
TechWeb: HP To Offer Linux Support Through OpenView (May 21, 1999, 14:43): "Hewlett-Packard, in Palo Alto, Calif., announced Thursday that in June, support will be available for Linux through HP OpenView IT/Operations, the vendor's systems- and network-management solution."

Wired: Caldera 2, Microsoft 0 (May 21, 1999, 11:29): "A federal judge in Salt Lake City has ruled that news organizations have a First Amendment right to see hundreds of Microsoft documents from an antitrust case in Utah."

PC Week: SGI goes open source (May 21, 1999, 01:12): "SGI Inc. is set to announce plans to give the Linux community the source code of XFS, its 64-bit Unix file system. Industry watchers say this marks the first time a company the size of SGI has released a major chunk of its software intellectual property to the open source community."
Mid
[ 0.641288433382137, 27.375, 15.3125 ]
List of video game crowdfunding projects

The following is an incomplete list of notable video game projects (in hardware, software, and related media) that have embarked upon crowdfunding campaigns. In the original table, a project is counted as having received the funds only when the amount raised is highlighted in green.

See also: List of most successful crowdfunding projects
Mid
[ 0.5408560311284041, 34.75, 29.5 ]
Many methods have failed in the effort to secure digital communications, but one has remained relatively reliable: Faraday cages. These metallic enclosures prevent all incoming and outgoing electrical charges, and have successfully been used in the past by those hoping to conceal their wireless communications. You may remember Chelsea Manning used a makeshift Faraday cage last year when she asked New York Times reporters to dump their phones in a microwave to prevent prying ears from listening in. Despite their often unorthodox appearance, Faraday cages are largely considered an effective, if not extreme, additional step in securing communications. While many have utilized this technology for personal uses (a bar owner in the UK even created his own Faraday cage to keep drinkers off their phones), larger institutions like banks, governments, and other corporations turn to Faraday cages to house some of their most sensitive data. These systems also vary in size. Smaller Faraday cages and Faraday bags may be used for individuals while larger corporations may create entire Faraday conference rooms. It appears, however, that these metal mesh cages may have a chink in their armor. A new attack method laid out in two recently released papers from researchers at the Cyber Security Research Center at Ben Gurion University in Israel shows how data could potentially be compromised even when encased in a Faraday cage. The extraction method, dubbed MAGNETO, works by infecting an “air-gapped” device—a computer that isn't connected to the internet—with a specialized malware called ODINI that regulates that device’s magnetic fields. From there, the malware can overload the CPU with calculations, forcing its magnetic fields to increase. A local smartphone (located a maximum of 12 to 15 centimeters from the computer) can then receive the covert signals emanating off the magnetic waves to decode encryption keys, credential tokens, passwords and other sensitive information. Mordechai Guri, who heads research and development at the Cyber Security Research Center, said he and his fellow researchers wanted to show that Faraday cages are not foolproof. “Faraday cages are known for years as good security for electromagnetic covert channels,” Guri told Motherboard in an email. “Here we want to show that they are not hermetic and can be bypassed by a motivated attacker.” According to the research, even if phones are placed on airplane mode in secure locations, these extraction techniques could still work. Since the phone’s magnetic sensors are not considered communication interfaces, they would remain active even in airplane mode. The foundations for the researchers’ breakthrough were built off of previous public examples of offline computer vulnerabilities. Last July, Wikileaks released documents allegedly demonstrating how the CIA used malware to infect air-gapped machines. The tool suite, called “Brutal Kangaroo,” allegedly allowed CIA attackers to infiltrate closed networks by using a compromised USB flash drive. The researchers at the Cyber Security Research Center highlighted “Brutal Kangaroo” in their paper as a real-life example of the fallibility of air-gapped computers. The papers point out that air-gapped computer networks are being used by banks to store confidential information and by the military and defense sectors as well. Guri said that institutions hoping to address these security issues may face some difficulty. 
“In [the] case of the Magnetic covert channel, it's fairly challenging, since the computer must be shielded with a special ferromagnetic shield,” Guri said. “The practical countermeasures is the 'zoning' approach, where you define a perimeter in which not [every] receiver/smartphone allowed in.”
Mid
[ 0.6100917431192661, 33.25, 21.25 ]
One of the biggest strengths of Angular is its forms library for handling form logic. Even though Angular ships with built-in form validators such as required, it is sometimes necessary to create your own form validators.

Template driven and reactive forms

In Angular, there are two form modules: template driven and reactive. The template-driven approach lets you specify the form logic in the template, while the reactive approach lets you define the form as TypeScript code in the component. This means that creating custom validators for template driven forms is slightly different from reactive forms: the difference is basically that you wrap the reactive custom validator in a directive to make it work with template driven forms. If you are using template driven forms, I recommend coding your custom validators in a way that they are also compatible with reactive forms, should you want to use those.

Creating a custom validator for reactive forms

Creating a custom validator for reactive forms is actually simpler than for a template driven form. You only need to implement a ValidatorFn, which takes a form control and returns an error object. A date validator can be created as:

invalid-date.validator.directive.ts

import { AbstractControl, ValidatorFn } from '@angular/forms';

export function invalidDateValidatorFn(): ValidatorFn {
  return (control: AbstractControl): { [key: string]: any } => {
    const date = new Date(control.value);
    // Invalid if empty or if the value cannot be parsed into a valid Date.
    const invalidDate = !control.value || isNaN(date.getTime());
    return invalidDate ? { 'invalidDate': { value: control.value } } : null;
  };
}

Here we are validating whether the input can be converted to a date; if not, we return an error object with "invalidDate" set along with the invalid value, which can then be used to display an error message to the user. This validator is hooked up to a reactive form like this:

todo.component.ts

this.form = this.formBuilder.group({
  title: this.formBuilder.control('', Validators.required),
  description: this.formBuilder.control('', Validators.required),
  // Sync validators are passed as an array; the validator factory is invoked
  // so that the returned ValidatorFn is what gets registered.
  dueDate: this.formBuilder.control('', [Validators.required, invalidDateValidatorFn()]),
});
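Once this validator has run, the error object can be read back off the control with the standard AbstractControl API (hasError/getError). The following is a minimal sketch, not from the original post; the helper name describeDueDateError is made up for illustration, and 'dueDate' refers to the control defined in the form above:

import { FormGroup } from '@angular/forms';

// Sketch: reading the custom validator's error back out of the reactive form above.
export function describeDueDateError(form: FormGroup): string | null {
  const dueDate = form.get('dueDate');
  if (dueDate && dueDate.hasError('invalidDate')) {
    // The payload is whatever the ValidatorFn returned, i.e. { value: <raw input> }.
    return `"${dueDate.getError('invalidDate').value}" is not a valid date`;
  }
  return null;
}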
Creating a custom validator for template driven forms

As said before, when creating a custom validator for a template driven form, you should have created the validator function first, which can also be used on its own in a reactive form:

import { Directive, Input } from '@angular/core';
import { AbstractControl, NG_VALIDATORS, Validator, ValidatorFn } from '@angular/forms';

// invalidDateValidatorFn is the same function shown above.
export function invalidDateValidatorFn(): ValidatorFn {
  return (control: AbstractControl): { [key: string]: any } => {
    const date = new Date(control.value);
    const invalidDate = !control.value || isNaN(date.getTime());
    return invalidDate ? { 'invalidDate': { value: control.value } } : null;
  };
}

@Directive({
  selector: '[appInvalidDate]',
  providers: [{ provide: NG_VALIDATORS, useExisting: InvalidDateValidatorDirective, multi: true }]
})
export class InvalidDateValidatorDirective implements Validator {
  // tslint:disable-next-line:no-input-rename
  @Input('appInvalidDate') public invalidDate: string;

  public validate(control: AbstractControl): { [key: string]: any } {
    return this.invalidDate ? invalidDateValidatorFn()(control) : null;
  }
}

To use a validator in a template-driven form, we hook it in with a directive. Notice that we bind to an attribute with [] in the selector. We hook it into Angular's template driven forms by adding the directive to Angular's NG_VALIDATORS using the multi option. NG_VALIDATORS is a provider Angular uses on every form change to loop through the validators in the form and update the form's validity. A validator directive implements Validator from @angular/forms, which contains a validate callback that is called by the Angular forms module when it iterates over all directives hooked into NG_VALIDATORS. Input to a validator can be provided with an @Input property that matches the selector's name.

A bizarre trick for creating a flexible custom validator

Alright, enough of the affiliate marketing… I have found that going through the whole process above can become tedious for really simple validation logic, so I came up with a custom validator directive that evaluates boolean expressions. For complex validation logic I would still encapsulate the validation logic, as in the example above, but for really simple boolean expressions I prefer to use the flexible custom validator, as it saves you from repeating the steps above for every validator. The flexible custom validator looks like this:

import { Directive, Input } from '@angular/core';
import { AbstractControl, NG_VALIDATORS, Validator, ValidatorFn } from '@angular/forms';

export class CustomValidator {
  constructor(public expression: () => boolean, public validatorName: string) {}
}

export function customValidatorFnFactory(customValidator: CustomValidator): ValidatorFn {
  return function(control: AbstractControl) {
    const errorObj: { [key: string]: any } = {};
    errorObj[customValidator.validatorName] = true;
    return customValidator.expression() ? null : errorObj;
  };
}

@Directive({
  selector: '[appCustomValidator]',
  providers: [
    { provide: NG_VALIDATORS, useExisting: CustomValidatorDirective, multi: true }
  ]
})
export class CustomValidatorDirective implements Validator {
  private _customValidator: CustomValidator;
  private _onChange: () => void;

  public get appCustomValidator(): CustomValidator {
    return this._customValidator;
  }

  @Input()
  public set appCustomValidator(customValidator: CustomValidator) {
    this._customValidator = customValidator;
    if (this._onChange) {
      this._onChange();
    }
  }

  constructor() {}

  public validate(control: AbstractControl): { [key: string]: any } {
    return customValidatorFnFactory(this.appCustomValidator)(control);
  }

  public registerOnValidatorChange?(fn: () => void): void {
    this._onChange = fn;
  }
}

As we saw before, we are creating a ValidatorFn that is used in a directive. The directive takes as input a CustomValidator object, which contains a boolean expression to be evaluated and a validator name used in the error object.

When running the validators, you can show an error message in your template like this:

add-todo.component.html

<div class="form-group">
  <label for="todo-description">Description</label>
  <input type="text"
         #todoDescriptionInput="ngModel"
         [appCustomValidator]="getLengthCustomValidator(todoDescriptionInput.value)"
         required
         name="todo-description"
         [(ngModel)]="currentTODO.description"
         class="form-control"
         id="todo-description"
         placeholder="Enter description">
</div>
<div *ngIf="todoDescriptionInput.touched && todoDescriptionInput.errors" class="alert alert-danger" role="alert">
  Error
</div>

Here we are applying the custom validator in a template by passing a custom validator object containing an expression for validating the length of the input, as well as the validator name used for showing validation messages:

add-todo.component.ts

public getLengthCustomValidator = (value: string) =>
  new CustomValidator(
    () => value.length < MAX_DESCRIPTION_LENGTH,
    'minLengthValidator'
  )

I'm using Bootstrap here for the styling. Upon a validation error, you can then show an alert like the one in the template above to the user.

Read the code for validators and more in my Angular best practices repository on GitHub. Do you want to become an Angular architect? Check out Angular Architect Accelerator. Hi there! I'm Christian, a freelance software developer helping people with Angular development. If you like my posts, make sure to follow me on Twitter.
High
[ 0.6702849389416551, 30.875, 15.1875 ]
Handy Sony map shows where all UK Vitas are

PlayStation has put together a map to show consumers where they can play Vita. The new handheld goes on sale on February 22nd, and is available to play in locations up and down the UK, including all GAME and GameStation stores. You can check out the full map here. It is part of the platform holder's aggressive bid to achieve mass awareness for the new portable. The firm is still in the process of taking its Vita rooms tour across the UK, while ads are starting to appear across the country and on TV.
Mid
[ 0.5492662473794551, 32.75, 26.875 ]
General Safety Precautions

Miscellaneous

If you have a fireplace, wood burning stove, or other heat source, place barriers around it to avoid burns. Inspect and clean chimneys and stovepipes regularly.
Make certain hazardous items, such as bug sprays, cleaners, auto care products, and weed killers, are secured and stored in their original containers in the garage, utility room, or basement. Place warning stickers on all hazardous items.
Keep syrup of Ipecac in your home for accidental poisonings, but never use syrup of Ipecac without first calling your physician or the poison control center.
Make certain plastic bags, broken pieces of toys, buttons, screws, and other choking or suffocation hazards are stored out of reach of children or discarded.
Post emergency telephone numbers near each telephone in your home.
When children are present, safety devices, such as gates, locks, and doorknob covers, should be in use at all stairways and exits in your home.
Make sure all indoor and outdoor stairways and entries are well-lit and clear.
Make certain bathrooms and bedrooms can be unlocked from the outside.
Keep matches and lighters out of the reach of children and disabled persons.
A home should have two unobstructed exits in case of fire or other emergency.
Check all electrical cords to make sure they are not cracked or frayed. Make certain outlets or extension cords are not overloaded.
It is best not to use space heaters. If they are used, make sure they are in safe condition. Never plug them into an extension cord. Do not place them near drapes or furnishings. Do not use the stove/oven as a heater.
Paint or wallpaper should not be chipping or peeling.
Keep purses, backpacks, and other portable storage bags out of a child's reach. They may contain medicines, pen knives, hard candies, and other items that may harm children.
Mid
[ 0.58252427184466, 37.5, 26.875 ]
Psychological state is related to the remission of the Boolean-based definition of patient global assessment in patients with rheumatoid arthritis. To evaluate whether the psychological state is related to the Boolean-based definition of patient global assessment (PGA) remission in patients with rheumatoid arthritis (RA). Patients with RA who met the criteria of swollen joint count (SJC) ≤ 1, tender joint count (TJC) ≤ 1 and C-reactive protein (CRP) ≤ 1 were divided into two groups, PGA remission group (PGA ≤ 1 cm) and non-remission group (PGA > 1 cm). Anxiety was evaluated utilizing the Hospital Anxiety and Depression Scale-Anxiety (HADS-A), while depression was evaluated with HADS-Depression (HADS-D) and the Center for Epidemiologic Studies Depression Scale (CES-D). Comparison analyses were done between the PGA remission and non-remission groups in HADS-A, HADS-D and CES-D. Seventy-eight patients met the criteria for SJC ≤ 1, TJC ≤ 1 and CRP ≤ 1. There were no significant differences between the PGA remission group (n = 45) and the non-remission group (n = 33) in age, sex, disease duration and Steinbrocker's class and stage. HADS-A, HADS-D and CES-D scores were significantly lower in the PGA remission group. Patients with RA who did not meet the PGA remission criteria despite good disease condition were in a poorer psychological state than those who satisfied the Boolean-based definition of clinical remission. Psychological support might be effective for improvement of PGA, resulting in the attainment of true remission.
Mid
[ 0.6462264150943391, 34.25, 18.75 ]
Lone Star rising

From left: John Cornyn is sticking close to Ted Cruz; Joaquin and Julian Castro are rising Democratic stars. | AP Photos

Rep. Michael Burgess (R-Texas) sounded an even more skeptical note, saying that memories of the last major immigration bill, in 1986, are still fresh for many conservative activists. “All I can tell you is what I see at home: a lot of lessons learned from ’86,” said Burgess. “That, ‘OK, we’ll go one-time amnesty and after that we’ll really be good.’ But nobody believes it this time, nobody believes it.” Thornberry noted that Republican voters back home were even angrier toward Washington than when he first arrived in Washington following the 1994 election and that every incumbent in the state had to be on guard. “I expect everybody will have a primary challenge because people are so disappointed at the results of the last elections,” said Thornberry. “They’re disappointed they can’t get everything done they want to get done with divided government and they’re looking for an outlet and one outlet is to attack other Republicans.” But Republicans eyeing Texas’s Hispanic future are emphatic that immigration must be dealt with in short order. Straus, in an interview last week in his elegant Capitol office here, had a blunt message for his counterparts in Washington: “It is an issue that has lingered far too long and as policymakers in the most important border state in the country, we need, as Texans, the Congress to solve this, take it off the plate.” And the speaker — who represents San Antonio, home to the Castro twins — added a warning. “We better watch our agenda very closely and try to appeal to more people,” he said. “Politics is a game of addition. Just because we’ve had success in recent years doesn’t mean we’re guaranteed of success tomorrow.” Straus, much like Speaker John Boehner, has had to quiet some of his more ideological members who’ve garnered press coverage for pushing issues such as the TSA groping matter and “Sanctuary Cities.” Straus’s job is safe in part, though, because he enjoys coalition support from a bloc of Democrats and Republicans. But while that enables him to muzzle conservatives in his caucus, Cornyn, the new GOP whip, has no such luxury. Democrats snicker that he’s now stuck on “Cruz Control,” following the lead of his junior partner on issues ranging from the confirmation of Secretary of State John Kerry (they comprised two of the three “no” votes) to the re-authorization of the Violence Against Women Act (which garnered 78 “yes” votes).
Mid
[ 0.5532467532467531, 26.625, 21.5 ]
Reasons for missing interviews in the daily electronic assessment of pain, mood, and stress. Electronic diary assessment methods offer the potential to accurately characterize pain and other daily experiences. However, the frequent assessment of experiences over time often results in missing data. It is important to identify systematic reasons for missing data because such a pattern may bias study results and interpretations. We examined the reasons for missing electronic interviews, comparing self-report and data derived from electronic diary responses. Sixty-two patients with temporomandibular disorders were asked to rate pain intensity, pain-related activity interference, jaw use limitations, mood, and perceived stress three times a day for 8 weeks on palmtop computers. Participants also were asked the number of and reason(s) for missing electronic interviews. The average electronic diary completion rate was 91%. The correspondence between self-report and electronic data was high for the overall number of missed electronic interviews (Spearman correlation=0.77, P < 0.0001). The most common self-reported reasons for missing interviews were failure to hear the computer alarm (49%) and inconvenient time (21%). Although there was some suggestion that persistent negative mood and stress were associated with missing electronic interviews in a subgroup of patients, on the whole, the patient demographic and clinical characteristics, treatment, and daily fluctuations in pain, activity interference, mood, and stress were not associated significantly with missing daily electronic interviews. The results provide further support for the use of electronic diary methodology in pain research.
Mid
[ 0.6537530266343821, 33.75, 17.875 ]
We test whether good economic conditions and expansionary fiscal policy help incumbents get reelected in a large panel of democracies. We find no evidence that deficits help reelection in any group of countries independent of income level, level of democracy, or government or electoral system. In developed countries and old democracies, deficits in election years or over the term of office reduce reelection probabilities. Higher growth rates over the term raise reelection probabilities only in developing countries and new democracies. Low inflation is rewarded by voters only in developed countries. These effects are both statistically significant and quite substantial quantitatively. (JEL D72, E62, H62, O47)
High
[ 0.7117241379310341, 32.25, 13.0625 ]
Machine learning defines models that can be used to predict occurrence of an event, for example, from sensor data or signal data, or recognize/classify an object, for example, in an image, in text, in a web page, in voice data, in sensor data, etc. Machine learning algorithms can be classified into three categories: unsupervised learning, supervised learning, and semi-supervised learning. Unsupervised learning does not require that a target (dependent) variable y be labeled in training data to indicate occurrence or non-occurrence of the event or to recognize/classify the object. An unsupervised learning system predicts the label, target variable y, in training data by defining a model that describes the hidden structure in the training data. Supervised learning requires that the target (dependent) variable y be labeled in training data so that a model can be built to predict the label of new unlabeled data. A supervised learning system discards observations in the training data that are not labeled. While supervised learning algorithms are typically better predictors/classifiers, labeling training data often requires a physical experiment or a statistical trial, and human labor is usually required. As a result, it may be very complex and expensive to fully label an entire training dataset. A semi-supervised learning system only requires that the target (dependent) variable y be labeled in a small portion of the training data and uses the unlabeled training data in the training dataset to define the prediction/classification (data labeling) model.
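To make the distinction concrete, here is a small illustrative sketch (not part of the original text) using scikit-learn, where unlabeled observations in the target variable y are marked with -1: the supervised learner is fit only on the labeled subset, while the semi-supervised learner uses the unlabeled observations as well.

# Illustrative sketch: supervised vs. semi-supervised learning on partially labeled data.
# Assumes scikit-learn and NumPy are available; the dataset here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_classification(n_samples=500, n_features=10, random_state=0)

# Pretend labeling is expensive: only about 10% of the target variable y is labeled.
rng = np.random.RandomState(0)
labeled_mask = rng.rand(len(y_true)) < 0.10
y_partial = np.where(labeled_mask, y_true, -1)  # -1 marks unlabeled observations

# Supervised learning: unlabeled observations are simply discarded.
supervised = LogisticRegression(max_iter=1000).fit(X[labeled_mask], y_true[labeled_mask])

# Semi-supervised learning: the unlabeled observations also shape the model.
semi_supervised = LabelSpreading().fit(X, y_partial)

print("supervised accuracy:     ", supervised.score(X, y_true))
print("semi-supervised accuracy:", semi_supervised.score(X, y_true))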
High
[ 0.693548387096774, 32.25, 14.25 ]
Q: PayPal .NET Integration need current paypal_base.dll file I am trying to get my .NET 4.0 based website integration with PayPal Express working. The code I have utilizes the paypal_base.dll. While this file is referenced in numerous recent support posts on various tech forums, I cannot find any active link to download it anywhere on the web. The x.com site has had all its links disabled since it was replaced by the new developer.paypal.com web site. This new site, which is not very helpful, provides a download link to GitHub. The only appropriate SDK DLLs I can find there are the PayPalCoreSDK.dll, PayPalMerchantSDK.dll and PayPalPermissionSDK.dll - none of which appear to include "CallerServices" or other interfaces my code is expecting. On their new developer.paypal.com site there is no mention of this commonly referenced paypal_base.dll file! They don't say it has been deprecated. The DLLs provided by PayPal do not appear to be compatible with my sample code, which is looking for the com.paypal.soap.api namespace. For example, "CallerServices" does not appear in any of the Legacy SDK DLLs supplied by PayPal on their GitHub download site. The version of the paypal_base.dll I have on hand is 5.6.61.0. I have seen newer versions referenced in various support posts. The version I have is throwing communication errors: "The request was aborted: Could not create SSL/TLS secure channel." I am hoping someone can point me to where I can download the most current version of the paypal_base.dll or point me to which PayPal SDK DLL supports these methods. Many thanks, ARF

A: I have just had this exact problem (we had version 5.6.63.0). The problem was the dll pointing to the wrong API endpoints. The code was using the SOAP API whilst the dll was pointing to the NVP endpoints (you can look inside the dll and see the XML). The problem was solved by overriding these in the site's Web.config file. Add the following to the configSections:

<section name="paypal" type="com.paypal.sdk.core.ConfigSectionHandler, paypal_base"/>

Then add the XML block from the paypal_base.dll in a <paypal></paypal> section in your web.config file, replacing the links with the ones required from the link above. XML block from paypal_base.dll:

<endpoints>
  <wsdl>
    <environment name="live">
      <port name="PayPalAPI">https://api.paypal.com/nvp</port>
      <port name="PayPalAPIAA">https://api-aa.paypal.com/nvp</port>
      <port name="PayPalAPI" threetoken="true">https://api-3t.paypal.com/nvp</port>
      <port name="PayPalAPIAA" threetoken="true">https://api-aa-3t.paypal.com/nvp</port>
    </environment>
    <environment name="sandbox">
      <port name="PayPalAPI">https://api.sandbox.paypal.com/nvp</port>
      <port name="PayPalAPIAA">https://api.sandbox.paypal.com/nvp</port>
      <port name="PayPalAPI" threetoken="true">https://api-3t.sandbox.paypal.com/nvp</port>
      <port name="PayPalAPIAA" threetoken="true">https://api-3t.sandbox.paypal.com/nvp</port>
    </environment>
    <environment name="beta-sandbox">
      <port name="PayPalAPI">https://api.beta-sandbox.paypal.com/nvp</port>
      <port name="PayPalAPIAA">https://api-aa.beta-sandbox.paypal.com/nvp</port>
      <port name="PayPalAPI" threetoken="true">https://api-3t.beta-sandbox.paypal.com/nvp</port>
      <port name="PayPalAPIAA" threetoken="true">https://api-3t.beta-sandbox.paypal.com/nvp</port>
    </environment>
  </wsdl>
</endpoints>

replacing each /nvp with /2.0
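Putting the two pieces together, the resulting web.config would look roughly like the sketch below. This layout is an assumption for illustration, not taken from the original answer; only the live environment is shown, with /nvp already replaced by /2.0 as described above.

<configuration>
  <configSections>
    <!-- Registers the custom <paypal> section handler from paypal_base.dll -->
    <section name="paypal" type="com.paypal.sdk.core.ConfigSectionHandler, paypal_base"/>
  </configSections>
  <paypal>
    <endpoints>
      <wsdl>
        <environment name="live">
          <port name="PayPalAPI">https://api.paypal.com/2.0</port>
          <port name="PayPalAPIAA">https://api-aa.paypal.com/2.0</port>
          <port name="PayPalAPI" threetoken="true">https://api-3t.paypal.com/2.0</port>
          <port name="PayPalAPIAA" threetoken="true">https://api-aa-3t.paypal.com/2.0</port>
        </environment>
        <!-- sandbox and beta-sandbox environments go here in the same form -->
      </wsdl>
    </endpoints>
  </paypal>
  <!-- rest of the existing web.config -->
</configuration>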
High
[ 0.6880000000000001, 32.25, 14.625 ]
Q: Feedback on Stack Overflow Careers "thanks for participating" e-mail I received a mail from [email protected] today titled "Thanks for participating on Stack Overflow!" I wanted to give feedback that this hit my junk mail folder (on Hotmail). I'm not concerned about the mail itself, and I'm sure it's not junk, but you guys might want to know that it's being seen as junk by Hotmail. update I subsequently verified my email address on Careers, and then paid to be listed. Both mails associated with these actions ended up as junk. I've now flagged SO mails as "safe". Very little else ends up in my junk mail (i.e. I don't think I have an overly aggressive junk mail filter). Perhaps I'm in a minority, but it would be worth investigating why these mails get flagged as it could cost you business or reputation.

A: Can't speak to Hotmail, but it came safely through to my gmail account.

A: Yes, we did a test to hotmail and indeed it is going to the spam folder -- confirmed. edit: Even with SenderID and DKIM both properly configured and tested (this is new, the DNS records are in, and the email code will be deployed tonight), we can't get our Stack Overflow mails through to hotmail.com email addresses -- they regularly go in the spam folder. Apparently hotmail is a tough nut to crack... The autoresponder at [email protected] is a great resource for this. Send an email to it with a valid reply address, and it'll tell you what isn't configured properly. edit: We requested to be added to the hotmail "ok senders" list and got this response: We have added your stackoverflow.com, superuser.com and serverfault.com domains to the Sender ID program. This may take up to 2 business days to be fully replicated in our systems. If you have any questions regarding this please let me know. We also verified that our SPF records were up to snuff and MS signed off on them.
Mid
[ 0.58603066439523, 21.5, 15.1875 ]
<?php /** * This code was generated by * \ / _ _ _| _ _ * | (_)\/(_)(_|\/| |(/_ v1.0.0 * / / */ namespace Twilio\Rest\Taskrouter\V1\Workspace; use Twilio\ListResource; use Twilio\Version; class WorkspaceRealTimeStatisticsList extends ListResource { /** * Construct the WorkspaceRealTimeStatisticsList * * @param Version $version Version that contains the resource * @param string $workspaceSid The SID of the Workspace */ public function __construct(Version $version, string $workspaceSid) { parent::__construct($version); // Path Solution $this->solution = ['workspaceSid' => $workspaceSid, ]; } /** * Constructs a WorkspaceRealTimeStatisticsContext */ public function getContext(): WorkspaceRealTimeStatisticsContext { return new WorkspaceRealTimeStatisticsContext($this->version, $this->solution['workspaceSid']); } /** * Provide a friendly representation * * @return string Machine friendly representation */ public function __toString(): string { return '[Twilio.Taskrouter.V1.WorkspaceRealTimeStatisticsList]'; } }
Low
[ 0.517110266159695, 34, 31.75 ]
After failing in two presidential contests in 2008 and 2016, Hillary Clinton is spending a ton of time at this year’s Sundance Film Festival issuing warnings to America about the 2020 election. Voter suppression, Clinton says, is “a concern because once the Supreme Court gutted Section 5 of the Voting Rights Act, they took away one of the most useful tools for holding states and local jurisdictions accountable for what they did around elections.” “And I was the first candidate running for president on the Democratic side who faced both the gutting of the Voting Rights Act and Citizens United,” Clinton continued. “So I saw firsthand the concerted effort to purge voters and suppress voters. That’s still going on.” Hillary Clinton is at the film festival to promote the latest project about herself, Hillary, a four-part documentary set to debut on Hulu in March. Still clinging to the fact that she beat out Donald Trump in the popular vote, Clinton said she favors the abolition of the Electoral College. “The person who gets the most votes should win. The Electoral College is an anachronism that foils the rights of the majority of Americans to choose our leaders.” To be clear, according to an extensive survey from Gallup, Americans feel much better about the economy, race relations, and security of the country today than they did three years ago in the last month of the Obama presidency. President Trump’s chances to win reelection aren’t good, Clinton notes, arguing that “there’s a story now to be told.” “Before he was a blank slate. He was a guy that people saw on their TVs. As you know, he was a reality TV star,” Clinton said. “Now I think there’s a record that he’s going to have to be held accountable for.” Elsewhere, Clinton, in a wide-ranging interview with Variety, managed to stoke the rumors that she’s planning a third run for the White House and said she “certainly feels the urge” to jump into the 2020 race “because I feel the 2016 election was a really odd time and an odd outcome.” The Associated Press contributed to this report. Jerome Hudson is Breitbart News Entertainment Editor and author of the bestselling book 50 Things They Don’t Want You to Know. Order your copy today. Follow Jerome Hudson on Twitter and Instagram @jeromeehudson
Mid
[ 0.553459119496855, 33, 26.625 ]
Those gentlemen, the Cadets, continue innocently to “fail to understand”. And perhaps the one who most stubbornly of all persists in “failing to understand” is Mr. Izgoyev. In a tone of injured innocence he expresses his indignation at “Messrs. the Bolsheviks” on account of their attacks against the Cadets. “The party of ’people’s freedom’ will never deceive anybody. Nobody has a right to demand of it more than is indicated in the programme and tactics that have been approved by party congresses. The programme and tactics contain no mention of an armed uprising or the overthrow of the monarchy. The Bolsheviks must reckon with the party that actually exists; and it is somewhat strange that they should be angry with people who tell them the truth, and who refuse to act as they dictate.” But, Mr. Izgoyev, we are “reckoning with the party that actually exists”. Do you continue to “fail to understand”? But the matter is so simple: for a bourgeois party, the programme of the “party of people’s freedom” is not at all bad. Please note that we are saying this quite seriously. There (in the programme, Mr. Izgoyev!) one finds, for example, the demand for free speech, freedom of assembly, and quite a number of good things. But this has not prevented the Cadets from drafting repressive Bills against free speech, against freedom of assembly, and against the other good things. Well, now about tactics.... True, party congresses have approved of the tactics of “with a shield, or on a shield”; “death with glory, or death with shame”. But outside of congresses, in actual politics, the Cadets’ tactics smack of something entirely different. You are opposed to an armed uprising? You have a perfect right to be, gentlemen. But you claim that you are in favour of inflexible, relentless opposition; you claim that you want power to be transferred to the people under a monarch who will reign, but not govern. Why then are you haggling for ministerial portfolios? So you see, Mr. Izgoyev; we are “reckoning with the party that actually exists”, and not with one that merely exists on paper. If you were really fighting on the lines laid down by your programme and tactics, which have been “approved by party congresses”, we would talk to you in entirely different terms. Mr. Izgoyev’s article contains many other curiosities. But speaking generally, it is the literary property of Comrade A. L—y[1] and we do not intend to encroach upon it.

Notes

[1] A. L—y is A. V. Lunacharsky, who in Ekho, No. 8, wrote a reply to the article by Izgoyev directed against Lenin’s article “Yes Men of the Cadets”.
Mid
[ 0.585365853658536, 33, 23.375 ]
---
author:
- 'Kazuo [Hida]{}[^1]'
title: 'Ferrimagnetic and Long Period Antiferromagnetic Phases in High Spin Heisenberg Chains with $D$-Modulation'
---

Introduction
============

Among various exotic ground states in quantum magnetism, the Haldane state in the integer spin antiferromagnetic Heisenberg chain[@fd] has been most extensively studied both experimentally and theoretically. This state is characterized by the hidden antiferromagnetic string order accompanied by the $Z_2\times Z_2$ symmetry breakdown in spite of the presence of the energy gap and exponential decay of the spin-spin correlation function. The easy plane single-site anisotropy $D (>0)$ destroys the Haldane ground state leading to the large-$D$ state with finite energy gap and exponential decay of the spin-spin correlation function [*without specific order*]{}. On the contrary, the easy axis single-site anisotropy ($D <0$) drives the Haldane state into the Néel state.[@md; @ht; @chen] On the other hand, the ground state of the odd spin Heisenberg chain is the Tomonaga-Luttinger liquid state. Due to its critical nature, the ground state is driven to the Néel ordered state for infinitesimal negative $D$ while the Luttinger liquid state is stable against positive $D$[@hjs]. In this context, it is an interesting issue to investigate how the ground states of the quantum spin chains are modified if the easy-axis and easy-plane single-site anisotropy coexist in a single chain.

In the previous work[@altd], the present author and Chen investigated the $S=1$ chain with alternating single-site anisotropy and found that the period doubled Néel phase with ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$ structure is realized for strong alternation amplitude, although the Haldane phase is stable for weak alternation. The physical origin of this type of Néel order is interpreted as a ’pinning’ of the string order. In the present work, we further explore this problem for the cases $S > 1$. We find not only the ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$ ground state but also the ferrimagnetic ground states with quantized and unquantized spontaneous magnetization for intermediate strength of $D$-alternation. These quantized values of magnetization also satisfy the Oshikawa-Yamanaka-Affleck condition[@oya] well-known for the magnetization plateau in the magnetic field.

This paper is organized as follows. In the next section, the model Hamiltonian is presented and the two possible scenarios which lead to different ground states are explained. In §3, the numerical results for the spontaneous magnetization and the local spin profile are presented to reveal the physical nature of each state. The last section is devoted to summary and discussion.

Model Hamiltonian
=================

We investigate the ground state of the Heisenberg chains with alternating single site anisotropy whose Hamiltonian is given by,
$$\begin{aligned}
\label{ham0}
{\cal H} &=& \sum_{l=1}^{N}J\v{S}_{l}\v{S}_{l+1}+\delta D\sum_{l=1}^{N/2}S_{2l-1}^{z2}\nonumber\\
&-&\delta D\sum_{l=1}^{N/2}S_{2l}^{z2}, \ \ (J > 0, \delta D >0).\end{aligned}$$
where $\v {S_{i}}$ is the spin-$S$ operator on the $i$-th site. In ref. , it is found that the period doubled Néel phase with ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$ structure is realized for large enough $\delta D$ for $S=1$, although the Haldane phase is stable for small $\delta D$. 
The mechanism to stabilize this period doubled Néel phase can be understood along the following scenario (scenario I). In the absence of the $D$-terms, the $S=1$ ground state has a hidden string order which implies that the spins with ${\left\vert {\pm 1} \right\rangle}$ are arranged antiferromagnetically if the sites with ${\left\vert {0} \right\rangle}$ are skipped.[@md; @ht] The positions of the sites with ${\left\vert {\pm 1} \right\rangle}$ and ${\left\vert {0} \right\rangle}$ strongly fluctuate quantum mechanically and this antiferromagnetic order remains hidden because it is impossible to observe the correlation between only the sites with ${\left\vert {\pm 1} \right\rangle}$ experimentally. In the presence of strong $\delta D$-terms, only the states consistent with the constraint set by these $\delta D$-terms survive among all states with hidden order. For $\delta D>>J$, the odd-numbered sites must be in the state ${\left\vert {0} \right\rangle}$ and the even-numbered sites in ${\left\vert {\pm 1} \right\rangle}$. To be compatible with the string order, spins must be arranged as ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$. Thus the strong $\delta D$-term can select the spin states among those with hidden order to realize the explicit period doubled Néel order.

On the other hand, another scenario (scenario II) is possible from a classical point of view, although this is not realized in the $S=1$ case. Let us consider the classical limit where each spin can be regarded as a classical unit vector. In the ground state, the spins on the easy axis sites are fully polarized along the $z$-direction but those on the easy-plane sites are tilted by an angle $\theta=\cos^{-1}(J/\delta D)$ from the $-z$-direction as shown in Fig. \[claspin\]. Therefore this scenario leads to the noncollinear ferrimagnetic ground state with the spontaneous magnetization $M=M_{\rm s}(1-(J/\delta D))/2$ as plotted in Fig.\[clasmag\].

![The classical spin configuration in the ferrimagnetic state[]{data-label="claspin"}](fig1.eps){width="30mm"}

![The spontaneous magnetization in the classical limit. Here and in the following figures the energy scale is set as $J=1$.[]{data-label="clasmag"}](fig2.eps){width="70mm"}

In what follows, we show that either of these two scenarios can be realized in $S >1$ chains depending on the values of $\delta D$ and $S$ based on the numerical exact diagonalization (NED) and density matrix renormalization group (DMRG) calculation.

Numerical Results
=================

Spontaneous Magnetization
-------------------------

To identify the ferrimagnetic regime expected in scenario II, the ground state spontaneous magnetization is calculated by NED with periodic boundary condition and by DMRG with open boundary condition for various values of $S$ and $\delta D$. The maximum chain length for the NED is $N=12$ for $S=3/2$, $S=2$ and $S=5/2$, while it is $N=8$ for $S=3$. For the DMRG calculation with open boundary condition, appropriate end spins are added to reduce the boundary effects.

![The spontaneous magnetization for (a) $S=2$ with $N=12$ (NED) and $N=64$ (DMRG), and (b) $S=3$ with $N=8$ (NED) and $N=32$ (DMRG) plotted against $\delta D$. The dotted lines are the classical spontaneous magnetization.[]{data-label="mageven"}](fig3.eps){width="70mm"}

![The spontaneous magnetization for (a) $S=3/2$ with $N=12$ (NED) and $N=64$ (DMRG) and (b) $S=5/2$ with $N=12$ (NED) and $N=32$ (DMRG) plotted against $\delta D$. 
The dotted lines are the classical spontaneous magnetization.[]{data-label="magodd"}](fig4.eps){width="70mm"}

The results for the integer spin cases are presented in Fig. \[mageven\] for $S=2$ and 3 and those for the half-odd-integer spin cases are presented in Fig. \[magodd\] for $S=3/2$ and $5/2$. In contrast to the case of $S=1$, it is found that the ferrimagnetic phase always appears for $S \ge 3/2$ above the critical value $\delta D_{\rm c1}$. For $0 < \delta D < \delta D_{\rm c1}$, the energy gap decreases monotonously with $\delta D$ until it vanishes at $\delta D=\delta D_{\rm c1}$ in all cases studied ($3/2 \leq S \leq 3$). Therefore, we may safely conclude that the ground state is the Haldane phase or the Tomonaga-Luttinger liquid phase according as $S=$ integer or half-odd-integer.

For integer $S$, the spontaneous magnetization vanishes for large enough $\delta D$. From the local spin profile ${\left\langle {S^z_i} \right\rangle}$ which will be presented in the next section, this state turns out to be the period-doubled ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$-type Néel state expected in scenario I. On the other hand, the ferrimagnetic state remains stable for arbitrarily large $\delta D$ for half-odd-integer $S$ because the ground state of the easy plane site is the doublet $S^z=\pm 1/2$ which can sustain the ferrimagnetic order even for large $\delta D$.

It should also be noted that the spontaneous magnetization in the ferrimagnetic phase is not restricted to simple fractions of the saturation magnetization $M_{\rm s}(=NS)$ as in the usual quantum ferrimagnets[@yama1; @yama2] but varies continuously with $\delta D$ in accordance with the classical intuition predicting the noncollinear ferrimagnetism in scenario II. Within an appropriate range of $\delta D$, however, the spontaneous magnetization is locked to a simple fraction of $M_{\rm s}$ reflecting the quantum nature of the present model. In the DMRG calculation, these quantized values of magnetization slightly deviate from the simple fraction of $M_{\rm s}$ due to the boundary spins. Correspondingly, the spontaneous magnetization is slightly rescaled so that the main quantized value exactly equals the simple fraction of $M_{\rm s}$ in Figs. \[mageven\] and \[magodd\].

In all cases, these quantized values of the magnetization satisfy the condition
$$p(S-m)=q
\label{oymcond}$$
where $p$ is the size of the unit cell, $q$ is an integer and $m$ is the magnetization per site ($m=M/N=MS/\Ms$). In the present model, $p$ is equal to 2. This condition is identical to that proposed by Oshikawa, Yamanaka and Affleck[@oya] for the magnetization plateau in the magnetic field. However, their proof is not restricted to the magnetic field induced magnetization but also applies to the spontaneous magnetization in the ferrimagnetic phase. If the condition (\[oymcond\]) is satisfied, it is allowed to have a finite energy gap to the excited state with different magnetization. This implies the stability of the ground state against the variation of $\delta D$ which leads to the ’plateau’ behavior. The $\delta D$-dependence of the energy gap on the plateau state calculated by the DMRG method for $S=2$ chains is shown in Fig. \[pla2\]. It is clear that the energy gap is finite on the plateau region $1.91 \leq \delta D \leq 3.26$.

![The energy gap of the $S=2$ chain on the plateau state with magnetization $M_{\rm p}=M_{\rm s}/4$ for $N=72$. 
The filled (open) symbols are the gap to the state with magnetization $M=M_{\rm p}+1$ ($M_{\rm p}-1$). []{data-label="pla2"}](fig5.eps){width="70mm"}

In addition, the maximum possible value of the ground-state spontaneous magnetization is bounded from above due to the nature of the present model. To maximize the spontaneous magnetization, the spins on the easy-axis sites must have $S^z_i=S$, and those on the easy-plane sites must have a negative value due to the antiferromagnetic interaction with the neighbouring polarized spins. It should be noted that scenario I, leading to the period-doubled Néel order, becomes effective if $S^z_i=0$ on the easy-plane sites. Therefore the spontaneous magnetization per site $m$ must satisfy $0 < m = MS/\Ms < S/2$ in the ferrimagnetic phase. This implies $2S > q > S$. In the half-odd-integer case, the smallest possible value of $q$ is $S+1/2$. This gives the magnetization per site $m=(S-1/2)/2$, which yields $M/\Ms=(S-1/2)/2S$. On the other hand, in the integer spin case, the smallest possible value of $q$ is $S+1$. In this case, $m$ is equal to $(S-1)/2$, which yields $M/\Ms=(S-1)/2S$. It should be noted that this value vanishes for $S=1$. This explains why the quantized ferrimagnetic phase does not appear in the $S=1$ case.

Actually, prominent 'plateaus' are observed only for the smallest possible value of $q$. This is due to the fact that the condition for gap generation imposed on the compactification radius of the underlying Gaussian model becomes increasingly severe as $q$ increases[@oya]. Within the $S$ values studied so far, the only secondary plateau with larger $q$ that we find is a small plateau at $M=\Ms/5$ for $S=5/2$, which corresponds to $q=4(=S+3/2)$, as shown in Fig. \[mag5ov2lar\], although this plateau is almost invisible for the small-sized systems shown in Fig. \[magodd\](b).

![The small 'plateau' at spontaneous magnetization $M=M_{\rm s}/5$ for $S=5/2$ plotted against $\delta D$. The system size is $N=96$ (dotted line) and 192 (solid line). The rescaling factor is slightly different from that of Fig. \[magodd\] to fix this small plateau precisely to $M=M_{\rm s}/5$. []{data-label="mag5ov2lar"}](fig6.eps){width="70mm"}

Local Magnetization Profile
---------------------------

The local magnetization profile ${\left\langle {S^z_i} \right\rangle}$ calculated by the DMRG method is presented for each phase of $S=2$ chains in Fig. \[cor\]. Below the plateau, the easy-plane spins are almost in the state $S^z_i=-1$, while the magnetization of the easy-axis spins increases from 1 to 2 as $\delta D$ approaches the lower end of the plateau region. On the plateau, the easy-axis spins are almost in the state $S^z_i=2$ and the easy-plane spins are in the state $S^z_i=-1$, leading to the quantized value of spontaneous magnetization for the smallest possible value of $q$ described in the preceding section. Above the plateau, the easy-axis spins are almost in the state $S^z_i=2$ and the increase in the total spontaneous magnetization is due to the decrease in the polarization of the easy-plane spins. The behavior of the local magnetization profile in the noncollinear ferrimagnetic phase is in contrast to the similar noncollinear ferrimagnetic phase in the frustrated spin chains investigated in refs. and in which the incommensurate superstructure is observed. This suggests that the incommensurate superstructures observed in these studies are essentially due to frustration.
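The counting rules invoked here (the condition $p(S-m)=q$ with $p=2$ and the bound $S<q<2S$) are easy to check numerically. The following minimal Python sketch is our own illustration, not part of the NED/DMRG calculations of this work, and the variable names are ours; it lists the allowed quantized plateau values for a given $S$ together with the classical curve of scenario II against which they should be compared:

```python
import numpy as np

S = 2.0   # spin length; try 1.0, 1.5, 2.0, 2.5, 3.0
J = 1.0   # exchange coupling, taken as the energy unit as in the figures
p = 2     # unit-cell size for the alternating single-site anisotropy

# Classical scenario II: easy-plane spins tilt by theta = arccos(J/dD) for
# dD > J, giving a magnetization per site m_cl = S*(1 - J/dD)/2.
dD = np.linspace(1.0, 8.0, 8)
m_classical = np.where(dD > J, 0.5 * S * (1.0 - J / dD), 0.0)

# Quantized plateaus allowed by p*(S - m) = q with integer q and 0 < m < S/2,
# i.e. S < q < 2S; for S = 1 no integer q qualifies, hence no plateau.
plateaus = [(q, S - q / p) for q in range(1, int(2 * S) + 1) if S < q < 2 * S]

print("classical m per site:", np.round(m_classical, 3))
for q, m in plateaus:
    print(f"q = {q}:  m = {m},  M/Ms = {m / S:.3f}")
```

With $S=2$ this reproduces the single plateau at $M=M_{\rm s}/4$, with $S=5/2$ the plateaus at $2M_{\rm s}/5$ and $M_{\rm s}/5$, and with $S=1$ an empty list, in line with the discussion above.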
![The local magnetization profile of $S=2$ chains for (a) $\delta D=1.0$ (below the plateau), (b) $2.5$ (on the plateau), (c) 4.0 (above the plateau) and (d) $5.0$ (period-doubled Néel phase) with $N=72$.[]{data-label="cor"}](fig7.eps){width="70mm"}

In these ferrimagnetic phases, the correlation between the easy-axis spins is ferromagnetic. At $\delta D=\delta D_{\rm c2}$, however, the easy-plane spins turn into the state with $S^z_i=0$ and the correlation between the easy-axis spins becomes antiferromagnetic. In this case, the magnetization profile clearly shows the ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$ structure, as shown in Fig. \[cor\](d). For the calculation of the local spin profile in this phase, we have applied a tiny symmetry-breaking field with period 4, because otherwise the true ground state of the finite-size system is a linear combination of ${\left\vert {\uparrow 0 \downarrow 0} \right\rangle}$- and ${\left\vert {\downarrow 0 \uparrow 0} \right\rangle}$-type states and no net local magnetization is expected. Actually, the DMRG calculation is often trapped in states with domain walls in the absence of the symmetry-breaking field. The value of the symmetry-breaking field ranged from $0.005$ to $0.02$, and the results turned out to be almost indistinguishable on the scale of Fig. \[cor\](d). The physical origin of this magnetic structure is understood in the same way as in the $S=1$ case,[@altd] following the first scenario described in §2.

Summary and Discussion
======================

The ground-state properties of high-spin Heisenberg chains with alternating single-site anisotropy are investigated by means of numerical exact diagonalization and the DMRG method. It is found that the ferrimagnetic state appears between the Haldane phase and the period-doubled Néel phase for the integer spin chains. On the other hand, the transition from the Tomonaga-Luttinger liquid state into the ferrimagnetic state takes place for the half-odd-integer spin chains. In the ferrimagnetic phase, the spontaneous magnetization varies continuously with the modulation amplitude of the single-site anisotropy, in accordance with the classical intuition. Eventually, however, the magnetization is locked to fractional values of the saturated magnetization which satisfy the Oshikawa-Yamanaka-Affleck condition. The local spin profile is calculated to reveal the physical nature of each state. In contrast to the case of frustration-induced ferrimagnetism,[@ym1; @zigferri] no incommensurate superstructure is found. We thus expect that the incommensurate superstructures found in these studies are essentially due to the interplay of quantum effects and frustration.

A similar mechanism should also work in two and three dimensions, although the Haldane or Tomonaga-Luttinger liquid phase would be replaced by a long-range-ordered Néel-type state. However, even in the large $\delta D$ limit, the ground state is not trivial, due to the frustration in the effective interaction among the easy-axis spins. The investigation of these higher-dimensional models is in progress.

For the experimental realization of the present mechanism it is necessary to synthesize compounds containing both easy-axis and easy-plane magnetic ions. Considering the variety of phases expected for the present model, this is a challenging task for experimentalists.
Recently, various single-chain molecular magnets with considerable single-site anisotropy have been synthesized.[@scm1; @scm2] Although materials with alternating-sign $D$-terms and uniform $S$ have not yet been reported, this series of materials is a good candidate for observing the phenomena proposed in the present work.

The computation in this work has been done using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo and the Information Processing Center of Saitama University. The diagonalization program is based on TITPACK ver.2 coded by H. Nishimori. This work is supported by the Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology, Japan.

[99]{}

F. D. M. Haldane: Phys. Lett. [**93A**]{} (1983); Phys. Rev. Lett. [**50**]{} (1983) 1153.

M. den Nijs and K. Rommelse: Phys. Rev. [**B40**]{} (1989) 4709.

H. Tasaki: Phys. Rev. Lett. [**66**]{} (1991) 798.

W. Chen, K. Hida and B. C. Sanctuary: Phys. Rev. [**B67**]{} (2003) 104401.

H. J. Schulz: Phys. Rev. [**B34**]{} (1986) 6372.

K. Hida and W. Chen: J. Phys. Soc. Jpn. [**74**]{} (2005) 2090.

M. Oshikawa, M. Yamanaka and I. Affleck: Phys. Rev. Lett. [**78**]{} (1997) 1984.

S. Yamamoto and T. Sakai: J. Phys. Soc. Jpn. [**67**]{} (1998) 3711.

S. Yamamoto: Phys. Rev. [**B59**]{} (1999) 1024.

S. Yoshikawa and S. Miyashita: J. Phys. Soc. Jpn. [**74**]{} (2005) Suppl. 71.

K. Hida: cond-mat/0608582; to appear in J. Phys. Condens. Matter.

M. Mito, H. Deguchi, T. Tajiri, S. Takagi, M. Yamashita, and H. Miyasaka: Phys. Rev. [**B72**]{} (2005) 144421.

Y. Oshima, H. Nojiri, K. Asakura, T. Sakai, M. Yamashita, and H. Miyasaka: Phys. Rev. [**B73**]{} (2006) 214435.

[^1]: E-mail: [email protected]
Mid
[ 0.596923076923076, 24.25, 16.375 ]
(AP) -- The G-League is coming. The NBA Development League is changing its name starting next season to the NBA Gatorade League, a deal that will include a rebranding that will affect the league logo, basketballs, jerseys, on-court signage and digital properties. Long viewed as a proving and testing ground of sorts for the NBA, what has been known as the D-League will also get to take advantage of Gatorade's Sports Science Institute - a resource that many elite athletes, including Dwyane Wade and Cam Newton, have used in recent years for testing and evaluation of what exactly their bodies need during competition. "This isn't about slapping a name on a league," NBA deputy commissioner Mark Tatum said. "This is much, much deeper than that." Tatum said this is not the first step toward a name change for the NBA, and declined to detail the length or financial terms of the deal. But he said the part of the deal including GSSI will provide "knowledge to enhance player performance in our game" through nutrition, training and other advances. Gatorade will also incorporate its most recent products and equipment throughout the league. David Carter, executive director of the USC Sports Business Institute, called the deal "very authentic" because the brands are historically connected and the deal provides the opportunity for brand positioning from both sides. He said leagues look for true marketing partnerships because consumers can be "reluctant to embrace brands they think are inauthentic." Putting sponsorships on jerseys is fairly new for the NBA. While other leagues have putting sponsorships on jerseys for several years, including the WNBA, MLS and soccer leagues overseas, the NBA got on board and approved on-jersey corporate sponsorships patches starting next season. The Utah Jazz announced Monday that their patch will be sponsored by Qualtrics and used to raise money for cancer research. Carter believes there hasn't been much pushback from fans over jersey patches since they've become accustomed to seeing sponsorship everywhere. The patches on NBA jerseys are limited in size to 2.5 inches by 2.5 inches. "The key ... is to position this so it's not too much in your face," Carter said, adding that the NBA is "pretty good at striking that blend of what's appropriate." Neither the NBA nor Gatorade said how much the partnership is worth, though Carter said he doesn't "know that it's as important from a (financial) numbers perspective as much as it really underscores that the NBA is continuing to look for unique ways to add marketing inventory. "I think the NBA sees, as they continue to build this league as an asset of theirs, there will be plenty of marketing dollars coming behind it and being innovative with something like this is really the front end of that." The development league has grown from eight teams when it debuted in 2001-02, to 25 for next season. There could be some concern that renaming an entire league with a corporate sponsorship opens the door for more overbearing marketing at events. "Everything's been very incremental," Carter said. "You've seen it at the collegiate level with jersey patches. I don't know if it's a slippery slope. You should ask the people that run the (English Premier League). For them, there is no slippery slope. "It is a very decided and concerted effort to drive as much revenue from corporate partnerships as possible." 
Gatorade senior vice president and general manager Brett O'Brien talked about several examples of how his company's science expertise will be used in the D-League, including testing for a player's sweat type and amount, if they are a fat burner or carbohydrate burner, recovery advances and joint health. Added D-League President Malcolm Moran: "This promotes performance overall, but helps maximize potential and, ultimately, the product that we have on court."
Mid
[ 0.6126582278481011, 30.25, 19.125 ]
898 F.2d 512 ALLEN & O'HARA, INC., a corporation, Plaintiff-Appellant,Cross-Appellee,andMaryland Casualty Company, a corporation, InterveningPlaintiff, Defendant on Counterclaims, Cross-Appellee,andThe Northwestern Mutual Life Insurance Company, acorporation, Intervening Plaintiff, Defendant onCounterclaims and Third-PartyPlaintiff-Appellant, Cross-Appellee,v.BARRETT WRECKING, INC., a corporation and Thomas M. Barrett,Defendants-Appellees, Cross-Appellants. Nos. 88-2509, 88-2558 and 88-2559. United States Court of Appeals,Seventh Circuit. Argued Nov. 3, 1989.Decided Feb. 13, 1990.As Amended Feb. 15, 1990.Rehearing Denied March 15, 1990. Before BAUER, Chief Judge, FLAUM and KANNE, Circuit Judges. FLAUM, Circuit Judge. 1 This diversity case concerns a contract between Allen & O'Hara (A & O) and Barrett Wrecking (Barrett) for the demolition of a building owned by Northwestern Mutual Life Insurance (NML). After a six week trial, the jury issued a special verdict finding that A & O had wrongfully terminated the contract and that NML had tortiously interfered with the contract. The district court granted NML judgment n.o.v. on the tort claim. The principal issue in this case involves the proper measure for contract damages under Wisconsin law. In addition, the parties appeal the district court decisions regarding tortious interference with contract, statutory conspiracy, punitive damages, payment of costs, attorney's fees, and prejudgment interest for both contract damages and conversion damages. For the reasons stated below, we affirm except with respect to the prejudgment interest on the contract damages. Facts and Proceedings Below 2 NML hired A & O, a wholly owned subsidiary of NML, to act as its general contractor for the renovation of NML's downtown Milwaukee office. As part of the renovation, A & O solicited bids for the demolition of part of the office, giving tours of the building and allowing potential bidders to examine the building's blue-prints. Barrett submitted a bid of $595,000, slightly below its anticipated cost of demolition on the basis that it could cover its costs through sale of the salvage. Barrett's bid was significantly below other bids, so A & O and Barrett entered into a demolition contract.1 3 The contract called for demolition work to begin in May of 1979 and to finish in September 1979. NML did not, however, vacate the building until mid-to-late August and full scale demolition did not begin until August 27, 1979, at which time Barrett submitted a four month completion schedule. 4 Almost immediately upon commencement of the demolition, Barrett began to encounter unanticipated conditions which caused delays. These conditions included a vault that was constructed of steel, concrete and copper, and heavy structural beams designed to support eight additional stories. Complaints regarding excessive dust, noise, and vibrations caused A & O to instruct Barrett to employ procedures to reduce these conditions which slowed the demolition process further. 5 The delays pushed the completion date further and further back, until Barrett finally submitted a schedule that called for completion in July 1980. Eventually, Francis Ferguson, President of NML, expressed his concern over the delays to A & O, and on May 9, 1980, A & O terminated the contract. 6 A & O promptly sued Barrett and State Surety (Barrett's bonding company) for breach of contract. It also named, in his individual capacity, Thomas Barrett, president of Barrett Wrecking, as a defendant. 
Barrett counterclaimed for breach of contract. Their dispute centers around which party was responsible for the delays and who should bear the additional costs of demolition caused by the delays and the subsequent changes in procedure. The contract was a fixed-price contract and called for formal written change orders for any change in price. Even so, Barrett claims that it is entitled to payment for extra costs because the parties waived the contract provisions calling for written change orders. 7 NML intervened as of right in the action because it was potentially liable as an indemnitor of A & O. Barrett counterclaimed against NML for tortious interference with business contract relationships, and (along with Thomas Barrett) common law and statutory conspiracy to damage reputation, trade and business. 8 The district judge dismissed the common law conspiracy claims on a motion for partial summary judgment. Following a six week trial, the jury returned a special verdict finding that A & O had breached the contract by terminating it without justification. They awarded compensatory damages of $852,000. They further found that A & O had wrongfully retained salvage belonging to Barrett in the amount of $62,798. In addition, the jury found that NML had wrongfully interfered with the contractual relationship between Barrett and A & O, with damages of $1,400,000. 9 The district judge granted judgment n.o.v. on the tortious interference claim, finding that NML was privileged to interfere with the contract and therefore the claim failed as a matter of law. He also denied motions by Barrett for prejudgment interest and for State Surety's attorney's fees. Analysis 1. Preliminary Matters 10 Before considering the proper measure of damages, we dispose of Barrett's counterclaims. Except for the claim for prejudgment interest on the contract damages, none of these claims has merit and we affirm the district court. 11 Barrett's first claim is that the judge should not have granted a directed verdict to the plaintiffs but should have submitted its conspiracy claim to the jury. We review this decision de novo. Selle v. Gibb, 741 F.2d 896 (7th Cir.1984). 12 Initially, we note that district courts, in general, should be reluctant to remove an issue from the jury. Nevertheless, where there is not substantial evidence to support the verdict, a directed verdict or, alternatively, a judgment n.o.v. is appropriate. See id. at 900; Erwin v. County of Manitowoc, 872 F.2d 1292, 1295 (7th Cir.1989); Brady v. Southern Railway, 320 U.S. 476, 64 S.Ct. 232, 88 L.Ed. 239 (1943). "All the evidence, taken as a whole, must be viewed in the light most favorable to the non-moving party. This evidence must provide a sufficient basis from which the jury could have reasonably reached a verdict without speculation or drawing unreasonable inferences which conflict with the undisputed facts." Selle, 741 F.2d at 900. A judgment n.o.v. is appropriate only if this is not the case. 13 Barrett's conspiracy claim is based on a civil cause of action deriving from a criminal conspiracy statute. Wis.Stat. Sec. 134.01; Radue v. Dill, 74 Wis.2d 239, 246 N.W.2d 507, 511 (1976). To establish this claim, Barrett needed to prove that NML conspired or acted in concert with at least one other individual or entity to willfully injure Barrett (or Thomas Barrett) in their reputations or businesses and that injury resulted. Id. 14 Barrett's proof relies entirely on circumstantial evidence which is sufficient to prove a claim of conspiracy, Lange v. 
Heckel, 171 Wis. 59, 175 N.W. 788, 789-90 (1920), but it is necessarily weaker than direct evidence. In Wisconsin, if circumstantial evidence supports equal inferences of lawful action and unlawful action, then the claim of conspiracy is not proven. See Scheit v. Duffy, 248 Wis. 174, 176, 21 N.W.2d 257 (1946). We believe that at best this is the case here. 15 Barrett's claim relies on essentially one piece of evidence: the termination of Barrett from two independent demolition contracts (the A & O contract, and a contract with Milwaukee) at about the same time. Barrett claims that this unusual circumstance warrants an inference of concerted action. This evidence alone, however, is not sufficient to show that the conspirators acted with the specific, malicious purpose of injuring the plaintiff. See Radue, 74 Wis.2d at 246, 246 N.W.2d 507. The demolition contract with Milwaukee was behind schedule and the evidence shows that Milwaukee had independent reasons for terminating its contract with Barrett. And even if Milwaukee acted maliciously, no evidence was offered to support a finding of malevolence by NML. We find, in agreement with the district court, that the evidence of a conspiracy fails as a matter of law.2 16 Barrett's next claim is that the district judge erred in granting judgment n.o.v. on the claim that NML tortiously interfered with the demolition contract between A & O and Barrett. Wisconsin recognizes a cause of action against one who, without privilege to do so, induces a third person not to perform a contract with another. Harman v. LaCrosse Tribune, 117 Wis.2d 448, 344 N.W.2d 536 (Ct.App.1984). The Wisconsin courts have adopted the Restatement (Second) of Torts on tortious interference with contracts. Liebe v. City Finance Company, 98 Wis.2d 10, 15-16, 295 N.W.2d 16 (Ct.App.1980); Restatement (Second) of Torts Secs. 766, 767 (1979). Section 769 of the Restatement discusses the situation where the actor has a financial interest in the business of the person induced. This section states that there is no tortious interference when one who has a financial interest in the business of a third party causes that person not to enter into a contract so long as wrongful means are not employed and the actor is protecting his interest in the relationship with the third party. Id. Sec. 769. 17 We believe that section 769 governs here. Comment c of that section states that a part owner of a business has a financial interest in the business. Therefore, NML had a financial interest in A & O. Moreover, comment e states that if the action is directed toward protecting the actor's interest, it is immaterial that he also takes a "malicious delight" in the harm caused by his action. While Barrett argues that NML had a malicious purpose, it is clear that NML also acted to protect what it felt was its interest in the demolition. Finally, there is no evidence that NML used wrongful means. In Pure Milk Products Coop v. National Farmers Organization, 64 Wis.2d 241, 219 N.W.2d 564 (1974), the Wisconsin Supreme Court listed coercion by physical force or fraudulent misrepresentation as examples of improper means. There is no evidence that any means resembling physical force or misrepresentation were used by NML. Consequently, Barrett's claim of tortious interference fails as a matter of law.3 18 Barrett also seeks reimbursement for State Surety's attorney's fees which Barrett had to pay subject to an indemnification agreement. This element of damages was not pled. 
Rather it was raised only in post-trial motions. A & O and NML might have avoided these costs had they been forewarned of this exposure, i.e., they might have settled with State Surety or made a record regarding duplication of efforts between counsel for State Surety and counsel for Barrett. On this basis, the district court ruled that it would be inequitable to amend the pleadings at this last stage and we agree. 19 Barrett also argues that it is entitled to prejudgment interest on the contract damages and on the conversion or salvage damages. The contract itself grants interest on payments due and unpaid under the contract. The district court, however, held that Wisconsin law was determinative because of a clause in the contract stating that "[the] contract shall be governed by the law of the place where the Project is located." The district court held that this clause required it to look at Wisconsin law on prejudgment interest. State common law, however, normally governs a contract only insofar as there are gaps in the contractual language. Where the contract is clear, there is no need to examine the underlying state law. The contract in this case indicates that interest shall be due on all unpaid amounts under the contract. As such, Barrett is entitled to prejudgment interest on the contract damages and we remand for a finding on the amount of such interest. 20 Prejudgment interest on the salvage is not governed by the contract, so the district court properly turned to Wisconsin law. Wisconsin case law holds that interest can ordinarily be recovered if the amount claimed and recovered was readily determinable prior to trial. Moutsopoulos v. Amer. Mut. Ins. Co., 607 F.2d 1185, 1190 (7th Cir.1979) (applying Wisconsin law). A genuine dispute as to the amount due will defeat the claim for interest because the defendants cannot reasonably determine the amount due and tender it. Id. Barrett sought over $280,000 for the lost salvage but was awarded only $62,798. Consequently, there was a genuine dispute as to the amount that was due, and therefore, the district court correctly held that the judgment was not readily determinable prior to trial and denied prejudgment interest on the salvage. 21 Finally, Barrett contends that the district judge abused his discretion in not issuing a written explanation of his denial of costs pursuant to Fed.R.Civ.Proc. 54(d). Rule 54(d) states that "costs shall be allowed as of course to the prevailing party unless the court otherwise directs." Our standard on review is abuse of discretion. Hudson v. Nabisco Brands, Inc., 758 F.2d 1237, 1242 (7th Cir.1985). Both Barrett and A & O prevailed in part, and therefore we cannot say that the district court abused its discretion. A written opinion is helpful for the reviewing court, but here the district court was clearly within its discretion. 22 To summarize our holdings on this covey of claims, we affirm the district court on all claims except for prejudgment interest on the contract damages. We remand for a finding on the extent of this interest. 2. Contract Damages 23 The remaining issue concerns the proper measure of contract damages. A & O advances several contentions with respect to damages. First, they argue that Barrett offered no proof of lost profits, that is, the benefit of the bargain had the contract been fully performed less the expenses saved by nonperformance. 
In fact, they argue that the contract was a losing proposition for Barrett and he should have been glad to be relieved of its burden: it would have cost Barrett more to complete the job than was due as payment. In addition, A & O argues that the jury was exposed to "total-cost-theory" evidence which is not recognized by Wisconsin. Fattore Co. v. Metropolitan Sewerage Commission, 505 F.2d 1, 5-6 (7th Cir.1974). They argue that because "total cost" is not a theory of contract damages recognized under Wisconsin law or by the contract, introduction of this evidence resulted in the incorrect measure of damages by the jury. 24 To counter these arguments, Barrett offers a contract modification theory. Under this theory, they argue that the written modification provision of the contract was waived by A & O. Based on this waiver, Barrett asserts that it is entitled to payment for a number of "extras," the sum of which almost equals the damages awarded by the jury. If Barrett's contract modification theory is correct, then A & O's position that the written contract was not profitable is not tenable: the contract price would have been modified and Barrett would be entitled to damages under the modified contract. 25 It is clear under Wisconsin law that a written contract may be subsequently modified orally by the parties. S & M Rotogravure Serv. Inc. v. Baer, 77 Wis.2d 454, 252 N.W.2d 913, 919 (1977); Wiggins Constr. Co. v. Joint School Dist., 35 Wis.2d 632, 638, 151 N.W.2d 642 (1967). The Wisconsin Supreme Court has "recognized that a provision in construction contracts requiring written change orders may be avoided where the parties evidence by their words or conduct an intent to waive or modify such a provision." Rotogravure, 252 N.W.2d at 919. Waiver involves an inquiry into the intent of the parties, and is properly an issue for the jury. We divide this issue into two parts: whether there was sufficient evidence for the jury to conclude that the written modification provisions were waived and whether an agreement to modify the contract for each individual "extra" was supported by sufficient evidence. 26 A & O argues that there was no waiver of the written modification provisions of the contract. To support their claim, they contend that the written modification provisions had been followed early in the contract and that just prior to the termination of the contract, negotiations to formally modify the contract had gone as far as drafting of the modification provisions. They maintain that because the parties followed the written modification provisions at these times, there was no waiver of the provision requiring written modification. 27 Barrett, on the other hand, submits as evidence the testimony of Ron Retzer, a principal of Barrett, stating that an informal arrangement had been reached between A & O and Barrett. The substance of this arrangement was that Barrett would perform changes in the work as requested by A & O without a written change order and would be compensated when the demolition was completed. Presumably, the purpose behind such an arrangement was to allow the parties to work out their differences without the formality and expense of written change orders. 28 This evidence was submitted to the jury with instructions that a waiver must be a voluntary, knowing choice to forego something of value. Neither party objects to this instruction. Based on this evidence, the jury apparently believed the testimony of Ron Retzer. 
The district court, when considering the motion for judgment n.o.v. also found his testimony credible. We decline to disturb a finding of fact supported by evidence, found credible by both the district court and the jury, and therefore hold that the finding of the waiver of the written modification provisions was supported by the evidence. 29 The final issue before us is whether the various "extras" claimed by Barrett were each supported by sufficient evidence for the jury to grant damages. As a preliminary matter, we note that while the jury returned a special verdict, it did not separate the elements of the contract award; rather is simply awarded damages that reasonably flowed from A & O's wrongful termination. It is difficult, therefore, for us to determine which of the "extras" the jury found to be supported by the evidence. We believe, however, that there was sufficient evidence to support each of the various "extras" claimed by Barrett. 30 Barrett claimed that it was due payment on eleven different "extras." The total cost of these "extras" was $556,953. In addition, Barrett claims another $147,593 for overhead and profit on the "extras," which, if the "extras" are supported by the evidence, flow from these "extras." The two major items are a change in technique ($216,608) and removal of a vault ($148,400). The change in technique was ordered by A & O after they found that the original method contemplated by Barrett caused too much noise and dust. Consequently, the jury was entitled to believe that the contract was modified in this respect. With respect to the vault, the bid package given to Barrett only showed an area for a future vault and contained no plans at all showing the composition of the vault. When Barrett began to dismantle the vault, it discovered that the vault was virtually impervious to the wrecking ball and had to be dismantled piece by piece at considerable time and cost. The jury apparently relied on the testimony of Ron Retzer that A & O agreed to the unexpected extra costs for dismantling the vault and the district court found his testimony credible. We, therefore, believe that the jury had sufficient evidence to find A & O liable for the cost of dismantling the vault. The other, smaller "extras" are all supported by similar evidence. The testimony of Ron Retzer, who was subject to days of cross-examination, appears to have been the primary source relied on by the jury for calculating damages and given this testimony, we decline to overturn the jury verdict. We agree with the district court that "the verdict was not against the weight of the evidence; the damages, although generous, are not excessive...." 31 Finally, regarding the introduction of total cost evidence, apparently, this evidence was little more than the sum of the "extras," each of which we have found supported by evidence sufficient to support the jury. Therefore, the introduction of this evidence cannot be said to have prejudiced the jury. 32 The judgment of the district court is affirmed in part, remanded in part. 1 The contract was actually split into two contracts each for half the work, apparently for bonding purposes. The two contracts are identical and we will simply refer to them collectively as the contract 2 Barrett also argues that its bank's discontinuation of its line of credit, a memorandum between an attorney for the City of Milwaukee and an attorney for A & O concerning Barrett, and the institution of this lawsuit support the theory of conspiracy. 
All of this evidence, while potentially helpful to their claim, is more easily explained by innocent occurrences than by a malicious conspiracy. This evidence, even in concert with the evidence of simultaneous termination, does not provide sufficient support for Barrett's claim 3 Since neither of Barrett's tort claims have merit, Barrett's claim of punitive damages must also fail
Mid
[ 0.6208651399491091, 30.5, 18.625 ]
The Persistence of Sects - davisclark http://www.workersect.org/2x205p.html ====== nitrogen If you are looking for a definition of sect, as I was, to figure out what the paper is about, it's a few paragraphs down: _"...first, that each of them exists in a state of tension with the wider society; second, that each imposes tests of merit on would-be members; third, that each exercises stern discipline, regulating the declared beliefs and the life habits of members and prescribing and operating sanctions for those who deviate, including the possibility of expulsion; and fourth, that each demands sustained and total commitment from its members, and the subordination, and perhaps even the exclusion of all other interests."_ ~~~ tragomaskhalos This is sort of a counterpoint to #3, but an important litmus for a cult is that it is very difficult to leave voluntarily. Certainly in the case of the JWs, an individual exiting the cult has to contend not only with the fear and uncertainty preceding their decision to quit, but with subsequently being completely ostracised by family members who are still members. All of this acts as powerful psychological coercion to remain even when one no longer believes in the cult's message, and makes escapees enormously vulnerable. ------ gpvos Interesting, I get: The requested URL was rejected. If you think this is an error, please contact the webmaster. Your support ID is: 13078553747997112974 Anyone know what could cause this? I have been getting this from one other web site (the web comic www.sinfest.net) when accessing it from home, but not from work. ~~~ jloughry Do you run a Tor relay? If your IP address is listed as being such, you'll see that message on lots of web sites. Apple's support site (although not their store, interestingly) and phdcomics.com are other places I've run into it. If you have a static IP address on your home router, don't _ever_ start a Tor relay at home, or you'll poison it, evidently, forever. :-( ~~~ gpvos I have run a Tor relay in the past, yes. Although I have been getting this error only recently. And indeed phdcomics.com gives the same error. Interestingly, I get support.apple.com in Russian!? ~~~ gpvos It was a Tor relay, not an exit node, so I'm still curious why it would have to be blacklisted. And I guess I cannot rule out completely that either my home computer or - more likely - someone I granted access to my wifi has been part of a botnet. ~~~ jloughry I'm positive that no machine in my home network has ever been part of a botnet; the only explanation that stands up to scrutiny is the Tor relay that was running for a few days as a NAT forwarded service through my router's static IP address. I wish the faceless entity behind this widespread blacklist service could be identified so I could ask them. ------ teddyh > _In recent decades_ […] When was this written? ~~~ davisclark Bryan R. Wilson, "The Persistence of Sects", Diskus, Journal of the British Association for the Study of Religions, Vol 1, No. 2, 1993
Mid
[ 0.607843137254901, 23.25, 15 ]
Q: text retrieved from text form by .val() disappearing when using .append() or .html() to insert

I'm trying to learn how to use forms and jQuery together. I'm trying to get text typed into a text form and append or insert it into another element. I've created a jsFiddle: http://jsfiddle.net/B9weu/5/

For some reason, the text that I've retrieved using .val() appears for a split second in the element it's appending to, then it disappears. Why is this? Am I on the right path in regard to utilizing forms by simply calling .val() inside the .submit() event handler?

The code:

<form id="target">
    <div>
        <textarea id="blogentry" name="d" rows="8" cols="40">4</textarea>
    </div>
    <div>
        <input type="submit" name="g" value="Submit" id="g" />
    </div>
</form>

<script>
$('#target').submit(function() {
    var blogtext = $('#blogentry').val();
    $('#printbodytexthere').append('<p>' + blogtext + '</p>');
});
</script>

<div id="printbodytexthere">
</div>

A: Your form is being submitted (page refresh).

USE: event.preventDefault(); to prevent the default submit behavior.

$('#target').submit(function( e ) {
    e.preventDefault();
    var blogtext = $('#blogentry').val();
    $('#printbodytexthere').append('<p>' + blogtext + '</p>');
});

DOCS: http://api.jquery.com/event.preventDefault/
Mid
[ 0.5444444444444441, 30.625, 25.625 ]
Q: fill UIWebView text input with custom keyboard

For an app I am working on I want a custom keyboard to come up when you tap on an input field in a UIWebView. For example, for a UITextView I use this code:

myText.text = [myText.text stringByAppendingString:@"a"];

but I don't know how I can fill a text field in a UIWebView. Any help would be very much appreciated; thanks!

A: Check dcorbatta's answer in this article: Custom Keyboard in UIWebView

He suggests using a UITextField as a "media" when you detect the keyboard coming up inside a UIWebView. Thus, you can use whatever keyboard type you want on the UITextField. After the user taps something, copy the text from the "media" UITextField into the web view's active document via JavaScript.
Mid
[ 0.5835294117647051, 31, 22.125 ]
{ "": [ "--------------------------------------------------------------------------------------------", "Copyright (c) Microsoft Corporation. All rights reserved.", "Licensed under the MIT License. See License.txt in the project root for license information.", "--------------------------------------------------------------------------------------------", "Do not edit this file. It is machine generated." ], "version": "1.0.0", "contents": { "package": { "displayName": "GitHub", "description": "GitHub", "config.gitAuthentication": "Controlla se abilitare l'autenticazione GitHub automatica per i comandi GIT all'interno di VS Code.", "welcome.publishFolder": "È anche possibile pubblicare direttamente questa cartella in un repository GitHub. Dopo la pubblicazione sarà possibile accedere alle funzionalità di controllo del codice sorgente basate su GIT e GitHub.\r\n[$(github) Pubblica in GitHub](command:github.publish)", "welcome.publishWorkspaceFolder": "È anche possibile pubblicare direttamente una cartella dell'area di lavoro in un repository GitHub. Dopo la pubblicazione sarà possibile accedere alle funzionalità di controllo del codice sorgente basate su GIT e GitHub.\r\n[$(github) Pubblica in GitHub](command:github.publish)" }, "dist/publish": { "pick folder": "Seleziona una cartella da pubblicare in GitHub", "ignore": "Selezionare i file da includere nel repository." }, "dist/pushErrorHandler": { "create a fork": "Crea fork", "no": "No", "fork": "Non si hanno le autorizzazioni per eseguire il push in '{0}/{1}' in GitHub. Creare un fork in cui eseguire il push?", "create fork": "Crea fork GitHub", "forking": "Creazione del fork per '{0}/{1}'...", "pushing": "Push delle modifiche...", "openingithub": "Apri in GitHub", "createpr": "Crea richiesta pull", "done": "Il fork '{0}' è stato creato in GitHub.", "createghpr": "Creazione della richiesta pull GitHub...", "openpr": "Apri richiesta pull", "donepr": "La richiesta pull '{0}/{1}#{2}' è stata creata in GitHub." } } }
Low
[ 0.495555555555555, 27.875, 28.375 ]
Proliferation of human B lymphocytes mediated by a soluble factor. Recent studies have established the ability of a proportion of activated human B lymphocytes to undergo G1 phase cell cycle progression and subsequent S phase entry on exposure to factor(s) present in lectin-stimulated mononuclear cell-conditioned media. One factor capable of stimulating activated human B lymphocyte proliferation may be separated from peripheral blood lymphocyte-conditioned media by successive ammonium sulfate precipitation, ion exchange, and gel filtration chromatography. The isolated factor is distinct from the other well-described cytokines, possesses a molecular weight of 12,000-13,000, has a mildly acidic isoelectric point (at pH 6.3-6.6), is protease sensitive, and is relatively heat sensitive. The human B cell mitogenic factor possesses functional and cellular specificity in that its action is restricted to B lymphocytes and its function is proliferative. The production of the B cell mitogenic factor by T lymphocytes is augmented by the presence of a macrophage and further stimulated by syngeneic B cells.
Mid
[ 0.6384615384615381, 31.125, 17.625 ]
Introduction
============

All chronic liver diseases, whether of toxic, genetic, autoimmune or infectious origin, undergo typical histological changes that ultimately lead to fibrosis/cirrhosis and the excess deposition of matrix. Liver cirrhosis can rapidly decompensate and has a high mortality rate. Patients with cirrhosis suffer from a decreasing hepatic capacity to metabolize and synthesize proteins, peptides and hormones. In addition, progression of fibrosis and regenerating nodules cause an increased vascular portocaval resistance, with portal hypertension and an increased hepatic venous pressure gradient (HVPG) of \>10 mm Hg. Portal hypertension finally leads to ascites, and vascular collaterals such as esophageal varices will develop. Most patients suffering from cirrhosis eventually die from complications such as spontaneous bacterial peritonitis, variceal bleeding, liver failure or hepatocellular carcinoma (HCC).

Compensated liver cirrhosis, in particular, without clinical signs such as spider nevi, encephalopathy, icterus, or ascites, is difficult to diagnose. Such patients typically do not show specific symptoms. This is also one important reason why no valid and reliable prevalence data are available for cirrhosis in many countries, although cirrhosis is a major cause of mortality in developed countries at the age of 40--60 years.

Many techniques have been explored in the last decades to allow an early and reliable diagnosis of cirrhosis (see [Figure 1](#f1-hmer-2-049){ref-type="fig"}). These include both invasive and noninvasive approaches. Liver biopsy is still considered the gold standard for assessing hepatic cirrhosis. However, it is an invasive procedure, with rare but potentially life-threatening complications.[@b1-hmer-2-049] In addition, the accuracy of liver biopsy in assessing fibrosis is limited owing to sampling error (reaching up to 30%) and interobserver variability.[@b2-hmer-2-049]--[@b6-hmer-2-049] Other invasive procedures such as laparoscopy and endoscopy are not very sensitive. Likewise, conventional imaging techniques such as ultrasound, magnetic resonance imaging (MRI) and computed tomography (CT) are noninvasive, but absolute signs of cirrhosis such as collaterals or a nodular aspect of the liver surface are required, rendering these methods rather insensitive. Many efforts have been made to identify serum markers that allow the diagnosis of cirrhosis from a simple blood test.[@b7-hmer-2-049] Unfortunately, although markers such as serum collagen or hyaluronan reflect profibrogenic activity, they do not correlate with the absolute amount of matrix deposited in the liver.

Liver cirrhosis per se causes a typical induration of the liver that is sometimes clearly palpable. In fact, palpation of the liver has been used by physicians for centuries as the only valid bedside test to diagnose cirrhosis. Thus, it was only a question of time until sophisticated physical methods to truly quantify liver stiffness (LS) were developed. The first such approach was successfully introduced by Sandrin and coworkers in 2003.[@b8-hmer-2-049] Meanwhile, many studies on chronic liver diseases have proven that measurement of LS is a rapid and excellent screening test for liver cirrhosis. Alternative approaches based on competing ultrasound or MRI methods are currently being explored, and the future will show which technique will prevail in which clinical setting.
On the other hand, LS has been introduced to the field of hepatology as a novel objective physical parameter that can be followed up over time, much like, for example, body temperature. We have learnt in a rather short time that, like body temperature, LS is determined not only by the degree of fibrosis but also by other conditions such as inflammation, cholestasis and liver congestion. This review, therefore, is designed to briefly update the reader on the present knowledge of LS. After an overview of technical aspects and alternative methods, basic conditions that influence LS are discussed. Algorithms are presented on how to use LS values in clinical practice, taking pitfalls into account. In addition, the novel pressure--stiffness--fibrosis sequence hypothesis, which could stimulate the intensive search for the molecular mechanisms underlying liver fibrosis, is introduced and briefly discussed. Finally, open LS-related questions are defined that should be addressed by future clinical and basic research studies.

Pathophysiology of liver stiffness
==================================

Liver stiffness -- definition
-----------------------------

Going through the theory of elasticity is far beyond the scope of this review. However, some basic notions are useful to better understand what stiffness means. From a physical and mechanical point of view, stiffness can be defined as the modulus of elasticity or Young's modulus (E). Hooke's law of elasticity is an approximation which states that the strain induced in a material is directly proportional to the applied stress, σ = Eɛ, where σ is the stress applied to the material and ɛ is the strain induced in the material. Stiffness (E) is expressed in kilopascals (kPa) and represents the resistance of a material to deformation. While stiff materials, such as concrete, exhibit low strain even at high stress, soft materials such as biological soft tissues exhibit large strain even at low stress.

LS, like any other soft tissue stiffness, depends on many factors. The first and main factor is the extracellular matrix of the organ. The extracellular matrix is a deformable structure that transfers the external forces through the liver. It can be compared to the foundation of a building. A second factor is the constraints that are applied to the organ: the more pressure is applied to the liver at its boundaries, the stiffer it gets. A third factor is the internal pressure inside the organ -- if blood or another liquid flows in and out, then stiffness will depend on the resistance that the organ applies to the flow. A fourth and important factor is viscous effects, which influence the time constant over which stiffness is tested. This effect is linked to frequency, i.e., stiffness depends on frequency. While the liver is soft at very low frequencies (on the order of several hertz), which correspond to the time constant of manual palpation, it tends to be much harder at high frequencies (over several tens of kilohertz).

Measurement of liver stiffness using transient elastography (FibroScan®)
------------------------------------------------------------------------

The FibroScan® (FS) (Echosens, Paris, France) device is the first elastography technique developed to quantitatively and noninvasively assess soft biological tissue stiffness *in vivo*. The liver was a natural first organ to study due to its size and rather homogeneous texture.[@b8-hmer-2-049] In principle, shear waves are generated through the liver and LS is deduced from their velocity.
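As a toy illustration of that last step, the following sketch converts a shear wave speed into a stiffness value using the relation E = 3ρV*~s~*^2^ that is spelled out in the next paragraph. It is purely our own sketch: the soft-tissue density of about 1000 kg/m³ and the example speeds are assumptions chosen only to give round numbers.

```python
def stiffness_kpa(shear_speed_m_s, density_kg_m3=1000.0):
    """Young's modulus E = 3 * rho * Vs^2, returned in kPa.

    The density (~1000 kg/m3) is an assumed soft-tissue value, not a
    number taken from this review.
    """
    return 3.0 * density_kg_m3 * shear_speed_m_s ** 2 / 1000.0

# Assumed, illustrative shear wave speeds (m/s):
for vs in (1.0, 1.3, 2.0, 3.0):
    print(f"Vs = {vs:.1f} m/s  ->  E = {stiffness_kpa(vs):4.1f} kPa")
```

With these assumptions, speeds of roughly 1 to 3 m/s already span the range from a few kPa up to several tens of kPa, consistent with the 1.5 to 75.0 kPa measurement window mentioned below.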
FS uses the technique called transient elastography (TE) or vibration-controlled transient elastography (VCTE™). It is based on the controlled generation of a transient shear wave using a servo-controlled vibration of known frequency and amplitude. LS is computed from the velocity of these mechanical waves using the following equation:

E = 3ρV*~s~*^2^

where E is the Young's modulus or stiffness, ρ is the density, and V*~s~* the shear velocity. The shear velocity measured by VCTE™ is a group velocity around 50 Hz. The minimum and maximum stiffness values that can be measured by FS are 1.5 kPa and 75.0 kPa, respectively.

Technically, FS consists of a dedicated acquisition platform that includes a single-channel ultrasound analog front end to emit and receive ultrasound signals, and a servo-controlled vibrator for the shear wave generation. The probe itself contains a sophisticated vibrator on the axis of which a single-element ultrasound transducer is mounted. As shown in [Figure 2](#f2-hmer-2-049){ref-type="fig"}, the vibration consists of a single sinusoid period with a center frequency of 50 Hz. Its amplitude depends on the probe model: 2 mm peak-to-peak (PP) with the standard probe (model M), 1 mm PP with the pediatric probe (model S), and 3 mm PP with the probe dedicated to obese patients (model XL). The shear wave propagation is monitored using ultrafast ultrasound acquisitions.

In the standard examination procedure, LS measurements using FS are performed on the right lobe of the liver in intercostal position (see [Figure 3](#f3-hmer-2-049){ref-type="fig"}). This prevents direct compression of the liver, which could otherwise affect LS values. The patient lies on his back with the right arm behind the head in order to enlarge the intercostal space as much as possible. The operator uses ultrasound M-mode and A-mode images ([Figures 4A](#f4-hmer-2-049){ref-type="fig"} and [B](#f4-hmer-2-049){ref-type="fig"}) to locate the liver, and triggers the measurement by pushing the probe button. The shear wave can be observed on the elastogram image ([Figure 4C](#f4-hmer-2-049){ref-type="fig"}), which represents the strains induced in the liver as a function of time and depth. It is computed from ultrasound data acquired at a very high frame rate during the shear wave propagation, which lasts 80 ms.

Measurement of liver stiffness using other elastographic techniques and normal stiffness values
-----------------------------------------------------------------------------------------------

Although FS has been the first noninvasive elastographic technique in practical use to assess LS, other competing technical approaches have been developed. They are currently undergoing cross-validation, and it is still too early for a final statement (see [Table 1](#t1-hmer-2-049){ref-type="table"}). Magnetic resonance elastography (MRE) was introduced in 1995 by Muthupillai[@b9-hmer-2-049] and is now commercially available as MR-Touch (General Electric). Rouviere et al[@b10-hmer-2-049] measured liver shear stiffness in healthy volunteers and in patients with liver fibrosis. The shear stiffness μ can be deduced from Young's modulus E (as measured by FS) using the simple relationship μ = E/3. Klatt et al measured the shear elastic modulus in 12 healthy volunteers and two patients.[@b11-hmer-2-049] Results obtained on volunteers are close to 6 kPa when converted to Young's modulus. MRE looks very promising.
It seems to have a smaller standard deviation and, naturally, offers the combination of magnetic resonance imaging and elastography in one setting for different organs. However, it is expensive and time consuming, certainly not a bedside procedure, and cannot be used in the setting of metal implants. FS has been directly cross-validated with MRE using artificial phantoms, with an excellent correlation of r = 0.96.[@b12-hmer-2-049],[@b13-hmer-2-049] A linear correlation between LS and fibrosis stage has been observed in animal fibrosis models using MRE.[@b14-hmer-2-049]

In addition to FS, various ultrasound-scanner-compatible elastography procedures are currently being evaluated. FS should not be confused with conventional static elastography, which is now integrated into many ultrasound devices. The first system based on static elastography was real-time elastography (HI-RTE). It allows a visualization of relative stiffness within a B-mode ultrasound image using a red and blue color map. However, HI-RTE does not allow the quantitative measurement of stiffness values and, hence, pilot studies did not show a satisfactory correlation with fibrosis score as compared to FS.^15^ More recently, several techniques[@b16-hmer-2-049]--[@b18-hmer-2-049] based on radiation force[@b19-hmer-2-049] have been proposed for LS measurement. These techniques use high-intensity ultrasound beams to induce displacements inside the liver remotely. Acoustic Radiation Force Impulse (ARFI) with Virtual Touch™ tissue quantification has been introduced by Siemens (Germany). First ARFI-based results have been presented at international meetings in cross-validation with FS. Reasonable areas under receiver operating characteristic curves (AUROCs) of \>0.86 for F3-4 fibrosis have been presented for various diseases, with excellent interobserver agreement of 0.98[@b20-hmer-2-049],[@b21-hmer-2-049] and a good correlation with FS of r = 0.65.[@b22-hmer-2-049] In contrast to FS, ascites does not impose a limitation on ARFI. However, up to now, FS seems to outscore ARFI in the identification of F2-4 fibrosis stages with regard to diagnostic accuracy.[@b21-hmer-2-049]

Since the physiological determinants of LS are not completely understood and the detection methods vary considerably, it is still debated how to define normal LS values. In a recent study, we could demonstrate that simple breathing maneuvers such as the Valsalva maneuver, or position changes such as moving from a lying to a standing position, can dramatically increase LS, either permanently or temporarily, up to the upper detection limit of 75 kPa.[@b23-hmer-2-049] This study could also demonstrate that a horizontal position with normal breathing yields the lowest and most reproducible LS values. In our experience, LS of \<6 kPa can be considered normal.[@b23-hmer-2-049] Confirmation has come from a large screening study of 1067 blood donors with a median LS of 4.4 kPa (95th centile 6.7).[@b24-hmer-2-049] [Tables 1](#t1-hmer-2-049){ref-type="table"} and [2](#t2-hmer-2-049){ref-type="table"} give an overview of recently reported stiffness values for liver and other organs obtained with different techniques under normal and pathological conditions.

Liver stiffness assessment by FibroScan® -- practical experience
----------------------------------------------------------------

The major success of FS in measuring LS can be mainly explained by its true bedside-test character: a measurement can be performed within 5--10 minutes.
After rapid training, FS provides a reasonable performance for the diagnosis of cirrhosis that is not influenced substantially by any other feature.[@b25-hmer-2-049] FS has excellent interobserver agreement, especially in the absence of elevated transaminases,[@b26-hmer-2-049] and a fast learning curve.[@b27-hmer-2-049] In addition, no significant difference in LS values has been found whether they were obtained from the fifth, sixth or seventh intercostal space.[@b28-hmer-2-049] Thus, in general, FS measurements can be routinely performed in more than 95% of patients. Major limitations are severe obesity and ascites, which directly weaken the ultrasound signal.[@b29-hmer-2-049] In some patient groups, such as patients with decompensated cardiac insufficiency, the success rate of FS can drop to ca. 50%.[@b30-hmer-2-049] However, with the development of the novel XL probe, these obstacles could be largely overcome. In our own preliminary experience, we found that the XL probe could measure LS in 70% of patients in whom the normal M probe was not applicable. Moreover, the XL probe could not only be successfully applied to severely obese patients but also to patients with ascites and to lean patients with ultrasound-diffracting subcutaneous fat tissue. It will be interesting to learn in the future why some nonobese people are difficult to measure by FS.

Potential artificial results obtained by FibroScan®
---------------------------------------------------

Shear wave propagation in soft biological tissues can be very complex. Thus, LS is calculated as the median of 5 to 10 valid measurements. Outliers are removed, and the interquartile range is provided as a means to check the quality of the measurement. Furthermore, FS implements special algorithms to automatically reject incorrect measurements, which are ranked invalid and are thus not included in the stiffness calculation. However, some caution must be taken, especially with probe perpendicularity and rib cage intercostal spaces. First, it is important that the probe is placed perpendicular to the skin surface when measuring LS to prevent overestimation, which could happen if the shear wave propagation is misaligned with the ultrasound beam. Second, the probe model should be adapted to the patient's morphology so that the ribs do not contribute to the shear wave generation; this would affect the measurement quality by inducing secondary shear waves. Although diffraction effects by ribs are rare, they may lead to confusion and misinterpretation. Interestingly, shear waves do not propagate through liquids because they are elastic waves; only pressure waves, such as those used by ARFI, can propagate through liquids. For this reason, patients with ascites may not be measurable with FS as long as no physical contact exists between the liver and the intercostal wall.
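As a rough sketch of how such a session could be reduced to the reported numbers described above (median of the valid shots, interquartile range as a quality check), the following Python snippet uses hypothetical shot values. The "reliability" rule encoded here (at least 10 valid shots and IQR/median of 30% or less) is a commonly quoted convention and an assumption on our part, not a requirement stated in this review.

```python
from statistics import median, quantiles

def summarize_session(valid_shots_kpa, attempted_shots=10):
    """Median stiffness, IQR and success rate for one FibroScan-style
    session. All numbers and thresholds here are illustrative only."""
    ls = median(valid_shots_kpa)
    q1, _, q3 = quantiles(valid_shots_kpa, n=4)
    iqr = q3 - q1
    return {
        "median_kpa": round(ls, 1),
        "iqr_kpa": round(iqr, 1),
        "iqr_over_median": round(iqr / ls, 2),
        "success_rate": len(valid_shots_kpa) / attempted_shots,
        # Commonly quoted (assumed) reliability rule, not from the text:
        "reliable": len(valid_shots_kpa) >= 10 and iqr / ls <= 0.30,
    }

# Hypothetical valid measurements from a single session (kPa):
shots = [6.8, 7.4, 7.1, 8.0, 6.5, 7.7, 7.2, 6.9, 7.5, 7.3]
print(summarize_session(shots))
```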
(Patho)physiology of liver stiffness ==================================== Liver stiffness as surrogate marker of fibrosis stage ----------------------------------------------------- LS has mainly been studied in patients with viral hepatitis B and C (HBV and HCV),[@b8-hmer-2-049],[@b25-hmer-2-049],[@b31-hmer-2-049]--[@b37-hmer-2-049] and to a lesser extent in alcoholic liver disease (ALD)[@b38-hmer-2-049]--[@b40-hmer-2-049] and primary biliary cirrhosis (PBC)/primary sclerosing cholangitis (PSC).[@b41-hmer-2-049]--[@b43-hmer-2-049] In contrast, only random and preliminary reports exist on autoimmune hepatitis[@b44-hmer-2-049],[@b45-hmer-2-049] and nonalcoholic liver disease (NALD).[@b46-hmer-2-049],[@b47-hmer-2-049][Table 3](#t3-hmer-2-049){ref-type="table"} shows the performance of LS to assess fibrosis stages F3 and F4 for various diseases (selected studies). [Table 4](#t4-hmer-2-049){ref-type="table"} compares normal and fibrotic stiffness values obtained by different methods. The major experience of these studies can be summarized as follows: a. LS correlates well with fibrosis stage typically with an r \> 0.7 and *P* \< 0.005. b. Advanced fibrosis stage F3 and cirrhosis (F4) are identified via LS with high accuracy (AUROC \> 0.9). This is mainly due to the so called bridging fibrosis (the continuous formation of collagen septa between liver lobuli) that are characteristic for these fibrosis stages. In contrast, fibrosis stages F1 and F2 only mildly increase LS. Therefore, these fibrosis stages are not well discriminated via the measurement of LS. c. Cut-off values have been defined that allow the diagnosis of advanced fibrosis (F3/F4). Despite some variability, cut-off values of 8.0 and 12.5 kPa are widely accepted to identify patients with F3 and F4 fibrosis, respectively ([Figure 5](#f5-hmer-2-049){ref-type="fig"}). It has also become rapidly clear that cut-off values differ between various chronic liver diseases, being tentatively higher in disease with pronounced inflammation or cholestasis such as ALD, PSC or PBC. This is one reason to ask for studies with well defined and homogenous patient populations. Potential causes for varying cut-off values will be discussed below. Fibrosis assessment by liver stiffness and comparison with other noninvasive fibrosis markers/techniques -------------------------------------------------------------------------------------------------------- ### Imaging techniques Since abdominal ultrasound is routinely and rapidly performed in liver patients, a few studies have naturally asked the questions whether LS provides additional information with regard to fibrosis. In comparison to FS, ultrasound is a subjective examination that largely depends upon the experience of the examiner. It is not always clear that only a few ultrasound signs such as nodular aspects of the liver surface, or vascular collaterals are so called sure ultrasound signs of liver cirrhosis (but not splenomegaly or ascites). In an actual larger study on 320 patients with various liver disease, the diagnostic accuracy of LS was significantly superior to ultrasound.[@b48-hmer-2-049] In our own experience, FS recognized generally more than twice of patients with F3/4 fibrosis compared to ultrasound.[@b49-hmer-2-049] This means in numbers, that more than 20 patients with F3/4 fibrosis were not recognized by routine ultrasound, while FS identified almost all 45 patients. 
It should be pointed out that these are results for a typical clinical routine ultrasound performed within 15--20 min; the accuracy of ultrasound can certainly be increased by a more meticulous and time consuming procedure. However, the time-intensive ultrasound is still subjective and can typically not be performed during the daily practice in most regular hospitals and outpatient departments. Therefore, as a rule of thumb, the rapid 5--10 min FS recognizes ca. Twice as many patients with advanced fibrosis as compared to the routine ultrasound. ### Serum markers Although serum markers that are used within scores such as the Fibrotest, APRI score, etc, are widely explored and have been also cross-validated with FS,[@b22-hmer-2-049],[@b35-hmer-2-049],[@b50-hmer-2-049]--[@b54-hmer-2-049] the authors, up to now, do not generally recommend their use and FS seems to outscore all of these tests. However, we admit, as will be discussed below, that a combination and a refined algorithm using elastography, serum markers, and imaging techniques may optimize a cost-efficient screening for liver fibrosis in certain settings or spare patients from invasive histology.[@b55-hmer-2-049] The major problem is that serum markers reflect the profibrogenic or profibrolytic activity, but do not yield any information about the net deposition of matrix in the liver which are not necessarily correlated to each other. Other factors that increase liver stiffness ------------------------------------------- It has been rapidly learnt that LS is also increased by other confounding factors such as hepatitis, mechanic cholestasis, liver congestion, cellular infiltrations, and deposition of amyloid irrespective of fibrosis stage (see [Figures 5](#f5-hmer-2-049){ref-type="fig"} and [6](#f6-hmer-2-049){ref-type="fig"}). These important interferences will now be discussed in more detail. It should be mentioned that steatosis does not increase LS[@b40-hmer-2-049],[@b56-hmer-2-049] although it is often regarded as an essential initial state in chronic liver disease. Rather, steatosis may slightly decrease LS. ### Inflammation (hepatitis) LS can be dramatically increased during laboratory signs of hepatitis[@b50-hmer-2-049],[@b57-hmer-2-049],[@b58-hmer-2-049] independent of the degree of fibrosis. These conditions may increase LS to a degree that would otherwise suggest advanced liver cirrhosis (ie, stiffness values of 12.5 kPa and above). In our recent studies on patients with ALD undergoing alcohol detoxification, LS was initially increased up to 50 kPa but could decrease within 1 week by 30 kPa.[@b40-hmer-2-049] In HCV patients with biochemical remission (either spontaneous or after antiviral therapy), LS was lower than in patients with identical fibrosis stage, but elevated alanine transaminase (ALT). The LS dynamic profiles paralleled those of ALT, increasing 1.3- to 3-fold during ALT flares in patients with hepatitis exacerbations.[@b50-hmer-2-049] In patients with HBV infection, fibrosis assessment was unreliable if serum transaminases were higher than twice of normal values.[@b59-hmer-2-049] In our experience, ongoing biochemical activity of liver disease in form of increased transaminases leads to an overestimation of fibrosis stage, since hepatitis per se increases LS, irrespective of fibrosis. What are the underlying factors leading to increased LS in these patients? In our sequential FS study on 50 patients with ALD undergoing alcohol detoxification we could show the following phenomena:[@b40-hmer-2-049] a. 
All transaminase levels decreased during alcohol detoxification, and almost all LS values decreased during the observation interval. b. The higher the decrease in transaminases was, the higher was the decrease of LS. c. Excluding patients with significant ongoing biochemical activity of hepatitis from fibrosis assessment by FS significantly improved AUROC for F3/4 fibrosis. d. Additional histological information on inflammation did not further improve the diagnostic accuracy. This study thus shows that, at least in patients with ALD, serum transaminases truly reflect the degree of hepatitis and that the inflammation is a critical factor determining LS. In our patient population, the decrease of aspartate aminotransferase (AST) correlated better with the decrease of LS as compared to ALT. It is interesting to learn that in HCV infected patients similar observations have been made. Here, AST was found to be the unique variable significantly related (*P* = 0.046) with discordance between biopsy and LS.[@b60-hmer-2-049] Subanalysis of histological scores with LS values was also very revealing. Here, necrosis, hepatocyte swelling and the degree of inflammation correlated with LS but not steatosis. This has been partly confirmed in a recent study on patients with nonalcoholic fatty liver disease (NAFLD).[@b47-hmer-2-049] We conclude from our study that patients with an AST \> 100 U/L lead to an overestimation of fibrosis stage. These patients should be first detoxified from alcohol and LS should be obtained after normalization. A refined algorithm will be discussed below. ### Cholestasis In a recent study on 15 patients with mechanic cholestasis due to tumor obstruction (pancreas carcinoma, Klatskin tumor, liver metastases, and gastrointestinal stromal tumor \[GIST\]) or choledocholithiasis, we could demonstrate that mechanical cholestasis per se can drastically and reversibly increase LS.[@b61-hmer-2-049] LS correlated significantly with a decrease in bilirubin, but not with gamma-glutamyl transpeptidase (GGT), alkaline phosphatase (AP), AST, or ALT. We further confirmed the direct relation between LS and choletasis in bile duct ligation experiments on landrace pigs. The bile duct ligation over 120 min led to a significant swelling of the liver and a tightly palpable gall bladder. LS values doubled during bile duct ligation and reached values suggesting F3 fibrosis. After removal of the bile duct ligation and a recovery period of 30 min, LS values returned to almost normal values around 6.1 kPa. The reasons underlying the high stiffness in cholestasis are unknown but could be related to tissue swelling, edema and increased intracellular pressure due to impaired bile flow. In addition, cholestasis might be a general phenomenon leading to increased LS in various chronic liver diseases as intrahepatic cholestasis has been shown to correlate strongly with LS in patients with acute hepatitis[@b58-hmer-2-049] but also ALD.[@b40-hmer-2-049] ### Liver congestion and venous pressure Random observation had suggested earlier that FS is unreliable in patients with liver congestion, for example due to cardiac insufficiency. We could recently demonstrate that the central venous pressure directly controls LS in a reversible manner.[@b30-hmer-2-049] Over a wide range, LS is a linear function of intravenous pressure reaching the upper detection limit of 75 kPa at an intravenous pressure of 36 cm water column. 
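A rough back-of-the-envelope reading of that linear dependence (our arithmetic; the baseline of about 6 kPa at low venous pressure is taken from the normal values quoted earlier, not from the pressure study itself): rising from roughly 6 kPa to the 75 kPa ceiling over 36 cm of water column implies a slope on the order of (75 - 6)/36 ≈ 1.9 kPa per cm water column, so even moderate congestion could carry a patient past the 8 and 12.5 kPa fibrosis cut-offs without any change in matrix deposition.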
We eventually showed in 10 patients with decompensated congestive heart failure that LS is dramatically elevated under such pathological conditions and rapidly decreases during clinical recompensation due to diuretic therapy. Since fibrosis state cannot change within such a short period of time, these findings further underline the direct dependence of LS on venous pressure. The majority of patients with decompensated cardiac failure had initial LS far above the cut-off value of 12.5 kPa which is generally accepted for the diagnosis of F4 fibrosis, reaching up to 51.3 kPa. Although LS decreased in all patients during therapy with diuretics it only fell below 12.5 kPa in two of them while seven remained in the range of F4 fibrosis. Older age as a reason for increased LS can be excluded as a recent study by Sirli and colleagues showed.[@b62-hmer-2-049] Thus, increased LS could be due to the onset of cardiac fibrosis in these cases, and fibrosis assessment by FS will be especially challenging in patients with cardiac insufficiency since both fibrosis and venous pressure increase LS. It also remains questionable in this context whether recently reported increased LS in patients with failing Fontan circulation was indeed due to cardiac liver fibrosis,[@b63-hmer-2-049] or just elevated central venous pressure since no sequential LS measurements were performed. On a special note, LS may become a useful noninvasive tool for screening cardiac patients and identifying those that are at risk of cardiac cirrhosis since increased venous pressure (but not abnormal liver function tests) has been recognized as major risk factor of cardiac fibrosis.[@b64-hmer-2-049] ### Liver infiltration, deposits, rare diseases It is a daily experience of surgeons that hepatic tumor infiltration increases LS. Therefore, focal or nodular masses within the liver should be excluded by ultrasound prior to FS. However, since not all hepatic masses can be detected by ultrasound, one should be aware of such potential misinterpretations of LS measurements. A typical finding during LS measurements in, for example, a metastatic liver, are extremely variable stiffness values that clearly depend on position changing of the probe.[@b61-hmer-2-049] However, also rare and less visible infiltration with mast cells can also lead to dramatically increased LS.[@b23-hmer-2-049] We recently reported on a patient with systemic mastocytosis showing an LS of 75 kPa (upper detection limit). The patient had otherwise suspicious signs of liver cirrhosis (splenomegaly, ascites, varices). However, liver synthesis was normal and the differential blood count showed an increased number of mast cells. Diagnosis was ultimately confirmed by liver biopsy. An important noncancerous differential diagnosis of increased LS is amyloidosis. Increased LS due to amyloid deposits has been demonstrated in animal models (submitted by Sandrin L, et al) and humans with amyloidosis A.[@b65-hmer-2-049], [@b66-hmer-2-049] Interestingly, all these clinical entities showed pronounced hepatomegaly. Liver stiffness and clinical end points ======================================= The ultimate goal of novel medical techniques should be to improve diagnosis or therapy of human disease. Therefore, with regard to LS, we would like to see whether it improves the early recognition of cirrhosis-related complications such as portal hypertension, esophageal varices, primary liver cancer or the response to therapies. 
Liver stiffness and portal hypertension --------------------------------------- Since fibrosis increases the hepatic vascular resistance and ultimately leads to portal hypertension (see [Figure 7](#f7-hmer-2-049){ref-type="fig"}), it was just a matter of time to test whether LS could be used as a diagnostic test for portal hypertensions. Meanwhile, several studies have compared LS directly against invasive hepatic venous pressure gradient (HVPG) or the presence of esophageal varices in adults (0.84--0.86)[@b54-hmer-2-049],[@b67-hmer-2-049]--[@b73-hmer-2-049] and children.[@b70-hmer-2-049] As shown in [Table 5](#t5-hmer-2-049){ref-type="table"}, there is an excellent direct correlation between LS and HVPG (0.84--0.86)[@b67-hmer-2-049]--[@b69-hmer-2-049] with an AUROC for detection of significant HVPG (\>6--12 mm Hg) of 0.92--0.99.[@b67-hmer-2-049]--[@b69-hmer-2-049] A cut-off value of ca. 20 kPa (13.6--34.9 kPa) predicted significant HVPG.[@b67-hmer-2-049]--[@b69-hmer-2-049] Interestingly, lower values were found for HCV (ca. 20 kPa) as compared to ALD (34 kPa). More interestingly, LS correlated with the degree of esophageal varices (r = 0.6, *P* \< 0.0001)[@b71-hmer-2-049] and the AUROC for the prediction of significant varices was 0.71--0.95 with a comparable cut-off of ca. 20 kPa (see [Table 6](#t6-hmer-2-049){ref-type="table"}).[@b54-hmer-2-049],[@b69-hmer-2-049]--[@b73-hmer-2-049] [Figure 8](#f8-hmer-2-049){ref-type="fig"} and [Table 7](#t7-hmer-2-049){ref-type="table"} explain the more complex relation of liver and spleen stiffness with regard to the location of a potential thrombosis in the porto-caval system. This might explain why additional assessment of spleen stiffness could be better to predict portal hypertension and varices.[@b74-hmer-2-049] In addition, cirrhosis develops in post or sinusoidal thrombosis,[@b75-hmer-2-049],[@b76-hmer-2-049] but not in presinosoidal idiopathic portal hypertension (IPH).[@b77-hmer-2-049] Hence, no increased LS can be detected in patients with IPH and this explains why in some patients a normal LS does not exclude portal hypertension and the presence of varices. Indeed, a recent report documented five patients presented with variceal bleeding, two with splenomegaly, and one with ascites. All had large esophageal varices. Median HVPG was 8 mm Hg (range 3.5--14.5), clearly underestimating the true portal pressure due to the presinusoidal component of portal hypertension. Median LS was 8.9 kPa (range 6.8--14.9) and was unreliable in predicting the presence of fibrosis or of esophageal varices.[@b77-hmer-2-049] Liver stiffness and disease follow up ------------------------------------- ### Follow up studies in viral hepatitis C patients Meanwhile, several longitudinal studies have been reported on LS during HCV treatment. Vergniol et al studied 416 patients, of whom 112 started treatment after enrolment. In multivariate analysis, treatment was the only factor independently associated with a fall in LS.[@b78-hmer-2-049] Ogawa et al prospectively studied 145 Japanese patients with chronic HCV infection at baseline, at the end of treatment, and at 48 and 96 weeks after the end of treatment. LS significantly decreased in the groups with sustained virological response and biochemical response but not in the nonresponders.[@b79-hmer-2-049] Andersen et al prospectively studied 114 Japanese patients with chronic HCV median follow up 47--48 months. In this study, LS was significantly lower for patients with sustained viral response (SVR). 
The differences were more pronounced in the F2-F4 fibrosis group.[@b80-hmer-2-049] ### Liver stiffness and alcoholic liver disease follow up We recently performed a sequential FS study in patients with ALD undergoing alcohol detoxicification[@b40-hmer-2-049] to test if inflammation also interferes with LS assessment in ALD, and to provide a clinical algorithm for reliable fibrosis assessment in ALD by FS. We first performed sequential LS analysis before and after normalization of serum transaminases in a learning cohort of 50 patients with ALD admitted for alcohol detoxification. LS decreased in almost all patients within a mean observation interval of 5.3 d. Six patients (12%) would have been misdiagnosed with F3 and F4 fibrosis but LS decreased below critical cut-off values of 8 and 12.5 kPa after normalization of transaminases. Of the serum transaminases, the decrease in LS correlated best with the decrease in glutamic oxaloacetic transaminase (GOT). No significant changes in LS were observed below GOT levels of 100 U/L. After establishing the association between LS and GOT levels, we applied the rule of GOT \< 100 U/L for reliable LS assessment in a second validation cohort of 101 patients with histologically confirmed ALD. By excluding those patients with GOT \> 100 U/L at the time of LS assessment from this cohort, the AUROC for cirrhosis detection by FS improved from 0.921 to 0.945 while specificity increased from 80 to 90% at a sensitivity of 96%. A similar AUROC could be obtained for lower F3 fibrosis stage if LS measurements were restricted to patients with GOT \< 50 U/L. Histological grading of inflammation did not further improve the diagnostic accuracy of LS. In conclusion, coexisting steatohepatitis markedly increases LS in patients with ALD, independent of fibrosis stage. Postponing cirrhosis assessment by FS during alcohol withdrawal until GOT decreases to \<100 U/mL significantly improves the diagnostic accuracy. Liver stiffness and hepatocellular carcinoma -------------------------------------------- Some studies have tested whether LS allows the prediction of HCC risk since cirrhosis is an independent risk factor of HCC. Foucher et al reported a cut off values for the presence of HCC of 53.7.[@b81-hmer-2-049] Several studies have now looked in more detail into the relation of HCC and LS.[@b54-hmer-2-049],[@b72-hmer-2-049],[@b73-hmer-2-049],[@b82-hmer-2-049]--[@b84-hmer-2-049] As can be seen from [Table 8](#t8-hmer-2-049){ref-type="table"}, an LS of \>20 kPa drastically increases the risk for HCC. Not by coincidence, this cut-off value is almost identical with the cut-of value for esophageal varices and significant portal hypertension. Liver stiffness and surgery --------------------------- ### Liver stiffness and liver transplant Risk stratification of patients on the liver transplant waiting list is still an unresolved challenge, but the limited organ supply asks for more quantitative risk assessment strategies. LS could be a supplemental quantitative method since it recognizes pathological states of the liver that could all worsen the outcome such as fibrosis, inflammation, venous pressure, cholestasis, or portal hypertension. In a post-transplant study on patients infected with HCV, median LS at months 6, 9, and 12 were significantly higher in rapid fibrosers as compared to slow fibrosers. 
The slope of LS progression in rapid fibrosers was significantly greater than in slow fibrosers, suggesting two different speeds of liver fibrosis progression.[@b85-hmer-2-049] Multivariate analysis identified donor age, bilirubin level, and LS as independent predictors of fibrosis progression and portal hypertension in the estimation group.[@b85-hmer-2-049] Another study suggested that TE is a reliable tool to assess liver fibrosis in patients with recurrent HCV after living donor liver transplantation.[@b86-hmer-2-049] ### Liver stiffness and hepatectomy Tactile stiffness sensors have been evaluated in the pre-FS era with success in patients with partial hepatectomy to predict the sufficient remain liver mass.[@b87-hmer-2-049]--[@b89-hmer-2-049] It remains open whether FS will add to the evaluation of critical liver mass especially in fibrotic patients prior to partial hepatectomy. Present algorithm to diagnose liver disease via liver stiffness =============================================================== Various algorithms have been presented mainly for viral hepatitis to use LS in combination with blood tests to improve the noninvasive diagnosis of liver fibrosis or to spare at least some patients from the invasive liver biopsy.[@b55-hmer-2-049],[@b90-hmer-2-049] Given the many interfering factors that modulate LS, however, we are somewhat skeptical about using such approaches. Such statistical approaches aim to automate a complex diagnostic decision procedure. At the end, a unique patient requires an individual differential diagnosis, and a careful balance of the various risks has to be kept. Just to mention one example, many patients with viral hepatitis do have additional liver diseases such as alcoholic liver disease or suboptimal dietary condition. These are all factors that can dramatically worsen the outcome of chronic hepatitis in a synergistic manner.[@b91-hmer-2-049] With this regard, at least to us, it is more useful to view LS as a novel physical parameter such as, for example, body temperature -- which can be objectively measured and should then be interpreted in the full clinical context. We propose this more open and critical procedure since misinterpretations or biases can rapidly harm the patient and delay other important diagnostic or therapeutic measures. A general actual scheme for the interpretation of LS is shown in [Figure 9](#f9-hmer-2-049){ref-type="fig"}. Although, the definition of normal stiffness values are still under discussion and need to be defined for various populations with regard to age, gender, or other factors, recent populations of healthy blood donors or the influence of position changes and breath maneuvers suggest an LS \< 6 kPa as normal.[@b23-hmer-2-049],[@b24-hmer-2-049] Moreover, at least in our experience, an LS \< 6 kPa seems to exclude any manifest liver disease since all potential confounding factors such as inflammation, cholestasis or congestion increase LS. LS measurements are therefore an ideal screening tool to exclude any severe ongoing liver disease. Of course, one should be aware that other pathological conditions such as fatty liver or even terminal liver failure do not increase LS further, or may even decrease LS, but these conditions are easily discernible within the clinical context. If LS is higher than 6 kPa an ultrasound is required to exclude mechanic cholestasis,[@b61-hmer-2-049] liver congestion[@b30-hmer-2-049] or nodular masses. 
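Purely as an illustration of the screening logic outlined so far (normal below about 6 kPa, the widely accepted 8 and 12.5 kPa cut-offs for F3 and F4, and ultrasound exclusion of confounders above 6 kPa), a minimal Python sketch is given below. The thresholds are the ones quoted in this article, but the function itself is our simplification, and the transaminase qualifications discussed in the following paragraphs are deliberately left out.

def interpret_liver_stiffness(ls_kpa, confounders_excluded_by_ultrasound=False):
    """Toy triage of a liver stiffness value in kPa.

    The cut-offs (6, 8 and 12.5 kPa) are the ones quoted in the article;
    everything else, including the function name and return strings, is an
    illustrative simplification and not a clinical algorithm.
    """
    if ls_kpa < 6.0:
        return "normal LS; manifest liver disease unlikely"
    if not confounders_excluded_by_ultrasound:
        return "elevated LS; first exclude cholestasis, congestion and masses by ultrasound"
    if ls_kpa >= 12.5:
        return "consistent with F4 (cirrhosis), pending the transaminase check"
    if ls_kpa >= 8.0:
        return "consistent with F3 fibrosis, pending the transaminase check"
    return "mildly elevated LS; F1-F2 are not reliably discriminated"

print(interpret_liver_stiffness(4.8))
print(interpret_liver_stiffness(14.2, confounders_excluded_by_ultrasound=True))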
Typically, we obtain the ultrasound before stiffness measurements since other valuable information such as splenomegaly, ascites or signs of liver disease can be detected. In addition, the location of an optimal stiffness measurement is identified. Thus, it becomes rapidly clear that a valid interpretation of LS is only possible in association with a qualified abdominal ultrasound. If ultrasound does not reveal any of the stiffness-modulating factors above, serum transaminases should be obtained. If the serum transaminases are normal, LS can be directly used to quantitate the degree of fibrosis. If the serum transaminases, mainly AST, are below 100 U/L, the diagnosis of F4 fibrosis is highly accurate while F3 fibrosis should be viewed with caution. At AST levels higher than 100 U/L, an accurate determination of fibrosis stage is not possible. It should be mentioned that these transaminase cut-off values have been obtained for patients with ALD[@b40-hmer-2-049] and future studies are required to determine the conditions for other liver diseases. The context-related interpretation of LS is more difficult in the case of several stiffness-related factors such as inflammation/fibrosis or liver congestion/cardiac cirrhosis. However, under certain conditions, a decision is still possible. For instance, in the case of ALD, the diagnosis of F4 cirrhosis can be made at LS \> 24 kPa despite ongoing severe alcoholic steatohepatitis.[@b40-hmer-2-049] Such upper cut-off values need to be confirmed and defined for all other liver diseases in larger populations (see [Figure 9](#f9-hmer-2-049){ref-type="fig"}). In addition, if possible, therapeutic interventions may help to more accurately differentiate fibrosis stage from other LS-increasing confounding factors. Thus, if liver congestion in a patient with congestive heart failure can be clearly cured by therapy with diuretics (as confirmed by ultrasound and blood tests), an increased but stable LS could directly be used to quantitate fibrosis stage. Under certain circumstances it is possible to estimate the contribution of venous pressure, mechanic cholestasis and inflammation (hepatitis). [Figure 10](#f10-hmer-2-049){ref-type="fig"} gives typical empirical values for stiffness changes as obtained from previous reports.[@b30-hmer-2-049],[@b40-hmer-2-049],[@b61-hmer-2-049] Thus, during mechanic cholestasis by gallstones, an increase of bilirubin by 1 mg/dl will cause a medium increase in LS by ca. 1 kPa. Liver stiffness as molecular mechanism of liver fibrosis ======================================================== The molecular mechanisms of liver fibrosis are poorly understood despite extensive research activities over many decades.[@b92-hmer-2-049]--[@b94-hmer-2-049] Consequently, no targeted treatment options exist to directly prevent progression of matrix deposition. It is intriguing that all chronic liver diseases eventually lead to liver cirrhosis and the sequence of steatosis, steatohepatitis and fibrosis/cirrhosis is generally accepted as causative. However, it is not known which of the intermediated steps are just bystanders or obligatory. In fact, most, if not all, liver diseases show various forms of inflammation and steatosis. It is also notable that in most scenarios, eg, ALD or HCV, only a minority of patients (ca. 
15%) progress to cirrhosis.[@b91-hmer-2-049] This generates some optimism that there are genetic or environmental causes that determine fibrosis progression and that fibrosis progression is not an essential and constitutional process. This optimism is further nourished by the established knowledge that early causative treatment of liver diseases not only stops fibrosis progression but can even introduce the complete reversal of fibrosis. Unfortunately, the conditions that define the "points of no return" are not known. LS and its direct relation to pressure[@b30-hmer-2-049],[@b61-hmer-2-049] may serve as an eye-opener for mechanical stretch as a longtime neglected potential stimulus of matrix deposition. It is indeed fascinating to see that all possible conditions of liver cirrhosis increase LS, and that these conditions are not always related to inflammation (which is typically regarded as a common road to liver fibrosis of all liver diseases). Thus, mechanical stop of bile flow or hepatic vein blood flow dramatically increase LS, and both conditions are known to cause cirrhosis. Both conditions increase hydrostatic pressure in distinct compartments and ultimately lead to specific cirrhosis patterns (cardiac cirrhosis, biliary fibrosis). Although both conditions may also lead to remarkable signs of inflammation or hepatocellular necrosis, they are typically not as pronounced as compared to inflammatory liver diseases such as ALD or viral hepatitis. On the other hand, it has become clear that inflammatory conditions increase LS irrespective of fibrosis.[@b57-hmer-2-049],[@b58-hmer-2-049] This is not a surprise since "tumor" (swelling) has been known since the ancient times as a classical sign of inflammation besides "calor" (heat), *functio laesa* and "rubor" (reddening). It is, however, undisputable that inflammation-caused tissue swelling regardless of its multifactorial cause, is also caused by pressure that is more related to osmotic pressure. Thus, in fact, all conditions that ultimately lead to cirrhosis cause increased LS, and this increased LS is initially related to increased pressure of various origins and in various compartments. It is very obvious that matrix and connective tissue are in balance with various kinds of pressures. These observations and thoughts yield to the following new paradigm that we would like to call pressure-stiffness-fibrosis sequence hypothesis (see [Figure 11](#f11-hmer-2-049){ref-type="fig"}): during chronic liver diseases, the accumulation of interstitial liquid and inflammatory infiltrate yield to an increase of local stress and stretch of blood vessels or bile ducts. Therefore, increased mechanical stretch would stimulate the production of collagen (fibrotic tissue) which would result in a permanent stiffness increase as if the liver was adapting its structure to mechanical conditions. Interestingly, increased LS values related to fibrotic tissue could be a long-term consequence of a short-term stiffness increase due to the inflammatory episode related to the chronic liver diseases. Portal hypertension would then be the consequence of increased vascular resistance either caused by inflammation or matrix-related increase of stiffness. Indeed, an increased rate of esophageal variceal bleeding is observed in patients with ALD in the phase of fulminant alcoholic steatohepatitis and in the absence of end stage cirrhosis, and these patients are known to reach high but reversible LS values. 
Some very recent molecular findings may support the pressure-matrix-stiffness sequence hypothesis. Thus, mechanical stretch induces transforming growth factor (TGF)-β synthesis in hepatic stellate cells, which is known to be highly expressed under profibrogenic conditions.[@b95-hmer-2-049] From animal experiments it was recently concluded that increases in LS precede fibrosis and potentially myofibroblast activation.[@b96-hmer-2-049] Thus, matrix stiffness could be a major denominator of the equilibrium of matrix-bound growth factors.[@b97-hmer-2-049],[@b98-hmer-2-049] These findings point to a regulatory interlink between physical forces of gravity, hemodynamic stress, and movement in tissue development that are still a poorly understood area of research.[@b99-hmer-2-049] Intercellular mechanical coupling of stress fibres via adherens junctions, intracellular calcium oscillations, and mechanosensitive ion channels have been discussed to control cell-dense tissue by coordinating the activity of myofibroblasts.[@b100-hmer-2-049] The pressure-matrix-stiffness hypothesis would also encourage a more in-depth look into the regulation of cell volume[@b101-hmer-2-049],[@b102-hmer-2-049] and aquaporin regulation.[@b103-hmer-2-049] In addition, also a relation to vasoactive hormones such as natriuretic peptides seems to be attractive which are increased in all patients with edematous disorders which lead to an increase in atrial tension or central blood volume, such as renal failure or liver cirrhosis with ascites.[@b104-hmer-2-049] Indeed, continuous intravenous infusion of atrial natriuretic peptide prevented liver fibrosis in rat.[@b105-hmer-2-049] Liver stiffness and future perspectives ======================================= The noninvasive ability to measure LS has opened a new realm for both the diagnosis but also the molecular understanding of liver fibrosis. We will observe a rapid technical improvement of ultrasound and MRI-based elastography techniques. In addition, stiffness measurements of other organs such as spleen, pancreas or kidney will be possible. Hopefully, miniaturization will open stiffness measurements via endoscopic procedures. Modified technologies such as FS will be able to quantitate the degree for liver steatosis. Thus, a novel physical parameter has been developed to quantify hepatic steatosis. This VCTE-based ultrasonic attenuation is called 'CAP', for 'controlled attenuation parameter' and demonstrates good performance for diagnosis of fatty infiltration in more than 10% of hepatocytes.[@b106-hmer-2-049] With regards to LS, upcoming studies have to clarify the following open questions: - Can we identify a direct quantitative relation between type and histological localization of hepatitis, serum transaminases and LS? - What is the diagnostic value of LS in more complex clinical settings, eg, a patient with combined alcoholic liver fibrosis, steatohepatitis, and cardiomyopathy? - Could LS be part of prognostic scores for patients on the liver transplantation waiting list? - What other factors or rare diseases increase LS? - Could we use LS as a novel parameter to measure venous pressure in the context of intensive care settings or cardiology? - How valuable is LS in the neonatal screening for inborn liver diseases? - What are the gender and age specific normal stiffness values? - What are the population-wide prevalence rates of inceased LS and fibrosis? 
The area of LS will booster many basic research activities, and novel miniaturized equipment is urgently required that will allow LS measurements on small animals such as mice. These are some of the questions that need to be addressed in the future: - What are the genetic and molecular determinants of LS? - What are the kinetics of LS in various fibrosis models? - What are the kinetics of stiffness resolution in these models and is there a point of no return? - Is there a critical cut-off value for stiffness that causes fibrosis? - What is the role of vasoactive hormones, mechanosensing channels, and water channels such as aquaporins on LS and fibrosis? - Are there pharmacological or other therapeutic approaches to modulate LS and treat liver fibrosis? This work was supported by the Dietmar-Hopp foundation and the Manfred-Lautenschläger Foundation. The authors are grateful to Professor Richard Ehman from the Mayo Clinic (Rochester, USA) for the very stimulating discussions. **Disclosures** SM reports no conflict of interest. LS developed VCTE (Fibroscan) and is currently Director of Research and Development at Echosens. ![Invasive and noninvasive methods to determine liver fibrosis hepatic venous pressure gradient.](hmer-2-049Fig1){#f1-hmer-2-049} ![FibroScan® vibration consists of a period with a center frequency of 50 Hz. The standard M probe has a 2 mm peak-to-peak amplitude.](hmer-2-049Fig2){#f2-hmer-2-049} ![Liver stiffness measurements are performed on the right lobe of the liver in intercostal position using FibroScan®.](hmer-2-049Fig3){#f3-hmer-2-049} ![FibroScan® operator uses **A**) A-mode and **B**) M-mode images to locate the liver. The shear wave velocity is deduced from the **C**) elastogram which represents the strains induced in the liver by the shear wave propagation as a function of time and depth.](hmer-2-049Fig4){#f4-hmer-2-049} ![Liver stiffness range caused by matrix deposition (fibrosis) and pressure changes (osmotic, hydrostatic, intra-abdominal).](hmer-2-049Fig5){#f5-hmer-2-049} ![Not only matrix but also pressure-associated conditions influence liver stiffness.](hmer-2-049Fig6){#f6-hmer-2-049} ![Relation of liver stiffness with clinical fibrosis-related entities such as fibrosis stage, portal hypertension and esophageal bleeding.](hmer-2-049Fig7){#f7-hmer-2-049} ![Liver stiffness is increased in post-sinusoidal thrombosis (eg, Budd-Chiari-Syndrome) but not in pre-sinusoidal thrombosis (eg, portal vein thrombosis). Additional measurement of spleen stiffness closes the diagnostic gap.](hmer-2-049Fig8){#f8-hmer-2-049} ![Estimated increase of liver stiffness by various clinical conditions irrespective of fibrosis.\ **Note:** \*alcohol withdrawal.](hmer-2-049Fig9){#f9-hmer-2-049} ![Present diagnostic algorithm of liver stiffness. For details see text.\ \*Arrows indicate cured hepatitis eg, detoxification from alcohol or cure from hepatitis C virus.](hmer-2-049Fig10){#f10-hmer-2-049} ![Pressure-stiffness-matrix sequence hypothesis. Either hydrostatic (venous or bile) or osmotic (eg, inflammation) pressure increases liver stiffness which, in turn, initiates increased matrix deposition via mechanical intercellular signaling. Matrix deposition finally leads to an irreversible increase of liver stiffness that is independent of pressure. These events may ultimately enter a vicious cycle causing end-stage liver disease. 
For more details see text.](hmer-2-049Fig11){#f11-hmer-2-049} ###### Comparison of various techniques to assess liver stiffness Method Product name Vibration mode/source Frequency Advantages Limitations --------------------------------------------- -------------------------- ---------------------- -------------------------------- ---------------- ----------------------------------------------------------------------- ----------------------------------------------------------------- Static elastography Quasi-static compression eg, by Hitachi None Not applicable Widely available in ultrasound scanners Qualitative only Magnetic resonance elastography Shear wave Optima MR450 w 1.5 T Continuous mechanical actuator 50--60 Hz 2D/3D stiffness mapping, frequency controlled vibration, other organs Expensive, metal implants (pace makers, bone implants) Acoustic radiation force impulse Shear wave Acuson S2000 Transient radiation force Ascites, other organs Accuracy, limited clinical data Vibration-controlled transient elastography Shear wave FibroScan® Transient mechanical actuator 50 Hz Largely validated, frequency controlled vibration Sensitive to body habitus (obesity, ascites, bowel interpolate) ###### Stiffness and shear velocity of liver and other organs by various methodological approaches Liver Pancreas Spleen Kidney Ref. ------------------------------------------------ ------------------- ------------------- ---------- ------------------- --------------------------------------------------------------------------- MRE[\*](#tfn1-hmer-2-049){ref-type="table-fn"} \~2.2 kPa (60 Hz) \~2.0 kPa (60 Hz) \~7.3 kPa (90 Hz) [@b107-hmer-2-049] ARFI 1.16--1.59 m/s 1.4 m/s 2.44 m/s 2.24 m/s [@b108-hmer-2-049],[@b46-hmer-2-049],[@b20-hmer-2-049],[@b109-hmer-2-049] VCTE/FS 4--6 kPa (50 Hz) [@b23-hmer-2-049],[@b24-hmer-2-049] **Note:** Young's modulus E (as measured by VCTE/FS) is three times higher than the MRE-measured shear stiffness μ according to the following equation: μ = E/3. **Abbreviations:** MRE, magnetic resonance elastography; ARFI, acoustic radiation force impulse; VCTE, vibration-controlled transient elastography™; FS, FibroScan®. ###### Liver stiffness and fibrosis stages in various liver diseases Disease N Fibrosis-LS correlation AUROC F3 AUROC F4 Cut-off F3 Cut-off F4 Ref. --------- ----- ------------------------- ---------- ---------- ------------ ------------ -------------------- HCV 193 0.9 0.95 9.5 12.5 [@b32-hmer-2-049] HCV 935 0.89 0.91 [@b25-hmer-2-049] HCV/HIV 72 0.48; *P* \< 0.0001 0.91 0.97 11.9 [@b37-hmer-2-049] HBV 202 0.65; *P* \< 0.001 0.93 0.93 11.0 [@b110-hmer-2-049] ALD 103 0.72, *P* \< 0.014 0.9 0.92 11 19.5 [@b38-hmer-2-049] ALD 45 0.97 25.8 [@b39-hmer-2-049] ALD 101 0.72; *P* \< 0.001 0.91 0.92 8 11.5 [@b40-hmer-2-049] NAFLD 246 0.92 0.95 7.9 [@b47-hmer-2-049] PBC/PSC 101 0.84, *P* \< 0.0001 0.95 0.96 9.8 17.3 [@b42-hmer-2-049] PBC 80 0.96 [@b43-hmer-2-049] **Abbreviations:** HCV, hepatitis C virus; HIV, human immunodeficiency virus; HBV, hepatitis B virus; ALD, alcoholic liver disease; NAFLD, nonalcoholic fatty liver disease; LS, liver stiffness; PBC, primary biliary cirrhosis; PSC, primary sclerosing cholangitis; AUROC, areas under receiver operating characteristic curves. ###### Comparison of liver stiffness obtained by various techniques for normal and cirrhotic livers Normal Fibrosis (F3) Cirrhosis (F4) Ref. 
------------------------------------------------ ------------------ ----------------- -------------------- -------------------------------------- MRE[\*](#tfn4-hmer-2-049){ref-type="table-fn"} 2 kPa (90 Hz) 5 kPa (90 Hz) [@b10-hmer-2-049] ARFI 1.5 m/s 1.8 m/s \>1.95 m/s [@b111-hmer-2-049] 2.1--2.3 m/s [@b20-hmer-2-049],[@b109-hmer-2-049] VCTE/FS 2--6 kPa (50 Hz) \>8 kPa (50 Hz) \>12.5 kPa (50 Hz) see above[@b23-hmer-2-049] **Note:** Young's modulus E (as measured by FibroScan/VCTE/FS) is three times higher than the MRE-measured shear stiffness μ according to the following equation: μ = E/3. **Abbreviations:** MRE, magnetic resonance elastography; ARFI, acoustic radiation force impulse; VCTE, vibration-controlled transient elastography™; FS, FibroScan®. ###### Liver stiffness and hepatic venous pressure gradient ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Patients N HVPG vs LS correlation HVPG (mm Hg) AUROC for significant portal hypertension Cut-off for significant portal hypertension Ref. --------------------------- ----- ------------------------ -------------- ------------------------------------------- --------------------------------------------- -------------------- HCV 150 0.858; *P* \< 0.001 0.945 21 kPa [@b112-hmer-2-049] HCV, ALD 92 0.76 20.5 kPa (HCV)\ [@b68-hmer-2-049] 34.9 kPa (ALD) Liver transplant patients 124 0.84; *P* \< 0.001 \>6 0.93 [@b67-hmer-2-049] HCV 61 0.81, *P* \< 0.0001 \>10 0.99 13.6 kPa [@b69-hmer-2-049] \>12 0.92 17.6 kPa ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- **Abbreviations:** HCV, hepatitis C virus; ALD, alcoholic liver disease; HVPG, hepatic venous pressure gradient; LS, liver stiffness; AUROC, areas under receiver operating characteristic curves. ###### Liver stiffness and prediction of esophageal varices Patients n Cut-off for varices AUROC Sensitivity Specificity PPV/NPV Ref. -------------------------------------------------------- ----- --------------------- ------- ------------- ------------- --------- ------------------- HCV 65 17.6 kPa 0.76 0.9 [@b69-hmer-2-049] Children with biliary atresia 49 9.7 kPa 0.97 0.8 [@b70-hmer-2-049] Cirrhosis 165 19.5 kPa 0.83 0.84 47/93 [@b71-hmer-2-049] HBV LSM-spleen diameter to platelet ratio score (LSPS) 90 0.95 0.947 [@b72-hmer-2-049] HCV 21.5 kPa 0.76 0.78 [@b54-hmer-2-049] HIV/HCV coinfected patients with liver cirrhosis 102 21 kPa 0.71 100 [@b73-hmer-2-049] **Abbreviations:** HCV, hepatitis C virus; HBV, hepatitis B virus; HIV, human immunodeficiency virus; ALD, alcoholic liver disease; NAFLD, nonalcoholic fatty liver disease; LS, liver stiffness; PBC, primary biliary cirrhosis; PSC, primary sclerosing cholangitis; AUROC, areas under receiver operating characteristic curves; PPV, positive predictive value; NPV, negative predictive value. 
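To make the conversion noted under Tables 2 and 4 concrete (the arithmetic here is ours, using the representative values quoted in those tables): with E = 3μ, an MRE shear stiffness of roughly 2-2.2 kPa for normal liver corresponds to a Young's modulus of about 3 × 2-2.2 kPa ≈ 6-6.6 kPa, which is of the same order as the 4-6 kPa normal range reported for VCTE/FS; the two scales should therefore not be compared without this factor of three.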
###### Liver stiffness and portal hypertension by pre-and postsinusoidal thrombosis Thrombosis Disease Fibrosis Portal hypertension Liver stiffness --------------------------- ------------------------------------------------------------------ ---------- --------------------- --------------------------- Presinusoidal Portal vein thrombosis no yes normal Presinusoidal Idiophatic portal hypertension no yes normal, slightly elevated Sinusoidal thrombosis hepatic veno-occlusive disease (sinusoidal obstruction syndrome) yes yes elevated Postsinusoidal thrombosis Budd-chiari syndrome yes yes elevated ###### Liver stiffness and risk of hepatocellular carcinoma ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Patients N Liver stiffness HCC likelihood Ref. -------------------------------------------------------- ----- ------------------------ ----------------------------------------------------------------------------- ------------------- HCV 262 \<10 kPa\ 0.22\ [@b82-hmer-2-049] 10.1 to 15 kPa\ 0.73\ 15.1 to 25 kPa\ 1.3\ .25 kPa 5.0\ (stratum-specific likelihood ratios) HCV, prosepctive study 984 10.1--15 kPa\ 16.7\ [@b83-hmer-2-049] 15.1--20 kPa\ 20.9\ 20.1--25 kPa \> 25 kPa 25.6\ 45.5\ (hazard ratio, as compared to LSM ≤ 10 kPa) HCV, ALD 265 Patients with HCC had higher LS than patients without HCC; 35.3 vs 19.0 kPa [@b84-hmer-2-049] HBV LSM-spleen diameter to platelet ratio score (LSPS) 90 0.95 [@b72-hmer-2-049] HCV 21.5 kPa [@b54-hmer-2-049] HIV/HCV-coinfected patients with liver cirrhosis 102 21 kPa 0.71 [@b73-hmer-2-049] ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- **Abbreviations**: HCV, hepatitis C virus; ALD, alcoholic liver disease; HBV, hepatitis B virus; HIV, human immunodeficiencyvirus; LS, liver stiffness; HCC, hepatocellular carcinoma.
Mid
[ 0.6485148514851481, 32.75, 17.75 ]
Q: pChart - Display png image on browser

I am using the pChart class library to display .png images in the browser. Through AJAX, I call the controller action graphgenerator, which calls the generateGraph function in a model and displays the output through a view in the browser. The generateGraph function in the MVC model tries to generate graphs in a loop inside an HTML table using pChart's stroke() function. When I view the output that comes back from the controller in the browser, I see it as below. How can I make sure I display the images instead of the following binary data?

�PNG ... IHDR ... tRNS ... IDATx ... (remainder of the raw PNG byte stream, rendered as text, omitted)

A: Save it to disk, and return the url to the browser. Then create an <img> tag with that url.
Mid
[ 0.5521739130434781, 31.75, 25.75 ]
OK, it's time for a quiz. Each of these teams sees its offensive DVOA drop in the fourth quarter: Green Bay, Houston, New England, and Pittsburgh. Can you put them in order from the team with the biggest drop to the team with the smallest drop? While you work on that, here's the mailbag question from today's DVOA discussion thread. Paul M.: If there was a "First Three Quarters" DVOA, I would think the Packers would dominate and begin to show they are more of a historically dominant team than they are currently credited for, but then again that would discount some teams (any one in particular that comes to mind??? Hmmmm..... maybe they play in the Mountain Time Zone??) and their ability to rally late. Nat: Aaron, could you publish the rankings/numbers for "First Three Quarters" DVOA? ... In theory, this all-but-late DVOA should avoid the prevent-defense, garbage time, hail-mary, shut-the-offense down, play-the-backups issues -- while still being a large enough sample to characterize each team pretty well. Pretty please? OK, so, first let's answer the quiz question above. The answer, in order from biggest drop to smallest, goes: Pittsburgh, Houston, New England, Green Bay. Didn't expect that, I bet? An idea that came up in the DVOA discussion thread today is that the Packers take their foot off the gas in the fourth quarter, and that's the biggest reason they don't have a historically dominant DVOA that compares with teams like the 2007 Patriots and 1998 Broncos. Well, if DVOA is to believed, this is simply not true. In general, compared to this year's other top offenses, the Packers don't drop off much in the fourth quarter. This week against the Raiders was a dramatic exception, with the Packers putting up 33.4% DVOA in the first three quarters and then -155.4% DVOA in the fourth quarter (on only eight plays, compared to 53 plays in the first three quarters). Actually, the offense which drops off the most in the fourth quarter is Miami, which is slightly above average for three quarters and then the worst offense in the league in the fourth quarter. Apparently, the Dolphins take their foot off the gas even when they are losing the race. And of course, we know which team improves the most in the fourth quarter. TEAM Q1-3 OFF RK Q4 OFF RK DIF MIA 7.1% 14 -38.4% 32 -45.5% PIT 28.8% 4 -16.0% 27 -44.8% HOU 25.7% 5 -4.2% 24 -29.9% CHI -2.3% 21 -25.5% 29 -23.2% NE 41.0% 1 19.1% 8 -21.9% SD 19.7% 7 -0.7% 19 -20.5% STL -22.1% 31 -36.9% 31 -14.9% CIN 9.0% 10 -3.4% 23 -12.4% KC -16.0% 29 -26.6% 30 -10.6% GB 37.2% 2 26.9% 3 -10.4% BAL 10.8% 8 4.5% 14 -6.3% BUF 6.6% 15 0.6% 17 -6.0% OAK 2.8% 18 -1.7% 20 -4.5% DET 7.2% 13 3.3% 16 -3.8% ATL 10.1% 9 7.0% 13 -3.1% WAS -8.5% 24 -10.4% 26 -1.8% TEAM Q1-3 OFF RK Q4 OFF RK DIF SF -2.0% 20 -3.2% 22 -1.2% CAR 19.8% 6 19.6% 7 -0.2% MIN -2.5% 22 -1.8% 21 0.7% JAC -22.1% 32 -19.0% 28 3.1% PHI 6.3% 16 9.6% 11 3.3% NO 32.4% 3 37.2% 1 4.9% IND -15.7% 28 -9.9% 25 5.8% CLE -9.9% 25 -0.1% 18 9.8% DAL 8.8% 12 19.8% 6 11.0% TEN 4.6% 17 18.9% 9 14.3% TB -8.3% 23 7.6% 12 15.9% NYG 8.9% 11 30.2% 2 21.3% NYJ 0.7% 19 23.5% 4 22.8% ARI -19.9% 30 3.8% 15 23.7% SEA -12.6% 26 13.7% 10 26.2% DEN -12.9% 27 22.8% 5 35.6% Actually, Green Bay seems to take its foot off the gas more on defense; its defense would rank 17th if we didn't include the fourth quarter. But San Francisco and New England see their defensive DVOA ratings decline even more in the fourth quarter than Green Bay's. 
TEAM Q1-3 DEF RK Q4 DEF RK DIF CAR 9.1% 24 42.7% 32 33.6% SF -20.9% 2 9.9% 20 30.8% PHI -3.0% 10 23.8% 29 26.8% NE 8.0% 21 32.9% 31 24.9% NYG 4.4% 16 28.0% 30 23.6% MIA -5.6% 8 15.8% 23 21.4% SD 8.6% 22 21.8% 28 13.2% GB 5.7% 17 17.7% 25 12.1% MIN 8.8% 23 20.2% 26 11.3% DET -10.1% 5 -1.9% 13 8.3% WAS -0.6% 12 7.2% 17 7.8% ARI 6.7% 18 14.2% 22 7.5% BUF 13.1% 29 20.4% 27 7.2% DAL 1.6% 13 8.8% 19 7.1% HOU -10.0% 6 -3.8% 11 6.2% BAL -22.2% 1 -18.0% 2 4.2% TEAM Q1-3 DEF RK Q4 DEF RK DIF TB 13.0% 28 16.1% 24 3.1% NYJ -12.9% 3 -10.5% 8 2.5% OAK 7.0% 19 7.5% 18 0.5% JAC -10.6% 4 -11.6% 7 -1.0% TEN 2.4% 15 -0.4% 15 -2.8% NO 17.6% 31 12.7% 21 -4.9% STL 11.4% 27 5.0% 16 -6.4% DEN 7.1% 20 -1.2% 14 -8.4% CHI -7.1% 7 -16.9% 4 -9.8% SEA 2.2% 14 -10.5% 9 -12.7% PIT -0.8% 11 -17.6% 3 -16.8% CLE 13.8% 30 -4.8% 10 -18.6% ATL -3.1% 9 -22.6% 1 -19.5% KC 10.7% 26 -13.9% 6 -24.6% CIN 10.5% 25 -15.9% 5 -26.4% IND 25.1% 32 -2.1% 12 -27.1% Here is what the overall ratings would look like if we just included the first three quarters -- except in special teams, where frankly I'm too lazy right now to go do a whole new set of "first three quarters" special teams ratings.
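For readers checking the arithmetic, the DIF column in these tables is simply fourth-quarter DVOA minus first-three-quarters DVOA; a small Python sketch (our own illustration, with values copied from the offense table above) reproduces a few rows:

# DIF = Q4 DVOA minus Q1-3 DVOA, using a few rows from the offense table above
offense = {            # team: (Q1-3 DVOA %, Q4 DVOA %)
    "MIA": (7.1, -38.4),
    "PIT": (28.8, -16.0),
    "HOU": (25.7, -4.2),
    "NE":  (41.0, 19.1),
}
for team, (q13, q4) in offense.items():
    print(f"{team}: DIF = {q4 - q13:+.1f}%")
# MIA: -45.5%, PIT: -44.8%, HOU: -29.9%, NE: -21.9%, matching the table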
Mid
[ 0.560509554140127, 33, 25.875 ]
Q: $\int\int\int_R \cos x\, dxdydz,$ where $R= \{(x,y,z)\in \textbf{R}^3 :x^2+y^2+z^2\le\pi^2\}$ Let $R = \{(x,y,z)\in \textbf{R}^3 :x^2+y^2+z^2\le\pi^2\}$ How do I integrate this triple integral $$\int\int\int_R \cos x\, dxdydz,$$ where $R$ is a sphere of radius $\pi$? I have trouble understanding this particular step from the solution, $$\int\int\int_R \cos x \,dxdydz=\pi\int_{-\pi}^\pi({\pi}^2-x^2)\cos x\,dx$$ A: The integrand does not depend on $y$ or $z$, so they are performing the integration over those two variables. For a given $x$, the region parallel to the $yz$ plane is a circle of radius $\sqrt{\pi^2-x^2}$ so the area is $\pi(\pi^2-x^2)$.
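For completeness, the remaining evaluation (this continuation is ours, not part of the quoted solution, though it follows directly from that step): integrating by parts twice gives the antiderivative $(\pi^2-x^2+2)\sin x-2x\cos x$ for $(\pi^2-x^2)\cos x$, so $$\pi\int_{-\pi}^{\pi}(\pi^2-x^2)\cos x\,dx=\pi\Big[(\pi^2-x^2+2)\sin x-2x\cos x\Big]_{-\pi}^{\pi}=\pi\big(2\pi-(-2\pi)\big)=4\pi^2.$$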
High
[ 0.6666666666666661, 30.25, 15.125 ]
# Build ../libbio.a from the bio buffered I/O objects
AR=ar crvs
RANLIB=ranlib

# Full set of original bio sources; UNNEEDED lists the ones not compiled here
ORIGINALS=bbuffered.o bfildes.o bflush.o bgetc.o bgetd.o bgetrune.o \
	binit.o boffset.o bprint.o bputc.o bputrune.o brdline.o bread.o \
	bseek.o bwrite.o
UNNEEDED=bbuffered.o bfildes.o bgetd.o boffset.o \
	bputrune.o brdline.o bread.o bseek.o

# Objects actually archived into the library
OFILES=bflush.o bgetc.o bgetrune.o binit.o bprint.o bputc.o bwrite.o

TARG=../libbio.a
CFLAGS=-g -I../../include

all: $(TARG)

$(TARG): $(OFILES)
	$(AR) $(TARG) $(OFILES)
	$(RANLIB) $(TARG)

# All objects depend on the public headers
$(OFILES): ../../include/bio.h ../../include/lib9.h

clean:
	rm -f *~
	rm -f *.o

nuke: clean
	rm -f $(TARG)
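Assuming the directory layout implied by the relative paths (headers two levels up in include/, the archive written one level up), running make in this directory compiles the listed objects and archives them into ../libbio.a, make clean removes editor backups and object files, and make nuke additionally deletes the library; this reading of the targets is inferred from the rules above rather than stated in the file itself.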
Mid
[ 0.562015503875969, 36.25, 28.25 ]
Lakirein Lakeeren is a 1954 Hindi Bollywood film starring Nalini Jaywant, playback by Geeta Dutt and Shamshad Begum, music by Hafiz Khan & Lyrics by Shevan Rizvi. Plot Songs "Duniya Se Ja Raha Hoon" - Talat Mahmood "Dil Ke Dhadkan Pa Gaa" - Talat Mahmood "Daaman Na Chhudaa Yun Dur Na Kar" - Geeta Dutt "Aabaad Jahaan Barbaad Kiya Aarman Bhara Dil Todh Diya" - Geeta Dutt "Mere Maalik Meri Kismat Ko Jab Tune" - G. M. Durrani "Mohabbat Ki Duniya Mein Barbad" - Talat Mahmood, Geeta Dutt "Nimbua Pe Papiha Bola Kanduaa Pe KoyalGaye" - Shamshad Begum, Geeta Dutt "Tujhse Shikwa Kiya Nahin Jata" - Geeta Dutt "Jab Teer Chalata Naino Ke Dildar Chalte" - Shamshad Begum Cast Ashok Kumar Nalini Jaywant Pran Durga Khote Ramayan Tiwari Cukoo Sulochana Latkar Yakub Kamal References External links Lakeeren at the Internet Movie Database Category:1954 films Category:Indian films Category:1950s Hindi-language films
High
[ 0.6930232558139531, 37.25, 16.5 ]
The Dion Law Group specializes in providing you the most personalized service available. Our office is staffed with highly educated, competent people who understand your complicated problem. We will always be here to answer the tough questions about your legal issue. You will be dealing with people who take a personal interest in your case, and you will get the service and attention you deserve. We understand that a legal problem can be one of the most stressful times in a person's life. So, we use interest-based problem-solving techniques in order to keep your legal fees as low as possible and find the best solution to your specific issue. At the Dion Law Group, our goal is to provide you with courteous, expedient, professional service of the highest caliber. We have the skill and tools to handle your case as efficiently as possible in order to avoid unnecessary legal expenses. Browse our website for more information about our office. If you have any questions, or would like to make an appointment for a no-cost consultation, please call us at (805) 497-7474. At the Dion Law Group, YOU always come first. Routinely voted one of Ventura County's Favorite East County law firms!

Address: 660 Hampshire Road, Suite 216, Westlake Village, CA 91361
Hours: Monday–Friday, 9:00AM–5:00PM
Phone: (805) 497-7474

What Our Clients Say
“In this day in age, it is hard to find quality of your caliber when it comes to legal services.” – Ron G.
“I’m so pleased with the time, energy, and interest you put into our case, not to mention taking our phone calls after hours.” – Sally C.
“Thank you for helping me through the most difficult time in my life. You are amazing, caring people.” – Julie K.
High
[ 0.678983833718244, 36.75, 17.375 ]
Various frames which are commonly used to present replaceable marketing and advertising material are known. Some of these frames are relatively costly with components that are difficult to produce and to assemble. The cost of such frames becomes significant where they are purchased in large numbers for wide-scale use as part of a pricing system. This is generally the case where beverage manufacturers (or other goods/service providers) seek to adopt a given frame or frames as a standard for their merchandising and marketing. The advertising and pricing material used to identify product and display prices must usually be changed or updated with every price variation, special or sale. The replacement of such material also becomes expensive and wasteful.
Low
[ 0.518518518518518, 33.25, 30.875 ]
Nearly a week ago in The New Republic, writer Kevin Baker penned an open letter stating that blue and red states should break up. In the letter, addressed to "Red State Trump voters," Baker spends about 4,700 words making an intellectual case for progressivism, insulting people who live in West Virginia and rural Arkansas, and explaining why a Bluexit makes sense for everyone. Here is how the letter opens: Dear Red-State Trump Voter, Let's face it, guys: We're done…. So here's my modest proposal: You go your way, we go ours. Baker bluntly states what many progressives might believe but are too tactful (or calculating?) to say: That blue states would be better off without rural America, or what he calls "Food Stamp Red America": Truth is, you red states just haven't been pulling your weight. Not for, well, forever. Red states are nearly twice as dependent on the federal government as blue states. Of the twelve states that received the least federal aid in return for each tax dollar they contribute to the U.S. Treasury, ten of them voted for Hillary Clinton—and the other two were Michigan and Wisconsin, your newest recruits. By the same count, 20 of the 26 states most dependent on federal aid went to Trump. Take Mississippi (please!), famous for being 49th or 50th in just about everything that matters. When it comes to sucking at the federal teat, the Magnolia State is the undisputed champ. More than 40 percent of Mississippi's state revenue comes from federal funding; one-third of its GDP comes from federal spending; for every dollar it pays out in federal taxes, it takes in $4.70 in federal aid; one in five residents are on food stamps—all national highs. You people—your phrase, not mine—liked to bash Obama for turning America into what you derisively referred to as "Food Stamp Nation." In reality, it's more like Food Stamp Red America—something your Trump-loving congressmen will discover if and when they fulfill their vow to gut the program. If Baker's argument sounds familiar, it's probably because you've heard it before. It's essentially the thesis of Thomas Frank's 2004 work "What's the Matter with Kansas?", a book that spent 18 weeks on the New York Times Bestseller List. Baker and Frank are perplexed as to why red states aren't more grateful for the federal aid they receive from Washington, D.C. (It never seems to occur to either writer that many people in these states might see federal dollars as a detriment, one that makes people comfortable in their poverty, both economic and intellectual.) Baker's plea is not for outright secession. What he proposes is, well, to do what many conservatives and libertarians have pleaded for these many decades: You want to organize the nation around your cherished principle of states' rights—the idea that pretty much everything except the U.S. military and paper currency and the national anthem should be decided at the local level? Fine. We won't formally secede, in the Civil War sense of the word. We'll still be a part of the United States, at least on paper. But we'll turn our back on the federal government in every way we can, just like you've been urging everyone to do for years, and devote our hard-earned resources to building up our own cities and states. We'll turn Blue America into a world-class incubator for progressive programs and policies, a laboratory for a guaranteed income and a high-speed public rail system and free public universities. We'll focus on getting our own house in order, while yours falls into disrepair and ruin. 
I suspect that few conservatives or libertarian thinkers would be cowed by this cruel threat, and it's worth noting that Baker's essay is getting dinged not from the right, but from the left. The Nation yesterday called his idea "dumb and cruel." That description was both kind and eloquent compared to the Daily Kos, which said "Blueexit is bullsh*t" and compared Baker to a middle schooler. Despite the essay's panning, I'd encourage readers to take the time to read what Baker has to say. For one, his basic premise is mostly correct: Red states do receive a higher percentage of federal funds. (This doesn't necessarily make them hypocrites, however.) Second, though Baker's reasoning and prose are a tad fevered, the essay is well written and highly entertaining. (I appreciated the Virgil "the Turk" Sollozzo reference from The Godfather.) Third, his essay sheds light on some fundamental differences Americans have in respect to their federal government: its proper role; its moral responsibilities; and the constitutional powers it possesses. The idea of secession sounds crazy. And I imagine this is especially true for those who believe history is essentially a linear march of human progress. Yet trends well before Brexit and Donald Trump suggested that perhaps history was marching in a different direction. A decade and a half ago, when a Trump presidency still seemed an odd, empty threat, Jacques Barzun noted that decentralization, not globalism, was the prevailing worldwide current as humans entered the 21st century: Separatism was rampant all over the globe. No sooner was India free of British rule than Pakistan broke away, and no sooner was the new nation separate than Bangladesh freed itself from it. The old Ceylon, a huge Island renamed Sri Lanka, carried on a civil war for more than 20 years, and in the Himalayas, India again fought Pakistan over Kashmir. The East Timorese nearly destroyed Indonesia. Where everyone looked—at Ireland, the Middle East, South America, Southeast Asia, all of Africa, the Caribbean, and the whole ocean speckled with islands, one would find a nation or would-be nation at war to win or prevent independence. I don't know if Baker's Bluexit has hope of succeeding or how seriously he intended his essay to be received. But one need not wear a tinfoil hat to see that the United States might be heading in that direction. Jon is the Director of Digital Media of Intellectual Takeout. He is responsible for daily editorial content, web strategy, and social media operations. Jon previously was the Senior Editor of The History Channel Magazine, Managing Editor at Scout.com, and general assignment reporter for the Panama City News Herald. He also served as a White House intern in the speech writing department of George W. Bush. Jon received degrees from the University of South Dakota (M.A.) and the University of Wisconsin-Platteville (B.A.), where he studied history and literature.
Low
[ 0.49196787148594306, 30.625, 31.625 ]
Trion Worlds' multiplayer online shooter Defiance is off to a strong start. The Defiance TV show, which will integrate with the game as the pair develop, has broken new records on its host station SyFy, attracting 2.7m viewers for its debut episode. GameSpot reports that the show also became "SyFy's most-watched scripted series debut for adults in seven years". The game's doing alright, too, and has already attracted over 1m registered players. In the process players have to date killed 500,000,000 hellbugs, driven 50,000,000 miles and battled 1m Arkfalls.
Low
[ 0.388625592417061, 20.5, 32.25 ]
EDCET Admission > Counselling likely starts from 2014. The Ed.CET 2014 web counselling is for admission into the B.Ed course for the 2014-15 academic year, but this year it was rather late, starting from the 14th of this month. Candidates need to download their Ed.CET 2014 rank cards from the official websites: http://edcet2014.info, http://edcet.apsche.ac.in/colallotform.php and http://edcet.apsche.ac.in. In the Ed.CET counselling, candidates select the district, college and course. This year the EDCET was conducted by Andhra University, Visakhapatnam. The EDCET 2014 counselling dates in AP and the EDCET 2014 mock counselling will be hosted at some educational websites like schools9, manabadi and JntuNet.blogspot.com. The AU EDCET 2014 counselling dates will be given with the complete notification. Qualified candidates of the Ed.CET-2014 entrance examination can attend certificate verification for exercising web options for admission into the B.Ed. course for the academic year 2014-15 from September at any helpline centre, with all original certificates. Certificates to be presented: Ed.CET 2014 hall ticket, Ed.CET 2014 rank card, degree provisional certificate, Inter marks list, tenth marks list, study certificates from 9th class to graduation or a residence/nativity certificate, TC and income certificate. All candidates have to pay a processing fee at the time of EDCET certificate verification. Complete details regarding counselling will be available on the website http://edcet.apsche.ac.in from one day before the actual day of counselling. For the list of colleges, see http://edcet.apsche.ac.in/doc/collegelist.pdf
High
[ 0.7095990279465371, 36.5, 14.9375 ]
Five Blogs That Make Me Think When Nicolai and I started this blog a year ago we were hoping for a plural readership. Now that we've hit that target we can move on to the next: making our readers think. Fortunately, we've already succeeded with at least one reader: Nijma of Camel's Nose, who named O&M one of five blogs that make her think. This is one of those "memes" in which each blogger named is supposed to name five bloggers who make him think, who in turn name five bloggers that make them think, and so on, until everyone is nested under someone else, sort of like Amway but without money changing hands. The Thinking Blog's Ilker Yoldas is responsible for all this (and for supplying the nice graphic above). Yoldas describes the exercise as a humanistic alternative to blog ranking systems like Technorati based on quantitative analysis of linking patterns, just as "human-powered" search engines like ChaCha are alternatives to Google. Anyway, I'll play along. Here are five blogs that make me think. (To make things interesting I've excluded the sites listed already on the blogroll below.)
Mid
[ 0.6092715231788081, 34.5, 22.125 ]
The Mystery of Iniquity by Stephan A. Hoeller ON JUNE 10, 1991, A COVER STORY APPEARED in Time magazine on the topic of evil. The author, Lance Morrow, did not argue for a particular thesis and did not reach any conclusions. What he did, however, was in a sense more important. He began by stating three propositions: God is all-powerful. God is all-good. Terrible things happen. Citing several sources, Morrow said that you can match any two of these propositions, but not all three. You can declare that there is an all-powerful God who allows terrible things to happen, but this God could not be all-good. On the other hand, there might be an all-good God who lets terrible things happen because he does not have the power to stop them; thus he is not all-powerful. This analysis might easily have been stated by a Gnostic of the first three or four centuries of the Christian era, or for that matter by a contemporary Gnostic, such as the present writer. Not that Gnostics were the only ones who recognized this uniquely monotheistic predicament. The supreme medieval luminary of Catholic theology, St. Thomas Aquinas, admitted in his Summa Theologiae that the existence of evil is the best argument against the existence of God. If the concept of the monotheistic God is to be accepted, then the issue of evil has no viable explanation. Conversely, if evil exists, then the monotheistic God as presented by the mainstream religious traditions cannot exist. Whence Cometh Evil? Throughout history, religious traditions have accounted for the existence of evil in a number of ways. In primeval times, the undifferentiated nature of human consciousness allowed people to say that both good and bad come from the Divine. Thus archaic shamans would not have found it difficult to say that good and evil are visited upon human beings by the Great Spirit. In the more sophisticated context of Sumero-Babylonian traditions, it was believed that the gods amused themselves by creating terrible things freakish beings, evil demons, and horrible conditions for human life. To employ a psychohistorical rationale, one might say that when people did not yet possess a differentiated consciousness (which we may equate with the conscious ego), it was relatively easy for them to envision God or the gods as being like themselves, so that the coincidence of good and evil was part of their nature. More advanced spiritual traditions have inherited some of this attitude; thus in mystical Jewish theology we find the notion that God partakes of both good and evil tendencies (yetzirim). With the growth of consciousness, the mind begins to differentiate between the beneficent and the malefic sides of being. The tension induced by trying to hold a God concept that unites good and evil becomes unbearable, so that it becomes necessary for the mind to separate the two. The notion of radical dualism thus arises. The most prominent example is that of Zoroastrianism. Here the true and good God, Ahura Mazda (sometimes called Ormazd), possesses a divine antagonist known as Angra Mainyu (Ahriman). The two are engaged in a perennial cosmic struggle for supremacy. Although Ahura Mazda is supreme and his ultimate victory is assured, as long as creation endures Angra Mainyu will continue to fight him and bring suffering into the world. A sophisticated but very impersonal view of evil and its origins can be found in the great religions that originated in India. 
Most of these imply that evil is part of the unenlightened state of existence, and that the cause of evil is ignorance (avidya). If one attains to a transformed or enlightened consciousness and thus rises above all dualities, one is liberated from karma and from all other conditions in which evil plays a role. Whether such liberation inevitably leads to the cessation of incarnate existence is not always clear, but it is clear that life as one has known it ceases, and with it evil ceases also. The fourth category is that of classical monotheism as found in mainstream Judaism and Christianity. As some of the other traditions ascribe the existence of evil to God, a malign counter-God, or human ignorance, this position ascribes the origin of evil to human sin. The creation myth of the mainstream Judeo-Christian tradition, with its story of the Garden of Eden and of the curious events that are said to have transpired there, forms the foundation for this view. This belief holds that the transgressions committed by the first human pair brought about a "Fall" of creation, resulting in the present state of the world. The sin of the original pair passed by inheritance to all members of the human race, who are born corrupt, afflicted by the weight of this "original sin." Such evils as we find in this world, including natural disasters, plagues, and the ruthlessness of the food chain, are all somehow part of the momentous consequences of the Fall. As some scholars, notably Elaine Pagels, have pointed out, these mythologems inevitably exercise a profound influence on the cultures founded on them. Even in a secularized age like our own, the powerful shadow of such beliefs continues to cast a pall on our minds. One may wonder how differently our history would have proceeded had the guilt of the Fall not been present to oppress the souls of men and women in our culture! The Gnostic View All spiritual traditions acknowledge that the world is imperfect; they differ only in how they believe this happened and in what is to be done about it. Gnostics have always had their own views of these matters. They hold that the world is flawed not because of human sin, but because it was created in a flawed manner. Buddhism (regarded by many scholars as the Gnosticism of Asia) begins with the recognition that earthly life is filled with suffering. Gnostics, both ancient and modern, agree. Suffering is indeed the existential manifestation of evil in the world. Although humans, with their complex physiology and psychology, are subject to torments of a singularly refined nature, the fear, pain, and misery of all other creatures is evident as well. To recall St. Paul's insight, all creation groans and travails in pain. Yet Gnostics have not been inclined to attribute such misfortunes to the sin of the first human pair. They reasoned that it makes much more sense to say that the world has not fallen but was made in a sadly imperfect manner to begin with. To put it in slightly more abstract terms, evil is part of the fabric of the world we live in; it is part and parcel of the existential reality of earthly life. If indeed there is a creator of this reality, then it is assuredly this creator who is responsible for the evil in it. Since, for the monotheistic religions, this creator is God, the Gnostic position appears blasphemous to conventional believers, and is often viewed with dismay even by those who consider themselves unbelievers. 
The Gnostic position may need to be considered in the light of the historical roots of the tradition. According to most contemporary scholars, Gnosticism originated in the Jewish religious matrix (probably in its heterodox manifestations) and then came to ally itself with the Jewish heresy that became Christianity. Thus the Gnostics were confronted with the image of the monotheistic God in the Old Testament and its adaptations in the New Testament. They faced a God who was often capricious, wrathful, vengeful, and unjust. It was easy for them to conclude that this flawed God might have created a world in his own flawed image. The greatest of all questions the Gnostics asked was this: is this flawed creator truly the ultimate, true, and good God? Or is he a lesser deity, who is either ignorant of a greater power beyond himself or is a conscious impostor, arrogating to himself the position of the universal deity? The Gnostics answered these questions by saying this creator is obviously not the true, ultimate God, but rather a demiurgos ("craftsman"), an intermediate, secondary deity. This Demiurge whom they equated with the deity of the Old Testament was the originator of evil and imperfection in the world. Thus the apparent blasphemy of attributing the world's evil to the creator is revealed as originating in the Gnostics' confrontation with the monotheistic God. Kindred movements, such as Hermeticism, did not face this predicament: being pagans, the Hermeticists did not inherit the dark, ambivalent figure of the Old Testament God, so they were able to adopt a less harsh position. (Ironically, today many people tend to favor Hermeticism over Gnosticism for this very reason.) Many have tried to evade recognition of this flawed creation and its flawed creator, but none of their arguments have impressed Gnostics. The ancient Greeks, especially the Platonists, advised people to look to the harmony of the universe, so that by venerating its grandeur they might forget their own afflictions as well as the innumerable grotesqueries of ordinary life. "Look at this beautiful world:' they said; "see its superbly orderly way of functioning and perpetuating itself, how can one call something so beautiful and harmonious an evil thing?" To which Gnostics have always answered that since the flaws, forlornness, and alienation of existence are also undeniable, the harmony and order of the universe are at best only partial. Those influenced by Eastern spirituality have at times brought up the teaching of karma whereby one's misdeeds generate misfortune later in life or even in another life as explaining the imperfection of the manifest world. Yet a Gnostic might counter that karma can at best only explain how the chain of suffering and imperfection works. It does not tell us why such a sorrowful system should exist in the first place. Qualified Dualism As we noted earlier, one way of explaining the existence of evil was radical dualism, of which the Zoroastrian faith is a possible example. The Gnostic position, by contrast, is not of a radically dual nature; rather it might be called "qualified dualism." In a simplified form one might define this position as declaring that good and evil are mixed in the manifest world; thus the world is not wholly evil, but it is not wholly good either. If the evil in the world should not blind us to the presence of good, neither should the good blind us to the reality of evil. 
Here we might resort to the approach that was most favored by the Gnostics themselves the mythological. (The power of this method has been rediscovered by such contemporary figures as C. G. Jung and Joseph Campbell.) Myths telling of the commingling of good and evil in creation predated the Gnostics. One of these tales is the Greek myth of Dionysus. When this god was torn apart by the Titans, Zeus came to his aid and blasted the malefactors with a thunderbolt. The bodies of both the Titans and Dionysus were reduced to ashes and mixed. When all sorts of creatures, including humans, rose from these ashes, the divine nature of Dionysus was mingled with the evil nature of the Titans. Thus light and darkness are at war with each other within human nature and in the natural world. The Gnostics had their own myth about the origins of good and evil. They began by speaking of a boundless, blissful fullness (Pleroma) that dwells beyond all manifest existence. The Pleroma is the abode of and constitutes the essential nature of the true, ultimate God (alethes theos). Before time and memory, this ineffable fullness extended itself into the lower regions of being. In the course of this emanation, it came to manifest itself in a number of intermediate-deities who were rather like great angels endowed with enormous talents of creativity and organization. Some of these beings, or demiurgoi, became alienated from their supernal source, thus becoming replete with evil tendencies. Thus the world-creating will was tainted with self-will, arrogance, and the hunger for power; through the works performed by these alienated agencies, evil came to penetrate creation. Ever since then, as the Gnostic teacher Basilides reportedly said, "Evil adheres to created existence as rust adheres to iron." As one of these created beings, the human entity partakes of the nature of his flawed creators. The human body, being a material creation, is subject to disease, death, and various other evils; even the soul (psyche) is not free from imperfection. Only the spirit (pneuma), deeply hidden within the human essence, remains free from the admixture of evil and tends toward the true God. Such mythic statements can convey insights in a fashion that is not possible through other methods of communication. At the same time it must be admitted that these myths were formulated long ago and far away and so may profit from certain amplifications and clarifications within a contemporary context. Contemporary Conclusions Terrible things do happen, as the Time essay stated. The world is filled with evil, with grotesque horror and universal suffering. Fiendish humans, often possessing great power, torment and slay others daily. The history of the twentieth century offers much proof of rampant wickedness in the world. Believers in the monotheistic God and/or in karma often tell us that this does not matter all that much, because in the final analysis evil really promotes good. They seem to be saying that evil is not really evil at all, but good masquerading in an unpleasant disguise. Yet this kind of topsy-turvy argument is an affront to all those who have looked evil in the face. To present this argument to survivors of the Holocaust or the Gulag or the killing fields would be insulting as well as ridiculous. For these victims, evil is evil, and all else is but an evasion. Moreover many terrible things happen that are in no way due to human volition. 
While the perversities of the human condition are responsible for some of the suffering in this world, much of it is not our fault. Frequently, however, we believe that it is. Yet, whether occasioned by the myth of Adam and Eve or by the propaganda of some trendy folk today who make out humans to be the sole villains in the environment, the cultivation of guilt in the human mind is no remedy for evil. On the contrary, guilt usually begets more sorrow in the long run. Let us be done with this self-flagellation and try to mitigate the evils over which we have some control while remembering that it is beyond our powers to eradicate misfortune altogether. Like the world, humans are a mixture of good and evil. Just as it is impossible to exorcise evil from the fabric of creation, so we cannot entirely get rid of it in ourselves. If human schemes and techniques were able to eliminate evil from human nature, they would have succeeded in doing so long ago. This is why so many spiritual traditions teach the need for redemption from outside. Every spiritual tradition worth its salt has always possessed a soteriology a teaching about salvation. Gnostics ancient and modern do not perceive liberating gnosis as a do-it-yourself project. We cannot purify or psychoanalyze evil away by our own strength. The Messengers of Light recognized in the Gnostic tradition, such as Jesus, Mani, and others, have always been envisioned as the great facilitators of salvation. Their salvific mission is to enable the consciousness of the individual to experience gnosis. An early Gnostic source, Excerpta de Theodoto, defines this gnosis as the knowledge of "who we were, what we have become; where we were, whereinto we have been thrown; whither we hasten, whence we are redeemed; what is birth and what rebirth." Many have noted the similarities between these Gnostic teachings and those of Hinduism and Buddhism. In all of these traditions, insight into the origin and nature of the manifest world is seen as liberating us from it and its evils, reuniting our spirits with transcendental reality. Unlike the great Eastern religions, however, Gnosticism specifically identifies the root of all evil as the faulty creation brought about by spiritual agencies of limited wisdom and goodness. The Gnostic view of the human condition thus also differs from the modern secular view. Gnostics do not share the assumption of many in our culture that there is a purely naturalistic and humanistic remedy for evil. Contemporary Gnostics for the most part agree with the fundamental insights of their ancient counterparts. Do modern Gnostics believe in the Demiurge? Do they believe in Messengers of Light? Do they regard such ideas as metaphysical truths or as mythologems hinting at more subtle and mysterious realities? The answer is that some Gnostics may believe these things more in a literal sense, while others may believe them symbolically; still others may hold a mixture of both views. What matters is not the precise form of these teachings but their substance. And this is clear enough. It speaks of the reality and power of evil, of its fundamental presence in all of manifest existence. It declares that while we may not be able to rid the world or ourselves of evil, we may and indeed will rise above it through gnosis. And when the task of this extrication is accomplished, then we shall indeed no longer fear the noonday devil or the terror that walks by night.
Mid
[ 0.5527426160337551, 32.75, 26.5 ]
Postnatal development of zinc-containing cells and neuropil in the visual cortex of the mouse. The postnatal development of zinc-containing synaptic boutons and their cells of origin in the visual cortex of a pigmented mouse is described. Two phases can be distinguished. During the early phase zinc-containing neuropil is first apparent by postnatal day 3. By day 7 a light, but distinct neuropil staining sketches the primary and secondary visual cortices. The primary visual area contains light precipitate in layers V and VI as well as the monocular portion of layer II/III. The secondary visual areas contain slightly denser precipitate in layers II/III through VI. The transition to the second phase is marked by a large increase in precipitate density by day 11. Thereafter, the intensity of the neuropil staining increases to day 28, first in layer II/III and then in layer V, as the adult pattern of neuropil staining gradually develops. In the primary visual cortex precipitate is dense in layers II/III and V, moderate in layer VI, and sparse in layers I and IV. In the secondary visual areas the precipitate is dense in layers II/III and V and moderate in the lower portion of layer I and in layers IV and VI. Cells of origin of zinc-containing boutons are visible by the end of the second postnatal week in layer II/III of the secondary visual cortex. By 21 days of age the pattern of staining in the mature mouse is established, and cells in layers II/III and VI are labeled in both the primary and secondary visual cortices. The developmental sequence of zinc-containing cells and neuropil does not preclude an involvement of zinc in the postnatal regulation of NMDA receptor function.
High
[ 0.679144385026738, 31.75, 15 ]
Friday, 7 September 2012 Bandai Figurise-6 Black Lotus Completed I had a bit of trouble trying to get the kit to look glossy with the limited material I have access to. In the end, it was a combination of various varnishes and car wax that did the trick (although still not perfect). Joint modification is mainly on the shoulder. A bit of armor trimming is also done on the skirt armor, and spacing added to the torso joints. See the WIP post for more information.
Low
[ 0.45788336933045304, 26.5, 31.375 ]
Q: Easily tell if focused on VirtualBox window? Is there a way to make it very obvious that my VirtualBox window isn't in focus? The problem I'm having is switching between workspaces on Ubuntu or OS X and then trying to type in my Windows virtual machine, only to find that it's not in focus. The Windows window looks like it's in focus (based on the title bar), but I'm actually typing in Firefox on the host machine, for example. It's even worse because the Windows text insertion cursor is blinking to show focus. Ideally, I'd like the VM's display to get desaturated (e.g. a "partial" grayscale) when not in focus, just to prevent this keyboard-focus problem. Other options would be fine too, as long as I don't have to second-guess where my focus is. I'm not using the seamless mode -- the display is all within a window. A: If Ubuntu is your host and you have Compiz enabled, you could use the "ADD Helper" setting in the CompizConfig Settings Manager. This will "dim" all but the active window. Note - this will apply to all windows, not just VirtualBox.
Mid
[ 0.605970149253731, 25.375, 16.5 ]
President Trump's re-election campaign blasted out a marketing email to supporters Sunday about a 25-percent-off sale in honor of military members who died while serving their country. "Memorial Day Sale ... 25% off ... The Official Trump Store," states the ad, which was placed above pictures of a Trump-Pence navy towel, Make America Great Again baseball cap, Trump-Pence flag, beer koozies, and sunglasses holders. Shoppers were told to use the word "remember" to get their discount at checkout. The website's latest featured products include women's and men's swimsuits with the campaign's motto "Make America Great Again" emblazoned across the front.
Low
[ 0.44395604395604304, 25.25, 31.625 ]
(Photo: Omid Singh Instagram) PANAJI: Coach Igor Stimac wants to add Omid Singh to his new-look India side but the Iranian winger is yet to take a final call on whether he wants to surrender his Iranian passport. Singh, son of a Punjabi father and Iranian mother, has made a name for himself in Iran, where he has excelled as a left-winger and full-back. He was recommended to Stimac as someone who could add to India's strengths. The Croatian coach didn't mind having a look and is now awaiting a word from the Iranian, who plies his trade in the Iran Pro League with Nassaji FC. India does not allow dual passports, and only those with an Indian passport are eligible to represent the country in international competitions. "It's a huge call for Omid. Certainly, he is a good player who could strengthen the India team but he must decide whether he wants to give up his Iranian passport and take up an Indian one. "It's a cumbersome process and may take 15 months. If he makes up his mind, we can help him by speeding up the process," a senior AIFF official told TOI on Tuesday. The AIFF has already initiated discussions with the player's agent and explained the process to obtain an Indian passport. There were reports that Singh would be added to the India camp for next month's Inter-Continental Cup but sources said that might be too soon, even though Stimac is keen to see how best he can make use of the Iranian. "Based on what he has heard, Stimac believes Omid can be an asset but we have to know what the player thinks. Stimac will not be here forever, and just because he has changed his passport (from Iranian to Indian), we cannot guarantee him a place in the Indian team. That can happen only on merit," said the official, adding that his wife is an Iranian. AIFF has in the past helped players obtain Indian passports. Prior to the Fifa U-17 World Cup 2017 in India, Canadian Sunny Dhaliwal obtained an Indian passport and was included in the India squad. He didn't get a game, and according to AIFF sources, it's much easier for a minor than for someone like Singh, who is 28. Japanese midfielder Izumi Arata was the first person of Indian origin (PIO) to represent the country in 2013 when he took the field against Palestine in a friendly in Kochi. Arata, born to an Indian father and Japanese mother, spent six years in India before he could finally pull on the India jersey.
Mid
[ 0.59043659043659, 35.5, 24.625 ]
Variation in nitrogen use efficiencies on Dutch dairy farms. On dairy farms, the input of nutrients including nitrogen is higher than the output in products such as milk and meat. This causes losses of nitrogen to the environment. One of the indicators for the losses of nitrogen is the nitrogen use efficiency. In the Dutch Minerals Policy Monitoring Program (LMM), many data on nutrients of a few hundred farms are collected which can be processed by the instrument Annual Nutrient Cycle Assessment (ANCA, in Dutch: Kringloopwijzer) in order to provide nitrogen use efficiencies. After dividing the dairy farms (available in the LMM program) according to soil type and in different classes for milk production ha(-1) , it is shown that considerable differences in nitrogen use efficiency exist between farms on the same soil type and with the same level of milk production ha(-1) . This offers opportunities for improvement of the nitrogen use efficiency on many dairy farms. Benchmarking will be a useful first step in this process.
High
[ 0.6786632390745501, 33, 15.625 ]
Q: Broken Pipe with sockets in Python

I am getting the following error whenever I try to send a message from the server to the client using sockets in Python:

server:

import socket
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
host = 'localhost'
port = 5008
s.bind((host,port))
s.listen(1)
conn,addr = s.accept()
while True:
    data = conn.recv(1024).decode()
    print(data)
    msg = input("mensagem:")
    s.send(bytes(msg.encode()))

client:

import socket
s = socket.socket(socket.AF_INET,socket.SOCK_STREAM)
host = 'localhost'
port = 5008
s.connect((host, port))
while True:
    msg = input("mensagem:")
    s.send(msg.encode())
    data = s.recv(1024).decode()
    print(data)

A: The most serious problem is this line on the server side: s.send(bytes(msg.encode())). It should be conn.send(msg.encode('utf-8')): the listening socket s is not the connected socket, conn is. You are also doing the encode/decode the wrong way (I assume you are on Python 3.x); in this case you can even specify the character encoding you want, here UTF-8. Do the following:

SERVER:

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('127.0.0.1', 9000))
    sock.listen(5)
    conn, addr = sock.accept()
    conn.send('WELCOME\n(isto veio do servidor)\n'.encode())
    while True:
        data = conn.recv(1024).decode()
        conn.send('(isto veio do servidor)... A sua menssagem foi: {}\n'.format(data).encode('utf-8'))

CLIENT:

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect(('127.0.0.1', 9000))
    while True:
        data = sock.recv(1024).decode()
        print(data)
        msg = input('messagem\n')
        sock.send(msg.encode())

NOTE: this is a server set up for a single client (one connection, one socket). If you want your server to accept more connections, you will have to add parallelism (threading, for example) on the server side, so that there is a handler for each client.
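As a rough sketch of that closing note (this is an illustration, not part of the original answer; the address, port and reply strings are simply reused from the server above), a one-thread-per-client version of the same echo server might look like this:

import socket
import threading

def handle_client(conn, addr):
    # Serve a single client until it disconnects.
    with conn:
        conn.send('WELCOME\n(isto veio do servidor)\n'.encode('utf-8'))
        while True:
            data = conn.recv(1024).decode('utf-8')
            if not data:
                # An empty read means the client closed the connection.
                break
            reply = '(isto veio do servidor)... A sua menssagem foi: {}\n'.format(data)
            conn.send(reply.encode('utf-8'))

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('127.0.0.1', 9000))
    sock.listen(5)
    while True:
        conn, addr = sock.accept()
        # One thread per accepted connection; daemon threads exit with the main process.
        threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()

Each accepted connection gets its own thread, so one slow or disconnected client no longer blocks the others; for many concurrent clients an asynchronous approach (asyncio or selectors) would scale better, but threading keeps the structure closest to the answer above.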
Mid
[ 0.584766584766584, 29.75, 21.125 ]
Job Snapshot Job Description Responsibilities Manages all work related to the physical plant and ancillary systems, Engineering, Grounds, and Safety. The Manager acts as liaison between Medical Center and all contracted personnel/vendors performing Facilities related work. The Manager presents a positive image of the hospital to patients, visitors, physicians and the public Qualifications Education and Work Experience We are an Equal Opportunity/ Affirmative Action Employer and do not discriminate against applicants due to veteran status, disability, race, gender, gender identity, sexual orientation or other protected characteristics. If you need special accommodation for the application process, please contact Human Resources. EEO is the Law: http://www1.eeoc.gov/employers/upload/eeoc_self_print_poster.pdf
Low
[ 0.522123893805309, 29.5, 27 ]
Why Do Silver Trimers Intercalated in DNA Exhibit Unique Nonlinear Properties That Are Promising for Applications? Our investigation of one-photon absorption (OPA) and nonlinear optical (NLO) properties such as two-photon absorption (TPA) of silver trimer intercalated in DNA based on TDDFT approach allowed us to propose a mechanism responsible for large TPA cross sections of such NLO-phores. We present a concept that illustrates the key role of quantum cluster as well as of nucleotide bases from the immediate neighborhood. For this purpose, different surroundings consisting of guanine-cytosine and adenine-thymine such as (GCGC) and (ATAT) have been investigated that are exhibiting substantially different values of TPA cross sections. This has been confirmed by extending the immediate surroundings as well as using the two-layer quantum mechanics/molecular mechanics (QM/MM) approach. We focus on the cationic closed-shell system and illustrate that the neutral open-shell system shifts OPA spectra into the NIR regime, which is suitable for applications. Thus, in this contribution, we propose novel NLO-phores inducing large TPA cross sections, opening the route for multiphoton imaging.
High
[ 0.6557377049180321, 30, 15.75 ]
In a seemingly coordinated attack, three explosions took place in Mumbai one after the other on Wednesday between 6.54 and 7.06 in the evening - the first at Zaveri Bazaar in South Mumbai, the second at Kabutar Khana in Dadar West, central Mumbai and the third again in South Mumbai at Opera House. The Maharashtra home secretary has said 20 people have died and 113 are injured. The death toll may rise. (Watch: This is an attack on the heart of India, says Maharashtra CM) In Delhi, the Home Ministry has confirmed that this was a terrorist attack and Mumbai has been sealed and is on high alert. Mr Chidambaram said his ministry would make a statement every two hours or sooner. The next statement will be made at 11 pm today. (Watch: A coordinated attack by terrorists, says Chidambaram) Mumbai Police sources say the Indian Mujahideen is suspected to be behind the attack. Union Home Ministry sources too say that the hand of the IM working closely with the Lashkar e Taiba is suspected. (Read) Two members of the IM were arrested in Maharashtra yesterday by the Maharashtra Anti-Terror Squad. However, sources confirm there was no intelligence alert or input that warned of today's terror attack. Mr Chavan said the blast at Opera House was of the highest intensity. All blasts took place during rush hour and in crowded places. The Zaveri Bazaar blast took place in an umbrella kept at the crowded Khau Gali, a street of eateries. (Watch: Eyewitness accounts of the blast in Zaveri Bazaar) The Dadar West explosion took place in a meter box on an electric pole near a bus stop. The Opera House blast took place at Prasad Chamber building. Dadar, a middle-class area, also houses the Shiv Sena Bhawan and the famous Shivaji Park. The Home Ministry has said that Improvised Explosive Devices (IEDs) were used. (Read: Find out who is behind the blasts, Shiv Sena tells Govt) There were also unconfirmed reports of an unexploded bomb being found in Dadar and a bomb hoax in Santacruz. Early reports said a police control room had received a call claiming that there were serial blasts in Mumbai. A National Investigation Agency (NIA) team has rushed to Mumbai from Delhi to investigate. A National Security Guards (NSG) team and forensic experts have also been airlifted on a BSF plane and are bound for Mumbai. The police have appealed for calm. Zaveri Bazaar has been hit before - over 50 people were killed in twin blasts in 2002. Malls, markets and other crowded places in Delhi, Chennai, Hyderabad and Bangalore are on high alert. (Read)
Mid
[ 0.597402597402597, 34.5, 23.25 ]
Q: Doxygen: Moving a static variable documentation inside a function

Consider this code:

#include <string>

namespace MyNamespace
{
    /** \brief Error Class */
    class Error
    {
    public :
        /** \brief Constructor */
        Error ( int in_code, const std::string& in_message );
        ...
    };

    /** \brief Successful result */
    static Error SUCCESS( 0, "Success. " );

    /** \brief Error of unknown kind, used internally only to identify a pending result */
    static Error ERROR_UNKNOWN( 1, "Unknown Error. " );
    ...
};

MyNamespace is already part of a doxygen group. When I generate the documentation, I get: one page for the Error class description, and SUCCESS and ERROR_UNKNOWN listed as "Functions" (weird!) on the MyNamespace page. How can I tag SUCCESS and ERROR_UNKNOWN so that they appear on the Error class description page? Note: I can't define a new group and put both in, as my group list is frozen (we want one group per library). A "hidden" group could be a solution.

A: So far, the best "solution" I could find was to use \memberof. This:

/** \memberof Error @{ */

/** \brief Successful result */
static Error SUCCESS( 0, "Success. " );

/** \brief Error of unknown kind, used internally only to identify a pending result */
static Error ERROR_UNKNOWN( 1, "Unknown Error. " );
...
/** @} */

will move SUCCESS and ERROR_UNKNOWN under "Static Public Member Functions" of the Error class description, which is not very nice as they are then mixed with other true static public member functions. But adding a \name tag as well...

/** \memberof Error @{ */
/** \name Error values: */

/** \brief Successful result */
static Error SUCCESS( 0, "Success. " );

/** \brief Error of unknown kind, used internally only to identify a pending result */
static Error ERROR_UNKNOWN( 1, "Unknown Error. " );
...
/** @} */

...will move SUCCESS and ERROR_UNKNOWN into an "Error values" section under "Related Functions" of the Error class description, with a nice note reporting "(Note that these are not member functions.)", which is definitely acceptable...
High
[ 0.669975186104218, 33.75, 16.625 ]
Dump trailer Anyone have a Bri-mar dump trailer?? Any issues I should look out for?? Also looked at ez-dumper. What do you guys think of dump inserts? The bri-mar dump was a 6X10 for $3550. Didn't price the ez-dumper trailers yet............. The better question is how do you plan on using it? If you want to tow a skidder then a low profile is better; if you're getting it for materials then you get better dumping with over-the-tires. Goosenecks hold more angle better and give you a place to put extra buckets. Remember, a dump trailer that is rated for less than 10,000 lbs is not worth the money spent, and if your choice is a 6x10 10k or a 7x14 10k trailer, the 6x10 actually can hold more because the trailer itself weighs less. They weigh in close to 1200 lbs if I remember right. The 10k rated units weigh in around 1800 or so. For the bit of extra money, I'd definitely go for the 10k GVW. The 7k just seems as if you'll easily outgrow it too fast. Joe J&R Lawn and Landscaping 27 years old, Fully licensed and insured. 50+ accounts. Contents of Company for sale! PM for info! I have a Bri-Mar, 12,000# LPHD...it is awesome. I bought it last year but we have one at work (at my PT gig) and it's been through Hell in the past 10 years and it is still going! That's why I decided on Bri-mar. It's an expensive trailer but well worth it. I would agree that the weight of an empty trailer really takes away from what you can actually put in it. I can put 8440# in mine, legally. I tow it with an F-550, so it's like not even behind me! Don't get under a 10,000# dump trailer, that is, if you have the truck to tow it. Pulling a smaller skid steer is fine, but it's a bit awkward....i'd get a dedicated trailer for that if I had a MTL, it'd be a better ride. --Specializing in Professional Landscape Installations and Enhancements-- Out east take a look at PJ trailers, they make a great trailer and you could probably drive and get it direct. We have an ABU 14' 16000lb. and there are days I wish we could carry more weight, so go as heavy as your wallet will afford.
Mid
[ 0.619385342789598, 32.75, 20.125 ]
Q: SSH (or SFTP) upload files from OSX terminal I'm trying to upload a file through the terminal (I didn't manage to do it through Transmit.app). I'm connected via SSH to my server. I want to send a file from my local desktop to my server. That's the command I'm trying: scp /Users/username/Desktop/ad-blocker.sh user@IP:/var/packages/DNSServer/target/script/ I always get this error though: "No such file or directory". Not an expert on SSH, but if I'm connected to my server over SSH, how would the terminal access my local file anyway (/Users/username/Desktop/ad-blocker.sh)? Kind regards. A: You have to make sure that the path you are copying to exists. Before executing your scp command, ssh into your remote host, cd to the directory, and issue the pwd command. That will give you the current working directory. Copy that to your clipboard. Exit SSH, then re-issue the scp command with the directory (paste from your clipboard).
Mid
[ 0.641686182669789, 34.25, 19.125 ]
s the b'th term of -273961, -547926, -821895, -1095868? -2*b**2 - 273959*b What is the h'th term of -52813, -52815, -52817? -2*h - 52811 What is the z'th term of 114, 324, 672, 1158, 1782? 69*z**2 + 3*z + 42 What is the c'th term of -7, -16, -41, -88, -163, -272, -421? -c**3 - 2*c**2 + 4*c - 8 What is the j'th term of -252, -723, -1196, -1671, -2148, -2627, -3108? -j**2 - 468*j + 217 What is the p'th term of -3253, -4333, -6133, -8653, -11893? -360*p**2 - 2893 What is the w'th term of -358, -1413, -3182, -5671, -8886, -12833, -17518, -22947? -w**3 - 351*w**2 + 5*w - 11 What is the i'th term of 929, 912, 867, 782, 645, 444, 167? -2*i**3 - 2*i**2 + 3*i + 930 What is the s'th term of -40775, -81551, -122327, -163103, -203879? -40776*s + 1 What is the y'th term of -2, -50, -114, -194, -290, -402? -8*y**2 - 24*y + 30 What is the k'th term of 11406, 22949, 34494, 46041, 57590? k**2 + 11540*k - 135 What is the b'th term of -614, -1221, -1828? -607*b - 7 What is the a'th term of -89, -193, -335, -527, -781, -1109, -1523, -2035? -2*a**3 - 7*a**2 - 69*a - 11 What is the l'th term of 825, 1618, 2395, 3150, 3877, 4570, 5223, 5830? -l**3 - 2*l**2 + 806*l + 22 What is the k'th term of -48581, -97167, -145755, -194345? -k**2 - 48583*k + 3 What is the k'th term of 784, 691, 612, 541, 472? -k**3 + 13*k**2 - 125*k + 897 What is the i'th term of -518, -545, -630, -803, -1094? -5*i**3 + i**2 + 5*i - 519 What is the j'th term of 359, 473, 779, 1373, 2351? 16*j**3 + 2*j + 341 What is the o'th term of -1576, -3501, -5426, -7351, -9276, -11201? -1925*o + 349 What is the t'th term of 82405, 82401, 82397, 82393? -4*t + 82409 What is the k'th term of -9022, -9014, -9002, -8986? 2*k**2 + 2*k - 9026 What is the y'th term of 111, 110, 33, -120, -349, -654? -38*y**2 + 113*y + 36 What is the b'th term of 111, 75, 27, -33, -105, -189, -285? -6*b**2 - 18*b + 135 What is the w'th term of 2794847, 2794846, 2794845? -w + 2794848 What is the n'th term of -192, -171, -142, -105, -60? 4*n**2 + 9*n - 205 What is the k'th term of 394, 377, 358, 337, 314? -k**2 - 14*k + 409 What is the v'th term of 28693, 57409, 86125? 28716*v - 23 What is the t'th term of 2519, 5036, 7551, 10064, 12575, 15084, 17591? -t**2 + 2520*t What is the c'th term of 22261, 22278, 22295, 22312, 22329, 22346? 17*c + 22244 What is the u'th term of 3137, 6263, 9387, 12509? -u**2 + 3129*u + 9 What is the d'th term of -2690, -2747, -2842, -2975, -3146? -19*d**2 - 2671 What is the x'th term of -29, -460, -1627, -3896, -7633? -61*x**3 - 2*x**2 + 2*x + 32 What is the d'th term of -211, -744, -1723, -3370, -5907, -9556, -14539, -21078? -37*d**3 - d**2 - 271*d + 98 What is the o'th term of 116452, 232903, 349354, 465805, 582256? 116451*o + 1 What is the l'th term of 11843, 11583, 11323? -260*l + 12103 What is the c'th term of -23621, -23607, -23593, -23579? 14*c - 23635 What is the g'th term of -3160, -6302, -9418, -12508, -15572, -18610? 13*g**2 - 3181*g + 8 What is the v'th term of 734, 355, -278, -1165, -2306, -3701, -5350? -127*v**2 + 2*v + 859 What is the v'th term of 12614, 12717, 12820, 12923, 13026? 103*v + 12511 What is the c'th term of 782, 2322, 4608, 7640, 11418? 373*c**2 + 421*c - 12 What is the o'th term of -88130, -176259, -264390, -352523, -440658, -528795, -616934? -o**2 - 88126*o - 3 What is the y'th term of 29904, 29916, 29934, 29958? 3*y**2 + 3*y + 29898 What is the l'th term of 1316, 2624, 3926, 5222, 6512, 7796? -3*l**2 + 1317*l + 2 What is the c'th term of 3466, 13921, 31346, 55741, 87106, 125441? 
3485*c**2 - 19 What is the f'th term of -5073, -5349, -5625? -276*f - 4797 What is the w'th term of -12619, -12077, -11165, -9877, -8207, -6149, -3697? w**3 + 179*w**2 - 2*w - 12797 What is the k'th term of -395, -495, -595, -695, -795? -100*k - 295 What is the l'th term of 108, 392, 676? 284*l - 176 What is the x'th term of 19109, 38217, 57325? 19108*x + 1 What is the y'th term of 550, 1983, 4308, 7525, 11634, 16635, 22528? 446*y**2 + 95*y + 9 What is the m'th term of -549486, -549483, -549480, -549477, -549474? 3*m - 549489 What is the f'th term of 13821, 27670, 41519, 55368? 13849*f - 28 What is the k'th term of 10770, 10809, 10876, 10983, 11142, 11365, 11664? 2*k**3 + 2*k**2 + 19*k + 10747 What is the h'th term of -16340, -16336, -16330, -16322, -16312, -16300, -16286? h**2 + h - 16342 What is the c'th term of -133623, -267244, -400865, -534486, -668107, -801728? -133621*c - 2 What is the o'th term of -3680, -7357, -11032, -14705, -18376? o**2 - 3680*o - 1 What is the d'th term of -12186, -48749, -109688, -195003, -304694, -438761? -12188*d**2 + d + 1 What is the m'th term of 425, 1337, 2857, 4985, 7721, 11065, 15017? 304*m**2 + 121 What is the q'th term of -29010, -28979, -28948? 31*q - 29041 What is the s'th term of -18457, -36883, -55297, -73693, -92065, -110407, -128713? s**3 - 18433*s - 25 What is the g'th term of 67, 68, 73, 82, 95, 112? 2*g**2 - 5*g + 70 What is the b'th term of 40871, 81755, 122641, 163529, 204419, 245311? b**2 + 40881*b - 11 What is the u'th term of -11041, -44175, -99399, -176713, -276117, -397611, -541195? -11045*u**2 + u + 3 What is the r'th term of 26, 79, 96, 17, -218, -669? -10*r**3 + 42*r**2 - 3*r - 3 What is the h'th term of -393, -1633, -3697, -6585, -10297, -14833, -20193? -412*h**2 - 4*h + 23 What is the n'th term of -3447, -3095, -2741, -2385, -2027, -1667? n**2 + 349*n - 3797 What is the i'th term of 446, 1371, 2914, 5075, 7854, 11251, 15266? 309*i**2 - 2*i + 139 What is the i'th term of -647, -656, -685, -740, -827? -i**3 - 4*i**2 + 10*i - 652 What is the v'th term of 316, 641, 966, 1291, 1616, 1941? 325*v - 9 What is the t'th term of 2718, 10876, 24486, 43560, 68110? 2*t**3 + 2714*t**2 + 2*t What is the x'th term of -13388, -26780, -40172, -53564, -66956, -80348? -13392*x + 4 What is the f'th term of 39762, 39764, 39766, 39768, 39770, 39772? 2*f + 39760 What is the f'th term of 3019, 11967, 26871, 47725, 74523, 107259? -f**3 + 2984*f**2 + 3*f + 33 What is the a'th term of 1828591, 1828590, 1828589, 1828588, 1828587, 1828586? -a + 1828592 What is the m'th term of 4149, 4121, 4091, 4059, 4025, 3989, 3951? -m**2 - 25*m + 4175 What is the n'th term of 2748, 10924, 24548, 43620? 2724*n**2 + 4*n + 20 What is the a'th term of 98, 147, 200, 263, 342, 443, 572? a**3 - 4*a**2 + 54*a + 47 What is the m'th term of 4636, 9226, 13816, 18406, 22996? 4590*m + 46 What is the p'th term of 24908, 24855, 24792, 24713, 24612, 24483, 24320? -p**3 + p**2 - 49*p + 24957 What is the a'th term of -1276, -1795, -2666, -3895, -5488, -7451, -9790, -12511? -a**3 - 170*a**2 - 2*a - 1103 What is the o'th term of 389, 766, 1141, 1514? -o**2 + 380*o + 10 What is the q'th term of 225, 517, 815, 1119, 1429? 3*q**2 + 283*q - 61 What is the d'th term of 295, 306, 317? 11*d + 284 What is the g'th term of -6698, -6701, -6704, -6707, -6710? -3*g - 6695 What is the z'th term of -1548, -1509, -1432, -1311, -1140, -913, -624? z**3 + 13*z**2 - 7*z - 1555 What is the j'th term of -164, -194, -224? -30*j - 134 What is the k'th term of -183391, -183393, -183397, -183403, -183411, -183421? 
-k**2 + k - 183391 What is the n'th term of -15938, -15935, -15932? 3*n - 15941 What is the v'th term of -2245, -2259, -2299, -2371, -2481, -2635, -2839? -v**3 - 7*v**2 + 14*v - 2251 What is the a'th term of 8115, 16232, 24365, 32520, 40703, 48920, 57177? a**3 + 2*a**2 + 8104*a + 8 What is the d'th term of 109828, 109833, 109840, 109849, 109860, 109873, 109888? d**2 + 2*d + 109825 What is the q'th term of -16467, -33278, -50087, -66894, -83699, -100502, -117303? q**2 - 16814*q + 346 What is the p'th term of -947, -1173, -1709, -2711, -4335, -6737? -26*p**3 + p**2 - 47*p - 875 What is the j'th term of -43127, -86259, -129393, -172529, -215667, -258807, -301949? -j**2 - 43129*j + 3 What is the l'th term of 2293, 2336, 2399, 2494, 2633, 2828, 3091, 3434? 2*l**3 - 2*l**2 + 35*l + 2258 What is the f'th term of -55, -135, -357, -793, -1515, -2595, -4105, -6117? -12*f**3 + f**2 + f - 45 What is the t'th term of 3169, 3131, 3093? -38*t + 3207 What is the b'th term of -29829, -119331, -268501, -477339, -745845? -29834*b**2 + 5 What is the r'th term of 5727, 7080, 8433, 9786, 11139, 12492? 1353*r + 4374 What is the i'th term of 1
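These drills all ask for a closed-form nth term, which can be recovered by taking successive differences. As a worked illustration (this example is not part of the original drill data), the last complete item above, the sequence 5727, 7080, 8433, 9786, 11139, 12492, has a constant first difference, so its term is linear:
\[
\begin{aligned}
&\text{First differences: } 7080-5727 = 8433-7080 = \dots = 1353 \quad (\text{constant}),\\
&a_r = 5727 + (r-1)\cdot 1353 = 1353r + 4374.
\end{aligned}
\]
When the first differences are not constant, one takes second or third differences; a constant k-th difference means the closed form is a degree-k polynomial, whose coefficients can be fitted from the first k+1 terms, which matches the quadratic and cubic answers listed above.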
Low
[ 0.5255813953488371, 28.25, 25.5 ]
Q: recaptcha, form validation

Good day. I have a working reCAPTCHA, but in its script only one field gets added to the database; I cannot get more than one through, the POST parameters are simply not passed.

// submit the form and check the captcha
$("#send").click(function(e){
    e.preventDefault();
    var login = $("#login").val();
    var email = $("#email").val();
    $.ajax({
        url: 'handler.php',
        type: "post",
        data: "login=" + login + "&g-recaptcha-response=" + grecaptcha.getResponse(),
        success: function(data){
            if(data==='ok'){
                $("span#err").text("ok").css("color","lime");
            }
            if(data==='recaptch error'){
                $("span#err").text("неправильно введена капча").css("color","yellow");
            }
            if(data==='pls enter captcha'){
                $("span#err").text("ввведите капчу").css("color","orange");
            }
        }
    });
});

//////////////// handler.php

if($_POST['g-recaptcha-response']){
    $captcha = $_POST['g-recaptcha-response'];
    $secret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
    $res = json_decode(file_get_contents("https://www.google.com/recaptcha/api/siteverify?secret=" . $secret . "&response=" . $captcha), true);
    if($res['success']){
        echo "ok";
        $login = $_POST['login'];
        $email = $_POST['email'];
        global $connection;
        $insert = mysqli_query($connection, "INSERT INTO news (login,email) VALUES ('$login','$email')");
    }else{
        echo "recaptch error";
    }
}else{
    echo 'pls enter captcha';
}

In the line data: "login=" + login + "&g-recaptcha-response=" + grecaptcha.getResponse() I cannot manage to send the data of more than one field. How can this be solved?

A: Pass the parameters as an object. jQuery Ajax:

data: {
    login: login,
    'g-recaptcha-response': grecaptcha.getResponse()
}

In PHP, read $_POST['login'] and $_POST['g-recaptcha-response'].
High
[ 0.744570837642192, 22.5, 7.71875 ]
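For comparison only (my own illustrative sketch, not part of the question or the answer above): the same server-side flow, reading several POST fields and calling Google's siteverify endpoint, can be written in Python. The requests package and the verify_and_store helper are assumptions of the example; the secret is a placeholder exactly as in the PHP handler.

import requests

SECRET = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"  # placeholder secret, as in the PHP handler above

def verify_and_store(form):
    # form is the parsed POST body, e.g. {'login': ..., 'email': ..., 'g-recaptcha-response': ...}
    token = form.get("g-recaptcha-response")
    if not token:
        return "pls enter captcha"
    res = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": SECRET, "response": token},
        timeout=10,
    ).json()
    if not res.get("success"):
        return "recaptch error"
    # Both fields arrive because the client sent them as separate keys in the AJAX data object.
    login, email = form.get("login"), form.get("email")
    # ... insert (login, email) into the database with a parameterized query here ...
    return "ok"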
Q: Getting return values from Task.WhenAll Hopefully a fairly simple one here. I have a collection of objects, each of which has an async method that I want to call and collect values from. I'd like them to run in parallel. What I'd like to achieve can be summed up in one broken line of code: IEnumerable<TestResult> results = await Task.WhenAll(myCollection.Select(v => v.TestAsync())); I've tried various ways of writing this without success. Any thoughts? A: If the tasks you're awaiting have a result of the same type Task.WhenAll returns an array of them. For example for this class: public class Test { public async Task<TestResult> TestAsync() { await Task.Delay(1000); // Imagine an I/O operation. return new TestResult(); } } We get these results: var myCollection = new List<Test>(); myCollection.Add(new Test()); IEnumerable<TestResult> results = await Task.WhenAll(myCollection.Select(v => v.TestAsync()));
High
[ 0.693418940609951, 27, 11.9375 ]
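Side note (not from the original answer): the same fan-out-and-await pattern exists in Python as asyncio.gather, which likewise awaits every task and returns their results in order. The class and method names below are invented for the illustration.

import asyncio

class Test:
    async def test_async(self):
        await asyncio.sleep(1)  # imagine an I/O operation
        return "TestResult"

async def main():
    my_collection = [Test() for _ in range(3)]
    # gather awaits every coroutine and returns their results as a list, in input order
    results = await asyncio.gather(*(t.test_async() for t in my_collection))
    print(results)  # ['TestResult', 'TestResult', 'TestResult']

asyncio.run(main())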
Product Description The KTI Safety Alert Sports Arm Band is designed to suit the SA2G Personal Locator Beacon. Ideal for activities either on land or on the water. Being custom made for the KTI PLB, it will also fit other small items including the Ocean Signal rescueMe PLB1. FEATURES: Made from super comfortable black neoprene; Adjustable arm band with velcro strap; Clear plastic window for easy viewing; Cut away for the antenna for extended use when activated; Pocket for storage of a heliograph signalling mirror; Additional storage for ID / credit card
Mid
[ 0.5742092457420921, 29.5, 21.875 ]
According to an old tradition, the monastery was built where Mary, Joseph and the infant Jesus took shelter in a cave while fleeing from Herod the Great. This event is commemorated in the ground-floor crypt beneath the monastery church. An icon illustrates the Flight of the Holy Family into Egypt and a large painting depicts a contented Jesus being nursed at the breast by his mother Mary. Fresco of St Gerasimus with his lion (Bukvoed) The upper-floor church contains many holy icons and frescoes, including paintings of Gerasimus and his lion. Cabinets in the crypt store the bones of monks killed during the Persian invasion of 614. Hospitable place for pilgrims A place of hospitality and refreshment for pilgrims, with fruit trees, flowers and birdsong, the gold-domed monastery offers a contrast to the hot and barren environment of the Judaean wilderness. Founded in the fifth century, it was originally dedicated to Our Lady of Kalamon (Greek for reeds), but was later renamed in honour of Gerasimus, who founded a nearby monastery that had been abandoned. It was destroyed in 614, rebuilt by the Crusaders, abandoned after the Crusader period, restored in the 12th century, rebuilt in 1588, destroyed around 1734 and re-established in 1885. In Arabic it is known as Deir Hajla, meaning the monastery of the partridge, a bird common to the area. The monastery functioned in the form of a laura — with a cluster of hermits’ caves located around a community and worship centre. The hermits spent weekdays alone in their caves, occupied in prayer and making ropes and baskets. They went to the centre for Saturdays and Sundays, taking their handiwork and partaking in Divine Liturgy and communal activities. The monastic rule was strict. During the week the hermit monks survived on dry bread, dates and water. At the weekends they ate cooked food and drank wine. Their only personal belongings were a rush mat and a drinking bowl. Hermits’ caves can still be seen in the steep cliffs a kilometre east of the monastery and in the adjacent mountains. Gerasimus redeveloped monastic life Like many who founded Judaean monasteries, Gerasimus (also spelt Gerassimos or Gerasimos) came from outside the Holy Land — from a wealthy family in Lycia, in present-day Turkey. Already a monk when he came to Palestine, he followed the monastic leader Euthymius into the desert and became renowned for his piety and asceticism. Because of the similarity of names, Gerasimus is sometimes confused with St Jerome, the Bible translator who lived in Bethlehem. Gerasimus is credited with a new development in monastic life. Previously desert monks lived either in caves or in monasteries. He was the first to combine the solitude of a wilderness hermit with the communal aspect of a monastery by bringing hermits together on Saturdays and Sundays for worship and fellowship. He is believed to have attended the crucial Council of Chalcedon in 451, which caused a major rift in the Eastern Orthodox world. Called to settle differences of opinion on the nature of Christ, the council declared that he has two natures in one Person as truly God and truly man. Gerasimus briefly opposed this declaration, then accepted it. The lion depicted in icons of Gerasimus comes from a story that he found the animal wandering in the desert, suffering from a thorn embedded in a paw. The saint gently removed the thorn and tended to the wound. 
The lion thereafter devoted himself to Gerasimus, serving him and the monastery and retrieving the monastery’s donkey when it was stolen by thieves. The story has it that when Gerasimus died in 475 the lion lay on his grave and died of grief.
High
[ 0.669354838709677, 31.125, 15.375 ]
Q: How to understand the choice of Krylov subspace orthonormal basis? This semester I am studying Krylov subspace iterative methods (for Ax=b) using the book H. A. Van der Vorst, Iterative Krylov Methods for Large Linear Systems, volume 13, Cambridge University Press, 2003. About the choice of the basis of the Krylov subspace, I have some doubts about the following statement in this book (section 3.3): The obvious basis $r_0, Ar_0, \ldots, A^{i-1}r_0$ for the i-dimensional Krylov subspace is not attractive from a numerical point of view, since the vectors $A^j r_0, j=0,\ldots,i-1$ point more and more in the direction of the dominant eigenvector for increasing $j$ (the power method!), and hence the basis vectors become dependent in finite precision arithmetic. It does not help to compute this nonorthogonal generic basis first and to orthogonalize it afterwards. The result would be that we have orthogonalized a very ill-conditioned set of basis vectors, which is numerically still not an attractive situation. I have two questions about what the author said (I cannot understand what he wants to say): since the vectors $A^j r_0, j=0,\ldots,i-1$ point more and more in the direction of the dominant eigenvector for increasing $j$ (the power method!), and hence the basis vectors become dependent in finite precision arithmetic. The result would be that we have orthogonalized a very ill-conditioned set of basis vectors, which is numerically still not an attractive situation. These two sentences are what I cannot understand. I know that we often use Gram-Schmidt to generate an orthonormal basis of the Krylov subspace, but I also want to know why we do not use the obvious basis above (the power method). Furthermore, about the orthonormal basis of the Krylov subspace, I have something to ask. Usually we use Gram-Schmidt or modified Gram-Schmidt (MGS) to construct it, but I also know that Householder reflections are more stable; alternatively, we can also use MGS twice (maybe this needs more computational work) to guarantee the orthogonality of the basis. Which way does Matlab choose, and why does Matlab choose that way in its built-in gmres.m or other built-in functions, like bicg, bicgstab, etc.? Which way should we (as users) choose when we write a gmres.m function? Any suggestions are welcome. A: I doubt I can explain this better than the author, but I'll give it a shot. Let's say that $r_0 = \sum \alpha_i x_i$, with $x_i$ an eigenvector with eigenvalue $\lambda_i$. We can then write the vectors in the basis as $A^k r_0 = \sum \lambda_i^k \alpha_i x_i$. If all eigenvalues are distinct, $A^k r_0$ will converge to the eigenvector with the largest (in absolute value) eigenvalue. This is the basis of the power method and what Van der Vorst was referring to. Because $A^k r_0$ will be close to that eigenvector for large $k$, this also means that $A^k r_0$ and $A^{k+1} r_0$ will be close to each other. They will still form a basis for the Krylov subspace, but an ill-conditioned one. (I suggest you read chapter 2 again if you don't understand condition.) Working with an ill-conditioned basis is numerically unattractive. If we then orthogonalise the basis, the basis would have a good condition, but it still is not a good idea. Let's suppose we use a Householder-based QR. Since it is backward stable, the numerical result $\hat{Q}\hat{R}$ will be close to $QR$. However, the forward error on $\hat{Q}$ and $\hat{R}$ might still be large and the orthogonal basis that we have computed might not span the correct space. 
Edit: Matlab uses Householder, my bad, using 'edit gmres.m' opened up my own implementation. BICG and BICGSTAB are quite different methods, it is a biorthogonalisation scheme that leads to a tridiagonal projection even for non-symmetric matrices. For symmetric matrices, the two subspaces are the same and the orthogonalisation is equivalent to MGS. When considering what variant to use, I suggest you stick to using MGS. I find it a bit easier to implement. However, if you really become worried about loss of orthogonality for some matrices, you could switch to double MGS or Householder Arnoldi.
Mid
[ 0.5962059620596201, 27.5, 18.625 ]
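To make the point of the answer above concrete (a sketch of my own, not from the book or the thread): building the raw Krylov vectors r0, A*r0, ..., A^(k-1)*r0 and comparing their condition number against an Arnoldi basis produced by modified Gram-Schmidt shows the monomial basis degenerating while the orthonormal one stays well conditioned. A symmetric test matrix is used so that the dominant eigenvalue is real.

import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 15
B = rng.standard_normal((n, n))
A = B + B.T                      # symmetric, so the dominant eigenvalue is real
r0 = rng.standard_normal(n)

# Monomial Krylov basis r0, A r0, ..., A^(k-1) r0 (columns normalized)
K = np.empty((n, k))
v = r0.copy()
for j in range(k):
    K[:, j] = v / np.linalg.norm(v)
    v = A @ v

# Arnoldi with modified Gram-Schmidt: orthonormal basis of the same Krylov subspace
Q = np.empty((n, k))
Q[:, 0] = r0 / np.linalg.norm(r0)
for j in range(1, k):
    w = A @ Q[:, j - 1]
    for i in range(j):                     # MGS sweep against all previous vectors
        w -= (Q[:, i] @ w) * Q[:, i]
    Q[:, j] = w / np.linalg.norm(w)

print("cond(monomial Krylov basis):", np.linalg.cond(K))   # grows rapidly with k
print("cond(MGS/Arnoldi basis):   ", np.linalg.cond(Q))    # stays close to 1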
Introduction {#s1} ============ *Helicobacter pylori* is responsible for most duodenal and peptic ulcer and also plays an important role in gastric adenocarcinoma [@pone.0002689-Atherton1]--[@pone.0002689-Brenner1]. The mechanism of *H. pylori* pathogenic effect is unclear, but it is believed to be related to complex host bacterial interactions triggered by virulence genes [@pone.0002689-Amieva1], and it is possible that these effects are enhanced by the invasiveness of the bacterium [@pone.0002689-SeminoMora1]--[@pone.0002689-Necchi1]. Finally, *H. pylori* was recently observed within gastric mucosa capillaries, where it appears to establish close association with erythrocytes [@pone.0002689-Necchi1], [@pone.0002689-Aspholm1]. Therefore, it is important to develop specific and sensitive molecular methods allowing the detection and identification of this microorganism in biological specimens. Culture of the bacterium is considered the gold standard, but the method is not sensitive and is specific only if additional testing is performed on the isolates. The method of choice involves polymerase chain reaction (PCR) amplification of specific *H. pylori* genes. However, using this approach may be problematic due to the extensive polymorphism of many *H. pylori* genes and the absence of particular genes in some strains \[e.g. cagA [@pone.0002689-CamorlingaPonce1]\]. Among the genes that have been tested, *ureA* and *ureC* (also named *glmM*) appear sensitive, but they lack specificity. Therefore, the concurrent detection of multiple, *H. pylori-*specific, genes and the use of different sets of primers has been considered to be necessary to achieve specific and sensitive diagnosis of the infection. Another approach to the question has been to use *H. pylori 16S rRNA*. This ribosomal gene is particular in that it is present in all bacteria while, at the same time, it comprises nucleotide sequences that are specific to a given bacterial genus [@pone.0002689-Kolbert1], [@pone.0002689-Smith1]. Sequence analysis of the *16S rRNA* gene has led to our current understanding of prokaryotic phylogeny and *H. pylori 16S rRNA* gene sequence analysis unambiguously differentiated the *Helicobacter* genus from the closely related *Campylobacter* genus [@pone.0002689-Gorkiewicz1] thus allowing creation of the *Helicobacter* genus. Finally, *H. pylori 16S rRNA* gene sequence has been used as a tool to differentiate *H. pylori* from other *Helicobacter* sp. especially for isolates from animal sources [@pone.0002689-Ho1]--[@pone.0002689-Fox1]. Here, we sequenced the *16S rRNA* genes of two *H. pylori* strains with markedly different DNA fingerprints that had been cultured from two patients living in different continents and with different endoscopic diagnosis. By matching these sequences with each other and with those available in the National Center for Biotechnology Information (NCBI) nucleotide database, we first identified a unique nucleotide domain that is homologous in most *H. pylori* strains. We then defined, within this domain, a sequence that is homologous among *H. pylori* strains but not among other bacterial species and used this domain to design *H. pylori*-specific primers and probes to be used in a real-time quantitative RT-PCR (TaqMan) assay and an *in situ* hybridization (ISH) method. These methods can specifically detect less than 10 copies of *H. pylori* in gastric biopsies and also allow quantification of *H. 
pylori* density in biopsies from animals and patients with gastritis, gastric precancerous lesions and cancer. Methods {#s2} ======= **Ethical approval** to carry studies in humans was obtained from Institutional Review Board of the participating institutions and written consent forms was obtained from each participant. In addition, studies performed in animals were approved by the Institutional Animal Care and Use Committee. *H. pylori* strains {#s2a} ------------------- Gastric antral biopsies were harvested in (1) an Albanian patient with gastric adenocarcinoma and (2) a U.S. Caucasian patient with marked gastritis but no ulcer. Biopsies were cultured using *Campylobacter* chocolatized blood agar plates supplemented with Trimethoprim, Vancomycin, Amphotericin B and Polymyxin B (Remel, Lenexa) at 37°C in an atmosphere of 90% N~2~, 5% O~2~, and 5% CO~2~ (microaerobic conditions). Bacterial isolates consistent with *H. pylori* in shape, colony morphology, enzymatic activity, and Gram-negative status grew within 7--10 days. Single colony isolates were subcultured on sheep blood agar plates supplemented with Tryptic Soy Agar (Remel, Lenexa, KS), confirmed for enzymatic activity and Gram stain and collected in phosphate buffer saline (PBS; 137 mM NaCl, 2.7 mM KCl, 10 mM phosphate buffer) for subsequent genomic DNA extraction and analysis. DNA extraction {#s2b} -------------- DNA was extracted from each isolate collected in PBS by QIAamp DNA mini kit and processed the samples as described in the insert (Qiagen Inc., Stanford). Random Amplification of Polymorphic DNA (RAPD) {#s2c} ---------------------------------------------- DNA fingerprinting using the RAPD technique was used to compare the isolates. A set of 5 different 10-mer primers (1247: 5′-AAGAGCCCGT-3′; 1254: 5′-CCGCAGCCAA-3′; 1281: 5′-AACGCGCAAC-3′; 1238 5′- GCGATCCCCA-3′; 1290: 5′- GTGGATGCGA-3′) were used as published [@pone.0002689-Akopyants1]. *16S rRNA* gene amplification and sequencing {#s2d} -------------------------------------------- Total DNA was extracted from each patient\'s isolate and PCR-amplified using published primers (see supplementary [Methods S1](#pone.0002689.s001){ref-type="supplementary-material"}) [@pone.0002689-Eckloff1]. The Basic Local Alignment Search Tool (nucleotide BLAST), National Center for Biotechnology Information (NCBI), NIH, (<http://www.ncbi.nlm.nih.gov/blast/Blast.cgi>) feature for alignment between two nucleotide sequences (bl2seq) [@pone.0002689-Tatusova1] was used to align the overlapping sequenced segments of the *16S rRNA* gene. The *16S rRNA* sequences of strains USU101 and USU102 were decoded and registered in the GenBank nucleotide database as EU544199 and EU544200, respectively. Histology and *in situ* hybridization {#s2e} ------------------------------------- Gastric biopsies were fixed in 4% paraformaldehyde within 30 seconds of harvesting, dehydrated in ethanol within two days, and embedded in paraffin. Unstained sections were then stained with hematoxylin and eosin or according to Genta [@pone.0002689-Genta1] or processed for ISH as described in the supplementaries [@pone.0002689-SeminoMora1], [@pone.0002689-Aspholm2]. 
Controls of method {#s2f} ------------------ Control for nonspecific binding was performed by using: (1) sense instead of antisense probe; (2) hybridization buffer instead of antisense probe; (3) unlabeled antisense probe; (4) digoxigenin or biotin-labeled probe for scorpion Butus martensi Karsch neurotoxin sequence \[5′-GGC CAC GCG TCG ACT AGT AC-3′\] [@pone.0002689-Lan1]; (5) RNaseA pretreatment (Roche); (6) DNase I pretreatment (Roche); and (7) RNase plus DNase I pretreatment. *In silico* search for a *16S rRNA* sequence conserved in, and specific for, *H. pylori* strains {#s2g} ------------------------------------------------------------------------------------------------ The DNASTAR software ([www.dnastar.com](http://www.dnastar.com)) was used to perform multi-alignment of the two decoded sequences described above along with the sequences of the three strains that have been completely sequenced to date (J99, 26695, and HPAG1) and with the published sequences of the *16S* ribosomal RNA of *E. coli* (J01859), *S. bareilly* (U92196), *C. jejuni* (LO4315), *S. flexneri* (AE016991 AE014073), and *H. heilmannii* (AF506793). Design of primers and probes specifically recognizing published *H. pylori* strains {#s2h} ----------------------------------------------------------------------------------- The PrimerExpress® v2.0 Software was used to design multiple sets of real-time RT-PCR primers flanking an oligonucleotide probe. The rules and requirements described in the PrimerExpress tutorial [@pone.0002689-Applera1] were then applied to select the set that would provide maximum sensitivity and specificity of the assay. Locus-specific primers flanking an oligonucleotide probe labeled with a 5′ fluorescent Reporter dye (FAM or TET) and a 3′ Quencher dye (TAMRA) were ordered from Applied Biosystems ([www.appliedbiosystems.com](http://www.appliedbiosystems.com)). Validation of the primers and probes {#s2i} ------------------------------------ Pure cultures of *H. pylori*, *E. coli* (Top10, Invitrogen, Carsbad, CA), *S. typhimurium* LT2, *V. cholerae* O139 (Classical Ogawa), *V. cholerae* O139 (El Tor), and *P. aeruginosa* were lysed and total DNA was extracted. The specificity of the primers and probes described above was then verified by real-time PCR using an ABI PRISM 7500 Sequence Detection System (Applied Biosystems) [@pone.0002689-Giulietti1]. In addition, smears of the pure cultures were streaked onto glass slides, immediately covered with a drop of 4% paraformaldehyde, and let to dry overnight. The next day, they were processed for ISH as described above. Cloning of the standard cRNA {#s2j} ---------------------------- The MEGAscript protocol for Standard cRNA cloning (MEGAscript high yield transcription kit, Ambion) was used to incorporate the SP6 promoter into *H. pylori* strain J99 *16S rRNA* at a location situated upstream of the sequence of interest, thus ensuring that the promoter sequence was incorporated into the PCR product. Conditions for primer extension were 95°C for 15 sec, 60°C for 15 sec, 72°C for 1 min. for 38 cycles to produce a 246 bp PCR product. The ABI Prism BigDye Terminator Cycle Sequencing Ready Reaction kit was used to verify that the sequence of the PCR product was identical to the corresponding *16S rRNA* sequence. In vitro transcription of cRNA was then performed using 2 µL (0.2 µg) of the PCR product as a template with the MEGAscript High Yield Transcription Kit (Ambion). 
This reverse transcription product was purified by RNeasy Mini Kit and treated with DNaseI during this purification (Qiagen). The concentration of this cRNA was calculated from the mean of three OD measurements and then converted to the copy numbers using Avogadro\'s number. The stock solution was aliquoted from freshly prepared 10-fold serial dilutions from 10^1^ to 10^6^ copies and stored at −80°C. Absolute quantitative real-time RT-PCR (QRT-PCR) {#s2k} ------------------------------------------------ A single-tube reaction with a TaqMan One-Step RT-PCR Master Mix Reagents kit (Applied Biosystems) designed for reverse transcription (RT) and polymerase chain reaction (PCR) in a single buffer system was used in an ABI PRISM 7500 Sequence Detection System (Applied Biosystems, Foster City, CA). The primers and probes concentrations were first optimized using controls from a pool of total RNA extracted from *H. pylori* cultures and monkey gastric biopsies (BioChain Institute, Inc. Hayward, CA). The assay was then performed by adding 2 µl of 50 ng/µl monkey total RNA aliquots to the real time RT-PCR reaction mix to a final volume of 50 µl. The RT step was performed at 48°C for 30 min, followed by 10 min at 95°C for AmpliTaq Gold Activation. The PCR step consisted of 40 cycles of denature 15 sec at 95°C and anneal/extend 1 min at 60°C. All samples and cRNA standards were assayed without reverse transcriptase to confirm the absence of DNA contamination. Conversion of Ct values to *H. pylori 16S rRNA* copy numbers was performed using linear regression analysis of a standard curve derived from serial 10^1^ to 10^6^ copies, 10-fold dilutions of the cloned cRNA. Gastric biopsies {#s2l} ---------------- Three biopsies were obtained from each of the 23 rhesus monkeys studied in an inoculation experiment [@pone.0002689-Liu1]. As described above, the first biopsy was cultured for *H. pylori*, the second biopsy was fixed in formalin and either stained according to Genta [@pone.0002689-Genta1] or unstained sections were processed for ISH, and the third biopsy was processed to extract total RNA. Statistical Analysis {#s2m} -------------------- Data were entered into our Microsoft Access database. Log-transformed copy numbers were normally distributed. Pearson correlation coefficients (r) and associated probabilities (P) were calculated and a two-sided P-value of 0.05 or less was considered statistically significant. Results {#s3} ======= DNA fingerprinting {#s3a} ------------------ In order to study the genomic diversity between various *H. pylori* strains, we performed RAPD fingerprinting analysis of strains USU101, USU102, J99, and 26695. As shown in [Figure 1](#pone-0002689-g001){ref-type="fig"}, the pattern of these four strains was markedly different from each other in regard to all 4 primers used for RAPD. ![DNA fingerprinting (RAPD) of four *H. pylori* strains: USU101, isolated from an Albanian patient with gastric adenocarcinoma (1), USU102, isolated from a U.S. Caucasian patient with no ulcer (2), strain J99 (3), and strain 26695 (4).\ Note that the DNA fingerprints of the four strains are quite different from each other.](pone.0002689.g001){#pone-0002689-g001} *In silico* search for a *16S rRNA* sequence conserved in *H. pylori* strains {#s3b} ----------------------------------------------------------------------------- To examine whether a particular domain of *H. 
pylori 16S rRNA* sequence was conserved among strains with markedly different fingerprints, the DNASTAR software was used to perform multi-alignment of the *16S rRNA* sequences of the four strains described above**.** We discovered that a 546-bp nucleotide domain was 100% conserved among these five sequences ([Figure 2A](#pone-0002689-g002){ref-type="fig"}). To determine whether this domain was also conserved among various *H. pylori* strains, we performed a nucleotide BLAST of this sequence and observed that the sequence was 100% homologous to 49 *H. pylori* sequences published in GenBank to date. ![Sequences of *H. pylori 16S rRNA* that are 100% homologous among USU-101, USU-102, J99, and 26695 *H. pylori* strains (A), and also do not match the *16S rRNA* sequences of *E. coli*, *S. bareilly*, *C. jejuni*, and *S. flexneri* (B), nor the sequences of *H. heilmannii* (C), and encompass the set of primers and TaqMan probe (D).\ E shows the sequences of the two ISH probes used in the present study (546-bp from 187 to 732 of J99 *16S rRNA* sequence).](pone.0002689.g002){#pone-0002689-g002} *In silico* search for a conserved *16S rRNA* sequence that is also specific to *H. pylori* strains {#s3c} --------------------------------------------------------------------------------------------------- In order to search for a region that is specific for *H. pylori*, the conserved 546-bp nucleotide domain was entered into the DNAStar software along with the published *16S rRNA* sequences of *E. coli*, *S. bareilly*, *C. jejuni*, *S. flexneri*, and *H. heilmannii*. We observed that a 229-bp domain of the conserved region did not match the other five bacteria ([Figure 2C](#pone-0002689-g002){ref-type="fig"}). Basic nucleotide BLAST alignment (Blastn) of this sequence demonstrated complete homology with 74 *H. pylori* strains, two *H. nemestrinae* and four *Helicobacter* sp. "liver" (that were subsequently found to be indistinguishable from *H. pylori* [@pone.0002689-Avenaud1], [@pone.0002689-Suerbaum1]), and 17 uncultured *Helicobacter* species. Sequences of these uncultured *Helicobacter* species had been determined from biopsies from human esophageal carcinoma or inflamed colon [@pone.0002689-Sturegard1], from the stomach of cheetahs \[a carnivore that is frequently colonized by the closest *H. pylori* relative, *H. acinonychis* [@pone.0002689-Eppinger1]\], or from the stomach of thoroughbred horses [@pone.0002689-Contreras1]. The following TaqMan RT-PCR primers and probe were then designed within the 229-bp sequence as described in [Materials and Methods](#s2){ref-type="sec"}: forward primer 5′-TCG GAA TCA CTG GGC GTA A-3′; reverse primer 5′-TTC TAT GGT TAA GCC ATA GGA TTT CAC-3′; probe 5′--TGA CTG ACT ATC CCG CCT ACG CGC T-3′ ([Figure 2D](#pone-0002689-g002){ref-type="fig"}). In addition, two probes for in situ hybridization (ISH) were designed within the same 229-bp sequence ([Figure 2E](#pone-0002689-g002){ref-type="fig"}). *In silico* validation of the RT-PCR set of primers and probe and of the ISH probes {#s3d} ----------------------------------------------------------------------------------- In order to validate the specificity of the set of two primers and a probe used in our real-time RT-PCR assay, we performed a BLAST alignment of the corresponding 76-bp sequence ([Figure 2D](#pone-0002689-g002){ref-type="fig"}) with the GenBank database. We observed 100% homology with 136 *H. pylori* strains, three *H. nemestrinae* and four *Helicobacter* sp. "liver" (that are, in fact, *H. 
pylori* [@pone.0002689-Avenaud1], [@pone.0002689-Suerbaum1]), one *H. acinonychis* [@pone.0002689-Eppinger1] and 37 uncultured *Helicobacter* species (isolated from human esophageal carcinoma, inflamed colon, or liver [@pone.0002689-Sturegard1], [@pone.0002689-Castera1], from seven cheetahs, and from a tiger). In addition, two *H. pylori* 16S RNA sequences (AY057935 and AY057936) showing a low homology (91% and 97%, respectively) with the 76-bp nucleotide sequence were isolates referred to the genomic sequences of the *H. pylori* strains 26695 and J99 in the ATCC catalog. It is noteworthy, however, that in contrast to these two ATCC isolates, both 26695 and J99 strains are among those showing 100% homology with our 76-bp sequence. To clarify this apparent discrepancy, we performed BLAST alignment of AY057935 and AY057936 with their respective parental strains, and found 82 and 91% homology, respectively. Thus, it is likely that AY057935 and AY057936 strains are, in fact, mutated clones of the respective parental strains, or that they were contaminated during laboratory procedures. Alignment of the 37- and 33-bp sequences corresponding to the ISH probes revealed that they were 100% homologous with over 150 *H. pylori* strains but that there were at least two mismatches with different *Helicobacter* sp. such as *H. cetorum* and *H. bilis*. Interestingly, the ISH probes were also 100% homologous with several *Helicobacter* sp. isolates from horses, dogs, zoo seals, and other animals that live in close contact with humans. *In silico* verification of the specificity of the primers and probes {#s3e} --------------------------------------------------------------------- In order to determine whether the proposed method was specific for *H. pylori*, we performed a series of BLAST (bl2seq) of the sequence corresponding the RT-PCR primers and probe (71-bp of the 76 bp entire sequence) with the sequences of non-*H. pylori* bacteria. We observed the presence of 27 mismatches for *E. coli*, *S. bareilly*, and *S. flexneri*, 13 mismatches for *C. jejuni* and 6 mismatches for *H. heilmannii.* *In vitro* validation of the RT-PCR primers and probes {#s3f} ------------------------------------------------------ By real-time RT-PCR, pure cultures of *H. pylori* were positive whereas pure cultures of *E. coli* (Top10), *S. typhimurium* LT2, *V. cholerae* O139 (Classical Ogawa), *V. cholerae* O139 (El Tor), and *P. aeruginosa* were negative. *In vitro* validation of the *in situ* hybridization probe {#s3g} ---------------------------------------------------------- Pure cultures of *H. pylori* were positive whereas pure cultures of *E. coli*, *S. typhimurium*, *V. cholerae*, and *P. aeruginosa* were negative ([Figure 3](#pone-0002689-g003){ref-type="fig"}). This method is being used in the laboratory to specifically verify that *H. pylori* single colony isolates are not contaminated by other bacteria. ![Smears of bacteria processed by ISH and FISH using biotin-labeled probe specific for *H. pylori 16S rRNA* (1,000X).\ *H. pylori* isolate processed by ISH and using biotin-labeled probe specific for *H. pylori 16S rRNA* (A; avidin peroxidase-DAB; brown; and B; avidin alkaline phosphatase BCIP/NBT; blue) or by FISH \[C; avidin- fluorescein (FITC) stained green\]. Negative controls (light violet stain due to the hematoxylin QS counterstaining but no brown or blue reaction): ISH using biotin-labeled probe specific for *H. pylori 16S rRNA* (avidin- peroxidase-DAB; negative) in a strain of *S. typhimurium* LT2 (D). 
Negative controls of methods using PBS (E) or scorpion toxin (F).](pone.0002689.g003){#pone-0002689-g003} Determination of *H. pylori* density in gastric biopsies from Rhesus monkeys by RT-PCR {#s3h} -------------------------------------------------------------------------------------- Primary cultures of the first biopsy were negative in 105 monkeys that had less than 500 copies/100 ng of RNA extracted from the second biopsy. The number of positive cultures increased progressively with increasing *H. pylori* density (500--5,000: 2/29; 5,000--50,000: 8/30; and \>50,000: 15/30). Visualization of *H. pylori* in Rhesus monkey gastric biopsies by Genta and ISH {#s3i} ------------------------------------------------------------------------------- Biopsies from a Rhesus monkey colonized by both *H. pylori* and *H. heilmannii* demonstrated that only *H. pylori*--shaped bacteria were detected by ISH ([Figure 4](#pone-0002689-g004){ref-type="fig"}). ![Gastric biopsy of a rhesus monkey with *H. pylori* and *H. heilmannii* co-infection.\ Genta stain (A: 400X; insert: 1,000X) demonstrates the presence of high *H. heilmannii* infection (typical tightly spiraled, ∼10 µm-long rods), in addition to a few *H. pylori*--like bacteria (∼3 µm-long and curved). *In situ* hybridization (ISH) with *16S rRNA* probe (B: 400X; insert: 1,000X) demonstrates the presence of *H. pylori* (stained blue by the avidin alkaline phosphatase (nitroblue tetrazolium) while other, tightly spiraled bacteria are negative.](pone.0002689.g004){#pone-0002689-g004} Discussion {#s4} ========== In the present study, we used an *in silico* approach to demonstrate that a 546-bp domain of *H. pylori 16S rRNA* is highly conserved in most *H. pylori 16S rRNA* sequences registered in the NCBI GenBank and that a 229-bp sub-domain of this conserved region is specific to *H. pylori*. Within this sub-domain, it was possible to design an ISH probe and a set of real-time RT-PCR primers and a TaqMan probe that are 100% homologous with over 100 *H. pylori* strains isolated from humans residing in four continents, from monkeys [@pone.0002689-Drazek1], [@pone.0002689-Doi1], and from cats [@pone.0002689-Handt1]. In addition, 100% homology was found with many *Helicobacter* sp. that were later identified as *H. pylori*. Two are listed as *H. nemestrinae* (AF363064 and AF348617), although the strains are now recognized to be *H. pylori* [@pone.0002689-Suerbaum1]. The revised GenBank description of the strain, under "source" and "organism" reflects the correction, although the name *H. nemestrinae* still remains associated with the accession number. Four other strains are published in GenBank as *Helicobacter* sp. "liver" (AF142583 and AF142585) although a subsequent phylogenetic study suggested that they are, in fact, *H. pylori* [@pone.0002689-Avenaud1]. Five other sequences correspond to those of *H. pylori*--like DNA extracted from the liver of patients with hepatitis C [@pone.0002689-Castera1]. Another strain is currently listed as a *H. heilmannii* (AF506794) in NCBI, but this strain is not mentioned in the publication [@pone.0002689-ORourke1] because it clustered with *H. pylori* by both *16S rRNA* and urease sequencing (O\'Rourke, personal communication). Finally, 13 of the 100% homologous *Helicobacter sp.* strains are extremely close to *H. pylori* and were isolated from carnivores including cheetahs and a tiger, and from horses. Interestingly, these animals live in close association with humans and they may be infected with *H. 
pylori* [@pone.0002689-Eppinger1]. Importantly, the 76-bp region corresponding to the primers and probe and the 37- and 33-bp sequences of the ISH probes have multiple mismatches with non-*H. pylori* sequences. *16S rRNA* was chosen for detection and quantification of *H. pylori* because ribosomal RNAs exhibit a high degree of functional and evolutionary homology within all bacteria and those sequences have been used for phylogenetic comparisons and classifications of microbial organisms [@pone.0002689-Drancourt1], [@pone.0002689-Gorkiewicz2]. Analysis of *16S rRNA* in bacteria led to the detection of conserved, semi-conserved and non-conserved domains in this gene and to the development of molecular techniques that can specifically identify a variety of bacteria species [@pone.0002689-Gray1]. *Helicobacter* genus-specific primers for *16S rRNA* have been used in PCR amplification as a screening tool to detect *Helicobacter* organisms in biological specimens [@pone.0002689-Fox1], [@pone.0002689-Riley1]. Although the sequences corresponding to these primers are common to most species within the genus *Helicobacter*, sequencing and restriction enzyme analysis showed that the nucleotide sequence delimited by the primers varies with the species [@pone.0002689-Fox1], [@pone.0002689-Riley1]. In order to specifically identify *H. pylori*, Ho et al. proposed an assay based on PCR amplification of a 109-nucleotide segment within the *16S rRNA* sequence [@pone.0002689-Ho1], but these primers were subsequently shown to be non-specific for *H. pylori* [@pone.0002689-Chong1]. In recent years, real-time RT-PCR and ISH have become standard methods in well-equipped laboratories and many well-trained laboratory technicians have the required expertise to perform the tests. Therefore, we believe that the information provided in the present paper will lead to their use in clinical practice, especially since the calculated cost for real-time RT-PCR reagents and supplies is less that \$2.00/sample. In summary, a 76-bp region of *H. pylori 16S rRNA* that is common to a large number of *H. pylori* sequences and is specific to this bacterium was used to design primers and probes to be used in real-time RT-PCR and ISH assays. Both approaches are very sensitive and specific for *H. pylori* and the real time RT-PCR assay can be used readily in most modern laboratories if frozen samples have been saved. If only archived specimens are available, then the more specialized *in situ* hybridization assay can be used. We propose that both assays combine sensitivity and specificity, making them strong clinical tools for precise and rapid identification of *H. pylori* in biological specimens harvested from humans, animals, or environmental source. Supporting Information {#s5} ====================== ###### Text (0.04 MB DOC) ###### Click here for additional data file. We thank Dr. R. Peek for providing strain J99 and Dr. S. Merrell for providing strain 26695 and for reviewing the manuscript. The opinions and assertions contained herein are the private ones of the authors and are not to be construed as official or reflecting the views of the Department of Defense, the Uniformed Services University of the Health Sciences or the Defense Nuclear Agency. **Competing Interests:**The authors have declared that no competing interests exist. **Funding:**Work supported in part by USUHS grant R0-83GM and by NIH Grant R01-CA082312. [^1]: Conceived and designed the experiments: HL CSM SQD AD. Performed the experiments: HL AR CSM AD. 
Analyzed the data: HL AR CSM SQD AD. Wrote the paper: HL CSM SQD AD.
Mid
[ 0.6480686695278971, 37.75, 20.5 ]
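The absolute quantification step described in the paper above (converting Ct values to 16S rRNA copy numbers against a standard curve of 10^1 to 10^6 cRNA copies) is a log-linear regression. The sketch below only illustrates that arithmetic; the Ct values, slope, and the sample at Ct 24.5 are invented for the example and are not data from the study.

import numpy as np

std_copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])        # cRNA standards, copies per reaction
std_ct     = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])  # hypothetical Ct values for the standards

# Fit Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1 / slope) - 1        # amplification efficiency implied by the slope

def copies_from_ct(ct):
    # Invert the standard curve to estimate copy number for an unknown sample
    return 10 ** ((ct - intercept) / slope)

print(f"slope={slope:.2f}, intercept={intercept:.1f}, efficiency~{efficiency:.0%}")
print(f"a sample at Ct 24.5 corresponds to ~{copies_from_ct(24.5):.0f} copies")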
In all areas of laboratory testing, the clinical laboratory must ensure proper quality measures are in place to reduce or eliminate carryover between samples, false positive, and false negative results. Some testing techniques are generally assumed to be better than others (e.g., less prone to yielding false positive and false negative results). For example, quantitative, confirmatory testing using liquid chromatography-tandem mass spectrometry is often taken at face value to be more specific than qualitative, antibody-based detection methods, but this is not always true. The transition from individual vial to 96-well based, high-throughput sample preparation methods is one of many examples of the progress in clinical laboratory testing. Samples can be processed more rapidly and many automated systems have been developed for processing 96-well plates. Nevertheless, small sample volumes and the small form factor of 96-well plates may increase the likelihood of false-positive results for wells in close proximity to significantly elevated wells. For example, an increased likelihood of false-positive results for wells in close proximity to significantly elevated wells at a rate of approximately 4% has been observed in mass-spectrometry analysis of drugs of abuse using a 96-well format. Error rates may be expected to increase as high-throughput assays are transitioned to plates having a greater number of wells (e.g., 384 well plates or even 1536 well plates).
Mid
[ 0.621923937360178, 34.75, 21.125 ]
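Purely as an illustration of the proximity concern described above (my own sketch; the threshold, well values, flagging rule, and function names are invented and not part of the text): one simple QC pass is to flag every well adjacent to a strongly elevated well so it can be re-tested.

ROWS, COLS = "ABCDEFGH", range(1, 13)      # standard 96-well layout, wells A1..H12

def neighbors(well):
    # Wells touching `well`, including diagonals
    r, c = ROWS.index(well[0]), int(well[1:])
    return {f"{ROWS[rr]}{cc}"
            for rr in range(max(r - 1, 0), min(r + 2, len(ROWS)))
            for cc in range(max(c - 1, 1), min(c + 2, 13))
            if (rr, cc) != (r, c)}

def flag_for_review(signals, threshold):
    # signals: dict well -> measured value; returns wells adjacent to an elevated well
    hot = {w for w, v in signals.items() if v >= threshold}
    return {w for h in hot for w in neighbors(h) if w in signals and w not in hot}

example = {f"{r}{c}": 1.0 for r in ROWS for c in COLS}
example["B2"] = 500.0                       # one strongly elevated well
print(sorted(flag_for_review(example, threshold=100.0)))
# ['A1', 'A2', 'A3', 'B1', 'B3', 'C1', 'C2', 'C3']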
Q: xutility file? I'm trying to use C code with opencv in face detection and counting, but I cannot build the source. I am trying to compile my project and I am having a lot of problems with a line in the xutility file. The error message shows that it is giving errors in the xutility file. Please help me solve this problem? Code // Include header files #include "stdafx.h" #include "cv.h" #include "highgui.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <assert.h> #include <math.h> #include <float.h> #include <limits.h> #include <time.h> #include <ctype.h> #include <iostream> #include <fstream> #include <vector> using namespace std; #ifdef _EiC #define WIN32 #endif int countfaces=0; int numFaces = 0; int k=0 ; int list=0; char filelist[512][512]; int timeCount = 0; static CvMemStorage* storage = 0; static CvHaarClassifierCascade* cascade = 0; void detect_and_draw( IplImage* image ); void WriteInDB(); int found_face(IplImage* img,CvPoint pt1,CvPoint pt2); int load_DB(char * filename); const char* cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml"; // BEGIN NEW CODE #define WRITEVIDEO char* outputVideo = "c:\\face_counting1_tracked.avi"; //int faceCount = 0; int posBuffer = 100; int persistDuration = 10; //faces can drop out for 10 frames int timestamp = 0; float sameFaceDistThreshold = 30; //pixel distance CvPoint facePositions[100]; int facePositionsTimestamp[100]; float distance( CvPoint a, CvPoint b ) { float dist = sqrt(float ( (a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) ) ); return dist; } void expirePositions() { for (int i = 0; i < posBuffer; i++) { if (facePositionsTimestamp[i] <= (timestamp - persistDuration)) //if a tracked pos is older than three frames { facePositions[i] = cvPoint(999,999); } } } void updateCounter(CvPoint center) { bool newFace = true; for(int i = 0; i < posBuffer; i++) { if (distance(center, facePositions[i]) < sameFaceDistThreshold) { facePositions[i] = center; facePositionsTimestamp[i] = timestamp; newFace = false; break; } } if(newFace) { //push out oldest tracker for(int i = 1; i < posBuffer; i++) { facePositions[i] = facePositions[i - 1]; } //put new tracked position on top of stack facePositions[0] = center; facePositionsTimestamp[0] = timestamp; countfaces++; } } void drawCounter(IplImage* image) { // Create Font char buffer[5]; CvFont font; cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, .5, .5, 0, 1); cvPutText(image, "Faces:", cvPoint(20, 20), &font, CV_RGB(0,255,0)); cvPutText(image, itoa(countfaces, buffer, 10), cvPoint(80, 20), &font, CV_RGB(0,255,0)); } #ifdef WRITEVIDEO CvVideoWriter* videoWriter = cvCreateVideoWriter(outputVideo, -1, 30, cvSize(240, 180)); #endif //END NEW CODE int main( int argc, char** argv ) { CvCapture* capture = 0; IplImage *frame, *frame_copy = 0; int optlen = strlen("--cascade="); const char* input_name; if( argc > 1 && strncmp( argv[1], "--cascade=", optlen ) == 0 ) { cascade_name = argv[1] + optlen; input_name = argc > 2 ? argv[2] : 0; } else { cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml"; input_name = argc > 1 ? 
argv[1] : 0; } cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 ); if( !cascade ) { fprintf( stderr, "ERROR: Could not load classifier cascade\n" ); fprintf( stderr, "Usage: facedetect --cascade=\"<cascade_path>\" [filename|camera_index]\n" ); return -1; } storage = cvCreateMemStorage(0); //if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0') ) // capture = cvCaptureFromCAM( !input_name ? 0 : input_name[0] - '0' ); //else capture = cvCaptureFromAVI( "c:\\face_counting1.avi" ); cvNamedWindow( "result", 1 ); if( capture ) { for(;;) { if( !cvGrabFrame( capture )) break; frame = cvRetrieveFrame( capture ); if( !frame ) break; if( !frame_copy ) frame_copy = cvCreateImage( cvSize(frame->width,frame->height), IPL_DEPTH_8U, frame->nChannels ); if( frame->origin == IPL_ORIGIN_TL ) cvCopy( frame, frame_copy, 0 ); else cvFlip( frame, frame_copy, 0 ); detect_and_draw( frame_copy ); if( cvWaitKey( 30 ) >= 0 ) break; } cvReleaseImage( &frame_copy ); cvReleaseCapture( &capture ); } else { if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0')) cvNamedWindow( "result", 1 ); const char* filename = input_name ? input_name : (char*)"lena.jpg"; IplImage* image = cvLoadImage( filename, 1 ); if( image ) { detect_and_draw( image ); cvWaitKey(0); cvReleaseImage( &image ); } else { /* assume it is a text file containing the list of the image filenames to be processed - one per line */ FILE* f = fopen( filename, "rt" ); if( f ) { char buf[1000+1]; while( fgets( buf, 1000, f ) ) { int len = (int)strlen(buf); while( len > 0 && isspace(buf[len-1]) ) len--; buf[len] = '\0'; image = cvLoadImage( buf, 1 ); if( image ) { detect_and_draw( image ); cvWaitKey(0); cvReleaseImage( &image ); } } fclose(f); } } } cvDestroyWindow("result"); #ifdef WRITEVIDEO cvReleaseVideoWriter(&videoWriter); #endif return 0; } void detect_and_draw( IplImage* img ) { static CvScalar colors[] = { {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}}, {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}} }; double scale = 1.3; IplImage* gray = cvCreateImage( cvSize(img->width,img->height), 8, 1 ); IplImage* small_img = cvCreateImage( cvSize( cvRound (img->width/scale), cvRound (img->height/scale)), 8, 1 ); CvPoint pt1, pt2; int i; cvCvtColor( img, gray, CV_BGR2GRAY ); cvResize( gray, small_img, CV_INTER_LINEAR ); cvEqualizeHist( small_img, small_img ); cvClearMemStorage( storage ); if( cascade ) { double t = (double)cvGetTickCount(); CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 2, 0/*CV_HAAR_DO_CANNY_PRUNING*/, cvSize(30, 30) ); t = (double)cvGetTickCount() - t; printf( "detection time = %gms\n", t/((double)cvGetTickFrequency()*1000.) ); if (faces) { //To save the detected faces into separate images, here's a quick and dirty code: char filename[6]; for( i = 0; i < (faces ? 
faces->total : 0); i++ ) { /* CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); CvPoint center; int radius; center.x = cvRound((r->x + r->width*0.5)*scale); center.y = cvRound((r->y + r->height*0.5)*scale); radius = cvRound((r->width + r->height)*0.25*scale); cvCircle( img, center, radius, colors[i%8], 3, 8, 0 );*/ // Create a new rectangle for drawing the face CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); // Find the dimensions of the face,and scale it if necessary pt1.x = r->x*scale; pt2.x = (r->x+r->width)*scale; pt1.y = r->y*scale; pt2.y = (r->y+r->height)*scale; // Draw the rectangle in the input image cvRectangle( img, pt1, pt2, CV_RGB(255,0,0), 3, 8, 0 ); CvPoint center; int radius; center.x = cvRound((r->x + r->width*0.5)*scale); center.y = cvRound((r->y + r->height*0.5)*scale); radius = cvRound((r->width + r->height)*0.25*scale); cvCircle( img, center, radius, CV_RGB(255,0,0), 3, 8, 0 ); //update counter updateCounter(center); int y=found_face(img,pt1,pt2); if(y==0) countfaces++; }//end for printf("Number of detected faces: %d\t",countfaces); }//end if //delete old track positions from facePositions array expirePositions(); timestamp++; //draw counter drawCounter(img); #ifdef WRITEVIDEO cvWriteFrame(videoWriter, img); #endif cvShowImage( "result", img ); cvDestroyWindow("Result"); cvReleaseImage( &gray ); cvReleaseImage( &small_img ); }//end if } //end void int found_face(IplImage* img,CvPoint pt1,CvPoint pt2) { /*if (faces) {*/ CvSeq* faces = cvHaarDetectObjects( img, cascade, storage, 1.1, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(40, 40) ); int i=0; char filename[512]; for( i = 0; i < (faces ? faces->total : 0); i++ ) {//int scale = 1, i=0; //i=iface; //char filename[512]; /* extract the rectanlges only */ // CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i); CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i); //IplImage* gray_img = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 ); IplImage* clone = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, img->nChannels ); IplImage* gray = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, 1 ); cvCopy (img, clone, 0); cvNamedWindow ("ROI", CV_WINDOW_AUTOSIZE); cvCvtColor( clone, gray, CV_RGB2GRAY ); face_rect.x = pt1.x; face_rect.y = pt1.y; face_rect.width = abs(pt1.x - pt2.x); face_rect.height = abs(pt1.y - pt2.y); cvSetImageROI ( gray, face_rect); //// * rectangle = cvGetImageROI ( clone ); face_rect = cvGetImageROI ( gray ); cvShowImage ("ROI", gray); k++; char *name=0; name=(char*) calloc(512, 1); sprintf(name, "Image%d.pgm", k); cvSaveImage(name, gray); //////////////// for(int j=0;j<512;j++) filelist[list][j]=name[j]; list++; WriteInDB(); //int found=SIFT("result.txt",name); cvResetImageROI( gray ); //return found; return 0; // }//end if }//end for }//end void void WriteInDB() { ofstream myfile; myfile.open ("result.txt"); for(int i=0;i<512;i++) { if(strcmp(filelist[i],"")!=0) myfile << filelist[i]<<"\n"; } myfile.close(); } Error messages Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 8 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 13 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 18 error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 23 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 10 error C2868: 'std::iterator_traits<_Iter>::value_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 25 error C2868: 'std::iterator_traits<_Iter>::reference' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 20 error C2868: 'std::iterator_traits<_Iter>::pointer' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 5 error C2868: 'std::iterator_traits<_Iter>::iterator_category' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 15 error C2868: 'std::iterator_traits<_Iter>::difference_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 9 error C2602: 'std::iterator_traits<_Iter>::value_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 24 error C2602: 'std::iterator_traits<_Iter>::reference' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 19 error C2602: 'std::iterator_traits<_Iter>::pointer' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 4 error C2602: 'std::iterator_traits<_Iter>::iterator_category' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 14 error C2602: 'std::iterator_traits<_Iter>::difference_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 7 error C2146: syntax error : missing ';' before identifier 'value_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 22 error C2146: syntax error : missing ';' before identifier 'reference' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 17 error C2146: syntax error : missing ';' before identifier 'pointer' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 2 error C2146: syntax error : missing ';' before identifier 'iterator_category' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 12 error C2146: syntax error : missing ';' before identifier 'difference_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 6 error C2039: 'value_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 21 error C2039: 'reference' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 16 error C2039: 'pointer' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 1 error C2039: 'iterator_category' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 
764 Error 11 error C2039: 'difference_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 A: As mentioned, you have a function named "distance" in your code. So did I, and I was receiving the same error. Once I renamed this function to "Distance", my code compiled successfully. My guess is that there is another function called "distance" defined somewhere in the xutility file (most likely std::distance, which using namespace std; brings into scope). So the solution is just to rename the function. A: You state that you're "trying to use C code with opencv", but your code contains #include <iostream>. That's a C++ header. Now, there's no such thing as a C/C++ language. C and C++ are two distinct languages. You'll have to choose. A: I guess that one of your C files includes a header file that is C++, for example: #include <iostream> A general approach for solving such issues is isolating the problem:
Determine which source file is problematic
Remove as much code as possible from that file while making sure the problem still appears
Edit your question, showing that code
Mid
[ 0.560483870967741, 34.75, 27.25 ]
1. Field The invention relates to a cryostat arrangement for an electrical power conditioner, more particularly, to a cryostat for use with superconducting transformers, superconducting fault current limiters, superconducting power devices for phase correction, etc. 2. Description of Related Art Cryostats for electrical power conditioners are known which may be provided in one of the following two variants: (i) a cryostat comprising no opening for accommodating a ferromagnetic limb, and (ii) a cryostat with one or more openings for accommodating one or more ferromagnetic limbs. A cryostat for electric power conditioner of type (i) is described for instance in EP 1 544 873 A2. The cryostat comprises external walls in contact with an ambient medium, internal walls in contact with a cooled medium, a thermal insulating gap formed between the external walls and the internal walls, the insulating gap comprising a thermal insulation. The thermal insulation is provided in this technical solution by vacuum; the insulating gap is evacuated. The external walls comprise one cylindrical wall and two flat walls; a first external wall from the top (in the cap flange) and a second external wall from the bottom. In the same way, the internal walls comprise one cylindrical wall and two flat walls; a first internal wall from the top (in the cap flange) and a second internal wall from the bottom. The cryostat comprises also means for forming a liquid from a gas. Both the external walls and the internal walls comprise a uniform structure and are made from a homogeneous metallic sheet. A similar construction of a cryostat for electric power conditioners is disclosed in WO 94 003 955 A1. The cryostat comprises practically the same features as in the EP 1 544 873 A2 with a difference that the cap flange is converted in an upper external wall and an upper internal wall. A cryostat with a central axial opening is also disclosed in U.S. Pat. No. 5,847,633 A which has similar features to those cryostats discussed above. A cryostat for electric power conditioner of type (ii), i.e. with an internal opening for a ferromagnetic limb, is disclosed in U.S. Pat. No. 5,107,240 A, for example. The cryostat comprises external walls in contact with an ambient medium, internal walls in contact with a cooled medium and a thermally insulating gap formed between the external walls and the internal walls. The thermally insulating gap comprises a thermal insulation provided by vacuum. The external walls comprise two cylindrical walls and two flat walls: a first external flat wall forms the top side (in the cap flange) and a second external flat wall forms the bottom side. The internal walls comprise two cylindrical walls and a flat wall forms the bottom side. The external walls and the internal walls comprise a uniform structure and are formed from a homogeneous glass fiber reinforced vinyl polyester resin (FRP). As mentioned above, a vacuum is created between these FRP walls to provide the thermal insulation. The ambient medium in this cryostat is provided by a ferromagnetic shell which serves for guiding of a magnetic flux. This material is kept practically at ambient temperature by means of natural or forced heat exchange. The ferromagnetic shell may play also a role of a fixture for the external walls. This fixture may provide an external mechanical stabilization of the cryostat (e.g. 
in case of electromagnetic forces) and may also allow forces caused by the presence of the vacuum between the external walls and the internal walls to be compensated. In order to provide such compensation in the cryostat for an electric power conditioner disclosed in U.S. Pat. No. 6,324,851 B1, the thermal insulating gap is filled, at least in part, with a solid thermal insulator. The cryostat comprises external walls being in contact with an ambient medium, internal walls being in contact with a cooled medium, and a thermal insulating gap formed between the external walls and the internal walls, the insulating gap comprising a thermal insulation. In the arrangement disclosed in U.S. Pat. No. 6,324,851 B1, the thermal insulation is provided partly by the solid thermal insulation and partly by a vacuum. The external walls comprise a plurality of side walls, defining a plurality of openings each of which can accommodate a ferromagnetic limb, and two flat walls. The internal walls also comprise a plurality of side walls and two flat walls. Furthermore, the cryostat comprises means for filling with a liquefied gas and/or means for liquefying gas. The external walls and the internal walls comprise a uniform structure. The external walls are made of metal sheet. The internal walls are made of a fiber composite material having the properties of an electrical insulator. The solid thermal insulator plays the role of a spacer and is load bearing. The solid thermal insulator is able to transmit the internal pressure acting on the internal walls to the external walls. The thermal conductivity of the solid thermal insulator (e.g. 2 mW/(K×m)) is relatively low, but not low enough to be comparable to vacuum insulation. Comparing the different technical solutions of the current state of the art, one may conclude that there is an obvious dilemma: (a) to employ a cryostat with metallic walls, which may provide an excellent, long-lifetime vacuum insulation, needs practically no maintenance, but causes high eddy currents and therefore leads to elevated cooling losses, or (b) to employ a cryostat with insulating walls (i.e. walls without eddy current losses), which are much less vacuum tight and, as a result, the cryostat has to be periodically pumped in order to maintain a sufficient vacuum. Thus, in the latter case, additional periodic maintenance and special service means are needed, while the lifetime of the cryostat is shorter. Further improvements to the arrangements of cryostats for use in electrical power conditioners which overcome at least some of these disadvantages are desirable. It is, therefore, desirable to provide an improved cryostat for use in electrical power conditioners that avoids at least some of these disadvantages.
Low
[ 0.53125, 31.875, 28.125 ]
I grew up listening to Ron Santo’s rambles and rants, moans and groans. I remember watching the 2003 NLCS with my dad on Fox with the television on mute and the radio dial spun loud. I remember Game 6, The Bartman Game, and how shook Ron sounded. But I also remember Game 7, when Kerry Wood hit a home run to left center field in the third inning to take the lead. I remember the belief in Ronnie’s voice. The glee, the hope that he pumped through the radio waves and into my adolescent heart. When Ron Santo is inducted into the Baseball Hall of Fame this weekend, I will remember him as an announcer. The best announcer I ever listened to. Ron didn’t always break down the intricacies of the game, although sometimes his partner Pat Hughes would guide him along expertly, but he embodied all that it is to be a Cubs fan. As a player, Santo was one of the best third basemen of his time. He was a nine-time All-Star with five Gold Gloves. At 24 years old he led the league in triples with 13 (while also hitting 30 home runs). That same year he led the league in walks, something he did three other seasons as well. At 25, Santo played 164 games in the regular season. And, of course, he did all this with diabetes. Ronnie hid his diabetes for much of his playing days, but I can still hear his “donations to J…D..R.F. to find a cure.” Ron faced his obstacles head on in his playing days and defeated them. In his days as a broadcaster, he worked to defeat not only his own obstacles but also those of anyone else facing ones similar to his. One year, before the 2003 season, my family went to Wrigley. I was about 10 at the time and we got there early enough so that I could take my ball and glove down to the field with me and ask for autographs from the real life superheroes. I think I was calling for Sammy when an old guy walked by and caught everyone’s attention. I didn’t even know who he was. But being little and wide-eyed, I worked my way to the front and got the old man’s signature. I remember how many autographs he signed before mine and how he wore a smile throughout. I brought the ball back up to my dad afterwards. He couldn’t believe it. While Cubs fans are happy to see #10 go into Cooperstown, it rubs a bit raw that it is happening posthumously. Ronnie deserved to hear all the Cubs fans cheering for him as he was inducted. He said the Cubs retiring his number meant more to him than entering the Hall, and I believe him. But a man so great, so loyal, so passionate deserved to see those traits reciprocated in his lifetime. But what would Ron say? “Bad break?” “Keep your head up?” He would have something positive to say and that is why I will be smiling from ear to ear when Ron enters the Hall this weekend. It will be the best Cubs win of the year. Yes! The Big Guy P.S. Got a Ron Santo story of your own? Share it in the comment section! We want to hear it!
High
[ 0.6563192904656321, 37, 19.375 ]
Posts Tagged arabic In her first semester at Tufts, Alexa Stevens, A13, enrolled in Arabic 1 completely unaware that a subject as unfamiliar as Arabic would change the course of her life. Throughout her time at Tufts, her intellectual curiosity towards the Middle East grew into a love and respect for the area that led her to major in Middle Eastern studies, visit Iran the summer after her sophomore year, and study abroad in Jordan the fall of her Junior year. She keeps a blog chronicling her intellectual journey exploring the Middle East that begins days before she departs for Iran and continues through her college experience in which she explains what, or rather who, inspired her to follow her dreams overseas: I could tell you that a semester in Jordan sparked my interest in the Cause, or maybe the weeks of travel to Israel and Palestine that I did after that semester, or that perhaps it was my intense study of the Middle East that brought me to delve further into the tightly-wound knot that is the Israeli-Palestinian Conflict. But I would be lying, because really the thing that pulled me into a situation which has become highly academic, polemical and esoteric was its most essential element: a person. Namely, my second-year Arabic teacher Suhad. She told us about her family’s home in Gaza, about her experiences during the Second Intifada at Birzeit University, about the fifty-two days she spent between the Israeli and Jordanian borders as a stateless person, about coming to the US and re-doing her entire college education, and about the uphill battle she’s been fighting since birth. Her eyes pulled me in, right back to the center of this thing–the people whose lives will never be entirely their own, but rather a part of the intractable conflict in the Holy Land. Check out the rest of her post and where her studies at Tufts have taken her here.
High
[ 0.6713615023474171, 35.75, 17.5 ]
Synopsis “We got a cease-and-desist letter over the title of our album The Complete Idiot’s Guide to Morality," said Aaron Spaulding of a Conscious Few. “The CD came out last October [2008], and we just got the letter last week.” Dated June 2009, the notice from the NYC law firm of Von Maltitz, Derenberg, Kunin, Janssen & Giordano says their client, Penguin Books, “strongly objects to your unauthorized use of its registered trademark The Complete Idiot’s Guide," presenting two federal trademark registration numbers to back up its assertion of ownership. Both of the registered marks are applicable to commerce involving “a series of non-fiction books” – neither mark is pertinent to recorded music. However, the letter further alleges that the band’s CD cover is “a point-by-point counterfeit of our client’s well-known trademark…[with] the same blue capital letters, in the same type style and layout,” right down to the “orange rectangle with the same proportions, orientation, and cover location” as the Penguin books. Which is true; the album even uses a similar orange border. “We looked at it as a parody,” says Spaulding. “They didn’t have to come off like bullies, with a big New York law firm. There are maybe 100 to 150 of the CDs out there. It cost us $2500 to press 1000, so we still have boxes of them.” According to Penguin Group (USA) Inc, its Complete Idiot’s Guide series debuted 15 years ago, with sales averaging around 1.25 million print and ebooks last year. Complete Idiots have been Guided through topics like World Religions, Understanding Ethics, Peer Pressure for Teens, and music subjects like Arranging and Orchestration; Theory, History, Songwriting; and Playing the Guitar. The legal salvo fired across the reggae band’s bow gives them until July 30 to provide “written assurance that you will promptly cease all use of our client’s trademark and trade dress.” Though parody has long-established legal protections that tend to trump trademarks, Spaulding says, “We’ll go along, it’s not worth getting sued. We’ll probably take them off CD Baby, iTunes, and Amazon, and just give them away as promotional items.” In October 2009, the band changed the album title to Cease and Desist. In 2011, bassist Aaron Spaulding formed a new heavy rock band with former members of October Burning and the Human Abstract, whose old vocalist Nate Ells had recently moved to San Diego from Nashville.
Mid
[ 0.547785547785547, 29.375, 24.25 ]
Supraoptic nucleus The supraoptic nucleus (SON) is a nucleus of magnocellular neurosecretory cells in the hypothalamus of the mammalian brain. The nucleus is situated at the base of the brain, adjacent to the optic chiasm. In humans, the SON contains about 3,000 neurons. Function The cell bodies produce the peptide hormone vasopressin, which is also known as anti-diuretic hormone (ADH). This chemical messenger travels via the bloodstream to its target cells in the papillary ducts in the kidneys, enhancing water reabsorption. In the cell bodies, the hormones are packaged in large, membrane-bound vesicles that are transported down the axons to the nerve endings. The secretory granules are also stored in packets along the axon called Herring bodies. Similar magnocellular neurons are also found in the paraventricular nucleus. Signaling Each neuron in the nucleus has one long axon that projects to the posterior pituitary gland, where it gives rise to about 10,000 neurosecretory nerve terminals. The magnocellular neurons are electrically excitable: In response to afferent stimuli from other neurons, they generate action potentials, which propagate down the axons. When an action potential invades a neurosecretory terminal, the terminal is depolarised, and calcium enters the terminal through voltage-gated channels. The calcium entry triggers the secretion of some of the vesicles by a process known as exocytosis. The vesicle contents are released into the extracellular space, from where they diffuse into the bloodstream. Regulation of supraoptic neurons Vasopressin (antidiuretic hormone, ADH) is released in response to solute concentration in the blood, decreased blood volume, or blood pressure. Some other inputs come from the brainstem, including from some of the noradrenergic neurons of the nucleus of the solitary tract and the ventrolateral medulla. However, many of the direct inputs to the supraoptic nucleus come from neurons just outside the nucleus (the "perinuclear zone"). Of the afferent inputs to the supraoptic nucleus, most contain either the inhibitory neurotransmitter GABA or the excitatory neurotransmitter glutamate, but these transmitters often co-exist with various peptides. Other afferent neurotransmitters include noradrenaline (from the brainstem), dopamine, serotonin, and acetylcholine. The supraoptic nucleus as a "model system" The supraoptic nucleus is an important "model system" in neuroscience. There are many reasons for this: Some technical advantages of working on the supraoptic nucleus are that the cell bodies are relatively large, the cells make exceptionally large amounts of their secretory products, and the nucleus is relatively homogeneous and easy to separate from other brain regions. The gene expression and electrical activity of supraoptic neurons has been studied extensively, in many physiological and experimental conditions. These studies have led to many insights of general importance, as in the examples below. Morphological plasticity in the supraoptic nucleus Anatomical studies using electron microscopy have shown that the morphology of the supraoptic nucleus is remarkably adaptable. For example, during lactation there are large changes in the size and shape of the oxytocin neurons, in the numbers and types of synapses that these neurons receive, and in the structural relationships between neurons and glial cells in the nucleus. 
These changes arise during parturition, and are thought to be important adaptations that prepare the oxytocin neurons for a sustained high demand for oxytocin. Oxytocin is essential for milk let-down in response to suckling. These studies showed that the brain is much more "plastic" in its anatomy than previously recognized, and led to great interest in the interactions between glial cells and neurons in general. Stimulus-secretion coupling In response to, for instance, a rise in the plasma sodium concentration, vasopressin neurons also discharge action potentials in bursts, but these bursts are much longer and are less intense than the bursts displayed by oxytocin neurons, and the bursts in vasopressin cells are not synchronised. It seemed strange that the vasopressin cells should fire in bursts. As the activity of the vasopressin cells is not synchronised, the overall level of vasopressin secretion into the blood is continuous, not pulsatile. Richard Dyball and his co-workers speculated that this pattern of activity, called "phasic firing", might be particularly effective for causing vasopressin secretion. They showed this to be the case by studying vasopressin secretion from the isolated posterior pituitary gland in vitro. They found that vasopressin secretion could be evoked by electrical stimulus pulses applied to the gland, and that much more hormone was released by a phasic pattern of stimulation than by a continuous pattern of stimulation. These experiments led to interest in "stimulus-secretion coupling" - the relationship between electrical activity and secretion. Supraoptic neurons are unusual because of the large amounts of peptide that they secrete, and because they secrete the peptides into the blood. However, many neurons in the brain, and especially in the hypothalamus, synthesize peptides. It is now thought that bursts of electrical activity might be generally important for releasing large amounts of peptide from peptide-secreting neurons. Dendritic secretion Supraoptic neurons typically have 1-3 large dendrites, most of which project ventrally to form a mat of processes at the base of the nucleus, called the ventral glial lamina. The dendrites receive most of the synaptic terminals from afferent neurons that regulate the supraoptic neurons, but neuronal dendrites are often actively involved in information processing, rather than being simply passive receivers of information. The dendrites of supraoptic neurons contain large numbers of neurosecretory vesicles that contain oxytocin and vasopressin, and they can be released from the dendrites by exocytosis. The oxytocin and vasopressin that is released at the posterior pituitary gland enters the blood, and cannot re-enter the brain because the blood–brain barrier does not allow oxytocin and vasopressin through, but the oxytocin and vasopressin that is released from dendrites acts within the brain. Oxytocin neurons themselves express oxytocin receptors, and vasopressin neurons express vasopressin receptors, so dendritically-released peptides "autoregulate" the supraoptic neurons. Francoise Moos and Phillipe Richard first showed that the autoregulatory action of oxytocin is important for the milk-ejection reflex. These peptides have relatively long half-lives in the brain (about 20 minutes in the CSF), and they are released in large amounts in the supraoptic nucleus, and so they are available to diffuse through the extracellular spaces of the brain to act at distant targets.
Oxytocin and vasopressin receptors are present in many other brain regions, including the amygdala, brainstem, and septum, as well as most nuclei in the hypothalamus. Because so much vasopressin and oxytocin are released at this site, studies of the supraoptic nucleus have made an important contribution to understanding how release from dendrites is regulated, and in understanding its physiological significance. Studies have demonstrated that secretin helps to facilitate dendritic oxytocin release in the SON, and that secretin administration into the SON enhances social recognition in rodents. This enhanced social capability appears to be working through secretin's effects on oxytocin neurons in the SON, as blocking oxytocin receptors in this region blocks social recognition. Co-existing peptides Vasopressin neurons and oxytocin neurons make many other neuroactive substances in addition to vasopressin and oxytocin, though most are present only in small quantities. However, some of these other substances are known to be important. Dynorphin produced by vasopressin neurons is involved in regulating the phasic discharge patterning of vasopressin neurons, and nitric oxide produced by both neuronal types is a negative-feedback regulator of cell activity. Oxytocin neurons also make dynorphin; in these neurons, dynorphin acts at the nerve terminals in the posterior pituitary as a negative feedback inhibitor of oxytocin secretion. Oxytocin neurons also make large amounts of cholecystokinin as well as the cocaine and amphetamine regulatory transcript (CART). See also Paraventricular nucleus References External links Category:Neuroendocrinology Category:Hypothalamus
Mid
[ 0.6466512702078521, 35, 19.125 ]
Q: Please help me understand this. $\frac{dx}{dt} = S x (a-x)$. What does it mean for some constant $S$? How to find $x$ for fastest/slowest growth? I am having some trouble understanding this problem. There is this function that calculates the reaction rate of a substance for some constant positive $S$. $a$ = original amount of the first substance $x$ = some amount of substance First question, what does $\frac{dx}{dt} = S x (a-x)$ mean? Does this mean that the rate of change of $x$ in the equation $(S x (a-x))$ is affected by the change in time? So if there was no '$x$' in the equation, then change in time would not affect the equation, right? *This is the first time I am encountering '$dt$' in my derivative assignments. When finding derivatives for simple equations, it's mostly been in d/dx notation. When I graphed $S x (a-x)$, I substituted random numbers for $S$ and $a$. Does this tell me anything about the function for rate of increase and decrease? Should I have solved for $a$? I notice the function increases and then decreases as $x$ moves away from $0$. I also know that when the derivative crosses the $x$ axis, the original function (which I don't know) will start to decrease. What do I need to do to determine the fastest/slowest growth rate using the derivative? Thanks A: It means that the amount $x(t)$ is a function of time (makes sense) and tells you that the rate of change of $x$ is a function of $x$ itself. If the right hand side were a constant, then the rate of change would be constant: every second the amount $x$ would increase by $S$. I'm sure in Calculus you've already seen examples of functions where the right-hand side depended on $t$. If that were the case, then you could calculate $x(t)$, given the formula for $dx/dt$, by integrating both sides, e.g. finding the area under the curve of $dx/dt$. Here the situation is a bit different, since the derivative depends on $x$ itself, and not $t$. It takes some getting used to but conceptually, the formula tells you the slope of $x(t)$, given the current value of $x(t)$. In this particular case, if there is no material ($x=0$) or there is exactly $a$ amount of material, then $dx/dt=0$ and the system is in equilibrium: the slope is 0 and $x$ does not change over time. If $x$ is between 0 and $a$, then $dx/dt$ is positive and $x$ will increase over time. As $x$ increases and approaches $a$, the rate of increase decreases. If $x$ is over $a$, then $dx/dt$ is negative and $x$ decreases over time towards $a$. The maximum increase will occur when $Sx(a-x)$ is most positive; you can find the value of $x$ as usual, by taking the derivative of $Sx(a-x)$ with respect to $x$ and setting it equal to zero. The equation $dx/dt = Sx(t)[a-x(t)]$ is called an ordinary differential equation and given the initial value of $x$ at time 0, $x(0)$, you can solve it for the function $x(t)$ which gives $x$ at all times, although you have probably not learned the techniques yet. Here are some plots of $x(t)$ for $S=1$, $a=1$, and several different values of $x(0)$: You can see the main characteristics of the system here, namely, that $x$ either decreases or increases towards $a$ depending on where it starts.
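To make the last hint concrete, here is a quick worked step using only the quantities already defined above. Setting the derivative of the growth rate $Sx(a-x)$ with respect to $x$ to zero gives
$$\frac{d}{dx}\bigl[Sx(a-x)\bigr] = S(a-2x) = 0 \quad\Longrightarrow\quad x = \frac{a}{2},$$
so growth is fastest when $x$ is half of $a$, with maximum rate $Sa^2/4$, and it is slowest (rate zero) at the equilibria $x=0$ and $x=a$. For reference, the solution of the differential equation with initial value $x(0)=x_0$ is the logistic curve
$$x(t) = \frac{a\,x_0}{x_0 + (a-x_0)\,e^{-aSt}},$$
which tends to $a$ as $t \to \infty$ for any $x_0 > 0$, matching the plots described in the answer.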
High
[ 0.658653846153846, 34.25, 17.75 ]
Q: Is there a table that can be used to access local variables in Lua? Just like how we can do this: a = 3 print(_G['a']) -- 3 I want to be able to do something like this: local a = 3 print(_L['a']) -- 3 I basically want to be able to access local variables using their names as strings. Is there a table that can do this, perhaps one that can be passed as a function argument? It would be like the this keyword in ActionScript. A: This is possible by way of the debug library - namely the getlocal and setlocal functions. If you can't use this library (or access the C API), then you're out of luck. You can extend your global environment with a specially crafted _L table that, when accessed, performs linear lookups of the current set of locals. Reading a local variable simply finds a matching variable name, and returns its value. Writing to a local variable requires you to discover its index in the stack frame, and then update the value accordingly. Note that you cannot create new locals. Here's a simple example that works with Lua 5.1 (but not Lua 5.2+). local function find_local (key) local n = 1 local name, value, sname, sn, svalue repeat name, value = debug.getlocal(3, n) if name == key then sname = name sn = n svalue = value end n = n + 1 until not name return sname, sn, svalue end _G._L = setmetatable({}, { __metatable = false, __newindex = function (self, key, value) local _, index = find_local(key) if not index then error(('local %q does not exist.'):format(key)) end debug.setlocal(2, index, value) end, __index = function (_, key) local name, _, value = find_local(key) if name then return value end end }) In use: local foo = 'bar' print(_L['foo']) --> 'bar' _L['foo'] = 'qux' print(_L['foo']) --> 'qux' local function alter_inside (key) local a, b, c = 5, 6, 7 _L[key] = 11 print(a, b, c) end alter_inside('a') --> 11 6 7 alter_inside('b') --> 5 11 7 alter_inside('c') --> 5 6 11 You could write this in a different manner, using plain functions instead of the table combined with read / write operations (__index, __newindex). See §2.4 – Metatables and Metamethods if the above use of metatables is a brand new topic for you. In Lua 5.2+, you can use the special _ENV tables to adjust your current chunk's environment, but note that this is not the same as using local variables. local function clone (t) local o = {} for k, v in pairs(t) do o[k] = v end return o end local function alter_inside (key) local _ENV = clone(_ENV) a = 5 b = 6 c = 7 _ENV[key] = 11 print(a, b, c) end alter_inside('a') --> 11 6 7 alter_inside('b') --> 5 11 7 alter_inside('c') --> 5 6 11 As a final note, also consider that this (ab)use of locals might not be the best approach. You could simply store your variables in a table, when appropriate, to achieve the same results with far less overhead. This approach is highly recommended. local function alter_inside (key) -- `ls` is an arbitrary name, always use smart variable names. local ls = { a = 5, b = 6, c = 7 } ls[key] = 11 print(ls.a, ls.b, ls.c) end alter_inside('a') --> 11 6 7 alter_inside('b') --> 5 11 7 alter_inside('c') --> 5 6 11 Don't dig yourself into a hole trying to solve unnecessary problems.
Mid
[ 0.578817733990147, 29.375, 21.375 ]
Q: Unable to install RPostgreSQL package in R Studio on CentOS 7 I installed PostgreSQL 9.6 with PostGIS 2.3 using one click installer of Enterprise DB available here on my CentOS 7 (x64) Linux based machine. Now I am trying to connect R Studio to Postgres. To do so, I tried to install RPostgreSQL package in R Studio but I am getting following error: > install.packages("RPostgreSQL") Installing package into ‘/home/jk/R/x86_64-redhat-linux-gnu-library/3.3’ (as ‘lib’ is unspecified) trying URL 'https://cran.rstudio.com/src/contrib/RPostgreSQL_0.4-1.tar.gz' Content type 'unknown' length 476204 bytes (465 KB) ================================================== downloaded 465 KB * installing *source* package ‘RPostgreSQL’ ... ** package ‘RPostgreSQL’ successfully unpacked and MD5 sums checked checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for pg_config... no configure: checking for PostgreSQL header files configure: Checking include /usr/include. configure: Checking include /usr/include/pgsql. configure: Checking include /usr/include/postgresql. configure: Checking include /usr/local/include. configure: Checking include /usr/local/include/pgsql. configure: Checking include /usr/local/include/postgresql. configure: Checking include /usr/local/pgsql/include. configure: Checking include /usr/local/postgresql/include. configure: Checking include /opt/include. configure: Checking include /opt/include/pgsql. configure: Checking include /opt/include/postgresql. configure: Checking include /opt/local/include. configure: Checking include /opt/local/include/postgresql. configure: Checking include /opt/local/include/postgresql84. configure: Checking include /sw/opt/postgresql-8.4/include. configure: Checking include /Library/PostgresPlus/8.4SS/include. configure: Checking include /sw/include/postgresql. configure: Checking lib /usr/lib. configure: Checking lib /usr/lib/pgsql. configure: Checking lib /usr/lib/postgresql. configure: Checking lib /usr/local/lib. configure: Checking lib /usr/local/lib/pgsql. configure: Checking lib /usr/local/lib/postgresql. configure: Checking lib /usr/local/pgsql/lib. configure: Checking lib /usr/local/postgresql/lib. configure: Checking lib /opt/lib. configure: Checking lib /opt/lib/pgsql. configure: Checking lib /opt/lib/postgresql. configure: Checking lib /opt/local/lib. configure: Checking lib /opt/local/lib/postgresql. configure: Checking lib /opt/local/lib/postgresql84. configure: Checking lib /sw/opt/postgresql-8.4/lib. configure: Checking lib /Library/PostgresPlus/8.4SS/lib. configure: Checking lib /sw/lib. checking for "/libpq-fe.h"... 
no configure: creating ./config.status config.status: creating src/Makevars ** libs gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-DBI.c -o RS-DBI.o gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-PQescape.c -o RS-PQescape.o In file included from RS-PQescape.c:7:0: RS-PostgreSQL.h:23:26: fatal error: libpq-fe.h: No such file or directory # include "libpq-fe.h" ^ compilation terminated. make: *** [RS-PQescape.o] Error 1 ERROR: compilation failed for package ‘RPostgreSQL’ * removing ‘/home/jk/R/x86_64-redhat-linux-gnu-library/3.3/RPostgreSQL’ Warning in install.packages : installation of package ‘RPostgreSQL’ had non-zero exit status The installation directory of PostgreSQL 9.6 is /opt/PostgreSQL/9.6/bin which doesn't seem to be in the error above. Could someone help me to resolve this error? EDIT 1: Thanks to the suggestion of @lavajumper, I got rid of above error. But now getting this error which shows some missing html links. > install.packages("RPostgreSQL") Installing package into ‘/home/jk/R/x86_64-redhat-linux-gnu-library/3.3’ (as ‘lib’ is unspecified) trying URL 'https://cran.rstudio.com/src/contrib/RPostgreSQL_0.4-1.tar.gz' Content type 'unknown' length 476204 bytes (465 KB) ================================================== downloaded 465 KB * installing *source* package ‘RPostgreSQL’ ... ** package ‘RPostgreSQL’ successfully unpacked and MD5 sums checked checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for pg_config... no configure: checking for PostgreSQL header files configure: Checking include /usr/include. configure: Checking lib /usr/lib. configure: Checking lib /usr/lib/pgsql. configure: Checking lib /usr/lib/postgresql. configure: Checking lib /usr/local/lib. configure: Checking lib /usr/local/lib/pgsql. configure: Checking lib /usr/local/lib/postgresql. configure: Checking lib /usr/local/pgsql/lib. configure: Checking lib /usr/local/postgresql/lib. configure: Checking lib /opt/lib. configure: Checking lib /opt/lib/pgsql. configure: Checking lib /opt/lib/postgresql. configure: Checking lib /opt/local/lib. configure: Checking lib /opt/local/lib/postgresql. configure: Checking lib /opt/local/lib/postgresql84. configure: Checking lib /sw/opt/postgresql-8.4/lib. configure: Checking lib /Library/PostgresPlus/8.4SS/lib. configure: Checking lib /sw/lib. checking for "/usr/include/libpq-fe.h"... 
yes configure: creating ./config.status config.status: creating src/Makevars ** libs gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-DBI.c -o RS-DBI.o gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-PQescape.c -o RS-PQescape.o gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-PostgreSQL.c -o RS-PostgreSQL.o RS-PostgreSQL.c: In function ‘RS_PostgreSQL_createDataMappings’: RS-PostgreSQL.c:446:5: warning: passing argument 1 of ‘Rf_protect’ from incompatible pointer type [enabled by default] PROTECT(flds = RS_DBI_allocFields(num_fields)); ^ In file included from /usr/include/R/Rdefines.h:36:0, from S4R.h:64, from RS-DBI.h:29, from RS-PostgreSQL.h:25, from RS-PostgreSQL.c:17: /usr/include/R/Rinternals.h:1348:6: note: expected ‘SEXP’ but argument is of type ‘struct RS_DBI_fields *’ SEXP Rf_protect(SEXP); ^ gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-pgsql-copy.c -o RS-pgsql-copy.o gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-pgsql-getResult.c -o RS-pgsql-getResult.o gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-pgsql-pqexec.c -o RS-pgsql-pqexec.o gcc -m64 -std=gnu99 -I/usr/include/R -DNDEBUG -I/usr/include -I/usr/local/include -fpic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -c RS-pgsql-pqexecparams.c -o RS-pgsql-pqexecparams.o gcc -m64 -std=gnu99 -shared -L/usr/lib64/R/lib -Wl,-z,relro -o RPostgreSQL.so RS-DBI.o RS-PQescape.o RS-PostgreSQL.o RS-pgsql-copy.o RS-pgsql-getResult.o RS-pgsql-pqexec.o RS-pgsql-pqexecparams.o -L -lpq -L/usr/lib64/R/lib -lR installing to /home/jk/R/x86_64-redhat-linux-gnu-library/3.3/RPostgreSQL/libs ** R ** inst ** preparing package for lazy loading Creating a generic function for ‘format’ from package ‘base’ in package ‘RPostgreSQL’ Creating a generic function for ‘print’ from package ‘base’ in package ‘RPostgreSQL’ Creating a generic function for ‘summary’ from package ‘base’ in package ‘RPostgreSQL’ ** help *** installing help indices converting help for package ‘RPostgreSQL’ finding HTML links ... 
done PostgreSQL html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL /man/PostgreSQL.Rd:26: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:76: missing file link ‘dbUnloadDriver’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:84: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:89: missing file link ‘dbCommit’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQL.Rd:90: missing file link ‘dbRollback’ PostgreSQLConnection-class html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLConnection-class.Rd:20: missing file link ‘dbCommit’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLConnection-class.Rd:32: missing file link ‘dbRollback’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLConnection-class.Rd:34: missing file link ‘dbWriteTable’ PostgreSQLDriver-class html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLDriver-class.Rd:25: missing file link ‘dbUnloadDriver’ PostgreSQLObject-class html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLObject-class.Rd:20: missing file link ‘isSQLKeyword’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLObject-class.Rd:22: missing file link ‘SQLKeywords’ PostgreSQLResult-class html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLResult-class.Rd:31: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/PostgreSQLResult-class.Rd:32: missing file link ‘fetch’ S4R html dbApply-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbApply-methods.Rd:27: missing file link ‘fetch’ dbApply html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbApply.Rd:37: missing file link ‘fetch’ dbCallProc-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbCallProc-methods.Rd:31: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbCallProc-methods.Rd:32: missing file link ‘dbCommit’ dbCommit-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbCommit-methods.Rd:36: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbCommit-methods.Rd:37: missing file link ‘dbCommit’ dbConnect-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbConnect-methods.Rd:58: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbConnect-methods.Rd:59: missing file link ‘dbCommit’ dbDataType-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbDataType-methods.Rd:33: missing file link ‘isSQLKeyword’ dbDriver-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbDriver-methods.Rd:26: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbDriver-methods.Rd:44: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbDriver-methods.Rd:45: missing file link ‘dbCommit’ dbGetInfo-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbGetInfo-methods.Rd:47: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbGetInfo-methods.Rd:48: missing file link 
‘dbCommit’ dbListTables-methods html dbObjectId-class html dbReadTable-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbReadTable-methods.Rd:119: missing file link ‘isSQLKeyword’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbReadTable-methods.Rd:124: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbReadTable-methods.Rd:125: missing file link ‘dbCommit’ dbSendQuery-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbSendQuery-methods.Rd:40: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbSendQuery-methods.Rd:41: missing file link ‘dbCommit’ dbSetDataMappings-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/dbSetDataMappings-methods.Rd:33: missing file link ‘fetch’ fetch-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/fetch-methods.Rd:46: missing file link ‘dbCommit’ isPostgresqlIdCurrent html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/isPostgresqlIdCurrent.Rd:34: missing file link ‘fetch’ make.db.names-methods html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/make.db.names-methods.Rd:69: missing file link ‘dbWriteTable’ postgresqlBuildTableDefinition html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/postgresqlBuildTableDefinition.Rd:41: missing file link ‘fetch’ Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/postgresqlBuildTableDefinition.Rd:42: missing file link ‘dbCommit’ postgresqlDBApply html Rd warning: /tmp/Rtmp15353Q/R.INSTALL161911b14876/RPostgreSQL/man/postgresqlDBApply.Rd:75: missing file link ‘fetch’ postgresqlSupport html summary-methods html ** building package indices ** testing if installed package can be loaded Error in dyn.load(file, DLLpath = DLLpath, ...) : unable to load shared object '/home/jk/R/x86_64-redhat-linux-gnu-library /3.3/RPostgreSQL/libs/RPostgreSQL.so': /home/jk/R/x86_64-redhat-linux-gnu-library/3.3/RPostgreSQL/libs/RPostgreSQL.so: undefined symbol: PQfmod Error: loading failed Execution halted ERROR: loading failed * removing ‘/home/jk/R/x86_64-redhat-linux-gnu-library/3.3/RPostgreSQL’ Warning in install.packages : installation of package ‘RPostgreSQL’ had non-zero exit status A: Okay, I figured out the problem by myself. Answer given by @Manoj at this link helped me to resolve the second error. According to the mentioned link, RPostgreSQL checks for libraries in these directories only: /usr/lib /usr/lib/pgsql /usr/lib/postgresql /usr/local/lib /usr/local/lib/pgsql /usr/local/lib/postgresql /usr/local/pgsql/lib /usr/local/postgresql/lib /opt/lib /opt/lib/pgsql /opt/lib/postgresql /opt/local/lib /opt/local/lib/postgresql /opt/local/lib/postgresql84 /sw/opt/postgresql-8.4/lib /Library/PostgresPlus/8.4SS/lib /sw/lib So as a superuser I copied library files from Postgres installation directory to /usr/lib and run the command again like this in R Studio: install.packages('RPostgreSQL', dependencies=TRUE, repos='http://cran.rstudio.com/') and it worked! :)
Mid
[ 0.578666666666666, 27.125, 19.75 ]
Dichloroethane Dichloroethane can refer to either of two isomeric organochlorides with the molecular formula C2H4Cl2: 1,1-Dichloroethane (ethylidene dichloride) 1,2-Dichloroethane (ethylene dichloride) See also Dichloroethene Difluoroethane
High
[ 0.6829931972789111, 31.375, 14.5625 ]
/******************************************************************************* * Copyright (c) 2012 GigaSpaces Technologies Ltd. All rights reserved * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. *******************************************************************************/ package org.cloudifysource.domain.cloud; /******** * Supported file transfer modes. * * @author barakme * */ public enum FileTransferModes { /***** * Secure FTP. Typically used for linux. */ SFTP(22), /******* * Secure copy. Used for linux when SFTP is not enabled. */ SCP(22), /******* * Windows file sharing. */ CIFS(445); private final int defaultPort; private FileTransferModes(final int defaultPort) { this.defaultPort = defaultPort; } public int getDefaultPort() { return defaultPort; } }
Low
[ 0.503521126760563, 35.75, 35.25 ]
#ifndef STAN_MATH_PRIM_PROB_STUDENT_T_LCDF_HPP #define STAN_MATH_PRIM_PROB_STUDENT_T_LCDF_HPP #include <stan/math/prim/meta.hpp> #include <stan/math/prim/err.hpp> #include <stan/math/prim/fun/beta.hpp> #include <stan/math/prim/fun/constants.hpp> #include <stan/math/prim/fun/digamma.hpp> #include <stan/math/prim/fun/grad_reg_inc_beta.hpp> #include <stan/math/prim/fun/inc_beta.hpp> #include <stan/math/prim/fun/log.hpp> #include <stan/math/prim/fun/max_size.hpp> #include <stan/math/prim/fun/size.hpp> #include <stan/math/prim/fun/size_zero.hpp> #include <stan/math/prim/fun/value_of.hpp> #include <stan/math/prim/functor/operands_and_partials.hpp> #include <cmath> namespace stan { namespace math { template <typename T_y, typename T_dof, typename T_loc, typename T_scale> return_type_t<T_y, T_dof, T_loc, T_scale> student_t_lcdf(const T_y& y, const T_dof& nu, const T_loc& mu, const T_scale& sigma) { using T_partials_return = partials_return_t<T_y, T_dof, T_loc, T_scale>; using std::exp; using std::log; using std::pow; static const char* function = "student_t_lcdf"; check_not_nan(function, "Random variable", y); check_positive_finite(function, "Degrees of freedom parameter", nu); check_finite(function, "Location parameter", mu); check_positive_finite(function, "Scale parameter", sigma); if (size_zero(y, nu, mu, sigma)) { return 0; } T_partials_return P(0.0); operands_and_partials<T_y, T_dof, T_loc, T_scale> ops_partials(y, nu, mu, sigma); scalar_seq_view<T_y> y_vec(y); scalar_seq_view<T_dof> nu_vec(nu); scalar_seq_view<T_loc> mu_vec(mu); scalar_seq_view<T_scale> sigma_vec(sigma); size_t N = max_size(y, nu, mu, sigma); // Explicit return for extreme values // The gradients are technically ill-defined, but treated as zero for (size_t i = 0; i < stan::math::size(y); i++) { if (value_of(y_vec[i]) == NEGATIVE_INFTY) { return ops_partials.build(negative_infinity()); } } T_partials_return digammaHalf = 0; VectorBuilder<!is_constant_all<T_dof>::value, T_partials_return, T_dof> digamma_vec(size(nu)); VectorBuilder<!is_constant_all<T_dof>::value, T_partials_return, T_dof> digammaNu_vec(size(nu)); VectorBuilder<!is_constant_all<T_dof>::value, T_partials_return, T_dof> digammaNuPlusHalf_vec(size(nu)); if (!is_constant_all<T_dof>::value) { digammaHalf = digamma(0.5); for (size_t i = 0; i < stan::math::size(nu); i++) { const T_partials_return nu_dbl = value_of(nu_vec[i]); digammaNu_vec[i] = digamma(0.5 * nu_dbl); digammaNuPlusHalf_vec[i] = digamma(0.5 + 0.5 * nu_dbl); } } for (size_t n = 0; n < N; n++) { // Explicit results for extreme values // The gradients are technically ill-defined, but treated as zero if (value_of(y_vec[n]) == INFTY) { continue; } const T_partials_return sigma_inv = 1.0 / value_of(sigma_vec[n]); const T_partials_return t = (value_of(y_vec[n]) - value_of(mu_vec[n])) * sigma_inv; const T_partials_return nu_dbl = value_of(nu_vec[n]); const T_partials_return q = nu_dbl / (t * t); const T_partials_return r = 1.0 / (1.0 + q); const T_partials_return J = 2 * r * r * q / t; const T_partials_return betaNuHalf = beta(0.5, 0.5 * nu_dbl); T_partials_return zJacobian = t > 0 ? -0.5 : 0.5; if (q < 2) { T_partials_return z = inc_beta(0.5 * nu_dbl, (T_partials_return)0.5, 1.0 - r); const T_partials_return Pn = t > 0 ? 
1.0 - 0.5 * z : 0.5 * z; const T_partials_return d_ibeta = pow(r, -0.5) * pow(1.0 - r, 0.5 * nu_dbl - 1) / betaNuHalf; P += log(Pn); if (!is_constant_all<T_y>::value) { ops_partials.edge1_.partials_[n] += -zJacobian * d_ibeta * J * sigma_inv / Pn; } if (!is_constant_all<T_dof>::value) { T_partials_return g1 = 0; T_partials_return g2 = 0; grad_reg_inc_beta(g1, g2, 0.5 * nu_dbl, (T_partials_return)0.5, 1.0 - r, digammaNu_vec[n], digammaHalf, digammaNuPlusHalf_vec[n], betaNuHalf); ops_partials.edge2_.partials_[n] += zJacobian * (d_ibeta * (r / t) * (r / t) + 0.5 * g1) / Pn; } if (!is_constant_all<T_loc>::value) { ops_partials.edge3_.partials_[n] += zJacobian * d_ibeta * J * sigma_inv / Pn; } if (!is_constant_all<T_scale>::value) { ops_partials.edge4_.partials_[n] += zJacobian * d_ibeta * J * sigma_inv * t / Pn; } } else { T_partials_return z = 1.0 - inc_beta((T_partials_return)0.5, 0.5 * nu_dbl, r); zJacobian *= -1; const T_partials_return Pn = t > 0 ? 1.0 - 0.5 * z : 0.5 * z; T_partials_return d_ibeta = pow(1.0 - r, 0.5 * nu_dbl - 1) * pow(r, -0.5) / betaNuHalf; P += log(Pn); if (!is_constant_all<T_y>::value) { ops_partials.edge1_.partials_[n] += zJacobian * d_ibeta * J * sigma_inv / Pn; } if (!is_constant_all<T_dof>::value) { T_partials_return g1 = 0; T_partials_return g2 = 0; grad_reg_inc_beta(g1, g2, (T_partials_return)0.5, 0.5 * nu_dbl, r, digammaHalf, digammaNu_vec[n], digammaNuPlusHalf_vec[n], betaNuHalf); ops_partials.edge2_.partials_[n] += zJacobian * (-d_ibeta * (r / t) * (r / t) + 0.5 * g2) / Pn; } if (!is_constant_all<T_loc>::value) { ops_partials.edge3_.partials_[n] += -zJacobian * d_ibeta * J * sigma_inv / Pn; } if (!is_constant_all<T_scale>::value) { ops_partials.edge4_.partials_[n] += -zJacobian * d_ibeta * J * sigma_inv * t / Pn; } } } return ops_partials.build(P); } } // namespace math } // namespace stan #endif
Mid
[ 0.5557655954631381, 36.75, 29.375 ]
Q: passing variables from django to jquery? I'm trying to make an array out of a text file. My django/python code is f = open('text file path here', 'r') names = [] for line in f: names.append(line) f.close() return render_to_response('frontend.html', {'names'}, context_instance=RequestContext(request)) Then my frontend.html takes the array, names, and uses the data for autocomplete in a form whose id='status': <script> var names = {{ names }}; $(document).ready(function(){ $("#status").autocomplete(names); }); </script> Nothing happens in the field on frontend.html when I start typing someone's name. Any thoughts? A: You can build a URL endpoint inside your urls.py and in your views you can return a response with names in JSON and fetch them in your template using a getJSON call. View from django.utils import simplejson def names(request): # build your name collection here data = simplejson.dumps(names) return HttpResponse(data, mimetype='application/javascript') Template $.getJSON('/your_json_endpoint/names/', function(data) { // data contains your names serialized }); You need to set the endpoint in your URLConf https://docs.djangoproject.com/en/1.3/topics/http/urls/ See here for a complete example: http://mitchfournier.com/2011/06/06/getting-started-with-ajax-in-django-a-simple-jquery-approach/
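For completeness, a minimal URLconf sketch for the endpoint used in the getJSON call above; the app and view path ('myapp.views.names') are placeholders, and the syntax follows the Django 1.3-era docs linked in the answer:

# urls.py -- hypothetical URLconf wiring up the JSON endpoint used by $.getJSON above
# (Django 1.3-era string view references; replace 'myapp' with your actual app name)
from django.conf.urls.defaults import patterns, url

urlpatterns = patterns('',
    url(r'^your_json_endpoint/names/$', 'myapp.views.names', name='names_json'),
)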
Mid
[ 0.635696821515892, 32.5, 18.625 ]
Q: How does the food industry know the expiration date? I always have this question in mind whenever I see the EXP date on different products. I wonder how they can know a chocolate bar will spoil after 1 year or 6 months, and why they have such longer shelf life time compared to homemade chocolate bars, for example. A: Just to clarify: "Expiration dates" (or sometimes "Best if used by" dates) are the dates when a product may no longer be of high quality. It is not a safety indicator. It is a quality indicator, and it is just a guideline. Companies determine expiration dates during storage studies or stability tests. More detail can be found here. The reason why manufactured food items often last longer than home-prepared items is that companies add ingredients to prolong shelf life. This is simply for economic reasons. You could add the same ingredients, but it is often not necessary because of the scale of home cooking.
Mid
[ 0.6453333333333331, 30.25, 16.625 ]
Optimized Tetrazine Derivatives for Rapid Bioorthogonal Decaging in Living Cells. The inverse-electron-demand Diels-Alder (iDA) reaction has recently been repurposed as a bioorthogonal decaging reaction by accelerating the elimination process after an initial cycloaddition between trans-cyclooctene (TCO) and tetrazine (TZ). Herein, we systematically surveyed 3,6-substituted TZ derivatives by using a fluorogenic TCO-coumarin reporter followed by LC-MS analysis, which revealed that the initial iDA cycloaddition step was greatly accelerated by electron-withdrawing groups (EWGs) while the subsequent elimination step was strongly suppressed by EWGs. In addition, smaller substituents facilitated the decaging process. These findings prompted us to design and test unsymmetric TZs bearing an EWG and a small non-EWG group at the 3- and 6-position, respectively. These TZs showed remarkably enhanced decaging rates, enabling rapid iDA-mediated protein activation in living cells.
High
[ 0.687411598302687, 30.375, 13.8125 ]
<?xml version="1.0" encoding="utf-8"?> <resources> <style name="AppSuperTheme" parent="Theme.AppCompat.Light.NoActionBar"> <item name="android:dialogTheme">@style/DialogTheme.Default</item> <item name="android:alertDialogTheme">@style/DialogTheme.Default</item> <item name="alertDialogTheme">@style/DialogTheme.Default</item> <item name="dialogTheme">@style/DialogTheme.Default</item> </style> <style name="DialogTheme.Default" parent="@style/Theme.AppCompat.Light.Dialog.Alert"> <item name="android:colorBackground">@android:color/transparent</item> <item name="android:background">@null</item> <item name="dialogTopMinHeight">54dip</item> <item name="dialogButtonMinHeight">54dip</item> <item name="dialogTopMarginTop">6dip</item> <item name="dialogTopMarginBottom">9dip</item> <item name="dialogTopMarginLeft">10dip</item> <item name="dialogTopMarginRight">10dip</item> <item name="dialogIconPaddingTop">6dip</item> <item name="dialogIconPaddingBottom">0dip</item> <item name="dialogIconPaddingRight">10dip</item> <item name="dialogIconPaddingLeft">0dip</item> <item name="dialogTitleDividerHeight">1dip</item> <item name="dialogScrollPaddingTop">2dip</item> <item name="dialogScrollPaddingBottom">2dip</item> <item name="dialogScrollPaddingRight">10dip</item> <item name="dialogScrollPaddingLeft">14dip</item> <item name="dialogMessagePadding">5dip</item> <item name="dialogTitleAppearance">@style/TextAppearance.AppCompat.Large</item> <item name="dialogMessageAppearance">@style/TextAppearance.AppCompat.Medium</item> <item name="dialogLeftButtonStyle">@style/DialogButton</item> <item name="dialogCenterButtonStyle">@style/DialogButton</item> <item name="dialogRightButtonStyle">@style/DialogButton</item> <item name="dialogParent">@style/DialogStyle.Parent</item> <item name="dialogUpdateAppearance">@style/TextAppearance.AppCompat.Medium</item> <item name="dialogUpdateProgressBar">?android:attr/progressBarStyleHorizontal</item> </style> <style name="DialogButton" parent="@style/Widget.AppCompat.Button.ButtonBar.AlertDialog"> <item name="android:layout_width">0dip</item> <item name="android:layout_weight">1.0</item> </style> <style name="DialogStyle"> <item name="android:background">@android:color/transparent</item> </style> <style name="DialogStyle.Parent"> <item name="android:layout_width">match_parent</item> <item name="android:paddingTop">9dip</item> <item name="android:paddingBottom">3dip</item> <item name="android:paddingRight">1dip</item> <item name="android:paddingLeft">3dip</item> </style> </resources>
Low
[ 0.509127789046653, 31.375, 30.25 ]
Update: According to New Brunswick Today’s Richard Rabinowitz, unmanned aerial vehicles have been in New Brunswick since at least 2009. NEW BRUNSWICK, NJ—Rutgers University will be testing unmanned aerial vehicles, commonly referred to as drones, for use by U.S. government agencies. The Federal Aviation Administration (FAA) announced plans on Monday to test unmanned aircraft at several colleges including Rutgers, Virginia Tech, and the University of Maryland. “These test sites will give us valuable information about how best to ensure the safe introduction of this advanced technology into our nation's skies," said Anthony Foxx, the head of the U.S. Department of Transportation. The FAA explained in a press release that officials “considered geography, climate, location of ground infrastructure, research needs, airspace use, safety, aviation experience and risk.” “Each test site operator will manage the test site in a way that will give access to parties interested in using the site,” reads the press release. “The FAA’s role is to ensure each operator sets up a safe testing environment and to provide oversight that guarantees each site operates under strict safety standards.” Earlier this year, Congress passed a bill urging the FAA to open the skies by September 2015 for unmanned drone use nationwide. Parties interested in flying drones would include law enforcement, government agencies, for-profit businesses like farming or photography, "hobbyists," and fire departments. "Today, UAS perform border and port surveillance, help with scientific research and environmental monitoring, support public safety by law enforcement agencies, help state universities conduct research, and support various other missions for government entities." The FAA’s Destination 2025 “is a vision that captures the future we will strive to achieve – to transform the Nation’s aviation system by 2025.” “Manned and unmanned flights will each achieve safe flight, as will commercial launches to space.” The full New Jersey Assembly is scheduled to vote on a bill (A4073/S2702) proposing restrictions on what types of unmanned aircraft can fly or hover over NJ. The bill passed the Senate by a vote of 36-0 last year, and also recently passed the Assembly Homeland Security and State Preparedness Committee. The proposed law “prohibits drones from being equipped with an ‘antipersonnel device’…[such as] a firearm or any prohibited weapon or device or any other projectile designed to harm, incapacitate, or otherwise negatively impact a human being.” “Information or records of a verbal or video communication derived from the use of an unmanned aerial vehicle shall be strictly safeguarded and shall not be made available or disclosed to the public or any third party.” Assemblyman Daniel Benson (D-14) says that the bill ensures a “basic framework that protects privacy… It’s important that we have this ahead of expected use in the future.”
Mid
[ 0.558758314855875, 31.5, 24.875 ]
Mollard P Series Rosewood Baton White 14 In. Being busy with your daily activities is really stressful. This is an excellent time when playing Mollard musical instruments can help you feel calm with the restful sound they create. Playing such instruments can mean more to some individuals than just a form of unwinding. There are also talented individuals who make use of music as their way of living. With this, you should always make sure to find good quality musical instruments because your performance greatly depends on them. Always go for Mollard P Series Rosewood Baton White 14 In. guitars, keyboards, microphones, drums, and other popular musical instruments if you are looking for the best and highest quality musical instruments. For you to select the best instrument, learn from the guidelines below. To choose the right musical instrument, you should base it on what you like to play. Whether you are a beginner, an amateur, or a pro, you must at least know your needs as a musician. You must determine the instrument you see yourself playing. For beginners, it is always an excellent start to understand the basics. Getting Batons & Arrangers at a reasonable cost is something that everybody wants, am I right? Among the many critical factors to consider when buying something, price is very important. It's always the quality that we have to prioritize whenever we purchase an instrument, so we must have the most durable one; Mollard P Series Rosewood Baton White 14 In. can give you that. You should ensure its durability and functionality. Overall, it must have good quality. Possible Amount of Money you are Ready to Spend It is not difficult to find a place to buy Batons & Arrangers. Suppliers and merchants of musical instruments are scattered in all places, so it will not be hard for you to find one which provides a variety of choices with excellent styles. Look for a reliable supplier of Mollard P Series Rosewood Baton White 14 In. to confirm the quality and durability of the instrument. To simply get what you need, order online. But before transacting with any shop online, read reviews first to find out if they are reliable. You may also benefit from different forums and chat boxes on the web because there you will have the opportunity to ask the experts about the right instrument to buy. Moreover, musical magazines will help you find the best instruments. You just need to be resourceful and seek information. Before you decide to purchase a product, there is always a need for you to check out its warranty. You also have to get the necessary information about your order such as the serial number, so be sure to acquire the receipt after paying for it. This type of information and documentation is necessary so that you can track down the instrument whenever it gets lost or stolen. You can also use the warranty when you have to buy a new one. Before bringing the product home, double check everything. Batons & Arrangers - Mollard If you're on a tight budget, you can opt for instruments for newbies because they are cheaper. Renting an instrument can also be considered. Above all, it is still up to you what you want to do. "Your expectations of a baton are about to change! Hold a Mollard baton in your hand and it will be instantly obvious to you. The balance, the beauty, the response! This exceptional baton is handcrafted from the finest hardwoods, then polished to bring out the deep beauty in the handle. 
The shaft is made from white birch and coated with either a white or a clear finish. Mollard batons are famous for their lightweight, precision balance, beauty, and responsiveness.12"" or 14"" lengthWhite or clear finish"
High
[ 0.6726457399103141, 37.5, 18.25 ]
{ "network": [ [ "probeNetwork - default - end", 1, 84942 ], [ "probeNetwork - default - start", 0, 0 ] ], "gfx": [ [ "probeGFX - default - end", 4, 13, 21, 1, [ 153536, 153536 ], [ 1426, 1426 ], [ 18784, 18784 ] ] ] }
Low
[ 0.531914893617021, 31.25, 27.5 ]
Kimiko Glenn Kimiko Elizabeth Glenn (born June 27, 1989) is an American actress, singer and Broadway performer known for portraying Brook Soso in the Netflix series Orange Is the New Black, for which she received three Screen Actors Guild Awards. She also originated the role of Dawn Pinket in the Broadway musical Waitress and has provided the voices of Ezor in Voltron: Legendary Defender, Lena De Spell in DuckTales and Peni Parker in Spider-Man: Into the Spider-Verse. Personal life Glenn was born and raised in Phoenix, Arizona with her sister Amanda. Her mother, Sumiko, is Japanese and her father, Mark, is of Scottish, Irish and German descent. Glenn began acting at the Valley Youth Theatre in Phoenix and several other local theaters, when she was in fifth grade. She was educated at Desert Vista High School and the Interlochen Arts Academy boarding school in Interlochen, Michigan. She attended the Boston Conservatory for a year as a musical theatre major, but dropped out when she was cast in the touring company of Spring Awakening. Glenn is a pescatarian. Career During her freshman college year in 2008, Glenn was cast as Thea in the first U.S. national tour of Steven Sater's and Duncan Sheik's rock musical Spring Awakening. In 2013, she booked her breakthrough role as Litchfield Penitentiary inmate Brook Soso in the Netflix comedy-drama series Orange Is the New Black, for which she has received Screen Actors Guild Awards in 2014, 2015 and 2016 for Outstanding Performance by an Ensemble in a Comedy Series. In 2014, she appeared in the Lena Dunham-directed music video "I Wanna Get Better" for Jack Antonoff's solo project Bleachers. Glenn played the supporting role of Liv Kurosawa in the drama-thriller film Nerve (2016), directed by Henry Joost and Ariel Schulman and based on the young adult novel of the same name. In 2016 she was cast as Dawn Williams in the Broadway transfer production of Sara Bareilles's and Jessie Nelson's musical Waitress. Preview performances began on March 25, 2016 at the Brooks Atkinson Theatre, and the show officially opened on April 24. In 2018, Glenn began starring as Harlow in the comedy Web television series Liza on Demand, with Travis Coles and creator Liza Koshy.
High
[ 0.717201166180758, 30.75, 12.125 ]
Chamonix Mont-Blanc ski resort reviews
We were impressed by the beauty of the Chamonix resort and by its unique atmosphere. It offers amazing views of the Mont Blanc massif. The whole Chamonix Valley is so charming and unforgettable. Certainly, I would go there again. Eligiusz
The skiing is fantastic. We were a party of three: one beginner, one intermediate and one advanced. The resort fully catered for us all. It is not a resort with miles of pistes, but it has miles of available skiing area. Stuart
Liked the compactness of the entire town and the ease and consistency of the public transport. Yes, we would happily return to Chamonix. Andrew
When my knee is fine I will go skiing again, and Chamonix should be a nice destination. Ingvar
A bad surprise was the prices of food and drinks. The snow was excellent. Joze
Low
[ 0.524163568773234, 35.25, 32 ]
D.C. United would like to bring Yamil Asad back in 2019. Asad would love to be back with United in 2019. If only it was that simple. By this point in the offseason, the Argentine playmaker was expected to be in D.C., preparing to join United in training camp next week. Instead, Asad is in Argentina, training with Velez Sarsfield, the club to which he is contracted. With United’s opener fast approaching, hope seems to be fading fast that the winger will ever play for D.C. again. A source familiar with the ongoing negotiations said they put those chances at “a hard 50/50.” So how did we get here? United acquired Asad on loan from Velez prior to the 2018 season after Atlanta United, the Argentine’s previous club, was unable to agree to terms with him. It was a complex deal with a few moving parts. D.C. had to send allocation money to Atlanta for his MLS rights, and also agree loan terms with Velez and a contract with the...
Mid
[ 0.552486187845303, 37.5, 30.375 ]
Chubby Hubby Rice Krispies Treats Do you remember the amazing Chubby Hubby Truffles from a month or so ago? The truffles that I fashioned after my favorite ice cream of all time – Ben & Jerry’s Chubby Hubby? For those of you who may not have heard of it, Chubby Hubby consists of vanilla ice cream chock full of peanut butter-filled, chocolate-covered pretzels. It got me through college, and I still can’t get enough of that flavor combination to this day. Given how much I loved the truffles, and how much you all said you loved the truffles, I started brainstorming about other things that I could put a chubby hubby spin on and came up with these peanut butter-pretzel Rice Krispies treats topped with chocolate. Talk about addicting! Regular Rice Krispies treats get some peanut butter added to the melted marshmallows, and then in place of some of the Rice Krispies, you add pieces of crushed up pretzels, and I threw in some peanut butter chips for even more peanut butter flavor. Then, of course, they are topped with chocolate. If you want to up the chocolate factor, you could always throw some chocolate chips in with the peanut butter chips (or swap them). I think these have an awesome balance of all three flavors, and I can’t stop eating them. Although that’s not necessarily surprising since I am a huge fan of Rice Krispies treats and am always a sucker for any fun variation on the classic recipe. Do you have a favorite version of Rice Krispies treats? I always love finding new varieties!
Directions: 1. Grease a 9x13-inch baking dish; set aside. 2. Melt the butter in a large saucepan over low heat. Add the marshmallows and stir until completely melted. Remove from heat. 3. Off the heat, add the peanut butter and stir until completely melted and smooth. Add the Rice Krispies cereal, pretzel pieces and peanut butter chips, stirring gently until all dry ingredients are coated. Turn the mixture into the prepared baking dish and press it into the pan, creating an even top. 4. Microwave the chocolate chips on 50% power in 30-second intervals, stirring after each, until completely melted and smooth. Pour the chocolate over the treats and spread in an even layer with an offset spatula. Let cool at room temperature until set, about 1 hour. (You can also refrigerate the treats for about 30 minutes to speed up the process.) Store leftovers at room temperature in an airtight container.
Comments
These look amaaaazing. I think fancy Rice Krispie treats are becoming a thing. I’ve been drooling over a caramel macadamia Rice Krispie treat I saw on a blog a few days ago, and now I’m coveting these as well! First time, long time 🙂 Your chubby hubby krispies sound great. Have you heard of mars bar slice? It is something of an institution here in Australia, a no bake slice based on rice bubbles (krispies) and mars bars! Happy to share if you’re keen. Thank you Kim! I’m wondering if there is a good substitute for the Copha, as you can’t get it here in the U.S. (although I saw it is available on Amazon). Could coconut butter or oil be used? Or vegetable shortening? What do you think? Hi Michelle. To be honest, I would just omit it. I’ve made mars bar slice many times and have never bothered to melt copha into the chocolate and I generally don’t bother to use cooking chocolate for the topping either – just a good quality “eating quality” milk chocolate (that would be cadbury’s here in Oz!) I think I have saved every Rice Krispie treat recipe you have posted.
I loved the Reese’s peanut butter cup ones. Sooo addicting! I also love making Rice Krispie treats with seasonal marshmallows. I have posted a few on my site like Pumpkin Spice with Maple Cream Cheese Frosting. Those are always fun! And sometimes I actually like the simplicity (and fewer calories) of plain old Rice Krispie treats! Wow…these sound good. I make traditional RK treats but I add peanut butter to melted chocolate and spread that over the treats. The addition of pretzels sounds really good! I would love to see the recipe from Australia…Mars Slice! Bring it on! OMG! I always thought Chubby Hubby was how you referred to your Husband! lol These look fantastic!! I think I need to make these, anything with peanut butter and chocolate is going to be good, hopefully I won’t eat the whole pan. I’ll be calling you when I gain 5 more lbs. 🙂 These sound SO good! I like making cake batter rice krispie treats – just add 1/4 cup of cake mix to the marshmallow mixture before adding the rice krispies. They are best when topped with sprinkles, and they go over VERY well! 🙂 These just sound delish!!! Have you ever tried using Special K cereal with the strawberries in them in place of the rice krispies? The strawberries end up flavoring the marshmallows and are so yummy. My mother used to make something like this with regular Special K cereal and pb and chocolate on top though. Will have to hunt for this recipe too! Oh I am sooo hungry now!!! Have you tried scotcheroos before? It’s the same idea as these bars but with butterscotch chips melted into the chocolate on top. So. Good. The recipe is on the Rice Krispies website (it’s super simple and only needs one measuring cup!). Great recipe Michelle! My father was Italian, and my mother was French/Canadian. Food was front and center at every French/Canadian event. I know I will love this recipe, because I like the crunch of crushed pretzels in any gooey dessert. I can’t wait to look at your other recipes! My favorite rice krispie square is the Butterscotch Treats at ricekrispies.com that incorporate a bit of butterscotch pudding mix into the recipe, which I change up by pouring on my favorite chocolate frosting. I don’t like to wait for the squares to cool, and this frosting can be poured on the hot squares, or on hot brownies, or hot baked cakes, right out of the oven. Enjoy! Lyn onefoodietoanother.blogspot.com anoldcookbookcollector.blogspot.com everyonesfavoriterecipes.blogspot.com recipefinderforeveryone.blogspot.com Why yes, I do see these awesome krispy treats in my future! I have to admit though, my favorite are the kinda stale plain old fashioned ones you find in the wrapper at gas stations and things. They age like a fine wine 😉 Just found your blog and am excited to explore! These do sound delicious. For me, a key flavor in Chubby Hubby is the malt in the vanilla ice cream. I’m going to try these with some malt powder stirred in. I’ll let you know how they turn out. My favorite version is a Pumpkin Spice Krispie Treat that I make with Erewhon’s Crispy Brown Rice Cereal. My son ate 2 batches really quickly and wants more. Made me make a batch to bring to our friends’ house on Christmas Day. She loved them too. This looks amazing too. I may make these for Valentine’s Day – for my Valentine – Hubby of 22 years who is a PB fanatic. Thanks for making me sooo hungry. I made these tonight and your focaccia! These came out fantastic! I mistakenly purchased chocolate and peanut butter morsels but they turned out awesome! 
Just a little more peanut butter deliciousness! The focaccia I tweaked a bit (having worked in a bakery for years) but it still came out great! Great website!! Thanks for sharing all your recipes! I have made 5 batches of them and have eaten at least 4 of them myself! As a new mom who is not getting much sleep and relying on sugar to keep me awake, these have come in handy at all hours of the day! I have even upped the pretzel/krispie ratio because I love the addition of the salty. Everyone goes crazy for them, on the rare occasions I choose to share! Thanks for this one!! Help! This recipe totally frustrates me. I’m a pretty avid and adventurous cook/baker and thought these would be easy to whip up. I’ve tried twice now and always encounter the same problem…when I take the butter/marshmallow mixture off the stove and add the peanut butter, the mix starts to stiffen to the point that nothing can be added to it. What am I doing wrong? Please help. Thanks! Hi Jill, You could use the natural peanut butter that does not require stirring or refrigeration – I believe that JIF and Skippy both make versions of this. I would stay away from the natural varieties that require stirring and refrigeration, as they’re generally too oily. Hi, Thanks for this recipe! Another one of your posts led me here, I love the embedded links. These were so good, I made them for a movie party. The pb was so creamy plus salty crunchy pretzels! The chocolate chips on top put me over the edge. I can’t get enough. I keep walking by and cutting off another little hunk, and another . . . I think it will be a staple in the house! Thanks for this creative, delicious recipe!
Mid
[ 0.5714285714285711, 29.5, 22.125 ]
Determining what practising clinicians believe about long-acting injectable antipsychotic medication. To explore the factors that influence clinician prescribing choice when using depot antipsychotics. A two-phase qualitative exploration of the attitudes to and knowledge about risperidone long-acting injection (RLAI) in a group of New Zealand psychiatrists. The first phase was conducted shortly after the treatment was funded (n = 16); the second phase was a year or so later (n = 35). Data were gathered using a focus group technique with scenario stimulus. The data were examined using thematic analysis. Themes fitted the broad categories of who RLAI was used for, how it was best used, what the efficacy determinants were and what adverse effect monitoring occurred. For many areas of exploration there was a gap between actual practice and what the psychiatrist thought might be best practice. There was considerable variance in details regarding the administration of the treatment including dose, titration and efficacy monitoring. The results confirm the utility of qualitative exploration in understanding prescribing choice. The effect of outdated views regarding long-acting injectable (LAI) antipsychotics contributes to a gap between actual practice and what is thought to be desirable. The study targeted RLAI but the findings are likely to also pertain to other LAI antipsychotics.
Mid
[ 0.6192660550458711, 33.75, 20.75 ]
[The development of the treatment of vascular injuries until today]. The healing of vascular injuries goes hand in hand with the healing of scars, which is why we can look back at methods used centuries ago. Every age, with or without wars, has presented, and still presents, a huge variety of injuries. Healers have always tried to devise the best possible methods to care for injured body parts and help their patients survive. This article aims to show the main changes in the treatment of vascular injuries in an enjoyable and colourful way. By focusing on the quality improvements of the past few decades and reviewing the Hungarian literature, the reader will learn that Hungarian vascular surgery and traumatology are at a very high level and keep up with the international field by using modern techniques. The treatment of vascular injuries has been practised for thousands of years. The Ebers Papyrus gave professional guidance on the treatment of wounds, Hippocrates recommended compression dressings, and later Ambroise Paré performed ligatures. Wars have provided numerous lessons. At the beginning of the 20th century, autologous veins came to be used more and more often. The amputation rate reported by DeBakey was 49%, by Hughes 7-22%, and by Rich 12.7%. Thanks to surgical technique, antibiotics and the use of transfusion, the rate of amputation has been decreasing. The wars in Iraq and Afghanistan - between 2003 and 2011 - left the injured with more serious explosive and gunshot wounds than ever before. Today's challenges are injuries caused by accidents and violent acts, and endovascular interventions are now widespread. Orv Hetil. 2019; 160(28): 1112-1119.
High
[ 0.6994382022471911, 31.125, 13.375 ]
Owen Good No doubt about it, this was Sony's week, as the PS3 Slim's arrival dominated the news out of Gamescom - one of three expositions on two continents where we have or had writers in the past week. Totilo went to QuakeCon, Fahey's now at BlizzCon, and though Gamescom is wrapping up in Cologne, by no means is that the end of our coverage there. Look for more reporting on Gamescom in the coming week; for now, take a look back on what Kotaku delivered in this one.
Mid
[ 0.54, 33.75, 28.75 ]
The stereotype of a successful salesperson is an extrovert who sells anything to anybody. He (or, less commonly, she) charms customers so thoroughly that they sign on the dotted line before they know what hit them. Customers, however, don't find extroverted salespeople charming. On the contrary, most customers tune out the moment a seller looks or sounds like the stereotype. The dislike and distrust of salespeople is nothing new; the fast-talking, backslapping salesman was a stock villain for nearly 100 years. (See: Gantry, Elmer.) Customers hate being cajoled or manipulated into buying something they don't want. As the old saying goes, "Everybody likes to buy, but nobody likes to be sold to." The near-universal dislike of stereotypical salespeople stems directly from the traditional definition of selling: Intruding (to get your foot in the door) Pitching (to persuade the customer to buy) Persisting (to push until you make the sale) Effective selling is quite different. It consists of: Research (to understand the customer) Listening (to understand individual needs) Reacting (to adapt to the identified needs) Research requires time spent alone on the Web, reading and analyzing information. Listening means being patient and quiet while remaining open to new ideas and perspectives. Reacting is all about letting the other person set the pace and the agenda. These are all classic introverted behaviors that are difficult for extroverts to do well. Why, then, do companies continue to employ and deploy extroverts rather than introverts in sales roles? The answer is that such companies don't understand how technology is changing the sales process. Back in the day, salespeople needed to be extroverts, because most new sales opportunities evolved from cold calling, originally in person, but later by telephone. Extroverts tend to be good at cold calling and telemarketing, because they thrive on social interaction and tend to have thick skins and therefore the ability to cope with rejection. Technology, however, has made cold calling ineffective. What with voice mail, call blocking, caller ID, no-call lists, it's become nearly impossible to get a decision maker on the phone without an appointment. (Note to chief sales officers: When your customers spend billions of dollars on technology to prevent your salespeople from interrupting them, take the damn hint.) The collapse of cold calling as a lead-generation mechanism is the reason inbound or email marketing is so popular, BTW. A savvy email can easily engage a decision maker online and segue into a phone call, especially when that decision maker has already indicated interest. Inbound or email marketing, however, demands an ability to research a customer, see the world from the customer's perspective, and adapt to the customer's situation and specific response--all skills that come easier to introverts than extroverts.
Mid
[ 0.642201834862385, 35, 19.5 ]
Hello. New to the forum and new Tacoma owner. Saw the BFD wheels and extremely interested. Could you provide a quote on 18" bronze or black wheels to Chesapeake, VA (23320)? Truck is currently at stock height. What is the max size tire that I can run safely on 18" wheel? Thx. Hello. New to the forum and new Tacoma owner. Saw the BFD wheels and extremely interested. Could you provide a quote on 18" bronze or black wheels to Chesapeake, VA (23320)? Truck is currently at stock height. What is the max size tire that I can run safely on 18" wheel? Thx. Welcome to the wonderful world of Tacomas and welcome to TacomaWorld Forums. =) I just sent you a PM with the information you are requesting. I agree! I was really happy with the way everything came together. I think the sidewall and the all-terrain tread pattern on these Coopers makes the tire look much bigger than a normal 60-series. This truck is completely stock too, with almost 100,000 miles on stock suspension, and there is absolutely no rubbing with this wheel/tire setup. FN Wheels is gauging interest in a new color for the 18x9" BFD - dark metallic blue (aka magnesium blue or mag blue). They will need seven (7) customers in order to offer this color as a factory finish at no extra charge. Here are a few samples photos showcasing the BFD wheels in this color: FN Wheels is gauging interest in a new color for the 18x9" BFD - dark metallic blue (aka magnesium blue or mag blue). They will need seven (7) customers in order to offer this color as a factory finish at no extra charge. Here are a few samples photos showcasing the BFD wheels in this color: They are very dark, but as mentioned below it could be the lighting or the settings on your screen making them look black (also I see below that they looked better once you saw them on your computer - maybe you were looking at them on a cell phone). Graphite is a good choice too! I would say that this color of blue is as dark as the graphite gunmetal, but the graphite has the grey metallic hue, and these have a blue metallic hue. In terms of darkness they're pretty much the same. Quote: Originally Posted by DATZ WEAKSAUCE probably just the lighting Agreed, or maybe the screen brightness / contrast settings. I will try to take some more photos today in different lighting. Quote: Originally Posted by TacoDaddy just got home and they look much better on the computer screen. i bet they look really sick in person. They look the best in person. Even with a pretty decent camera I can only get the color to come out "so well." In person this color really pops. To describe it - you think it's black if you see it in the shade or at night, and then in indirect sunlight you can tell the color is different from a black wheel. When the sun hits the wheels directly the metallic stands out subtly (not blingin'), and then you can see the dark / midnight metallic blue. If you look up Rays Engineering / Volk Racing's color of magnesium blue this is the same exact finish. We used a magnesium blue color chip to match up with this color. Quote: Originally Posted by S7ICKlVlAN SICKKKKK!!!!!!!!!! Glad you like them! Quote: Originally Posted by ares650 That New Color is very Hot!!!!!!!! Thank you for the feedback. I'm glad you like the color too. Quote: Originally Posted by Robo02cop How come they look so much darker than the picture posted by DATZ WEAKSAUCE??? I'm not sure. It could be the lighting or camera settings, or it's possible DATZ WEAKSAUCE had his wheels powder coated a slightly different color. 
This color is not a factory finish yet. DATZ WEAKSAUCE had his wheels powder coated at his local powder coater in Northern California independently of FN Wheels' powder coater in Southern California. If enough customers would like to order this color as a factory finish (about 5-7 customers would be needed for a production run), the wheels will be finished in dark metallic blue in the factory, and the color will be the same as the powder coated set that I posted. If there's not enough demand to have the color produced as a factory finish, we can always have the wheels powder coated this color (or any color) locally. The cost for powder coating is usually $65 per wheel for standard colors, but a special metallic color such as the dark metallic blue is about $80 per wheel. If the color can be offered as a factory finish there is no additional cost to the customer.
Mid
[ 0.631313131313131, 31.25, 18.25 ]
Find a job that you love. Although work is an expected societal norm, your career shouldn't be restraining. If you hate what you do, you aren't going to be happy, plain and simple. You don't need to love every aspect of your job, but it needs to be exciting enough that you don't dread getting out of bed every morning. Ideally, you will find a job that you are so passionate about you would do it for free. If your job is draining you, and you are finding it difficult to do the things you love outside of work, something is wrong. You may be working in a toxic environment, for a toxic person, or doing a job that you truly don't love. If this is the case, it is time to find a new job.
Share The Work. You don’t have to do everything! Recruit some help from your fellow employees or family members. Delegate responsibilities, and you’ll get everything done in a timely manner. This facilitates teamwork and makes everyone feel like they’ve made a contribution. It also gives you some relief while your helpers gain a sense of accomplishment.
Make exercise a must-do, not a should-do. It’s easy to cancel the gym, the evening run or the yoga class because a client wants something done yesterday. Instead, ensure exercise is given as much priority as your clients and making money. A healthy body means a fresh mind, which means you will function better and complete tasks in less time.
Establish Boundaries. Set fair and realistic limits on what you will and will not do both at work and at home. Clearly communicate these boundaries to your supervisor, coworkers, partner and family. For instance, you might commit to not working late on certain days unless there is a crisis. Additionally, set aside a time at home during which you will not check or respond to work-related emails or voice mails.
Take a vacation. Sometimes, truly unplugging means taking vacation time and shutting work completely off for a while. Whether your vacation consists of a one-day staycation or a two-week trip to Bali, it's important to take time off to physically and mentally recharge. According to the State of American Vacation 2018 study conducted by the U.S. Travel Association, 52% of employees reported having unused vacation days left over at the end of the year. Employees are often worried that taking time off will disrupt the workflow, and they will be met with a backlog of work when they return. This fear should not restrict you from taking a much-needed break. The truth is, there is no nobility in not taking well-deserved time away from work; the benefits of taking a day off far outweigh the downsides. With proper planning, you can take time away without worrying about burdening your colleagues or contending with a huge workload when you return.
Eliminate Distractions. If your personal life interferes with your job, set some boundaries. Be firm when you have work to finish. Discourage frequent personal calls or visits. Leave your cell phone in the car or switch it off during work hours. Delay socializing with co-workers until your work is done. Once you’ve learned to streamline your workday, you’ll find that you no longer dread going to work and bringing it all home (or closing the door to your home office). Imagine knowing that you’re ready to start your day instead of playing catch up from yesterday.
Take time to make time. Invest in time-tracking tools. There are plenty of tools you can use to track everything from the frequency and duration of meetings, to chasing and converting leads.
Time-tracking software allows you to quickly build an understanding of how long a particular task takes. That way, you can effectively estimate how long your next work task will take.
Be realistic. At the end of each working day, perform a little self-analysis. Ask yourself what worked today, what didn’t, what went wrong and how the issue can be fixed. Remember there are thousands of businesses just like yours learning the same lessons every day. Don’t forget to tap into the valuable resources around you – your peers – for help.
Prioritize your health. Your overall physical, emotional and mental health should be your main concern. If you struggle with anxiety or depression and think therapy would benefit you, fit those sessions into your schedule, even if you have to leave work early or ditch your evening spin class. If you are battling a chronic illness, don't be afraid to call in sick on rough days. Overworking yourself prevents you from getting better, possibly causing you to take more days off in the future. Prioritizing your health first and foremost will make you a better employee and person. You will miss less work, and when you are there, you will be happier and more productive.
Talk it out with your bosses. It pays to keep the lines of communication open with your manager, HR, and supervisors. Be one hundred percent honest and straightforward. Suppose you can't reach the office on time because you need to drop your child at school; they can help you out by keeping your work hours flexible. If that is an issue, be prepared with alternative solutions to show how the arrangement won't affect your performance and productivity.
Make time for yourself and your loved ones. While your job is important, it shouldn't be your entire life. You were an individual before taking this position, and you should prioritize the activities or hobbies that make you happy. Achieving work-life balance requires deliberate action. If you do not firmly plan for personal time, you will never have time to do other things outside of work. No matter how hectic your schedule might be, you ultimately have control of your time and life. When planning time with your loved ones, create a calendar for romantic and family dates. It may seem weird to plan one-on-one time with someone you live with, but it will ensure that you spend quality time with them without work-life conflict. Just because work keeps you busy doesn't mean you should neglect personal relationships. Realize that no one at your company is going to love you or appreciate you the way your loved ones do. Realize, too, that everyone is replaceable at work, and no matter how important you think your job is, the company will not miss a beat tomorrow if you are gone.
Learn the art of delegation. There's nothing wrong in acknowledging that you can't do everything on your own and that a little help could ease your heavy workload. By doing everything yourself, you're not only exhausting your body but also setting it up for a breakdown later on. Decide what you should do yourself and what others can handle. Seek help from colleagues, your partner, and relatives. You and your partner can divide chores so that neither of you dreads coming home after a long working day at the office.
Stay connected during the day. Thanks to technology, knowing the well-being and whereabouts of your loved ones is no longer a challenge.
Every working mother can easily stay connected with her children while she is at the office. If you're missing your children, you can make a phone call or even a video call during your lunch break and then focus on work without any stress or strain in the back of your mind. This reassures the child that you're close and also helps you get through a rough day at work.
Limit distractions and time-wasters. When you are a working woman, every minute is crucial, at work and at home. You would be amazed to learn that workplace distractions can cost you over three hours every day. If you want to be focused and productive, it's essential to keep chatty coworkers, casual web surfing, cell phones, and other distractions under control. Set specific time limits for answering messages and using your phone. At home, avoid watching too much TV; instead, use that time to strengthen your bond with your partner and children.
Work Smarter Not Harder. Using time more efficiently is an important skill that everyone from the receptionist to the CEO can learn. Adopting the right combination of time-management practices can cut stress and save you up to an hour a day. This can include the use of technology to become more organized, grouping emails and voice messages, avoiding procrastination and learning to say "no."
Simplify. Don’t make things complicated. As humans, we tend to make extra work for ourselves even if it’s unnecessary. Why reinvent the wheel? If something works, don’t try to fix it. Always remember the KISS method: Keep It Simple, Silly.
Leave Work at Work. Develop a mental on-off switch between work and home. It helps to establish a transitional activity between the two realms. This might consist of listening to music or recorded books during your evening commute, exercising at the fitness center, running errands, or keeping personal appointments. Scheduling such activities immediately following your normal work hours also prevents you from spending that extra twenty minutes at the office which then turns into several hours.
Draw a line between home and work. One of the best lessons life has taught me is to say NO to things that don't line up with your priorities. Trust me, it is the greatest mantra for effectively juggling your personal and professional life. Learn to set boundaries so you can give your heart and soul to both parts of your life. Leave work at work; don't bring it home with you. While spending time with your children and partner, don't be on the phone sending emails or discussing work with colleagues. Be mindful of your personal relationships and start saying no to things that aren't doing you any good.
Make some time for yourself. Setting aside time to do the things you really love is the key to keeping up an ideal work-life balance. Sometimes it's all right to put yourself first, have some downtime and pamper yourself. Go to a spa, get a massage, watch reruns of your favorite TV series, read a book, travel solo, or simply do nothing at all. Learn to take care of yourself, because only then will you be able to take care of your family and your work.
Be sure to start using these ideas today to enjoy a great work-life balance and a more fulfilling life!
Mid
[ 0.631578947368421, 33, 19.25 ]
Electronic apparatuses with a phototransmissive front panel forward of a display panel have been provided.
Mid
[ 0.564516129032258, 30.625, 23.625 ]
Reversal of left ventricular hypertrophy with angiotensin converting enzyme inhibition in hypertensive patients with autosomal dominant polycystic kidney disease. Hypertension occurs commonly and early in the natural history of autosomal dominant polycystic kidney disease (ADPKD), affecting both renal and patient outcome. Activation of the renin angiotensin aldosterone system due to cyst expansion and local renal ischaemia plays an important role in the development of ADPKD related hypertension and left ventricular hypertrophy (LVH), a known important risk factor for cardiovascular morbidity and mortality. The aim of this study was to investigate the effects of an angiotensin converting enzyme (ACE) inhibitor, enalapril, on renal function, blood pressure and LVH in hypertensive ADPKD patients. Fourteen hypertensive ADPKD patients (11 men, 3 women; mean age: 40 years) were included in the study. All patients had LVH and creatinine clearance (Ccr) greater than 50 ml/min/1.73 m2. The patients were followed for 7 years on enalapril therapy. The effects of enalapril on renal function, blood pressure and LVH were investigated. Baseline measurements of mean arterial pressure (MAP), Ccr and left ventricular mass index (LVMI) were 110 +/- 2 mmHg, 84 +/- 6 ml/min/1.73 m2 and 146 +/- 4 g/m2, respectively. After one year of enalapril therapy there was a significant decrease in MAP (94 +/- 3 mmHg, P < 0.005) which remained stable until the end of the study at 7 years (94 +/- 1 mmHg, P < 0.005 vs baseline). There was also a significant decrease in LVMI (131 +/- 6 g/m2, P < 0.05) after year 1 which continued to decrease until the end of the study reaching 98 +/- 6 g/m2 (P < 0.01 vs year 1 and baseline). Although Ccr remained stable after year 1, a significant decrease was observed after 7 years of follow-up (59 +/- 6 ml/min, P < 0.001 vs year 1 and baseline). ACE inhibition in hypertensive ADPKD patients provided long-term reversal of LVH in association with a mean 3.6 ml/min/year decline of Ccr. These preliminary results have potential important implications for cardiovascular and renal protection in ADPKD.
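For reference, the reported annual decline is consistent with the baseline and 7-year creatinine clearance values quoted above; this back-of-the-envelope check is not part of the original abstract, only a restatement of its figures:

$$\frac{\Delta \mathrm{Ccr}}{\Delta t} \approx \frac{(84 - 59)\ \mathrm{ml/min/1.73\,m^2}}{7\ \mathrm{years}} \approx 3.6\ \mathrm{ml/min\ per\ year}.$$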
Mid
[ 0.6496163682864451, 31.75, 17.125 ]
Identification of gains and losses of DNA sequences in primary bladder cancer by comparative genomic hybridization. Comparative genomic hybridization (CGH) makes it possible to detect losses and gains of DNA sequences along all chromosomes in a tumor specimen based on the hybridization of differentially labeled tumor and normal DNA to normal human metaphase chromosomes. In this study, CGH analysis was applied to the identification of genomic imbalances in 26 bladder cancers in order to gain information on the genetic events underlying the development and progression of this malignancy. Losses affecting 11p, 11q, 8p, 9, 17p, 3p, and 12q were all seen in more than 20% of the tumors. The minimal common region of loss in each chromosome was identified based on the analysis of overlapping deletions in different tumors. Gains of DNA sequences were most often found at chromosomal regions distinct from the locations of currently known oncogenes. The bands involved in more than 10% of the tumors were 8q21, 13q21-q34, 1q31, 3q24-q26, and 1p22. In conclusion, these CGH data highlight several previously unreported genetic alterations in bladder cancer. Further detailed studies of these regions with specific molecular genetic techniques may lead to the identification of tumor suppressor genes and oncogenes that play an important role in bladder tumorigenesis.
High
[ 0.681638044914134, 32.25, 15.0625 ]
The basic operation in the processing of silver halide color photographic materials (referred to hereinafter as color photosensitive materials), in general, consists of a color development process and a desilvering process. In the color development process, the exposed silver halide is reduced by a color developing agent to form silver and at the same time the oxidized color developing agent reacts with a color forming agent (a coupler) and provides a dye image. Then, in the subsequent desilvering process, the silver which has been produced in the color development process is oxidized by the action of an oxidizing agent which is commonly called a bleaching agent and then dissolved by means of a complex silver ion forming agent which is commonly called a fixing agent. Only the dye image is then left behind in the color photographic material as a result of passing through this desilvering process. The desilvering process described above can consist of a procedure involving two baths, namely, a bleaching bath which contains a bleaching agent and a fixing bath which contains a fixing agent, a procedure involving a single bleach-fixing bath in which both bleaching agent and fixing agent are present, a procedure involving two baths consisting of a bleaching bath and a bleach-fixing bath, or a procedure involving three baths, namely, a bleaching bath, a bleach-fixing bath and a fixing bath, for example. Furthermore, each of these baths may in fact be comprised of a plurality of tanks. Actual development processing includes various auxiliary operations as well as the basic operations indicated above for maintaining the photographic and physical quality of the image and for improving the storage properties of the image. For example, use is made of film hardening baths, stopping baths, image stabilizing baths and water washing baths. Recent years have seen the widespread use of small in-store processing service systems known as mini-labs and there is a need for a shortening of the time required for processing as described above in order to meet the demand for rapid and reliable processing. In particular, there has been a great demand for a shortening of the desilvering process which takes up the greater part of the processing time in conventional processing. However, the ethylenediaminetetraacetic acid ferric complex salts which are used in the main as the bleaching agents which are used in bleaching baths and bleach-fixing baths have a fundamental weakness in that they have only a weak oxidizing power and, although improvements can be achieved with the conjoint use of various bleaching accelerators, they are unable to satisfy the aforementioned demands. Furthermore, methods of processing in which the pH of the bleaching bath or bleach-fixing bath is reduced in order to increase the oxidizing power of the ethylenediaminetetraacetic acid ferric complex salts have been adopted, but in processing methods of this type color formation failure due to the formation of leuco cyan dyes, a phenomenon known as color restoration failure occurs. On the other hand, ferricyanide, dichromates, ferric chloride, persulfate and bromates, for example, are all known as bleaching agents which have a strong oxidizing power, but these materials present many disadvantages from the viewpoints of environmental protection, safety in handling and metal corrosion, for example, and the situation is such that they cannot be widely used in in-store processing applications, for example. 
Among these agents, bleaching baths having a pH of about 6 which contain 1,3-diaminopropanetetraacetic acid ferric complex salts which have a redox potential of at least 150 mV and a strong oxidizing power have been used, for example, in JP-A-62-222252 (the term "JP-A" as used herein refers to a "published unexamined Japanese patent application"), and it is possible to bleach silver more rapidly in this way than with bleaching baths which contain ethylenediaminetetraacetic acid ferric complex salts, but there is a disadvantage in that color fogging of a type known as bleaching fogs occurs if the bleaching process is carried out directly after color development without passing through an intermediate bath. Furthermore, bleaching baths containing 1,3-diaminopropanetetraacetic acid ferric complex salts (for example, at pH 5.0) have also been disclosed in JP-A-62-24253. The above mentioned bleaching baths can be used in desilvering operations with two processing baths with a fixing bath or a processing bath which has a fixing ability, such as a bleach-fixing bath, following the bleaching bath. Furthermore, methods of processing in bleaching baths having a low pH as disclosed in JP-A-1-206341 are known as a means of achieving rapid silver bleaching and overcoming the problem of bleach fogging, but color restoration failure inevitably occurs with this technique. Processing with a color restoring bath having a high pH after the bleaching process as disclosed in JP-A-64-558 is known as a means of overcoming color restoration failure, but these methods are not compatible with rapid processing. Furthermore, when processing is carried out in a bleaching bath which contains 1,3-propylenediaminetetraacetic acid ferric complex salt there is a definite problem with the considerable staining which occurs with the passage of time after processing as compared to the case of bleaching baths which contain ethylenediaminetetraacetic acid ferric complex salts.
High
[ 0.6836734693877551, 33.5, 15.5 ]
No pardon for Edward Snowden - eplanit https://www.washingtonpost.com/opinions/edward-snowden-doesnt-deserve-a-pardon/2016/09/17/ec04d448-7c2e-11e6-ac8e-cf8e0dd91dc7_story.html?utm_term=.67191bd0437c ====== greenyoda Extensive discussion of Greenwald's article critiquing the Post's editorial here: [https://news.ycombinator.com/item?id=12525616](https://news.ycombinator.com/item?id=12525616)
Mid
[ 0.629533678756476, 30.375, 17.875 ]
// SPDX-License-Identifier: GPL-2.0 /* Copyright(c) 2009-2014 Realtek Corporation.*/ #include "../wifi.h" #include "dm_common.h" #include "../rtl8723ae/dm.h" #include <linux/module.h> /* These routines are common to RTL8723AE and RTL8723bE */ void rtl8723_dm_init_dynamic_txpower(struct ieee80211_hw *hw) { struct rtl_priv *rtlpriv = rtl_priv(hw); rtlpriv->dm.dynamic_txpower_enable = false; rtlpriv->dm.last_dtp_lvl = TXHIGHPWRLEVEL_NORMAL; rtlpriv->dm.dynamic_txhighpower_lvl = TXHIGHPWRLEVEL_NORMAL; } EXPORT_SYMBOL_GPL(rtl8723_dm_init_dynamic_txpower); void rtl8723_dm_init_edca_turbo(struct ieee80211_hw *hw) { struct rtl_priv *rtlpriv = rtl_priv(hw); rtlpriv->dm.current_turbo_edca = false; rtlpriv->dm.is_any_nonbepkts = false; rtlpriv->dm.is_cur_rdlstate = false; } EXPORT_SYMBOL_GPL(rtl8723_dm_init_edca_turbo); void rtl8723_dm_init_dynamic_bb_powersaving(struct ieee80211_hw *hw) { struct rtl_priv *rtlpriv = rtl_priv(hw); struct ps_t *dm_pstable = &rtlpriv->dm_pstable; dm_pstable->pre_ccastate = CCA_MAX; dm_pstable->cur_ccasate = CCA_MAX; dm_pstable->pre_rfstate = RF_MAX; dm_pstable->cur_rfstate = RF_MAX; dm_pstable->rssi_val_min = 0; dm_pstable->initialize = 0; } EXPORT_SYMBOL_GPL(rtl8723_dm_init_dynamic_bb_powersaving);
Mid
[ 0.5390625, 34.5, 29.5 ]
Kérastase Chronologiste Revitalizing Bain Shampoo (250ml)
Regenerate locks with this concentrated care formula that combines vitamins A and E to cleanse and revitalise hair from top to bottom. Perfect for all hair types, the shampoo works to rid the scalp of impurities whilst strengthening hair fibres with Abyssine (a regenerating molecule) and imparting mirror-like shine. Safeguards hair from external aggressors and helps to maintain colour integrity. Expect revitalised hair that looks and feels healthy and strong.
Kérastase Chronologiste Revitalizing Exfoliating Care (200ml)
A preparatory step that combines micro-particles and naturally derived ingredients to cleanse and purify the scalp and hair. Golden and luminescent, the rich formula works to gently massage and exfoliate the hair whilst ridding each strand of built up grime and dirt. Experience its range of luxurious textures and witness revitalised, purified results.
Top Customer Reviews
Luxury Experience
Very happy with the delivery speed. Got them in 3 days. First, for your reference, I have very long hair, on the fine side. I wash my hair and blow dry every evening as a daily routine. I suffered from hair loss after moving to a hard water area in the UK. I don't dye or curl my hair, so it is natural but a bit dried out and dull, especially in winter. I bought a bottle of Kérastase Chronologiste Fragrance hair oil at the airport tax free, which led me to the idea of getting the whole set of this line. I had very high expectations, as the hair oil I got is the best hair oil I've ever used, and after only one wash I can say I would recommend them. My favorite in this set is the hair masque, which left my hair extremely silky and shiny; during the wash my hair felt like fluid. The shampoo would be my second favorite. I have just used it for the first time, and I thought a double amount would be needed seeing it is very liquid compared to other Kérastase shampoos, but I was completely wrong: even with my very long hair, only the normal small amount is enough as it is very concentrated, and I got excessive bubbles with the doubled amount. The smell is great, and it leaves the hair fresh and clean.
The only product I'm not 100% in love with is the exfoliating care. I'm not sure if I used it correctly, but it is hard to apply as it concentrates and sticks to one area of my scalp, and when I try to dissolve it a bit and move it down, it tangles my hair. I also found my hair dropped a lot when washing it off. I asked my husband to try this, which ended up with a much better result, so I feel the exfoliating care may work better for short hair. I'm very happy my husband enjoyed using it; at least it did not go to waste. If you have long hair like me, I would recommend getting the shampoo and masque set, or using the exfoliating care with better technique than I did. I also highly recommend the Fragrance oil from the same line after washing, before blow drying; it will give you a long-lasting beautiful smell with silky shine without making your hair or scalp oily. All in all, I really like this line; my hair finishes as if from a salon hair treatment. Will keep using them and come back.
Mid
[ 0.614657210401891, 32.5, 20.375 ]
Q: Why is the reflection operation on a planar molecule different from identity? Consider the Ethylene molecule $C_2 H_4$ lying in the $Oxy$ plane. The reflection operation $\sigma (xy)$ is considered different from the identity $E$ (both being symmetry operations). I don't understand why: each atom stays in its original place... Obviously, my question extends to any planar molecule. A: I'm going to answer myself using what Sanya said in the comment section above. Although $\sigma (xy)$ leaves planar molecules (which lie in the $Oxy$ plane) untouched, that does not mean that $\sigma (xy)$ is the same operation as the identity $E$. All it means is that both $\sigma (xy)$ and $E$ are symmetry operations of our molecule. For $\sigma (xy)$ and $E$ to be equal they would have to have the same image for all objects (i.e. they would be the same mapping). This happens, for instance, with $E$ and rotations by $2\pi$ (around some axis), which are thus considered to be the same operation.
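To make the distinction concrete, the two operations can be written as explicit mappings of three-dimensional space; this sketch is not part of the original answer, just a restatement of it:

$$\sigma(xy):\ (x,\,y,\,z) \mapsto (x,\,y,\,-z), \qquad E:\ (x,\,y,\,z) \mapsto (x,\,y,\,z).$$

Both mappings fix every point with $z = 0$, which is where all six atoms of ethylene sit, so the molecule looks identical after either operation. For any point with $z \neq 0$ (for example, a point of the $\pi$-electron density above the molecular plane) the two images differ, so $\sigma(xy) \neq E$ as mappings of $\mathbb{R}^3$, even though both belong to the symmetry group of the molecule. By contrast, a rotation by $2\pi$ sends every point of space back to itself, which is why it genuinely coincides with $E$.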
Mid
[ 0.632311977715877, 28.375, 16.5 ]
Tooth enamel dust as an asthma stimulus. A case report. A case report of a first-year dental student with asthma, who experienced exacerbation of symptoms and a severe asthmatic crisis in the course of her preclinical dental training, is presented. Dust generated as a result of preparing natural teeth triggered the bronchoconstrictive response. Her subsequent medical and preventive measures are cited. This case identifies, for the first time, enamel dust as an asthma stimulus, thus serving as a precaution to prospective dental students and personnel afflicted with the disease and emphasizing the importance of effective face masks in dental laboratories during dust-generating procedures.
High
[ 0.690217391304347, 31.75, 14.25 ]
#version 450 layout(location = 0) in vec4 _; layout(location = 1) in vec4 a; layout(location = 0) out vec4 b; void main() { vec4 _28 = (_ + a) + _; b = _28; b = _; b = _28; b = _; }
Low
[ 0.432795698924731, 20.125, 26.375 ]