Q: What sort of ceiling fan support do I need where I can't find framing? I have already worked on 4 out of the 5 ceiling fans I have put up so far (previous question), and now I am going to put the last one up. I need your help on this one. This location had a light fixture hanging on the ceiling that looks like this: So when I took this light fixture out and looked inside the ceiling to see what is there, here is what I found: The one I have looks much like this one, which has slides: But the problem is the opening is very wide, and when I put my hand in the hole to find the nearest wood, there is no wood there. My question here is: do I need to replace this with the different ceiling fan support box with brace kit I bought from Home Depot a few weeks ago? That one is different from the one above, and it is heavy. Should I replace the old brace with the slides with the new one, or should I just leave it and put the fan up? A: Wanted everyone to know that we hired somebody, who told us that it is not possible to put up the ceiling fan. We decided to return the light fixture.
Mid
[ 0.541567695961995, 28.5, 24.125 ]
AFS Troubleshooting My AFS Products Registering the precision farming products you own will keep all of those products nicely organized in the My AFS Products section. This will allow you to quickly find the information you need for your products when you have more important things to do. Specifications Select up to three models to view at once. For more than 3 models, scroll left and right to see more specs. If you would like to compare specs between different Case IH equipment or against competitive equipment, use our Compare Specs Tool. Genuine Case IH Parts & Service Only genuine Case IH parts were made for your machine and designed for peak performance. Find everything you need, from filters, fluids, shop products and safety equipment to owner's manuals, parts diagrams, paint, and batteries, at the Case IH online parts store.
Mid
[ 0.598360655737704, 36.5, 24.5 ]
NO. 12-17-00241-CR IN THE COURT OF APPEALS TWELFTH COURT OF APPEALS DISTRICT TYLER, TEXAS STEPHYN CORNELL PRINE, § APPEAL FROM THE 3RD APPELLANT V. § JUDICIAL DISTRICT COURT THE STATE OF TEXAS, APPELLEE § ANDERSON COUNTY, TEXAS MEMORANDUM OPINION Stephyn Cornell Prine appeals his convictions for continuous sexual abuse of a child and sexual assault of a child. In one issue, he argues that his punishment is excessive and grossly disproportionate to the crimes for which he was convicted. We affirm. BACKGROUND Appellant was charged by indictment with one count of continuous sexual abuse of a child, a first degree felony punishable by not less than twenty-five years but not more than ninety-nine years or life imprisonment, and two counts of sexual assault of a child, a second degree felony, punishable by not less than two years but not more than twenty years imprisonment. Appellant entered a plea of “not guilty” and the case proceeded to a jury trial. The jury returned a verdict of “guilty” on the continuous sexual abuse of a child count and on one of the sexual assault of a child counts. The jury assessed punishment at ninety-nine years imprisonment on the continuous sexual abuse count, and twenty years imprisonment on the sexual assault of a child count, to run concurrently. This appeal followed. CRUEL AND UNUSUAL PUNISHMENT In his sole issue, Appellant argues that the ninety-nine and twenty year sentences recommended by the jury and imposed by the trial court are grossly disproportionate to the crimes committed and amount to cruel and unusual punishment. “To preserve for appellate review a complaint that a sentence is grossly disproportionate, constituting cruel and unusual punishment, a defendant must present to the trial court a timely request, objection, or motion stating the specific grounds for the ruling desired.” Kim v. State, 283 S.W.3d 473, 475 (Tex. App.—Fort Worth 2009, pet. ref’d); see also Rhoades v. State, 934 S.W.2d 113, 120 (Tex. Crim. App. 1996) (waiver of complaint of cruel and unusual punishment under the Texas Constitution because defendant presented his argument for first time on appeal); Curry v. State, 910 S.W.2d 490, 497 (Tex. Crim. App. 1995) (defendant waived complaint that statute violated his rights under the United States Constitution when raised for first time on appeal); Mays v. State, 285 S.W.3d 884, 889 (Tex. Crim. App. 2009) (“Preservation of error is a systemic requirement that a first-level appellate court should ordinarily review on its own motion[;] ... it [is] incumbent upon the [c]ourt itself to take up error preservation as a threshold issue.”); TEX. R. APP. P. 33.1. A review of the record indicates that Appellant did not object to the constitutionality of his sentence at the trial court level, and has, therefore, failed to preserve error for appellate review. See Kim, 283 S.W.3d at 475; see also Rhoades, 934 S.W.2d at 120; Curry, 910 S.W.2d at 497; Mays, 285 S.W.3d at 889; TEX. R. APP. P. 33.1. Despite Appellant’s failure to preserve error, we conclude his sentences do not constitute cruel and unusual punishment. The Eighth Amendment to the Constitution of the United States provides that “[e]xcessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.” U.S. CONST. amend. VIII. This provision was made applicable to the states by the Due Process Clause of the Fourteenth Amendment. Meadoux v. State, 325 S.W.3d 189, 193 (Tex. Crim. App. 2010) (citing Robinson v. California, 370 U.S. 660, 666-67, 82 S. 
Ct. 1417, 1420-21, 8 L. Ed. 2d 758 (1962)). The legislature is vested with the power to define crimes and prescribe penalties. See Davis v. State, 905 S.W.2d 655, 664 (Tex. App.—Texarkana 1995, pet. ref’d); see also Simmons v. State, 944 S.W.2d 11, 15 (Tex. App.—Tyler 1996, pet. ref’d). Courts have repeatedly held that a punishment which falls within the limits prescribed by a valid statute is not excessive, cruel, or unusual. See Harris v. State, 656 S.W.2d 481, 486 (Tex. Crim. App. 1983); Jordan v. State, 495 S.W.2d 949, 952 (Tex. Crim. App. 1973); Davis, 905 S.W.2d at 664. In this case, Appellant was convicted of continuous sexual abuse of a child and sexual assault of a child, the punishment ranges for which are twenty-five to ninety-nine years, or life imprisonment and two to twenty years imprisonment, respectively. See TEX. PENAL CODE ANN. §§ 12.33(a), 21.02(h), 22.011(f) (West 2011 and West Supp. 2017). Thus, the sentences recommended by the jury and imposed by the trial court fall within the range set forth by the legislature. Therefore, the punishment is not prohibited as cruel, unusual, or excessive per se. See Harris, 656 S.W.2d at 486; Jordan, 495 S.W.2d at 952; Davis, 905 S.W.2d at 664. Nevertheless, Appellant urges this Court to perform the three part test originally set forth in Solem v. Helm, 463 U.S. 277, 103 S. Ct. 3001, 77 L. Ed. 2d 637 (1983). Under this test, the proportionality of a sentence is evaluated by considering (1) the gravity of the offense and the harshness of the penalty, (2) the sentences imposed on other criminals in the same jurisdiction, and (3) the sentences imposed for commission of the same crime in other jurisdictions. Id., 463 U.S. at 292, 103 S. Ct. at 3011. The application of the Solem test has been modified by Texas courts and the Fifth Circuit Court of Appeals in light of the Supreme Court’s decision in Harmelin v. Michigan, 501 U.S. 957, 111 S. Ct. 2680, 115 L. Ed. 2d 836 (1991) to require a threshold determination that the sentence is grossly disproportionate to the crime before addressing the remaining elements. See, e.g., McGruder v. Puckett, 954 F.2d 313, 316 (5th Cir. 1992), cert. denied, 506 U.S. 849, 113 S. Ct. 146, 121 L. Ed. 2d 98 (1992); see also Jackson v. State, 989 S.W.2d 842, 845-46 (Tex. App.—Texarkana 1999, no pet.). We are guided by the holding in Rummel v. Estelle in making the threshold determination of whether Appellant’s sentences are grossly disproportionate to his crimes. 445 U.S. 263, 100 S. Ct. 1133, 63 L. Ed. 2d 382 (1980). In Rummel, the Supreme Court considered the proportionality claim of an appellant who had received a mandatory life sentence under a prior version of the Texas habitual offender statute for a conviction of obtaining $120.75 by false pretenses. See id., 445 U.S. at 266, 100 S. Ct. at 1135. In that case, the appellant received a life sentence because he had two prior felony convictions—one for fraudulent use of a credit card to obtain $80 worth of goods or services and the other for passing a forged check in the amount of $28.36. Id., 445 U.S. at 265-66, 100 S. Ct. at 1134–35. After recognizing the legislative prerogative to classify offenses as felonies and, further, considering the purpose of the habitual offender statute, the court determined that the appellant’s mandatory life sentence did not constitute cruel and unusual punishment. Id., 445 U.S. at 284-85, 100 S. Ct. at 1144-45. 
In this case, the offenses committed by Appellant—continuous sexual abuse of a child and sexual assault of a child—are certainly more serious than the combination of offenses committed by the appellant in Rummel, while Appellant’s ninety-nine year and twenty year sentences are no more severe than the life sentence upheld by the Supreme Court in Rummel. Thus, it is reasonable to conclude that if the sentence in Rummel is not constitutionally disproportionate, neither are the sentences assessed against Appellant in this case. Because we do not conclude that Appellant’s sentences are disproportionate to his crimes, we need not apply the remaining elements of the Solem test. Appellant’s sole issue is overruled. DISPOSITION Having overruled Appellant’s sole issue, we affirm the trial court’s judgment. JAMES T. WORTHEN Chief Justice Opinion delivered April 11, 2018. Panel consisted of Worthen, C.J., Hoyle, J., and Neeley, J. (DO NOT PUBLISH) COURT OF APPEALS TWELFTH COURT OF APPEALS DISTRICT OF TEXAS JUDGMENT APRIL 11, 2018 NO. 12-17-00241-CR STEPHYN CORNELL PRINE, Appellant V. THE STATE OF TEXAS, Appellee Appeal from the 3rd District Court of Anderson County, Texas (Tr.Ct.No. 3CR-16-32998) THIS CAUSE came to be heard on the appellate record and briefs filed herein, and the same being considered, it is the opinion of this court that there was no error in the judgment. It is therefore ORDERED, ADJUDGED and DECREED that the judgment of the court below be in all things affirmed, and that this decision be certified to the court below for observance. James T. Worthen, Chief Justice. Panel consisted of Worthen, C.J., Hoyle, J., and Neeley, J.
Low
[ 0.42647058823529405, 21.75, 29.25 ]
Background
==========

The yeast *Yarrowia lipolytica* is a hemiascomycete and represents a homogeneous phylogenetic group with physiological and ecological diversity \[[@B1]\]. It is a non-conventional yeast, often used in research, and is distantly related to *Candida glabrata*, *Kluyveromyces lactis* and *Debaryomyces hansenii*. Strains of *Y. lipolytica* can produce significant amounts of intra- or extra-cellular metabolites including vitamins, lipases, storage lipids, citric acid and pyruvic acid, and can be used for biodegradation of various wastes (e.g., olive-mill waters and raw glycerol) \[[@B2]-[@B6]\].

3,4-dihydroxy phenyl L-alanine (L-dopa) is a drug used for Parkinson's disease, and is capable of changing the enzymes of energy metabolism of myocardium following neurogenic injury. The process of bioconversion of L-tyrosine to L-dopa in microorganisms is generally slow, but is accelerated by a small amount of L-dopa in the broth \[[@B7]\]. L-dopa has also been produced with *Erwinia herbicola* cells carrying a mutant transcriptional regulator TyrR from pyrocatechol and DL-serine \[[@B8],[@B9]\]. It can also be produced using L-tyrosine as a substrate, tyrosinase as a biocatalyst and L-ascorbate as the reducing agent \[[@B10],[@B11]\]. The general reaction is the tyrosinase-catalysed o-hydroxylation of L-tyrosine to L-dopa, with L-ascorbate as the reducing agent:

L-tyrosine + O~2~ → L-dopa (tyrosinase; L-ascorbate as reductant)

Tyrosinases (monophenol, o-diphenol:oxygen oxidoreductase, EC 1.14.18.1) belong to a larger group of type-3 copper proteins, which includes catecholoxidases and oxygen-carrier haemocyanins \[[@B12]\]. Tyrosinases are involved in the melanin pathway and are responsible for the first steps of melanin synthesis from L-tyrosine, leading to the formation of L-dopaquinone and L-dopachrome \[[@B13]\]. Tyrosinases catalyse the o-hydroxylation of monophenols (cresolase or "monophenolase" activity) and the ensuing oxidation of the resulting o-diphenols to o-quinones, at the expense of molecular oxygen. Subsequently, the o-quinones undergo non-enzymatic reactions with various nucleophiles, producing intermediates \[[@B14]\].

The immobilization of tyrosinases on solid supports can increase enzyme stability \[[@B15]-[@B19]\], protect tyrosinase from inactivation by reaction with quinones (preserving them from proteolysis) \[[@B20]\], improve thermal stability of fungal tyrosinases \[[@B21]\], and increase activity in comparison to soluble enzymes \[[@B22]\].

Diatomite (2:1 clay mineral) is a naturally occurring, soft, chalk-like sedimentary rock that is easily crumbled into a fine, off-white powder and has K^+^ in the interlayer. This powder has an abrasive feel similar to pumice and is light-weight due to its porosity. Adding diatomite to the reaction could increase substrate uptake and the enzyme production rate, with concomitant L-dopa production. We have previously reported the effect of cresoquinone and vermiculite on the microbial transformation of L-tyrosine to L-dopa by *Aspergillus oryzae* \[[@B23],[@B24]\]. In the present study, different concentrations of diatomite were added to the reaction mixture to achieve a high-performance transformation of L-tyrosine to L-dopa. *A. oryzae* is the organism typically used for L-dopa production; the easy handling, rapid growth rate and environmentally friendly nature of alternative yeasts such as *Y. lipolytica* have created interest in their use for fermentation. Because tyrosinases are intracellular enzymes, pre-grown cells harvested from fermented broth were used for the microbiological transformation of L-tyrosine to L-dopa. 
Results and discussion
======================

The production of L-dopa is largely dependent on the addition of specific additives and minerals to the reaction mixture. The inductive effect of diatomite on the transformation of L-tyrosine to L-dopa by *Yarrowia lipolytica* NRRL-143 was investigated (Figure [1](#F1){ref-type="fig"}). The concentration of diatomite added at the start of the biochemical reaction ranged from 0.5--3.0 mg/ml, along with 3.5 mg/ml L-tyrosine. A biomass concentration of 3.0 mg/ml was used as the source of the intracellular enzyme tyrosinase in a 50 min reaction. The highest production of L-dopa (1.64 mg/ml, produced with 2.90 mg/ml consumption of L-tyrosine) was observed with 2.0 mg/ml diatomite. L-dopa production fell while substrate consumption continued to rise, probably due to catecholase activity causing L-dopa to be used for quinone production, since ascorbic acid (which inhibits this activity) was not being replaced in the system. In some enzyme systems, disaccharides or higher molecular weight substrates have been found to be the best supporters of intracellular enzymes \[[@B25],[@B26]\]. It was hypothesized that tyrosinase, a constitutive enzyme, was altered with respect to production of L-dopa in the presence of added diatomite.

![The effect of different diatomite concentrations on L-dopa production by *Y. lipolytica* NRRL-143 (L-tyrosine consumed -∘-, L-Dopa produced -×-). A total of 3.5 mg/ml L-tyrosine, 3.0 mg/ml cell biomass and varying diatomite amounts were added at the start of the biochemical reaction. The total reaction time was 50 min at 50°C.](1472-6750-7-50-1){#F1}

The effects of delayed diatomite addition (2.0 mg/ml; 0, 5, 10, 15, 20, 25 min) into the *Y. lipolytica* NRRL-143 reaction were also investigated (Figure [2](#F2){ref-type="fig"}). Reactions were performed aerobically with 3.0 mg/ml cell biomass and 3.5 mg/ml L-tyrosine for 50 min. Production of L-dopa increased from 5 to 15 min after the addition of diatomite; a significant decrease of L-dopa (1.68--2.14 mg/ml) was noticed 20--25 min after the addition. Maximum L-dopa (2.96 mg/ml) was obtained 15 min after the addition of diatomite into the reaction mixture, with concomitant tyrosine consumption of 2.94 mg/ml, a 35% increase when compared to the control, which is highly significant (p ≤ 0.05). The L-tyrosine substrate has binding affinity with diatomite, which induces tyrosinase secretion, improves its availability and ultimately leads to an increased L-dopa production rate \[[@B7],[@B11],[@B13],[@B24]\]. In our experiment, the addition of diatomite 15 min after reaction commencement was identified as optimal, increasing production of L-dopa and substrate utilization and optimizing the time of reaction. However, L-dopa production dropped (1.68 mg/ml, with 3.14 mg/ml L-tyrosine consumption) when diatomite was added 25 min after the start of reaction, probably due to conversion of unstable L-dopa to dopamine, melanin and other pigmented products \[[@B10],[@B13]\] after a reduced availability of the enzyme.

![The effect of time of addition of diatomite on L-dopa production by *Y. lipolytica* NRRL-143 (L-tyrosine consumed -∘-, L-Dopa produced -×-). A total of 2.0 mg/ml diatomite was added to 3.5 mg/ml L-tyrosine and 3.0 mg/ml cell biomass. The total reaction time was 50 min at 50°C.](1472-6750-7-50-2){#F2}

The consumption of L-tyrosine, however, continued to increase regardless of the time of diatomite addition. 
The tyrosinase active center comprises dinuclear copper coordinated by histidine residues; chelating substances, or substances that associate with this metal (as quinones do), are irreversible inhibitors and/or inactivators of this enzyme \[[@B12]\]. The addition of diatomaceous earth may remove these inhibitors and/or inactivators by active absorption. The absorption of inhibitors increased the enzyme activity of tyrosinases, β-carboxylases and tyrosine hydroxylases, which was important for the catabolism of L-tyrosine to L-dopa under controlled conditions. Our data substantiate one previous report \[[@B25]\] but contrast with another \[[@B26]\], in which the production of L-dopa was achieved in minimal medium without additive supplementation (pH 7.0). Previous research efforts to produce L-dopa by the addition of 0.16 μg vermiculite during the reaction obtained 0.39--0.54 mg/ml of the desired product \[[@B27]\].

The time course of L-dopa production and L-tyrosine consumption was carried out at different incubation periods (10--60 min) using a hotplate with magnetic stirrers (Figure [3](#F3){ref-type="fig"}). The control gave a maximum of 0.50 mg/ml L-dopa with 1.14 mg/ml consumption of L-tyrosine. The maximum conversion rate (3.20 mg/ml L-dopa with 3.26 mg/ml tyrosine consumption) was obtained with 2.0 mg/ml diatomite added 15 min after the start of reaction, producing a 72% higher yield of L-dopa compared to the control. The L-dopa production from this time course differed significantly (p ≤ 0.05) from the results at all other incubation periods. It is clear that up to 30 min, cresolase activity predominated; thereafter, given the non-replacement of ascorbic acid, the overriding activity was catecholase, which consumed the L-tyrosine substrate without a corresponding production of L-dopa. After 40--60 min of incubation, the production of L-dopa and the consumption of L-tyrosine decreased gradually in the control and test reactions. This reduction might be because the L-dopa and residual L-tyrosine were changed into other metabolites such as dopamine, melanin and eventually melanosine. Another study \[[@B25]\] achieved 0.12 mg/ml of L-dopa 90 min after the biochemical reaction. The present finding of 3.20 mg/ml L-dopa after 30 min of incubation is a major improvement. In the present study, dopamine and melanin were also produced, but their highest production rates were 0.014 and 0.01 mg/ml/h, respectively.

![Time course of L-dopa production and L-tyrosine consumption by *Y. lipolytica* NRRL-143 (L-tyrosine consumed -∘-, L-Dopa produced -×-). a. Control (3.5 mg/ml L-tyrosine and 3.0 mg/ml cell biomass). b. Test (2.0 mg/ml diatomite added 15 min after the start of reaction to 3.5 mg/ml L-tyrosine and 3.0 mg/ml cell biomass). The total reaction time was 50 min at 50°C.](1472-6750-7-50-3){#F3}

Conversion of L-tyrosine to L-dopa is an enzyme catalyzed reaction. Figure [4](#F4){ref-type="fig"} shows the effect of the addition of different concentrations of drenched cell biomass (1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5 mg/ml) on the production of L-dopa from L-tyrosine in the reaction mixture. The best results (3.48 mg/ml L-dopa with 3.25 mg/ml L-tyrosine consumption) were obtained using 2.5 mg/ml wet weight yeast cells, leading to 10-fold higher productivity when compared to the control (0.72 mg/ml L-dopa with 1.22 mg/ml L-tyrosine consumption). 
At this concentration (2.5 mg/ml), most of the added tyrosine was converted to L-dopa, as indicated by the small amount of residual substrate (0.25 mg/ml), which is highly significant (p ≤ 0.05). In the present investigation, the increased cell biomass enhanced enzymatic activity (1.55 U/mg tyrosinase). However, increasing the cellular concentration beyond the optimal led to a sharp decrease in activity, probably because the cell concentration (proportional to enzyme concentration) increased while the concentration of the inhibitor of catecholase activity (ascorbic acid) remained constant. This product is the substrate for the second reaction catalyzed by this enzyme (catecholase activity), which leads to the formation of quinones from L-dopa. Only an excessive amount of ascorbic acid, continually replaced throughout the reaction, might stop this second activity from taking place and prevent the formation of quinones, which are also suicide inactivators of this enzyme. Previous research \[[@B28]\] pointed out that tyrosinase activity is directly related to the concentration of cells or mycelia in the reaction mixture in slightly acidic to neutral reaction conditions. Copper atoms found at the active site of tyrosinase are an essential requirement for catalytic activity. Agents such as carbon monoxide or toxins indirectly inhibit tyrosinase activity by chelating copper and abrogating its ability to bind oxygen. Previous research \[[@B8],[@B12],[@B13]\] has shown that tyrosine phenol lyase (*tpl*) is only synthesized under L-tyrosine-induced conditions. The addition of L-tyrosine to the medium was found unavoidable when preparing cells (the enzyme source), but severely impeded preparation of pure L-dopa \[[@B24]\].

![The effect of various levels of cell biomass (*Y. lipolytica* NRRL-143) on L-dopa production (L-tyrosine consumed -∘-, L-Dopa produced -×-). a. Control (3.5 mg/ml L-tyrosine only). b. Test (3.5 mg/ml L-tyrosine and 2.0 mg/ml diatomite added 15 min after the start of reaction). The total reaction time was 50 min at 50°C.](1472-6750-7-50-4){#F4}

A comparison of production parameters for the effect of diatomite addition on bioconversion of L-tyrosine to L-dopa is shown in Table [1](#T1){ref-type="table"}. An overall 12.5-fold increase in L-dopa production (with 4.06 mg/ml proteins) was achieved at the optimal level of added diatomite when compared to the control. The optimal pH of the control reaction without added diatomite was 3.5; however, the test reaction with added diatomite was proficient over a pH range of 2.5--4.0, indicating the enzyme remained active despite the change in reaction pH. The Y~p/s~ value (with 2.0 mg/ml diatomite added 15 min after the start of reaction) was significantly improved over the control. Maximum substrate consumption (Q~s~) in terms of volumetric rate was marginally different between the control and test reactions during bioconversion, indicating maximum enzyme activity at this level of diatomite addition. The increase of q~s~ (i.e., the specific substrate consumption rate) with diatomite addition was highly significant (p ≤ 0.05). In the present study, the optimal values of all kinetic parameters (Y~p/s~, Q~s~ and q~s~) were several-fold improved over those reported from *Aspergillus* or *Cellulomonas* spp. \[[@B7],[@B10],[@B28]\].

###### Comparison of parameters for L-dopa production by *Y. lipolytica* NRRL-143

  Production parameters\*             Control   Test\*\*
  ----------------------------------- --------- ----------
  Proteins (mg/ml)                    0.34      4.06
  Y~p/s~                              0.590     1.071
  Q~s~                                0.073     0.329
  q~s~                                0.002     0.011
  Optimal pH^\$^                      3.5       2.5--4.0
  Max. L-dopa production (mg/ml)      0.28      3.48
  Level of significance \<p\>\*^\$^   \-        HS

\*Kinetic parameters: Y~p/s~ = mg L-dopa produced/mg substrate consumed, Q~s~ = mg substrate consumed/ml/h, q~s~ = mg substrate consumed/mg cells/h. \*\*2.0 mg/ml diatomite added 15 min after the start of reaction. ^\$^Acetate buffer. \*^\$^\<p\> is the significance level (≈0.05) on the basis of probability. HS denotes that the values are highly significant.

Conclusion
==========

In the present studies, *Yarrowia lipolytica* strain NRRL-143 was exploited for L-dopa production. The addition of 2.0 mg/ml diatomite (2:1 clay mineral) markedly improved the microbiological transformation of L-tyrosine to L-dopa. Diatomite addition 15 min after the start of reaction produced a 35% higher substrate conversion rate compared to the control (p ≤ 0.05). A biomass concentration of 2.5 mg/ml and a reaction time of 30 min were also identified as optimal. Because production of L-dopa is a high-cost, low-yield process, scaled-up studies are a prerequisite for commercial exploitation.

Methods
=======

Microorganism and growth conditions
-----------------------------------

*Yarrowia lipolytica* strain NRRL-143 was grown on yeast extract agar slants (pH 5.4) and stored in a cold-cabinet (Model: 154P, Sanyo, Tokyo, Japan) at 4°C. Two hundred milliliters of cultivation medium containing (% w/v) glucose (2.0), polypeptone (1.0), NH~4~Cl (0.3), KH~2~PO~4~ (0.3), MgSO~4~·7H~2~O (0.02) and yeast extract (1.0) (pH 5.5) were dispensed into individual 1.0 L Erlenmeyer flasks. The medium was autoclaved at 15 psi (121°C) for 20 min and seeded with 1.0 ml of yeast suspension (1.25 × 10^6^ cells/ml). The flasks were incubated in a rotary shaking incubator (200 rpm) at 30°C for 48 h. A biomass ranging from 18--20 g/l was produced, while 0.25% (w/v) glucose remained unconsumed in the broth at 48 h of cultivation. Cells were harvested by centrifugation at 16,000 rpm (15,431 × g), washed free of adhering medium with ice-cold water (4°C), dried in filter paper folds (Whatman 44, Brazil) and stored at -35°C in an ultra-low freezer (Model: UF-12, Shimadzu, Tokyo, Japan).

Biochemical reaction and critical phases
----------------------------------------

The production of L-dopa from L-tyrosine was carried out in acetate buffer (pH 3.5, 50 mM) containing (mg/ml) L-tyrosine (3.5), L-ascorbic acid (5.0) and intact cells (3.0), dispensed into a 1.25 L capacity reaction vessel (Model: 2134-nmn, Perkin Elmer, NY, USA) with a working volume of 0.75 L. Different diatomite (Sigma, St. Louis, USA) concentrations (0.5--3.0 mg/ml) were added to the reaction mixture at different time intervals (5--25 min). Reactions were carried out aerobically (1.25 l/l/min air supply, 0.5% dissolved oxygen) on a digital hot plate with magnetic stirrers (Model: G542i, Inolab, Bonn, Germany) at 50°C for different time intervals (10--60 min). The level of dissolved oxygen (DO) was measured using a rotameter equipped with a DO-sensor (Model: RM10, Inolab, Bonn, Germany).

Assay methods
-------------

The mixture was withdrawn from each reaction vessel, centrifuged at 9,000 rpm (8,332 × g) for 15 min, and the clear supernatant was kept in the dark at ambient temperature (\~20°C). 
Determination of tyrosinase activity
------------------------------------

Tyrosinase activity was determined following a previously described method \[[@B29]\]. Briefly, potassium phosphate buffer (2.60 ml, 50 mM), 0.10 ml L-catechol, 0.10 ml L-ascorbic acid and 0.10 ml EDTA were mixed by inversion and equilibrated to 25°C. The ΔA~265\ nm~ was monitored until constant, followed by the addition of 100 μl of reaction broth. The decrease in ΔA~265\ nm~ was recorded for approximately 5 min. The ΔA~265\ nm~ was obtained using the maximum linear rate for both the test and control. Enzyme activity was determined with the following formula:

$$\text{Units/mg~enzyme} = \frac{\Delta\text{A}_{265\text{~nm}}\text{/min~test} - \Delta\text{A}_{265\text{~nm}}\text{/min~control}}{0.001\text{~mg~enzyme/reaction~mixture}}$$

One enzyme unit
---------------

One unit of tyrosinase activity is equal to a ΔA~265\ nm~ of 0.001 per min at pH 6.5 and 25°C in a 3.0 ml reaction mixture containing L-catechol and L-ascorbic acid.

Determination of L-dopa production and L-tyrosine consumption
-------------------------------------------------------------

L-dopa production and L-tyrosine consumption were determined following procedures previously described \[[@B10],[@B30]\].

### a) L-dopa

One milliliter of supernatant from the reaction mixture was added to 1.0 ml of 0.5 N HCl along with 1.0 ml of nitrite molybdate reagent (10% w/v sodium nitrite + 10% w/v sodium molybdate), upon which a yellow coloration appeared, followed by the addition of 1.0 ml of 1.0 N NaOH, upon which a red coloration appeared. The total volume was brought to 5.0 ml with distilled water. Transmittance (%) was measured using a double beam UV/VIS scanning spectrophotometer (Cecil-CE 7200-series, Aquarius, London, UK) at a 456 nm wavelength, and the amount of L-dopa produced was determined from the standard curve.

### b) L-tyrosine

One milliliter of the supernatant from the same reaction mixture was added to 1.0 ml of mercuric sulphate reagent (15% w/v mercuric sulphate prepared in 5.0 N H~2~SO~4~). The test tubes were placed in a boiling water bath for 10 min and then cooled to ambient temperature. A total of 1.0 ml of nitrite reagent (0.2% w/v sodium nitrite) was added to each tube, followed by the addition of distilled water to a final volume of 5.0 ml. Transmittance was measured by spectrophotometer (546 nm wavelength), with the amount of residual L-tyrosine determined from the tyrosine standard curve.

Determination of protein content
--------------------------------

Protein in the reaction broth (with and without diatomite addition) was determined using Bradford reagent \[[@B31]\] with bovine serum albumin (BSA) as a standard.

Kinetic and statistical depiction
---------------------------------

Kinetic parameters for L-dopa production and L-tyrosine consumption were studied as described previously \[[@B32]\]. The product yield coefficient (Y~p/s~) was determined using the relationship Y~p/s~ = dP/dS, while the volumetric rate of substrate utilization (Q~s~) was determined from the maximum slope in a plot of substrate utilized vs. time of biomass cultivation. The specific rate constant for substrate utilization (q~s~) was calculated by the equation q~s~ = μ × Y~s/x~. The significance of results is presented in the form of probability, using post-hoc multiple ranges under analysis of variance \[[@B33]\].

Abbreviations
=============

L-dopa, 3,4-dihydroxy phenyl L-alanine; rpm, revolutions per minute; EDTA, ethylene diamine tetra acetic acid; BSA, bovine serum albumin. 
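To make the unit definition and kinetic relationships above concrete, here is a minimal Python sketch of the same calculations. It is an illustration, not code from the study: the function names are invented, and the only worked numbers are the Y~p/s~ inputs quoted in the Results (3.48 mg/ml L-dopa produced from 3.25 mg/ml L-tyrosine consumed).

```python
# Illustrative sketch of the calculations defined in the Methods above.

def tyrosinase_units_per_mg(dA_test_per_min, dA_control_per_min,
                            mg_enzyme=0.001):
    """Units/mg enzyme = (dA265/min test - dA265/min control) / mg enzyme,
    where one unit corresponds to a dA265 of 0.001 per min."""
    return (dA_test_per_min - dA_control_per_min) / mg_enzyme

def product_yield_coefficient(dP, dS):
    """Y_p/s = dP/dS: mg L-dopa produced per mg L-tyrosine consumed."""
    return dP / dS

def volumetric_substrate_rate(dS, hours):
    """Q_s: mg substrate consumed per ml per h (slope of substrate vs. time)."""
    return dS / hours

def specific_substrate_rate(mu, Y_s_x):
    """q_s = mu * Y_s/x: mg substrate consumed per mg cells per h."""
    return mu * Y_s_x

# Worked check against Table 1 (test reaction):
Y_ps = product_yield_coefficient(dP=3.48, dS=3.25)
print(round(Y_ps, 3))  # 1.071, matching the Test column of Table 1
```

The Q~s~ and q~s~ entries in Table 1 depend on the time-course slope and growth-rate data, which are not reproduced in full here, so only Y~p/s~ is re-derived.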
Authors' contributions
======================

SA conceived of the study; JS provided the critical review and also helped in the interpretation of results; HI helped in the funding and also gave necessary guidelines for the research work. All authors read and agreed to the final manuscript.

Acknowledgements
================

The authors gratefully acknowledge Dr. CP Kurtzman, Microbial Genomics and Bioprocessing Research Unit, Peoria, Illinois, USA, for providing the culture (*Y. lipolytica* NRRL-143).
Mid
[ 0.598290598290598, 35, 23.5 ]
Entering a Church I am a university student and have a perplexing problem. Our Hillel House is going to be torn down next year, and while the new building is being built the students will be using a Catholic church for Shabbat dinner and services. I feel it would be very uncomfortable for me, and I think it is wrong for a Jew to pray in a church. How should I handle this if I cannot persuade the rabbi to find another place for Shabbat services? Also, what is the permissibility of entering church buildings for secular purposes, like voting or a musical performance? This is an issue that comes up here in Boston on "First Night," a New Year's Eve celebration when there are many performances held in various church-related buildings downtown. The Aish Rabbi Replies: The Torah declares that a Jew is not allowed to benefit from anything associated with idolatry. This would include a church, because the worship of a physical form (Jesus) as God constitutes a violation of the Ten Commandments' prohibition of idolatry. (sources: Avney Yashpeh 153:1; Darkey Teshuva 150:2) I think it is important to understand why, throughout the centuries, our Jewish ancestors chose to be killed rather than convert to another religion (e.g. in the Spanish Inquisition). Why didn't they just "fake it" – i.e. pretend to convert, but really remain Jewish in their hearts? The reason is that one must not even give the impression of subscribing to another religion. We don't live in an isolated, compartmentalized world. Rather, we are a community and a nation – and that puts each of us in a position to inspire others and lift the baseline of behavior. One person's actions – even those misconstrued – can generate either good or bad PR for God and the Jewish people. This has implications for a variety of situations, including entering a church. I suggest that you discuss your concerns respectfully with the rabbi. If he cannot appreciate the problem, that may be a sign you should find another place to spend Shabbat.
Mid
[ 0.5602968460111311, 37.75, 29.625 ]
Science discovers basic identities. But the identities it discovers just are the way things are. Why is a thing the thing it is? It just is. As Bishop Butler put it: ‘Every thing is what it is, and not another thing’. This sounds mysterious, but it is not. Why is visible light actually electromagnetic radiation rather than something else entirely? Why is temperature mean molecular kinetic energy, rather than something else? Science does not offer explanations for basic identities. Rather, the discovery is that two descriptions refer to one and the same thing; or that two different measuring instruments measure in fact one and the same thing. There is no basic set of laws from which to derive that visible light is electromagnetic radiation or that temperature is mean molecular kinetic energy. Why is Venus Venus? Why is the Morning Star identical to the Evening Star? It just is. Moving, causing, surviving. That’s why animals have a central nervous system. And that’s how a religious person ought to lose faith in God: on the move. Simulation (mimicry) is included under ‘moving’. The best way forward for a religious person who already doubts his faith, but doesn’t know how to go on, is to enter into learning relationships with atheists. Such relationships, mediated by goodwill and the sincere desire to learn, allow the religious doubter to ‘try out’ atheism, to simulate it for its effects on self and others. Multiple simulations should be attempted. Slow cure is all-important. These experiences must be largely positive to induce attachment. Sudden and dramatic loss of faith almost never happens, if ever, for the reward system in the brain needs to re-tune itself out of the current attractor-category (religion) and into the new attractor-category (atheism). This change takes time; sometimes years. To lose faith in God, you need to do something. You do this by first copying others who are already masters of the game. How do we think about reality in a way that improves upon the old ways? There is good news here: it is not entirely up to you to improve reality. Your children, and their children, will do the job. So, sit back a little. Enjoy the ride! Human beings have the unique capacity to play life’s ‘ratchet game’. Children learn the best society has to offer, and can improve upon it. And your children’s children can start where your children left off. And so on. My kids are already way ahead of me, since they started where I left off long, long ago, and also vastly ahead of Cro-Magnon humans. By contrast, chimpanzees start where their ancestors left off, and stay there. They don’t move from this place (chimps are still very cute, though). Thus, humans can produce science and technology, and pass it on to their descendants. This gives human beings the chance to deploy science and AI tech to create increasingly accurate representations of ‘mind’, ‘DNA’, ‘autism’, ‘pain’, ‘happiness’, and so on. The ratchet game takes us beyond the familiar into exciting new territories. (I wonder: Can academic philosophy play life’s ‘ratchet game’? It seems to me that philosophy is not terribly good at reaching out to other disciplines, and learning from them in the way that children naturally learn from parents.)
Mid
[ 0.538922155688622, 33.75, 28.875 ]
Sen. Ted Cruz (R-TX) was the target of a “lynch mob” of angry Senate Republicans during a closed-door meeting earlier this week. The New York Times reported that Sen. Kelly Ayotte (R-NH) asked Cruz at Wednesday’s meeting to renounce attacks by an unspecified conservative group and to explain his strategy to win the battle to defund the Affordable Care Act. Sens. Dan Coats (R-IN) and Ron Johnson (R-WI) joined in, blasting the first-year Texas senator for championing the effort that helped lead to the government shutdown. Senate Republican Leader Mitch McConnell (R-KY) piled on after Cruz failed to offer an explanation of his strategy. “It just started a lynch mob,” one senator told the New York Times. Politico reported earlier this week that the Senate Conservatives Fund, a group founded by former Sen. Jim DeMint, had attacked 25 Senate Republicans for supporting a procedural vote that the group counted as support of the health law. DeMint, a South Carolina Republican who quit the Senate earlier this year to become president of the conservative Heritage Foundation, had been helping to promote the plan to tie implementation of the health care law to negotiations to fund the federal government or increase the debt ceiling.
Low
[ 0.509278350515463, 30.875, 29.75 ]
Q: Android seekbar customize I am trying to customize an Android SeekBar (using API 17) so that the whole progress bar line is blue. I have created the following XML in res/drawable:

draw_seekbar_settings.xml

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item>
        <clip>
            <shape android:shape="line">
                <stroke
                    android:color="@color/progressFxBar"
                    android:width="2dp" />
            </shape>
        </clip>
    </item>
</selector>

and

<SeekBar
    android:id="@+id/t_seekbar"
    android:layout_width="220dp"
    android:layout_height="35dp"
    android:layout_centerHorizontal="true"
    android:max="100"
    android:progress="50"
    android:progressDrawable="@drawable/draw_seekbar_settings" />

The problem is that only half the progress bar is blue; the other half doesn't have any colour whatsoever. I would like to get this image: Instead I am getting this:

A: Change draw_seekbar_settings.xml as follows:

<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Define the background properties, like color etc. -->
    <item android:id="@android:id/background">
        <shape android:shape="line">
            <stroke
                android:color="@color/progressFxBar"
                android:width="2dp" />
        </shape>
    </item>
    <!-- Define the progress properties, like start color, end color etc. -->
    <!-- If you want to change the progress, edit the following -->
    <item android:id="@android:id/progress">
        <clip>
            <shape>
                <gradient
                    android:startColor="#007A00"
                    android:centerColor="#007A00"
                    android:centerY="1.0"
                    android:endColor="#06101d"
                    android:angle="270" />
            </shape>
        </clip>
    </item>
</layer-list>
High
[ 0.7148936170212761, 31.5, 12.5625 ]
1. Technical Field The present invention relates to an electronic structure, and associated method of fabrication, for coupling a heat spreader above a chip to a chip carrier below the chip. 2. Related Art A chip on a chip carrier may have a heat spreader on a top surface of the chip, such that the heat spreader is directly coupled to the chip carrier by an adhesive material that encapsulates the chip. If the heat spreader and the chip carrier have about a same coefficient of thermal expansion (CTE), then the adhesive material helps keep the chip carrier-chip-heat spreader structure approximately flat during thermal cycling. Nonetheless, cracking resulting from thermal cycling has been observed to occur at the surface of the chip carrier where a bounding surface of the adhesive material contacts the chip carrier. The cracking can propagate into the chip carrier and damage circuit lines within the chip carrier. A method and structure is needed for preventing said damage to said circuit lines within the chip carrier. The present invention provides a method for forming an electronic structure, comprising the steps of: providing a substrate, a chip on a surface of the substrate and coupled to the substrate, and a thermally conductive member; forming a fillet of at least one adhesive material on the chip and around a periphery of the chip and placing the thermally conductive member on a portion of the fillet and over a top surface of the chip, wherein the at least one adhesive material is uncured, wherein the fillet couples the thermally conductive member to the substrate, and wherein an outer surface of the fillet meets the surface of the substrate at a contact curve and makes an average contact angle θ1AVE with the surface of the substrate; and curing the at least one adhesive material, after which the outer surface of the fillet makes an average contact angle θ2AVE with the surface of the substrate such that θ2AVE does not exceed about 25 degrees. The present invention provides an electronic structure, comprising: a substrate; a chip on a surface of the substrate and coupled to the substrate; a fillet of at least one adhesive material on the chip and around a periphery of the chip, wherein an outer surface of the fillet meets the surface of the substrate at a contact curve and makes an average contact angle θAVE with the surface of the substrate, and wherein θAVE does not exceed about 25 degrees; and a thermally conductive member on a portion of the fillet and over a top surface of the chip, wherein the fillet couples the thermally conductive member to the substrate. The present invention provides a method and structure for coupling a heat spreader above a chip to a chip carrier below the chip in a manner that prevents damage to said circuit lines within the chip carrier during thermal cycling operations.
Mid
[ 0.5375854214123, 29.5, 25.375 ]
160 Ariz. 311 (1989) 772 P.2d 1164 Patricia DUNN, By and Through her Guardian and conservator, Daniel DUNN; Daniel Dunn, individually; Raymond S. Elliott, M.D.; Gynecology & Obstetrics of Mesa, Ltd., Petitioners, v. SUPERIOR COURT OF the State of Arizona, In and For the COUNTY OF MARICOPA, the Honorable Frederick J. Martone, a judge thereof, Respondent Judge, SAMARITAN HEALTH SERVICE, dba Desert Samaritan Hospital & Health Center; and Anca Maras, Real Parties in Interest. No. 1 CA-SA 88-275. Court of Appeals of Arizona, Division 1, Department A. April 20, 1989. *312 Lowry & Froeb, P.C. by Donald F. Froeb, Scottsdale, for petitioners. Lewis and Roca by John P. Frank, Robert J. Tolman, Foster Robberson, and Roger W. Kaufman, Phoenix, for real party in interest, Samaritan Health Service. Snell & Wilmer by Lonnie J. Williams, Jr., Stephen M. Hopkins and Patrick G. Byrne, Phoenix, for real party in interest, Anca Maras, M.D. OPINION JACOBSON, Judge. In this special action, petitioners seek review of an order entered by the civil presiding judge of the superior court dishonoring their notice of change of judge as untimely, and refusing to reassign their case for trial to the stipulated judge, as required by Rule 42(f), Arizona Rules of Civil Procedure. Extraordinary relief by special action is appropriate when a respondent judge is required to transfer a cause to another judge and fails to do so. See Helge v. Druke, 136 Ariz. 434, 436, 666 P.2d 534, 536 (App. 1983); Consolidated Carpet Corp. v. Superior Court, 13 Ariz. App. 429, 430, 477 P.2d 548, 549 (1970). In the exercise of our discretion, we therefore accept special action jurisdiction in this matter. Background Petitioners are plaintiffs, and real parties in interest are defendants, in the underlying personal injury suit in superior court. Trial in this matter had been set for January 9, 1989, before the Honorable Joseph D. Howe. The parties agree that, at a status conference on November 3, 1988, they discussed with Judge Howe the possibility of utilizing a settlement conference to dispose of this case. Judge Howe advised counsel that he could not act as both trial judge and settlement judge. Judge Howe asked if any party had a notice of change of judge remaining; when plaintiffs' counsel indicated that he did, Judge Howe indicated that he would honor such a request. Judge Howe memorialized this conversation in his minute entry as follows: "He [plaintiffs' counsel] asks if I am serious about honoring a notice of change of judge; I say yes, unless there is objection, in which case the matter comes back to me for decision, and if there is waiver the notice will be of no avail." A week later, on November 10, 1988, exactly sixty days prior to the scheduled trial date, the court and counsel met to further discuss the possibility of a settlement conference. Judge Howe's minute *313 entry for that date indicates the following discussion: Court and Counsel meet, intending informally to discuss settlement formats. The following are considered: 1. A preliminary position statement by each party to be submitted to the others. 2. Conference attended by all parties personally . . . to discuss settlement possibilities with this judge. 3. Same as # 2, except to a person other than the judge of this division. 4. A conference in which each party sets forth confidentially, to this judge or to another person, its position of maximum extension toward settlement. This format might include: .... d. 
understanding that if the conference includes this judge, he may disqualify from hearing trial, with the possible concomitant resulting loss of the Jan. 9, 1989, trial date; alternatively, the parties might agree in advance whether this judge should disqualify [himself]. On November 17, 1988, the parties stipulated that Judge Howe would be the settlement judge, and that in the event settlement failed Judge Nastro would be assigned as the trial judge. Judge Howe apparently rejected this stipulation. The court and counsel met next on November 23, 1988; the parties agree that, at that time, forty-seven days before trial, Judge Howe again said that he would honor a change of judge if one were filed. On December 1, 1988, forty days prior to trial, plaintiffs' counsel filed a notice of change of judge. That same day, the parties filed the following stipulation: The parties to this action, pursuant to Rule 42(f) of the Arizona Rules of Civil Procedure, hereby stipulate that upon the plaintiffs' exercise of their right to a change of judge, the action shall be re-assigned and transferred to the Honorable Daniel E. Nastro, who has advised all counsel that he is willing to have this action assigned to him, pursuant to Rule 42(f)(1)(F) of the Arizona Rules of Civil Procedure. Judge Howe, on December 1, 1988, acknowledged by minute entry that a notice of change of judge had been filed by plaintiffs, and ordered the case transferred to the civil presiding judge for reassignment to another division. The following day, the Honorable Frederick J. Martone, who was then civil presiding judge of the Maricopa County Superior Court, entered his order dishonoring plaintiffs' notice of change of judge as untimely, and transferred the case back to Judge Howe for trial. Petitioners filed this special action, seeking relief from Judge Martone's refusal to honor both their notice of change of judge and their stipulation to reassign the case to Judge Nastro. Appearance by Respondent Judge Real parties in interest Samaritan Health Service and Anca Maras, M.D., have joined in petitioners' contentions that Judge Martone exceeded his authority as civil presiding judge by refusing to honor plaintiffs' notice of change of judge and the parties' stipulation to an assigned judge. The adverse parties in the underlying action therefore are in agreement that special action jurisdiction is appropriate and that relief should be granted. The only opposition to the petition is a letter from Judge Martone, mailed to this court on January 6, 1989, seven days beyond the allowable response time. Petitioners have requested in their reply that Judge Martone's response be stricken. The Arizona Supreme Court has held that a respondent judge has the right to appear and defend in a special action in which he is named. Fenton v. Howard, 118 Ariz. 119, 575 P.2d 318 (1978). This precedent has been criticized as creating the potential of allowing "the impartial dispenser of justice" to take an adversarial role in the action, when he should have no interest in the outcome of the litigation. State ex rel. Dean v. City Court, 123 Ariz. 189, 191, 598 P.2d 1008, 1010 (App. 1979). *314 We do not believe that the appearance of Judge Martone in this case is subject to such criticism. We are informed that Judge Martone's order determining anew the issue of timeliness and refusing to honor the trial judge's acceptance of change of judge is in conformity with existing policies of the Maricopa County Superior Court. 
Hence, if we decide that the challenged order was made without authority, the daily administrative policies of the civil presiding judge in reviewing notices of change of judge and in assigning cases could be affected. Under these circumstances, the respondent judge has a legitimate administrative interest in appearing and defending those administrative policies. He is properly before this court as an advocate. Cf. Evertsen v. Industrial Comm'n, 117 Ariz. 378, 382, 573 P.2d 69, 73 (App. 1977) (authority of Industrial Commission, as neutral arbiter of the claim, to appear and defend its decision before the court of appeals, is proper when appearance involves the interest of the Commission in carrying out its procedures). We note that civil presiding judges previously have filed responses in special actions seeking clarification from this court on similar procedural matters. See, e.g., Guberman v. Chatwin, 19 Ariz. App. 590, 509 P.2d 721 (1973) (civil presiding judge responded to advise the court about his uniform practice in handling notices of change of judge). We therefore have considered Judge Martone's letter in reaching a decision in this matter. Moreover, although Judge Martone's letter was untimely filed as a response, this court may, in the furtherance of justice, suspend that time requirement and proceed as if the response was timely filed. See Rule 3, Arizona Rules of Civil Appellate Procedure. We have afforded petitioners the opportunity to respond to Judge Martone's letter to this court, to avoid any prejudice. Notice of Change of Judge Rule 42(f), Arizona Rules of Civil Procedure, provides in relevant part: 1. Change as a matter of right. A. Nature of proceedings. In any action pending in superior court, each side is entitled as a matter of right to a change of one judge.... A party wishing to exercise his right to change of judge shall file a "Notice of Change of Judge." ... .... C. Time. Failure to file a timely notice precludes change of judge as a matter of right. A notice is timely filed if filed sixty (60) or more days before the date set for trial.... In this case, Judge Howe honored petitioners' notice of change of judge by issuing the following order on December 1, 1988: A Notice of Change of Judge having been filed by Plaintiff, IT IS ORDERED transferring the above-entitled cause to the Civil Presiding Judge for reassignment to another Division. For whatever reasons, which may have included the fact that he had previously assured the parties less than sixty days prior to trial that he would honor such a notice, Judge Howe did not dishonor the notice as untimely, nor was the notice challenged by any of the opposing parties as untimely. Rather, the civil presiding judge apparently decided to raise the timeliness issue on his own motion, as reflected in Judge Martone's order of December 2, 1988: The court has before it Judge Howe's minute order of December 1, 1988 transferring this action to us for reassignment. However, a review of the file indicates that plaintiff's notice of change of judge was submitted on December 1, 1988. The trial in this action is set for January 9, 1989. Under Rule 42(f)(1)(C), Ariz.R.Civ.P., a notice is timely if filed sixty or more days before the date set for trial and failure to file a timely notice precludes change of judge as a matter of right. Since plaintiff's notice of change of judge as to Judge Howe is untimely, it is dishonored, and this action is transferred back to the Honorable Joseph D. Howe. 
Petitioners argue that the civil presiding judge exceeded his authority by dishonoring the notice, thereby in effect overruling *315 Judge Howe's order honoring the notice. They contend that Judge Martone's administrative role as presiding civil judge limits his discretion to honoring Judge Howe's transfer and reassigning the case. We agree. Under Rule 42(f), as well as under the prior statutory method of securing a change of judge by filing an affidavit pursuant to former A.R.S. § 12-409, the ruling on the timeliness of the notice of change of judge is to be made by the trial judge to be disqualified rather than by the civil presiding judge. Guberman v. Chatwin, 19 Ariz. App. at 593, 509 P.2d at 724 (Rule 42(f)); Hendrickson v. Superior Court, 85 Ariz. 10, 330 P.2d 507 (1958) (A.R.S. § 12-409). In Guberman, the respondent presiding judge advised the court "that it is his uniform practice to permit the judge in relation to whom a notice of change of judge has been presented to rule upon the timeliness of his motion ... [pursuant to] Rule 42(f)." The Guberman court approved of this practice, reasoning that the trial judge is best qualified to decide timeliness and waiver issues under Rule 42(f). 19 Ariz. App. at 593, 509 P.2d at 724. In our opinion, a civil presiding judge does not obtain additional judicial authority to overrule a trial judge's decision by nature of his administrative powers. See Fraternal Order of Police, Lodge 2 v. Superior Court, 122 Ariz. 563, 565, 596 P.2d 701, 703 (1979) (superior court judge has no jurisdiction to review or change previous ruling of another superior court judge). The presiding judge is "responsible for the day-to-day administrative operation of the court." Rule 1.3, Local Rules, Maricopa County Superior Court. This responsibility includes the authority to make permanent assignment of a case to one judge. Rule VII, Uniform Rules of Practice for the Superior Court. However, the assignment authority of the presiding judge does not include the power to exercise a "horizontal appeal" and overrule a fellow judge on decisions of substance such as the timeliness of a change of judge. See Rule 42(f), Arizona Rules of Civil Procedure; Rule 2.7, Local Rules, Maricopa County Superior Court. We have previously tried to avoid such "horizontal appeals" so that the same motion cannot be brought in front of different superior court judges. Mozes v. Daru, 4 Ariz. App. 385, 420 P.2d 957 (1966). At this point it is appropriate to comment on the position taken by the dissent both as to the facts and the law. First, the dissent appears to glean from the record that the motive of the parties in seeking a change of judge in this matter was simply a subterfuge to obtain a continuance of a trial date for which they were not prepared. While, admittedly, abuses of Rule 42(f) occur for this purpose, there is no evidence of any such motive in this case and its existence is based solely on supposition. Nor is there any indication that the facts presented by the parties are incorrect. The letter of Judge Martone does not suggest otherwise. Second, the dissent treats the question of whether the notice of the change of judge was timely as simply a matter of counting days, a ministerial calendaring function. As previously pointed out, Judge Howe could have properly determined that his previous order had lulled counsel into believing that a notice of change of judge would be honored even though technically untimely. 
This is exactly how Judge Howe properly resolved this factual issue, by accepting the change of judge. See Hendrickson v. Superior Court, supra (a notice may be timely even though filed after the time to file has expired, if the facts giving rise to the notice were acquired after expiration of the normal time period). The dissent's assertion that Judge Howe lacks judicial authority to make this determination is without support. Thus, one person's "ministerial calendaring function" becomes another person's "horizontal appeal." This brings us to the main point of divergence between the majority and dissent. The dissent takes the position that, even in the absence of additional facts, one superior court judge may overrule another superior court judge on the same issue in the same case. In doing so it relies upon language *316 in Hendrickson that timeliness of a notice of change of judge "must be determined by the Judge presiding," and two Division Two court of appeals decisions rendered in 1966 and 1969 respectively. As to the Hendrickson language, it is clear that "judge presiding" does not refer to the presiding judge of a multi-judge county but to the judge presiding over the matter in the first instance. This is so, for the order being reviewed in Hendrickson was from Cochise County which in 1958 had only one judge. Whatever may have been the force of the court of appeals decisions, the latest pronouncement on this subject is from the Arizona Supreme Court in Fraternal Order of Police v. Superior Court, 122 Ariz. 563, 596 P.2d 701 (1979) which stated:

The petitioner contends that the respondent judge had no jurisdiction to issue an injunction forbidding the holding of an election for determination of an employee bargaining representative. Any action taken by the respondent judge would be in conflict with the previous rulings of Judge LaPrade. We agree. The respondent judge, in effect, was acting as a reviewing court of a judge on the same court. He had no jurisdiction to review or change the judgment of a judge with identical jurisdiction.

Id. at 565, 596 P.2d 701 (emphasis added). We therefore hold that Judge Martone abused his discretion and acted in excess of his authority in dishonoring the notice of change of judge as untimely filed. In his administrative capacity as presiding judge, he was required to reassign the case upon transfer from the trial judge. We therefore vacate his order reassigning the case to Judge Howe.

Stipulation to Assigned Judge

Petitioners next argue that Judge Martone also exceeded his authority in refusing to assign the case to Judge Nastro as the stipulated trial judge as required by Rule 42(f)(1)(F). While we agree that Judge Martone had no discretion in the matter, this issue must be remanded for an evidentiary hearing to determine whether Judge Nastro has indicated his willingness and availability as the assigned judge. Rule 42(f)(1)(F) provides:

Assignment of action. At the time of the filing of a notice of change of judge, the parties shall inform the court in writing if they have agreed upon a judge who is available and is willing to have the action assigned to him. An agreement of all parties upon such judge shall be honored and shall preclude further changes of judge as a matter of right unless the judge agreed upon becomes unavailable....

Our supreme court has held that the "clear and unambiguous" language of this rule is mandatory, and that once such a stipulation has been filed, a presiding judge "has no discretion but to honor it."
City of Tucson v. Birdsall, 109 Ariz. 581, 582, 514 P.2d 714, 715 (1973). Because the presiding judge had no authority to dishonor the notice of change of judge, his administrative authority upon receiving the stipulation was limited to reassigning the case to Judge Nastro in accordance with the stipulation. His reassignment of the case to Judge Howe was an abuse of discretion for failure to honor the stipulation. In City of Tucson v. Birdsall, the supreme court remedied a similar problem by ordering the presiding judge to reassign the matter to the stipulated trial judge pursuant to Rule 42(f)(1)(F). In that case, the court had before it in the record a signed and filed consent by the stipulated judge, indicating his availability and willingness to be assigned the case. However, in this case we find a factual conflict existing in the limited record on review as to Judge Nastro's availability and willingness to be assigned this case. We have before us several affidavits of counsel, attesting that Judge Nastro initially agreed to accept the matter for trial prior to the stipulation of all counsel, and has reconfirmed his willingness to act as trial judge even after the filing of this special action. On the other hand, Judge *317 Martone has indicated to this court that after he consulted with Judges Howe and Nastro about the factual assertions the parties raised regarding this matter, he concluded that this action is based "upon either erroneous factual assertions or misunderstandings." Judge Martone's conclusion is that Judge Howe agreed to be the trial judge and Judge Nastro agreed to be the settlement judge, not the other way around. Additionally, Judge Martone informs us that, since this action was filed, Judge Howe has indicated that he would disqualify himself as trial judge, "based on the assertions made in the petition for special action." Finally, Judge Martone advises us of the following policy in Maricopa County Superior Court: If Judge Howe disqualifies himself, Judge Schneider [the current presiding judge] would then reassign the action to some other judge in the Civil Department on a random basis. The members of the Civil Department have been encouraged not to indicate their ability and willingness to accept a case on a notice within the meaning of Rule 42(f)(1)(F), Ariz.R. Civ.P., in order to discourage forum shopping. The "encouragement" referred to by Judge Martone is obviously intended to circumvent the workings of a mandatory procedural rule. However, even in the absence of such a policy we recognize that a factual conflict exists on the record presented to us. We cannot determine, on this record, if Judge Nastro actually indicated that he "is available and is willing to have the action assigned to him," in accordance with Rule 42(f)(1)(F). We therefore must remand this matter for an evidentiary hearing to determine this issue. If Judge Nastro indicates that the assertions of his availability and willingness contained in the stipulation are true, then the presiding judge has no discretion but to assign the case to Judge Nastro. 
If Judge Nastro indicates that he is not either available or willing to have the action assigned to him, the presiding judge must assign the case according to the provisions of Rule 42(f)(1)(F) regarding unavailability of the stipulated judge: If a judge to whom an action has been assigned by agreement later becomes unavailable because of a change of calendar assignment, death, illness or other legal incapacity, the parties shall be restored to their several positions and rights under this rule as they existed immediately before the assignment of the action to such judge. As a final matter, we note that our disposition here may affect the internal policies and administrative practices now utilized in the Maricopa County Superior Court. We recognize, as did the supreme court in City of Tucson v. Birdsall and the dissent here, the practical problems in docketing and budgeting that may be inherent in allowing parties to stipulate to a desired trial judge, especially in complex litigation. However, we are, as was the supreme court, "constrained to hold that the Rule is clear and unambiguous." Birdsall, 109 Ariz. at 582, 514 P.2d at 714. If the rule is unworkable and plays havoc with the superior court's administrative system, the remedy is to seek a rule change, not to develop administrative practices and policies which circumvent the rule. Based on the foregoing, we grant special action relief and vacate the presiding judge's order reassigning the case to Judge Howe. We remand this matter to superior court for further proceedings consistent with this opinion. The stay previously entered in this matter is dissolved. BROOKS, J., concurs. GERBER, Judge, dissenting. I dissent for reasons that follow. This case presents unusual circumstances in a number of respects. In the first place, all adversaries in this special action appear before this court in total agreement, even to the point of echoing each other's off-the-record recollections of unrecorded communications with various judges. This court is thus deprived of any opposing authority or argument. Although the stated dispute is about Rule 42(f), it appears from a close reading of the minute entries that none of the parties could have *318 been ready for the firm trial date set by Judge Howe for January 9, 1989. As of early November, 1988, with Thanksgiving and Christmas holidays on the near horizon, four sets of counsel still needed to schedule and take expert depositions of Chez, Ward, Drs. Loftus, Frey, and Depp, plus an unknown number of nurses, as well as interview a third group of non-expert witnesses. The deponents were geographically spread from New Jersey to Nevada to Washington, thus requiring extensive travel. Counsel were scheduling depositions as late as December 17, 1988. Plaintiff's counsel had failed to complete a promised "Day in the Life" film about the plaintiff. As of mid-November, 1988, counsel were not hurrying to complete discovery but were sparring with protective motions to prevent depositions from going forward in the short working time left before holidays and trial. Thus, although noticing Judge Howe was the stated purpose for the change of judge notice, its unstated purpose appears to have been to avoid trial before any judge on the firm trial date of January 9, for which discovery obviously could not have been completed. These same adversaries argue in concert that Judge Howe had informally accepted an oral notice of change of judge as early as the second week in November, 1988. 
The record does not harmonize with this chorus. Judge Howe would not have honored the December 1 change of judge notice if he had accepted an earlier one. There is no showing of any compliance with Rule 42(f)(1)(A) requiring indication on the record of the date of the informal request and the name of the party requesting it. Furthermore, if an earlier informal request had been made and honored, the case would have been transferred then, leaving no need for anyone to file the formal notice of change of judge on December 1. These adversaries all admit that the December 1 change of judge notice was untimely because filed less than 60 days prior to trial as required by Rule 42. November 10, 1988 was in fact the last day for a timely notice; the December 1 notice was 20 days late. The tardy nature of the notice is said to be remedied by the argument, in which these adversaries again concur, that Mr. Reilly "intended" to file such a motion, "had it on his desk" or "would have filed it" had he not been lulled by Judge Howe into thinking that time was unimportant. Apart from paving the road to the underworld, intentions such as this count in the law only to the extent that they are put into effect. As to the reliance argument, Judge Howe had warned counsel that while he would disqualify himself to avoid being both the trial and settlement judge, he would honor a change of judge notice only absent waiver or objection (minute entry of November 3-4, 1988). Counsel received this warning six days before November 10, 1988, the last day for a timely notice. Judge Howe's comments indicate nothing but an effort to comply with the standards of Rule 42(f). Nowhere does he promise to honor an untimely notice. Even if one assumes that Judge Howe created the impression he would honor an untimely notice, the fact remains that he lacked authority to honor an untimely notice. Honoring a "technically untimely" notice does not transform that notice into a timely one. Hendrickson only permits a late notice to be considered timely when it is based on facts discovered after the expiration of the time period. 85 Ariz. at 12, 330 P.2d at 508-09. In the present case, counsel did not acquire any facts giving rise to the notice after the expiration of the notice period; to the contrary, the discussion on November 3 and November 10, 1988 about honoring a notice occurred before expiration of the time period. Counsel simply let the time expire, and with it, Judge Howe's authority to grant the notice. In Fendler v. Phoenix Newspapers, Inc., 130 Ariz. 475, 636 P.2d 1257 (App. 1981), this court rejected an assertion that a trial court has authority to honor an informal, untimely notice:

... we address appellant's argument that the trial judge should have recused herself "when asked to do so." The record reflects no notice for change of judge either as a matter of right pursuant to Rule 42(f)(1) or for cause pursuant to Rule 42(f)(2), Arizona Rules of Civil *319 Procedure. The right to apply for a change of judge for cause is waived if not timely filed.

Id. at 481, 636 P.2d at 1263 (emphasis added). Central to the concurring arguments of counsel is the assertion that Judge Martone should not have "overruled" Judge Howe's honoring of the December 1 notice. Again, this court is not presented with any contrary authority.
I dissent here because I do not see Judge Martone's minute entry order transferring the case back to Judge Howe as an "overruling" of Judge Howe but simply as a reassignment back to Judge Howe after a clearly untimely and thus ineffective notice. In Rules 2.7 and 3.1(b) of the Local Rules of Maricopa County, I find authority for a presiding judge to make such ministerial case assignments and reassignments as necessary for efficient case processing. True, Guberman does authorize the noticed judge to determine the validity of the notice; Guberman, however, also says, in a portion not quoted by concurring counsel:

We find that Rule 42(f) is not clear in this regard. Rule 42(f)(3) makes it clear that the effort to disqualify a judge to whom a case has been assigned must be by a timely application and if the application is not timely, or if there has been a waiver, then that judge has not been disqualified.

19 Ariz. App. at 593, 509 P.2d at 724 (emphasis added). Thus, while Guberman approved referring the matter back to the noticed judge to determine the validity of the notice, there is nothing "clear" in that opinion or in Rule 42(f) which precludes the civil presiding judge from also ruling on timeliness, which is merely a counting exercise having nothing to do with the legal merits of the controversy. Presiding judges appear to have such supervisory authority. Rule 42(f)(1)(A) grants the presiding judge explicit authority in some cases to monitor a change of judge. That rule states that:

Whenever two or more parties on a side have adverse or hostile interests, the presiding judge may allow additional changes of judges as a matter of right... (emphasis added).

The arguments of these concurring adversaries overlook language in analogous situations which supports the civil presiding judge's authority to "preside." For example, in Hendrickson, the court makes the following observation about determination of timeliness of an affidavit of bias and prejudice:

... the legal sufficiency and timeliness of an affidavit must be determined by the judge presiding or one to whom the matter may be assigned for that purpose.

85 Ariz. at 13, 330 P.2d at 509 (emphasis added). The civil presiding judge is at least implicitly one to whom the matter of counting days is assigned. A presiding judge has analogous authority in ruling upon notices of change of judge on the basis of bias and prejudice and on transferring "last day" criminal matters. Similarly, the court administrator's office regularly issues calendaring orders and inactive calendar dismissals which are frequently altered by one or more superior court judges. A presiding judge's review of notices of change of judge is a ministerial calendaring function oriented toward efficient case management and docket control. To deny such authority in the face of a patently untimely notice leads to the result, among others, that a noticed judge has unfettered authority to indulge invalid notices to reduce caseload — hardly a procedure insuring judicial accountability. Admittedly, there is no evidence of any such improper motive in this case, but it has surfaced elsewhere. Rather than an "appellate" decision overruling Judge Howe on a matter of "substance," Judge Martone's minute entry of December 2, 1988 appears simply as a reassignment back to Judge Howe after a patently untimely and thus invalid notice. No case explicitly undermines this procedure.
In Fraternal Order, cited by the majority, the Supreme Court vacated a court order purporting to enjoin an employee relations board from conducting a representation *320 election mandated by a prior order of another judge. 122 Ariz. 563, 596 P.2d 701 (1979). In my view the ministerial calendaring function involved in this case is simply not akin to the conflicting substantive orders at issue in Fraternal Order. In addition, State v. Superior Court, 4 Ariz. App. 562, 422 P.2d 393 (1967) merely refers to cases "properly" before a judge; here, the case was indeed "properly" before Judge Howe because the notice was late and thus never took effect as a matter of law under Rule 42(f)(3). Other cases acknowledge the authority of a superior court judge even to "overrule" decisions by judicial colleagues. I do not advocate such "overruling." My point is that if Arizona jurisprudence allows occasional "overruling" on substantive points, then a fortiori it allows correction of counting errors. For example, Williams v. Garrett, 4 Ariz. App. 7, 9, 417 P.2d 378, 380 (1966) states:

Decisions generally acknowledge and we are in agreement that "... a trial judge has 'power' to vacate, modify, contravene, or depart from the ruling or order of another in the same case, whatever may be the consequences of his so doing."

There the court adds pointedly that the function of a court of appeals is to "not interfere ... unless there has been an abuse of discretion," which I submit can hardly lie in dishonoring a notice invalid on its face. Id. In addition, State ex rel. Herman v. Hague, 10 Ariz. App. 404, 459 P.2d 321 (1969) upheld a trial judge overruling another trial judge's invalid order on trial severance, indicating that such overruling was appropriate when the prior ruling was manifestly invalid. Such approximates the situation here, for the December 1 notice was manifestly untimely. If such substantive "overruling" is permitted, a mere counting error ought to be even more subject to administrative correction. I concur with the majority regarding the re-assignment of the case if the notice must be honored. The internal practice in Maricopa County Superior Court has been to discourage judges from agreeing to take cases on the basis of stipulation of counsel because it encourages forum shopping and shifts caseloads beyond the control of the presiding judge and the court administrator. Birdsall and the explicit language of Rule 42(f)(1)(F) make it clear, however, that if a valid notice of change of judge is filed and if counsel agree upon a willing judge, the presiding judge is bound, however reluctantly, to transfer the matter to the agreed-upon judge. 109 Ariz. at 582, 514 P.2d at 715. That part of the law, in my opinion at least, is clear. Obviously, in my view, there is no need to do so here because I find Judge Martone's minute entry of December 2, 1988 referring the case back to Judge Howe to be proper. The bottom line is that I would decline jurisdiction to hear this special action because, as Garrett teaches, the function of a court of appeals is "not to interfere ... unless there has been an abuse of discretion." 4 Ariz. App. at 9, 417 P.2d at 380. Rule 42(f) deserves revisiting, redrafting or possibly interment. The practicing bar and trial judges are well aware of the fact that it is commonly used as a disguised motion to continue, particularly to delay a day of reckoning in court and often to allow time for uncompleted discovery.
From an administrative point of view, these uses of Rule 42(f) frustrate a presiding judge's efforts to maintain firm trial dates and to control judicial assignments.[1] Rule 42(f) practice lends itself richly to legal gamesmanship. If Rule 42(f) must exist, a more workable version is the philosophy of Rule 10.2, Arizona Rules of Criminal Procedure, which opens the change of judge window for a period of 10 days following the initial assignment to a judge. Better still, the rule could be abolished; the proper *321 time to change judges is at the ballot box, not on the eve of trial.

NOTES

[1] Applicable rules in other state court systems require that motions affecting docket control be submitted directly to a presiding or coordinating judge. For example, California's Code of Civil Procedure, § 170.6, provides that a motion alleging prejudice on the part of an assigned judge "shall be made to the judge supervising the master calendar," where a master calendar exists.
Mid
[ 0.542857142857142, 28.5, 24 ]
Will Andy Pettitte’s return to the Yankees bolster their rotation? CineSport’s Noah Coslov and Sporting News’ Stan McNeal discuss that as well as the Angels digging themselves an early hole in the AL West behind the red-hot Rangers.
Low
[ 0.39659367396593603, 20.375, 31 ]
Gastric stasis occurs in spontaneous, visually induced, and interictal migraine. To evaluate and compare gastric motility and emptying during spontaneous migraine to previous observations from induced migraine. We have previously demonstrated a delay in gastric emptying both during the interictal period and during an induced migraine. A limitation noted in these studies was whether there are gastrointestinal differences during a visually induced migraine compared to spontaneous migraines. To address this, 9 additional studies have been performed to ascertain if there is a similar delay during spontaneous migraine. Gastric scintigraphy using a standard meal was performed in 3 subjects during 3 periods: spontaneous migraine, induced migraine, and the interictal period. On average, the time to half emptying was delayed during spontaneous migraine (124 minutes), during induced migraine (182 minutes), and during the interictal period (243 minutes) compared to normative values established at our center (112 minutes). On average, similar gastric slowing was seen in all 3 groups when the percentage of nuclear material remaining in the stomach at 2 hours was measured. This study provides additional evidence of gastric stasis in migraineurs interictally and during induced and spontaneous migraine.
High
[ 0.6615384615384611, 32.25, 16.5 ]
1. Field of the Invention The invention relates to financial institution operating procedures, and more particularly, to accelerating the processing of debit transactions. 2. Background of the Invention Debit transaction processing refers to the processing of a financial transaction by a financial institution, such as a bank. In the transaction an entity authorizes the financial institution to debit an account that contains money belonging to the entity, but held by the financial institution. The financial institution may hold the monies in a checking, savings or other type of customer account. Alternatively, in a transaction an entity may authorize a financial institution to charge a credit account for which the entity is liable to repay. Such transactions are commonplace in today's society and form the backbone of our economic system. Each day trillions of dollars worth of debit transactions are processed within the United States. In the traditional banking business model for customer accounts, a bank tried to maximize the amount of money in the bank based on the view that the more money in the bank, the greater the bank's interest revenues. In this view, accelerating the processing of debit transactions would tend to diminish the amount of money in a bank and therefore diminish revenues and profits. Changes in banking technology, regulation, and economic conditions allow this model to be challenged and refined. Banks have merged, thereby, increasing individual bank size and market share. Interest rates and the cost of funds are low. As a result, bank fee revenues have become increasingly important, compared to interest revenues, in the generation of profits. Debit transactions may either be customer-initiated or bank-initiated. Debit type as used herein refers to a type of debit drawn from a customer account. Examples include a point of sale (POS) debit, a check debit, and an overdraft fee debit. Examples of customer-initiated debit transactions include POS transactions, automatic teller machine (ATM) withdrawals, and presentment of paper checks. Bank-initiated debit transactions may be either service transaction fees or account maintenance fees. Service transaction fees are fees directly associated with a particular type of customer-initiated debit transaction, such as an ATM fee that is charged to the customer's account when an ATM withdrawal is made. Account maintenance fees are fees indirectly associated with customer-initiated debit transactions, but often triggered by them. Account maintenance fees can be either customer transaction driven or cycle driven. Account maintenance fees that are cycle driven are debited from a customer account at the end of the banking cycle, which is often a monthly cycle at which time a customer receives a monthly statement. An example of this type of fee is a fee for an account balance dropping below a minimum requirement. Account maintenance fees that are customer transaction driven are fees directly associated with customer-initiated debit transactions, and often triggered by them. These are fees that can be imposed prior to the end of the banking cycle. An example of this type of fee is an overdraft fee imposed when an account balance drops below zero. Numerous methods and devices exist for processing debit transactions. For example, U.S. Pat. No. 4,933,536 to Lindemann et al., describes a check processing device which is used together with a POS terminal. U.S. Pat. No. 
4,810,866 to Lloyd, Jr., describes a check validation system located with a POS system for imprinting and otherwise handling a check. Other examples include U.S. Pat. No. 4,743,743 to Fuakatsu which describes an apparatus where a check is examined by a reader at a POS terminal. Other systems for processing checks have also been the subject of invention. U.S. Pat. No. 4,617,457, for example, addresses an ATM form of cashing checks. These patents largely focus on the problem of how to accept checks and to prevent fraudulent activity. U.S. Pat. No. 5,484,988 to Hills et al., addresses a further aspect of check transaction processing, in that, the patent relates to a checkwriting POS system that integrates with the automated clearing house (ACH) process, primarily to enable greater flexibility as to the types of purchases that may be made and eliminate the need for paper checks. Another category of systems dealing with transaction processing involves electronic check processing (ECP). ECP provides a mechanism for financial institutions to computerize check data at the bank of first deposit (BoFD) and send the electronic representation of the check to the payor's bank at least one day ahead of the paper check. Because the electronic representation of the paper check arrives before the actual paper check, the posting of the debit can occur prior to bank-to-bank settlement, which is triggered by the arrival of the paper check. ECP applies when the BoFD is not the payor's bank which posts the customer's debit. A number of U.S. patents and a significant number of industry publications address ECP. For example, U.S. Pat. No. 5,532,464 to Josephson et al., and U.S. Pat. No. 5,783,808 to Josephson et al., address systems to handle various aspects of handling paper checks to convert them to electronic information and manage the delivery of the paper checks in an ECP environment. Still other devices and systems address other aspects of transaction processing. One such category of devices and systems adds functionality to electronic payment schemes, and makes use of credit and debit cards easier. For example, U.S. Pat. No. 6,408,284 to Hilt, et al., describes an electronic bill payment system that enables consumers to send messages via the Internet directing financial institutions to pay a biller's bill. Similarly, U.S. Pat. No. 6,038,552 to Fleischl et al., describes a method and apparatus to process combined credit and debit card transactions. Additionally, other methods for transaction processing are disclosed in court cases. See e.g., Compass Bank and Compass Bancshares v. Jucretia Snow et al., 823 So. 2d 667 (Ala. 2001). In these cases banks altered the order in which checks and other debit items presented on a given day are posted to the customer's account. In particular, the banks posted the debit items from largest to smallest, so that more bank-initiated fees would be incurred. All the above patents and practices deal generally with transaction processing. However, none deal with the issue of accelerating debit transactions relative to credit transactions in a customer account, irrespective of any settlement or settlement date. Furthermore, none deal with accelerating the posting of any type of debit transaction across any business day or number of days. As a result, because the processing of debit transactions has not been optimized, financial institutions may be losing significant revenues that would accrue from accelerated debit transaction processing. 
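To make the posting-order practice described in the Compass Bank litigation above concrete, here is a minimal sketch (illustrative only; the $100 opening balance, the item amounts, and the $30 fee are hypothetical and not taken from the cited cases):

    // Shows why posting one day's debit items largest-first can trigger
    // more bank-initiated overdraft fees than posting them smallest-first.
    function overdraftFees(balance, debits, fee) {
        var fees = 0;
        debits.forEach(function (amount) {
            balance -= amount;
            if (balance < 0) {
                fees += 1;      // each item landing below zero draws a fee
                balance -= fee; // the fee is itself a debit to the account
            }
        });
        return fees;
    }

    var dayDebits = [150, 40, 25, 10]; // one day's presented items
    var lowToHigh = dayDebits.slice().sort(function (a, b) { return a - b; });
    var highToLow = dayDebits.slice().sort(function (a, b) { return b - a; });
    console.log(overdraftFees(100, lowToHigh, 30)); // smallest first: 1 fee
    console.log(overdraftFees(100, highToLow, 30)); // largest first: 4 fees

With a $100 balance, smallest-first ordering leaves only the final $150 item unfunded (one fee), while largest-first ordering overdraws the account on the first item, so every subsequent item also draws a fee.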
Unfortunately, the determination of the benefits of acceleration of debit transactions is complex and misunderstood. This, in fact, may be why more attention has not been given to this problem. To determine the impacts of accelerating debit transactions, many variables and factors must be considered. These include customer reactions, regulatory limitations, implementation costs and prioritization considerations. The interplay of these factors and industry misconceptions (e.g., related to what day processing of a debit transaction can actually begin) make the task of analyzing the impacts of accelerating debit transactions difficult. What is needed is a method for increasing financial institution revenues through the acceleration of posting debits to a customer account, relative to the credit transactions in that account and irrespective of any settlement or settlement date. What is also needed is a method and system to determine and measure the financial impacts of such acceleration.
Low
[ 0.5270270270270271, 29.25, 26.25 ]
Wild Oregon Foods is a restaurant in Bend, Oregon committed to creating delicious, fresh and locally sourced food. We source our ingredients from all over Central Oregon and the Pacific Northwest to craft beautiful, healthy Jewish and Italian delicatessen-style dishes. A modern-day delicatessen based in Central Oregon, we serve old delicatessen favorites like knishes, all-beef hot dogs, homemade mustards, and more for you and your family to enjoy. Grab a sandwich and salad to go, or try one of our delicious soups, matzo ball soup or potato latkes, while sipping a hand-crafted cocktail. Located on the south end in the Bend Factory Stores.
Low
[ 0.40425531914893603, 21.375, 31.5 ]
A cysteine in the repetitive domain of a high-molecular-weight glutenin subunit interferes with the mixing properties of wheat dough. The quality of wheat (Triticum aestivum L.) for making bread is largely due to the strength and extensibility of wheat dough, which in turn is due to the properties of polymeric glutenin. Polymeric glutenin consists of high- and low-molecular-weight glutenin protein subunits linked by disulphide bonds between cysteine residues. Glutenin subunits differ in their effects on dough mixing properties. The research presented here investigated the effect of a specific, recently discovered, glutenin subunit on dough mixing properties. This subunit, Bx7.1, is unusual in that it has a cysteine in its repetitive domain. With site-directed mutagenesis of the gene encoding Bx7.1, a guanine in the repetitive domain was replaced by an adenine, to provide a mutant gene encoding a subunit (MutBx7.1) in which the repetitive-domain cysteine was replaced by a tyrosine residue. Bx7.1, MutBx7.1 and other Bx-type glutenin subunits were heterologously expressed in Escherichia coli and purified. This made it possible to incorporate each individual subunit into wheat flour and evaluate the effect of the cysteine residue on dough properties. The Bx7.1 subunit affected dough mixing properties differently from the other subunits. These differences are due to the extra cysteine residue, which may interfere with glutenin polymerisation through cross-linkage within the Bx7.1 subunit, causing this subunit to act as a chain terminator.
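To make the substitution concrete, here is a minimal sketch (illustrative only; the abstract does not give the exact codon or its position in the Bx7.1 gene, so the codon below is an assumption):

    // A single G -> A substitution at the second codon position turns a
    // cysteine codon (TGT/TGC) into a tyrosine codon (TAT/TAC). The
    // specific codon is hypothetical; the abstract states only that a
    // guanine was replaced by an adenine.
    var codonToAminoAcid = { TGT: "Cys", TGC: "Cys", TAT: "Tyr", TAC: "Tyr" };

    function substituteBase(codon, position, base) {
        // returns the codon with the base at `position` (0-based) replaced
        return codon.slice(0, position) + base + codon.slice(position + 1);
    }

    var original = "TGC";                           // cysteine codon (assumed)
    var mutated = substituteBase(original, 1, "A"); // "TAC"
    console.log(codonToAminoAcid[original] + " -> " + codonToAminoAcid[mutated]); // "Cys -> Tyr"

This is consistent with the reported outcome: the repetitive-domain cysteine of Bx7.1 becomes a tyrosine in MutBx7.1.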
High
[ 0.6854219948849101, 33.5, 15.375 ]
Præfektura apostolica Poli arctici - The Polar Prefecture

Norway was partly converted to Christianity already in the 11th century, although the heathen beliefs continued to stay strong in certain regions of the country. In the 17th Century the nation was turned over to Protestantism by force after the so-called "Reformation" and a Lutheran "State Church" was imposed on everybody. For more than 2 Centuries it was forbidden to practise Catholicism in the region. But in 1855 the See of Rome was able to start a new mission in Norway and the Polar Region; the "Præfektura apostolica Poli arctici." And even though most Catholics abandoned their Catholic Traditions in order to be accepted by the Second Vatican Council sect, there are still Catholics left... People who wish to stay faithful to the Teachings of the ancient, never-changing Catholic Church, with its Papacy, Doctrines and Traditions. People who reject heresies like modernism, freemasonry, false ecumenism and "salvation" in foreign religions. Regular Catholics, in other words.
Mid
[ 0.6125, 36.75, 23.25 ]
Happy holidays, Last Man stans! This week, Will Forte & Co. have been gracious enough to gift us with some fun plot twists. But first, we have to sit through an endless Secret Santa ceremony. Naturally, Carol is Malibu's very own Christmas elf, and she puts her crafty efforts toward getting their house and her wardrobe into the yuletide spirit. (Green pantyhose were practically invented for her.) As Carol transforms into the Christmas Hulk, she's proud to hear that their living room "looks like Santa ate the Rockefeller Christmas tree and then took a big dump on the wall." Always one to take rules seriously, Carol insists that her roommates each draw a name from a Santa hat, then eat it so that each Secret Santa's identity will remain a secret. Never one to cooperate, Phil hides his target's name — Erica — under his tongue and, after the world's most awkward "This Little Light of Mine" sing-along, rushes it over to EvPhil, who remains banished to the solar house. The gang has decided, in the spirit of Christmas and all, that EvPhil should be allowed to participate in the festivities, and Phil's as eager as ever to secure his former (or current?) nemesis's friendship. Phil tosses his frenemy all 42.52 carats of the Hope Diamond (an estimated $200-250 million value!), convinced he'll revive both his friendship and EvPhil's ailing relationship with his baby mama. EvPhil tries to tell him that he has his own gift in mind, but lovable dope Phil is too distracted by his own infallible plan to listen. We've also been periodically checking in on Mike Miller, who is now truly alone and lost in space, after the last of his eight pet worms died. Mike has stripped down to his boxers and is chugging down laughing gas. (If it weren't for all that zero-gravity stuff, he'd look just like some of the kids in my high school.) As he watches the last living thing he had a connection with float into outer space, an old family photo conveniently flutters into our view, and then into Mike's. This almost could have been a melancholy moment if it hadn't been cut short by Carol screaming "Secret Santa!" But it is, so it wasn't, and we're back on Earth, watching Carol delight in extravagant presents. Erica got Carol a chair from Oprah's studio audience, and her Aussie-inflected Oprah impression is recognizable enough for those of us following along at home. Carol plops her festive red petticoats down on her new seat as Erica commands, "LOOK UNDER YOUR CHAAAIR!" She's also gotten her friend J.Lo's famous Versace Grammys dress. We had better get to see Carol model that thing pronto or this is a waste of a joke. Imagine the animal-printed cardigan she'll pair it with! Moving on: For Gail, who is still romancing Todd, her Secret Santa is also her secret lover. Todd insists his giftee is "normal special, not special, really, at all, just human celery," then presents her with Z.Z. Top's car and a handshake, further underscoring that Gail Is Old. Phil, who pretends he'd drawn his own name — EvPhil actually got it, but Phil offered to trade, remember — shows off Pitbull's yacht … and decides to blow it up rather than let the gang party in it. Well, yippee ki-yay. Carol got Melissa, and their exchange is my favorite. Earlier in the episode, Melissa casually mentioned that she wants size-6.5 Jimmy Choo over-the-knee boots. Carol, who never misses an opportunity to craft, Lisa Frank–ed the heck out of a pair for her understated BFF, encrusting them with jewels and unicorns and rainbows.
They may not be my style — and definitely aren't Melissa's, for that matter — but they are friggin' awesome, and Melissa accepts them graciously, even though Carol also got her the pair she originally wanted. Then Melissa, who is totally coming back around on her feelings for Todd, gives him a sweet gift: a crown, sash, and scepter, because the kids in his high school were mean to him at prom. "If anyone deserves to be Prom King, it's you," she says with a warm hug. Gail looks rightfully jealous. Gail gives EvPhil a decorative wicker ball that was literally the closest thing within reach of her seat, and finally, we move on to the main event: EvPhil's present for Erica. Pushing his way past Phil's endless gem puns, EvPhil leads everyone out to the solar house, and shows off a sonogram machine (complete with reclining medical chair!) for his expectant ex. We'll overlook the logistics here — where did he get that? How does it still work? How did he sneak it into the compound? — because it's pretty cute. Erica and EvPhil hold hands as she gets to see her baby for the first time. (How far along is she supposed to be, again? The baby looks pretty big.) I'm all for reconciliation, but honestly, I hate this notion that a TV couple can't co-parent without rekindling the romance they once shared. It's so tired and predictable. Anyhow, on to our cliff-hangers: Melissa gets down on one Jimmy Choo–bedazzled knee and pops THE question to Todd, who still has his new relationship under wraps. Just as she asks him to marry her, before we can even wonder how Gail will react, EvPhil falls to the floor and starts convulsing. Huh. Random, but also scary. We're not sure what's wrong with him. But then we quickly drift back up to space, where we see Mike Miller add his own name to his RIP wall and eject himself from the spacecraft. I don't know much about gravity or physics or whatever, but I do know that (a) this would kill him immediately if it happened for real and (b) it doesn't matter because he is going to magically wash up in Malibu next week in time for Christmas.
Mid
[ 0.5719921104536491, 36.25, 27.125 ]
Q: Add css runtime to table row and data bind the row

I am really not sure how to solve this. However, I am quite close to the solution and now just need one small bit of help from you experts here. My working fiddle is here. When the page loads, the 2 checkboxes are checked. My view is bound as below:

    <tbody data-bind="foreach: dataOne">
        <tr data-bind="css: { 'makeBold': $root.duplicates.indexOf(name1) !== -1 }">
            <td data-bind="text: id"></td><td>&nbsp;&nbsp;&nbsp;</td>
            <td data-bind="text: display"></td><td>&nbsp;&nbsp;&nbsp;</td>
            <td>&nbsp;</td>
            <td>
                <input type="checkbox" data-bind="checked: $root.duplicates.indexOf(name1) !== -1" />
            </td>
        </tr>
    </tbody>

My view model is as below:

    var data1 = [{ name1: "one", id: 1, display: "Test1" },
                 { name1: "two", id: 2, display: "Test2" },
                 { name1: "three", id: 3, display: "Test3" }];
    var data2 = [{ name2: "five" }, { name2: "two" }, { name2: "three" }];

    var viewModel = {
        dataOne: ko.observableArray(data1),
        dataTwo: ko.observableArray(data2),
        duplicates: ko.observableArray()
    };

    viewModel.dataTwo.push({ name: "four" }); // add one on the end

    var flattenedOne = ko.utils.arrayMap(viewModel.dataOne(), function (item) { return item.name1; });
    var flattenedTwo = ko.utils.arrayMap(viewModel.dataTwo(), function (item) { return item.name2; });
    var differences = ko.utils.compareArrays(flattenedOne, flattenedTwo); // return a flat list of differences
    ko.utils.arrayForEach(differences, function (difference) {
        if (difference.status === 'retained') {
            viewModel.duplicates.push(difference.value);
        }
    });

Now when the user clicks on the Update button it loads data again and now 3 checkboxes are checked. What I am trying to achieve is: when the user clicks on the Update button it should add CSS to the table row and make only that row bold. It should not check the checkbox when the user clicks on the Update button. So in our example, when the user clicks on the Update button it should make the row bold but the checkbox should not be checked. So only the first row will be bold on clicking the Update button. Currently, when the page loads it is making 2 rows bold and checked, which is wrong. It should make the row bold only on clicking the Update button. Please help me, guys.

A: Hope this solves your problem. Check this Fiddle. I have added a separate condition using another observable.
Html :-

    <table>
        <tbody data-bind="foreach: dataOne">
            <tr data-bind="css: { 'makeBold': $root.duplicates.indexOf(name1) !== -1 }">
                <td data-bind="text: id"></td><td>&nbsp;&nbsp;&nbsp;</td>
                <td data-bind="text: display"></td><td>&nbsp;&nbsp;&nbsp;</td>
                <td>&nbsp;</td>
                <td>
                    <input type="checkbox" data-bind="checked: $root.checkDuplicate.indexOf(name1) !== -1" />
                </td>
            </tr>
        </tbody>
    </table>
    <button class="btn" data-bind="click: $root.UpdateData">Update</button>

Script:-

    var data1 = [{ name1: "one", id: 1, display: "Test1" },
                 { name1: "two", id: 2, display: "Test2" },
                 { name1: "three", id: 3, display: "Test3" }];
    var data2 = [{ name2: "five" }, { name2: "two" }, { name2: "three" }];

    var viewModel = {
        dataOne: ko.observableArray(data1),
        dataTwo: ko.observableArray(data2),
        duplicates: ko.observableArray(),
        checkDuplicate: ko.observableArray() // new observable to handle condition
    };

    viewModel.UpdateData = function () {
        data2 = [{ name2: "one" }, { name2: "two" }, { name2: "three" }];
        viewModel.dataTwo(data2);
        var flattenedOne = ko.utils.arrayMap(viewModel.dataOne(), function (item) { return item.name1; });
        var flattenedTwo = ko.utils.arrayMap(viewModel.dataTwo(), function (item) { return item.name2; });
        var differences = ko.utils.compareArrays(flattenedOne, flattenedTwo); // return a flat list of differences
        ko.utils.arrayForEach(differences, function (difference) {
            if (difference.status === 'retained' && viewModel.checkDuplicate().indexOf(difference.value) == -1) {
                viewModel.duplicates.push(difference.value);
            }
        });
    };

    viewModel.dataTwo.push({ name: "four" }); // add one on the end

    var flattenedOne = ko.utils.arrayMap(viewModel.dataOne(), function (item) { return item.name1; });
    var flattenedTwo = ko.utils.arrayMap(viewModel.dataTwo(), function (item) { return item.name2; });
    var differences = ko.utils.compareArrays(flattenedOne, flattenedTwo); // return a flat list of differences
    ko.utils.arrayForEach(differences, function (difference) {
        if (difference.status === 'retained') {
            viewModel.checkDuplicate.push(difference.value);
        }
    });
    ko.applyBindings(viewModel);
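In short, the fix decouples the two bindings: checkDuplicate is filled once at load time and feeds only the checked binding, while duplicates is repopulated inside UpdateData (skipping names already in checkDuplicate) and feeds only the css binding that applies makeBold. Clicking Update can therefore bold a newly matched row without also checking its checkbox.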
Low
[ 0.523560209424083, 25, 22.75 ]
Epigraphia Indica

Epigraphia Indica was the official publication of the Archaeological Survey of India from 1882 to 1977. The first volume was edited by James Burgess in the year 1882. Between 1892 and 1920 it was published as a quarterly supplement to The Indian Antiquary. One part is brought out in each quarter year and eight parts make one volume of this periodical, so that one volume is released once in two years. About 43 volumes of this journal have been published so far. They have been edited by the officers who headed the Epigraphy Branch of the ASI.

Editors

J. Burgess: Vol I (1882) & Vol II (1894)
E. Hultzsch: Vol III (1894–95), Vol IV (1896–97), Vol V (1898–99), Vol VI (1900–01), Vol VII (1902–03), Vol VIII (1905–06), Vol IX (1907–08)
Sten Konow: Vol X (1909–10), Vol XI (1911–12), Vol XII (1913–14), Vol XIII (1915–16)
F. W. Thomas: Vol XIV (1917–18), Vol XV (1919–20), Vol XVI (1921–22)
H. Krishna Sastri: Vol XVII (1923–24), Vol XVIII (1925–26), Vol XIX (1927–28)
Hiranand Shastri: Vol XX (1929–30), Vol XXI (1931–32)
N. P. Chakravarti: Vol XXII (1933–34), Vol XXIII (1935–36), Vol XXIV (1937–38), Vol XXV (1939–40), Vol XXVI (1941–42)
N. Lakshminarayan Rao and B. Ch. Chhabra: Vol XXVII (1947–48)
D. C. Sircar: Vol XXVIII (1949–50) (jointly with B. Ch. Chhabra), Vol XXX (1951–52) (jointly with N. Lakshminarayan Rao), Vol XXXI (1955–56), Vol XXXII (1957–58), Vol XXXIII (1959–60), Vol XXXIV (1960–61), Vol XXXV (1962–63), Vol XXXVI (1964–65)
G. S. Gai: Vol XXXVII (1966–67), Vol XXXVIII, Vol XXXIX, Vol XL
K. V. Ramesh: Vol XLI (1975–76), Vol XLII (1977–78)

Other contributors

Aurel Stein
V. Venkayya
Robert Sewell
D. R. Bhandarkar
J. Ph. Vogel
F. O. Oertel
N. K. Ojha
F. E. Pargiter
F. Kielhorn
John Faithfull Fleet
K. A. Nilakanta Sastri
K. V. Subrahmanya Aiyar
T. A. Gopinatha Rao

Arabic and Persian Supplement

The ASI also published an Arabic and Persian supplement from 1907 to 1977. While the first volume in 1907 was edited by E. Denison Ross of Calcutta Madrassa and the second and third volumes by Josef Horovitz, subsequent volumes have been edited by Ghulam Yazdani (1913–40), Maulvi M. Ashraf Hussain (1949–53) and Z. A. Desai (1953–77). Since 1946, the volumes have been edited by an Assistant Superintendent for Arabic and Persian Inscriptions, a special post created by the Government of India for the purpose.

References

External links

Official site of Archaeological Survey of India. First 36 volumes available online at The Digital Library of India

Category:Publications established in 1888
Category:English-language journals
Category:Archaeology journals
Category:Publications of the Archaeological Survey of India
Category:Indology journals
Mid
[ 0.652, 40.75, 21.75 ]
7 Visitor Messages

ARIA doll house does exist!! it's merchandise from the "monthly undine" mag, vol 1 - vol 3. the magazines themselves are said to be sold out, so if u really want it, go look for it in a place nobody has looked before

Heh... while I'm not as rabid a Yotsuba fan as you two, I just want to say... it's great to have someone like you as a regular in the Misc. forums. You've really contributed a lot, especially in the recommendation threads and the summaries of the new series.
Mid
[ 0.6015037593984961, 40, 26.5 ]
Kashiling Adake

Kashiling Adake (born 18 December 1992) is an Indian professional kabaddi player. He is currently playing for Bengaluru Bulls in Pro Kabaddi League Season 6. He went unsold in the recently held PKL 2019 auction.

Early life

He was born in the Sangli district of the state of Maharashtra in India.

Pro Kabaddi League

He debuted in the PKL in Season 1 and played for Dabang Delhi in the first four seasons. He currently plays for Bengaluru Bulls in Season 6. He is the first player to score 15 raid points in the first half of a match in PKL Season 5. He also scored 24 points in a match against Telugu Titans.

References

Category:Indian kabaddi players
Category:1992 births
Category:Living people
Category:People from Sangli district
Category:Kabaddi players from Maharashtra
Category:Pro Kabaddi League players
Mid
[ 0.6386138613861381, 32.25, 18.25 ]
Veterans Preference, Other Bills Before Congress

Bills recently introduced or advancing in Congress cover veterans preference, tax delinquency in retirees, and raising the pay cap for some VA positions.

S-2594, up for approval in the Senate Veterans Affairs Committee, to expand veterans preference in federal hiring by extending eligibility to those honorably discharged from active duty service if the active duty service was performed for more than 180 days total (currently, the time must be consecutive); and to end the current restrictions that military retirees are eligible only if they retired under disability or below the rank of major or its equivalent.

S-3184, newly introduced, to require an annual report, to be posted online, of current and retired federal employees who have a delinquent tax debt that is not being paid through an installment agreement or who have an unfiled tax return for the prior year, and the aggregate amounts and frequency rates for each. While individual names would not be posted, creating such a database would add visibility to an issue that in the past has led to proposals to disqualify persons from new or continued employment on tax delinquency grounds.

S-3084, passed by the Senate, to exclude SES-equivalent health care positions at the VA from an existing pay cap with the intent of improving the department's ability to recruit and retain in such positions.
High
[ 0.686111111111111, 30.875, 14.125 ]
Anthony Gucciardi
Activist Post

It may come as a surprise, but you may be consuming cloned meat on a regular basis. In fact, the U.S. Secretary of Agriculture (head of the USDA) says that he has no idea whether or not cloned meat has been sold inside the United States — or even how much. Instead of investigating or setting up parameters, the USDA asserts that it is safe in their view so there is no cause for alarm. It is currently forbidden by the agency itself for any producer to distribute or sell cloned meat.

The news came back in August of 2010, when U.S. Secretary of Agriculture Tom Vilsack went on record saying that he really doesn't know whether or not cloned meat is being put on dinner tables nationwide. The announcement was made after the United Kingdom's Food Standards Agency told consumers that meat from descendants of cloned animals had already entered the food supply. Of course the agency made the statements a year after the cloned products leaked into the food chain. Still, just like the USDA, the UK's FSA stated that they believe cloned meat poses no risk, so citizens should not panic. The reason? They say that cloned meat has 'no substantial difference' to traditional meat, and therefore it is safe. The statements echo those of Monsanto, whose genetically modified creations have been linked to everything from organ damage to toxicity-induced cell death.

Here is Tom Vilsack's response to whether or not cloned meat is being sold in United States stores and subsequently being eaten by citizens:

'I can't say today that I can answer your question in an affirmative or negative way. I don't know. What I do know is that we know all the research, all of the review of this is suggested that this is safe,' Vilsack said to reporters.

Conventional meat packing industries and suppliers often utilize disturbing growth techniques with zero regard for the welfare of the animals and the consumer. It is not hard to believe that cloned meat would slip into this chaotic process and be passed off as traditional meat. In order to avoid the threat of not only cloned meat but a copious amount of antibiotics (that you will soon be eating), you should search for high-quality meat sources that utilize grass as a main feed source. The antibiotic problem is so pervasive, in fact, that a judge recently ordered the FDA to remove antibiotics from animal feed in order to halt the production of super viruses.

This article first appeared at Natural Society, an excellent resource for health news and vaccine information.
Low
[ 0.536687631027253, 32, 27.625 ]
Diazo transfer-click reaction route to new, lipophilic teicoplanin and ristocetin aglycon derivatives with high antibacterial and anti-influenza virus activity: an aggregation and receptor binding study. Semisynthetic, lipophilic ristocetin and teicoplanin derivatives were prepared starting from ristocetin aglycon and teicoplanin psi-aglycon (N-acetyl-D-glucosaminyl aglycoteicoplanin). The terminal amino functions of the aglycons were converted into azido form by triflic azide. Copper-catalyzed 1,3-dipolar cycloaddition reaction with lipophilic alkynes resulted in the title compounds. Two of the teicoplanin derivatives showed very good MIC and MBC values against various Gram-positive bacteria, including vanA enterococci. The aggregation and interaction of an n-decyl derivative with bacterial cell wall components were studied. One of the lipophilic ristocetin derivatives displayed favorable anti-influenza virus activity.
High
[ 0.673883626522327, 31.125, 15.0625 ]
Direct typing of polymorphic microsatellites in the colonial tunicate Botryllus schlosseri (ASCIDIACEA). Five microsatellite loci of the marine protochordate Botryllus schlosseri were cloned: four of uninterrupted (AG)n repeats and one of both (AG)n and (TG)n repeats. By means of an innovative procedure, small colony fragments were minimally treated to serve as templates for PCR with microsatellite-specific primers. Four of the loci were polymorphic: 7-8 discrete alleles were scored in nine colonies, with heterozygosity ranging between 44% and 80%. At locus number 811, spacing of the alleles and gel resolution were highest; therefore, ten additional colonies were typed and, in total, nine alleles were scored, with a maximal allelic interval of 120 base pairs and 53% heterozygous colonies. The high levels of microsatellite polymorphism provide a new tool as individual markers for studies on aspects of the botryllid polymorphic allorecognition system.
Mid
[ 0.6153846153846151, 29, 18.125 ]
Brain tissue volumes and small vessel disease in relation to the risk of mortality. Brain atrophy and small vessel disease increase the risk of dementia and stroke. In a population-based cohort study (n=490; 60-90 years) we investigated how volumetric measures of atrophy and small vessel disease were related to mortality and whether this was independent of incident dementia or stroke. Brain volume and hippocampal volume were considered as measures of atrophy, whereas white matter lesions (WML) and lacunar infarcts reflected small vessel disease. We first investigated all-cause mortality in the whole cohort. In subsequent analyses we censored persons at incident dementia or incident stroke. Finally, we separately investigated cardiovascular mortality. The average follow-up was 8.4 years, during which 191 persons died. Brain atrophy and hippocampal atrophy, as well as WML, increased the risk of death. The risks associated with hippocampal atrophy attenuated when censoring persons at incident dementia, but not at incident stroke. Censoring at either incident dementia or stroke did not change the risk associated with brain atrophy and WML. Moreover, WML were particularly associated with cardiovascular mortality.
High
[ 0.6614987080103361, 32, 16.375 ]
There is no way out whatsoever if we want to keep oblivious stock market prices, irrational growth of welfare rights, and peace altogether at the same time. But I can suggest some ideas that could be implemented within a week (o tempora, o mores, o Narendra Modi), for a good start.

1) Go back to sound money

Replicate basic legislation according to the Peel Banking Act of 1844, but without exempting the demand deposits from the legal requirement of a 100-percent reserve, which it did demand with respect to the issuance of paper money. Yes, that is killing commercial banking as we know it. Innovation please. Open the mint to new metal discoveries. Reinstate the 90-day real bill market. Anchor national currency units to a metal value according to which all debts can be extinguished upon ultimate payment in metal. Central banking's utter fraud is that they can print unlimited amounts of currency. But they are unable to print a single REAL cup of milk. Of course this would disable oblivious P/E for all ! So ? Are we no longer able to withstand an ever-changing state of affairs, including poverty? Of course this would create commodities wars at home too ! So ? Are we too lazy to fight wars at home all of a sudden ? Of course this will kill the lunacy of financing long-term maturing goods (like a mortgage) with overnight phantom repos rehypothecated several times at the central bank as "eligible collateral". Of course this would considerably shrink the current welfare state ! So ? Have we already been so badly sugar-coated that we feel no longer handsome eating without all our teeth?

2) Chop down welfare state

Welfare state is mandatory, meaning that whatever happens to the economic cycle, the government MUST provide by law a predetermined amount of goods and services for the needy. An example would do great. In Spain, 2007 was base 100. By 2037, Spaniards must provide up to 30 GDP points in additional welfare state to that of 2007. Whatever happens ! No human nature can deliver such compounded gains in productivity. But such is the rule of law. Until central bankers print their first real glass of fresh milk, welfare state should be postponed. Wars yes. Revolutions, welcome ! History is made of wars. Or are we just bringing conflict to a halt...? By currency printing ? That would be THE real first !!
When the needy couldn't stand it anymore, they rose up in revolutions to grab the assets of the dominant class, or they grabbed assets from their neighboring countries. So, the needy: rise up from your sofas, guys! Sponsored nap time is over. Provided sound money is reinstated and the welfare state has been chopped down, the rest is a walk in the park.

4) Tax reform. Fairness and redistribution

- Eliminate by whatever means all tax havens.
- Replace all existing taxes with a minuscule transaction tax on all flows of capital. According to a well-known study by Mark Chesney, a 0.2% levy would pay for all currently existing taxes! This has a major redistributive effect, as the working class would be taxed 0.2% on receiving income and an additional 0.2% as they purchased goods, making taxation a 0.4% grand total at most. Richer people would be taxed 0.2% every time they moved their zillions, and so on.

----

Entire civilizations, much greater than ours, have collapsed due to abandonment of sound money, by mandatory spending, by overspending, and/or by unfair overtaxing. This time is different, right?

I concur with all of Mr. Shannon's remarks. My note is to say that a major addition to the problem is that the banking industry is a complex system; one of the primary approaches to reducing instability in a complex system is to increase diversity in all dimensions. Instead, we have witnessed increasing concentration in the sector. In addition, attempts to buffer prospective downward feedback loops by regulation have met with skepticism on the part of some economists, especially the very influential Alan Greenspan, and so have been disabled and/or removed. If, as seems likely, China's banking sector is inadequate, in whatever sense becomes relevant, to stem the acceleration towards disaster when the next cycle begins, who will remain as a still-productive and profitable outpost in the three major economic zones? I hope to see more thoughtful articles on this topic such as this one, both here and elsewhere, in the near future.

Thank you for this great piece. It's a subject that doesn't get enough attention, and we know it's going to happen; therefore, it makes impeccable sense to begin preparing for it now -- as your essay alludes. I hope to see Per Kurowski's comment here in the coming days; he has thoroughly researched and written extensively on America's banking sector, especially in regards to how to enhance structural solvency for the banking sector. His thinking is one order of magnitude better than the present regulatory system's. I've written about China becoming the next real estate bubble and, conceivably, it could be 4.5 to 6 times worse than the 2008 crash in the U.S., and it could come *as late as* 2022. But I'd expect it before then.

In early 2009, all standing banks were saved from the slaughterhouse by the magic of amending FASB rule number 157. From that day onwards, banks' balance sheet statements have been plain fiction, and any resemblance to reality is mere coincidence. It is upon this fictional land that regulators have built "buffers" to withstand the coming economic seizure. Ultra-loose monetary policy has suppressed risk (yield) pricing from securities valuation, making the securities market an almost totally unproductive risk-free space, as at least in theory every Treasury-issued bond will be bid for (indirectly) by outright central bank reserve creation. Risk pricing experts will concentrate their skills on pricing the risks that influence a given currency's bid.
In all likelihood, the next big dislocation will come from a sudden removal of a given currency bid. Imagine trust in the dollar vanished overnight. The dollar bid would disappear. And all dollar-denominated "risk-free" securities would be priced at cents on the dollar. Fictional accounting, together with the outlandish creation of dubious buffers, has shifted risk pricing from the securities market to the currency markets. This tectonic shift will make the next crisis (whenever it comes) several orders of magnitude greater than the previous one.
Mid
[ 0.563876651982378, 32, 24.75 ]
Q: jQuery .first-child syntax

As a beginner I struggle with the jQuery syntax. I realize that the following code is NOT selecting the first child (the div). My question: why is the syntax wrong, and how should I select the div in the previous anchor element? Can someone answer this simple question for me or point me in the right direction? (I did search but I just can't find the answer.) Many thanks in advance!

function hilight(a) {
    $('a').prev().first-child.css({"backgroundColor":"#ffffff","color":"#000000"});
}

<div>
    <a href="#" class="bttn"><div class="bttn">Button</div></a>
    <a href="#" class="image" onmouseover="hilight(this)" onmouseout="normal(this)">
        <img src="imgage.png"/>
    </a>
</div>

A: There are 2 problems:

function hilight(a) {
    // use a as a variable reference & use .children() to find the first child
    $(a).prev().children(':first-child').css({
        "backgroundColor": "#ffffff",
        "color": "#000000"
    });
}

See .children() and the :first-child selector. Since you are using jQuery, prefer to use jQuery event handlers (mouseenter, mouseleave, hover) instead of inlined handlers. Demo: Fiddle
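A minimal sketch of the handler-based approach the answer recommends. The a.image selector matches the question's markup, but the reset-to-default behavior on mouse leave is my assumption, not part of the original post:

// Sketch: bind hover handlers in script instead of onmouseover/onmouseout
// attributes; highlight the <div> inside the previous anchor while hovering.
$(function () {
    $('a.image').hover(
        function () { // mouseenter
            $(this).prev().children(':first-child')
                .css({ backgroundColor: '#ffffff', color: '#000000' });
        },
        function () { // mouseleave: setting '' clears the inline styles again
            $(this).prev().children(':first-child')
                .css({ backgroundColor: '', color: '' });
        }
    );
});

Binding in script this way also keeps the markup free of event attributes, which is the point the answer makes about jQuery event handlers.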
Low
[ 0.49868073878627905, 23.625, 23.75 ]
INTRODUCTION
============

The SMART trial recently closed recruitment after interim analyses demonstrated that persons undergoing a treatment interruption once their CD4 cell count reached \>350 cells/mm^3^, restarting once it fell to \<250 cells/mm^3^, experienced significantly worse outcomes compared to those randomised to continuous therapy \[[@R1]\]. These CD4 thresholds were chosen to provide a safety margin above 200 cells/mm^3^, the level at which current treatment guidelines recommend initiation of therapy. These disappointing results have rekindled the debate on whether the time is ripe for a "when to start" trial to evaluate the benefits and risks of initiating antiretroviral therapy (ART) at CD4 counts higher than the 200-350 cells/mm^3^ level at which it is currently considered safe to do so \[[@R2]\]. However, there are few published data to inform such a trial design \[[@R3]\]. We used data from CASCADE, a large collaboration of seroconverter cohorts with CD4 and viral load measurements available both before and after the initiation of therapy, to provide estimates of rates of AIDS and death at different CD4 categories for persons naïve to therapy as well as for those who started combination ART (cART). We also used the derived rates to assess the extent to which any differences in risk before and after cART initiation could be explained by the effect of cART on HIV RNA level.

PATIENTS AND METHODS
====================

The CASCADE collaboration includes 23 cohorts of persons with well-estimated dates of HIV seroconversion and has been described in detail elsewhere \[[@R4]\]. After exclusion of patients who started ART in the first 6 months following seroconversion, follow-up was categorised as either \"naïve\", comprising all follow-up of AIDS-free, antiretroviral-naïve individuals from their first CD4 cell count after 1 January 1997, or \"cART\", comprising all follow-up of patients once combination antiretroviral therapy was initiated after 1 January 1997, defined as at least 3 antiretroviral (ARV) drugs, 2 boosted Protease Inhibitors (PI), or one boosted PI plus one Non-nucleoside Reverse Transcriptase Inhibitor (NNRTI). A given individual could contribute follow-up in both categories. Follow-up under non-cART regimens was ignored. For each follow-up category, baseline was defined as the first visit at which an individual's follow-up qualified for inclusion in that category, and follow-up extended to the last visit in that category. For \"cART\", CD4 cell count and HIV RNA at baseline were the values closest to cART initiation, up to a maximum of 6 months prior to initiation. The following clinical events were studied: (i) new AIDS-defining event (ADE); (ii) new serious ADE (all AIDS events except recurrent bacterial pneumonia, oesophageal candidiasis, recurrent herpes simplex, pulmonary and extrapulmonary tuberculosis, and unspecified events); (iii) death; and two composite end-points: (iv) new ADE or death, and (v) new serious ADE or death. Patients who died from AIDS without a previous AIDS diagnosis were classified as ADE progressors. For patients who were not AIDS-free at inclusion in the study group, progression was defined by the occurrence of the first new clinical event. CD4 cell counts were measured with a median periodicity of 98 days and 91 days during the naïve and cART follow-up periods, respectively. CD4 cell counts were modelled using linear interpolation between consecutive measurements.
Viral load was determined with a median frequency of 105 days during naïve follow-up and 91 days during cART follow-up. CD4 cell count was categorized into five strata (\<200, 200-350, 350-500, 500-650 and ≥650 cells/mm^3^), and HIV RNA into three levels (\<4, 4-4.99, ≥5 log copies/ml) \[[@R3]\]. For each follow-up category, the incidence of each outcome was estimated within each CD4 cell count stratum. In order to test whether, for a given CD4 stratum, the risk of an event differed between ART-naïve patients and those who had started cART, we included \"cART\" as an indicator variable in a Poisson regression model with an interaction term between the CD4 cell count and cART. We adjusted for the effect of the following potential confounders: sex and exposure category (as a combined variable), and age. A separate model further adjusted for current HIV RNA. In assessing whether the risk of an event differed between ART-naïve individuals and those who had started cART within each CD4 count stratum, the interaction term between the CD4 cell count and the cART indicator was significant whether the risk of ADE was modelled excluding HIV RNA (p=0.002) or including it (p=0.03). This interaction term was, therefore, included in both models. Statistical analyses were performed using the SAS software package version 9.1 (SAS Institute, Cary, North Carolina, USA).

RESULTS
=======

A total of 7317 patients contributed 12 297 Person-Years (PY) of antiretroviral-naïve follow-up, with median baseline CD4 and HIV RNA of 477 cells/mm^3^ and 4.5 log copies/ml, respectively. After cART initiation, 6376 patients, of whom 3690 (58%) were pre-treated, contributed 28 864 PY. Of these 6376 patients, 3911 (61%) were also followed up as naïves, and hence contributed person-years to both categories. Median baseline CD4 and HIV RNA for the patients on cART were 310 cells/mm^3^ and 4.5 log copies/ml, respectively (Table **[1](#T1){ref-type="table"}**). The first cART prescription was PI-containing for 57%, NNRTI-containing for 30%, Nucleoside Reverse Transcriptase Inhibitor (NRTI) only for 10%, and another combination for 3%. Six months after cART initiation (3 to 9 months), 73% of previously ART-naïve patients experienced an increase of \>50 cells/mm^3^ from baseline and 79% achieved HIV RNA \<500 copies/ml. The corresponding values for pre-treated patients were 56% and 59%, respectively. Overall, 227 ADE, 146 serious ADE, and 100 deaths were observed during naïve follow-up, with corresponding numbers of 498, 335, and 360 events during cART follow-up. Event rates were higher with lower CD4 cell counts (Table **[2](#T2){ref-type="table"}**). For ART-naïve individuals, ADE rates were markedly higher in those with CD4 count below 500 cells/mm^3^ compared with higher CD4 counts, varying from 0.5 event/100 PY (95% Confidence Interval \[CI\] 0.2-0.7) for individuals with CD4 500-650 cells/mm^3^ and rising to 1.2 (0.8-1.5), 2.6 (1.8-3.2), and 21.8 (17.3-26.2) events/100 PY, respectively, at CD4 350-500, 200-350, and \<200 cells/mm^3^. The same trend was observed for serious ADE rates. For those who initiated cART, ADE and serious ADE rates were generally \<1 event/100 PY in CD4 categories \>350 cells/mm^3^. The risk of death remained at \<1 event/100 PY for CD4 \>200 cells/mm^3^. The risk of ADE or death overall was 2.5/100 PY and 2.6/100 PY for naïve and cART follow-up, respectively, and the respective risk of serious ADE or death was 1.9/100 PY and 2.1/100 PY.
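To make the model-based comparisons that follow easier to read, the Poisson regression described in the Methods can be sketched in equation form. This is my schematic notation, not the authors' exact SAS parameterisation:

$$\log E[Y] = \log(\text{PY}) + \beta_0 + \beta_1\,\text{cART} + \sum_{j}\beta_{2j}\,\text{CD4}_j + \sum_{j}\beta_{3j}\,(\text{cART} \times \text{CD4}_j) + \boldsymbol{\gamma}^{\top}\mathbf{x}$$

where Y is the number of events, the person-years PY enter as an offset, CD4_j are indicators for the CD4 strata, cART is the treatment indicator, and x holds the adjustment covariates (age and the combined sex/exposure category in Model 1, plus current HIV RNA strata in Model 2). The exponentiated coefficients of the cART terms within each CD4 stratum correspond to the relative rates reported in Table **[3](#T3){ref-type="table"}**.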
Without adjustment for current HIV RNA, the risk of ADE was nearly two-fold higher for ART-naïve individuals compared to those who started cART for the CD4 count categories below 500 cells/mm^3^, with risk increases of 58% (95% CI, 4-140), 78% (95% CI, 24-156), and 85% (95% CI, 46-135) for CD4 350-500, 200-350, and \<200 cells/mm^3^, respectively (Model 1, Table **[3](#T3){ref-type="table"}**). After adjustment for current HIV RNA, the risk of ADE became similar for naïves and for those who started cART, except when the CD4 count was below 200 cells/mm^3^, where the risk of ADE remained significantly higher for naïve individuals (Model 2, Table **[3](#T3){ref-type="table"}**). The data provide evidence that the risk of serious ADE was significantly higher for naïves compared to those who started cART only for CD4 \<350 cells/mm^3^, before adjusting for current HIV RNA (RR and 95% CI = 1.68, 1.07-2.62 and 2.06, 1.57-2.71, respectively, for CD4 strata 200-350 and \<200 cells/mm^3^). After adjustment for current HIV RNA, the risk of serious ADE within CD4 categories appeared similar for naïves and those who started cART, except when the CD4 count dropped below 200 cells/mm^3^, as with the risk of ADE. In contrast, at a fixed CD4 level, there was no association between treatment and risk of death, irrespective of adjustment for HIV RNA. When the analyses were restricted to AIDS-free patients at cART initiation, the same results were observed (not shown). Fig. (**[1](#F1){ref-type="fig"}**) shows the rates of ADE, serious ADE, and death estimated using Poisson regression. The differences in rates of clinical progression between the ART-naïve and cART follow-up, which were observed at CD4 counts 200-500 cells/mm^3^, disappeared once we adjusted for current HIV RNA. This suggests that CD4 and HIV RNA had the same prognostic value for naïves as for those who started cART, except for CD4 \<200 cells/mm^3^, where the independent effect of cART was more pronounced.

DISCUSSION
==========

Event rates are extremely high within the lowest CD4 cell stratum (\<200 cells/mm^3^), as reported by a number of studies \[[@R5]-[@R8]\], and fall to 0.5-1 event per 100 PY in those with current CD4 count \>350 cells/mm^3^. However, the risk of ADE was nearly halved after cART initiation compared to naïve follow-up when the CD4 count was below 500 cells/mm^3^. If it can be shown through a randomized controlled trial (RCT) that ART is indicated at these higher levels, this would have cost and operational implications for HIV treatment and care programmes in developing countries. This is not only because of the additional numbers who would be eligible for cART, but also because there would likely be no need for CD4 testing to evaluate whether an HIV-infected person is eligible for treatment. It is not surprising that at CD4 \>200 cells/mm^3^ and after adjustment for current HIV RNA, the relative risk of an event was similar during cART follow-up and naïve follow-up, because HIV RNA is a surrogate for being on cART. This finding contrasts with that reported from the Frankfurt HIV Outpatient Clinic cohort at the beginning of the cART era, of lower event rates for patients at the same CD4 cell count and viral load levels receiving a PI-containing regimen compared to those not on therapy \[[@R9]\]. The reason for this is not clear, but it is of note that much higher event rates were reported by that study compared to those observed in our own study, both for treated and naïve follow-up.
However, after adjusting for HIV RNA, an independent effect of cART was still observed in our study, with a 26% risk reduction at low CD4 cell counts. We noted relatively high death rates among ART-naïve patients with CD4 \>350 cells/mm^3^. These appear to be due to an excess of suicides, accidental deaths and deaths due to substance abuse within those CD4 strata, as these causes accounted for 53% (8/15) of deaths at CD4 \>650 cells/mm^3^, and for 66% (2/3) and 44% (8/18) of deaths in persons with CD4 500-650 and 350-500 cells/mm^3^, respectively. Our study has a number of limitations. Firstly, included in our cART category are persons who were naïve at the time of initiation as well as pre-treated individuals. Although this may tend to increase event rates, our observed rates were, in fact, lower than those reported by the UK Collaborative HIV Cohort Study (CHIC) \[[@R3]\]. It is important to note, however, that the ADE and serious ADE rates in CHIC included death. The exclusion of pre-treated individuals did not, in any case, have an effect on the evaluation of the prognostic value of CD4 and HIV RNA in cART-treated compared to naïve follow-up (data not shown). In addition, 28% and 16% of deaths among naïve and treated individuals, respectively, had no cause recorded, preventing us from evaluating whether the prognostic value of CD4 and HIV RNA is more pronounced for specific causes. Finally, individuals who happen to be on cART at given CD4 count and HIV RNA levels are different from those who are not. Although our analyses adjust for the potential effects of age and exposure category as confounders, given the observational nature of our study, unmeasured confounders remain, making it difficult to appropriately compare event rates for those on treatment with those who remain off it. In an attempt to limit the effect of this bias, we assigned events diagnosed in the first week of treatment initiation to the naïve category. Earlier access to therapy and better immunological and virological responses have been observed among homosexual men compared to other groups, particularly injecting drug users (IDU) \[[@R10]\]. A difference in clinical benefit of treatment according to patient characteristics, such as the transmission group, has also been reported \[[@R8]\]. A number of factors, such as use of health care, lifestyles, level of adherence to therapy, as well as the regimen itself, are important in determining event rates for those initiating cART and could explain differences in event rates between studies. These factors will also impact on an evaluation of differences in event rates between treated and untreated individuals who are at similar CD4 and HIV RNA levels.

CONCLUSIONS
===========

Comparing clinical event rates between ART-naïve and cART-treated follow-up within CD4 cell strata above 200 cells/mm^3^, the risk of AIDS before ART initiation is higher than the risk following cART initiation. However, whether cART should be initiated at higher CD4 levels than currently recommended in the guidelines can only be evaluated through a randomised controlled trial.

CASCADE has been funded through grants from the European Union BMH4-CT97-2550, QLK2-2000-01431, QLRT-2001-01708 and LSHP-CT-2006-018949.
Steering Committee:
===================

Julia Del Amo (Chair), Laurence Meyer (Vice Chair), Heiner Bucher, Geneviève Chêne, Deenan Pillay, Maria Prins, Magda Rosinska, Caroline Sabin, Giota Touloumi

**Co-ordinating Centre:** Kholoud Porter (Project Leader), Krishnan Bhaskaran (Scientific Co-ordinator), Sarah Walker, Abdel Babiker, Janet Darbyshire

**Clinical Advisory Board:** Heiner Bucher, Andrea de Luca, Martin Fisher, Cécile Goujard, Roberto Muga

Collaborators:
==============

**Australia** Sydney AIDS Prospective Study and Sydney Primary HIV Infection cohort (John Kaldor, Tony Kelleher, Tim Ramacciotti, Linda Gelgor, David Cooper, Don Smith); **Canada** South Alberta clinic (John Gill); **Denmark** Copenhagen HIV Seroconverter Cohort (Louise Bruun Jørgensen, Claus Nielsen, Court Pedersen); **Estonia** Tartu Ülikool (Irja Lutsar); **France** Aquitaine cohort (Geneviève Chêne, Francois Dabis, Rodolphe Thiebaut, Bernard Masquelier), French Hospital Database (Dominique Costagliola, Emilie Lanoy, Marguerite Guiguet), Lyon Primary Infection cohort (Philippe Vanhems), SEROCO cohort (Laurence Meyer, Faroudy Boufassa); **Germany** German cohort (Osamah Hamouda, Claudia Kucherer); **Greece** Greek Haemophilia cohort (Giota Touloumi, Nikos Pantazis, Angelos Hatzakis, Dimitrios Paraskevis, Anastasia Karafoulidou); **Italy** Italian Seroconversion Study (Giovanni Rezza, Maria Dorrucci, Benedetta Longo, Claudia Balotta); **Netherlands** Amsterdam Cohort Studies among homosexual men and drug users (Maria Prins, Liselotte van Asten, Akke van der Bij, Ronald Geskus, Roel Coutinho); **Norway** Oslo and Ulleval Hospital cohorts (Mette Sannes, Oddbjorn Brubakk, Anne Eskild, Johan N Bruun); **Poland** National Institute of Hygiene (Magdalena Rosinska); **Portugal** Universidade Nova de Lisboa (Ricardo Camacho); **Russia** Pasteur Institute (Tatyana Smolskaya); **Spain** Badalona IDU hospital cohort (Roberto Muga), Barcelona IDU Cohort (Patricia Garcia de Olalla), Madrid cohort (Julia Del Amo, Jorge del Romero), Valencia IDU cohort (Santiago Pérez-Hoyos, Ildefonso Hernandez Aguado); **Switzerland** Swiss HIV cohort (Heiner Bucher, Martin Rickenbach, Patrick Francioli); **Ukraine** Perinatal Prevention of AIDS Initiative (Ruslan Malyuta); **United Kingdom** Edinburgh Hospital cohort (Ray Brettle), Health Protection Agency (Valerie Delpech, Sam Lattimore, Gary Murphy, John Parry, Noel Gill), Royal Free haemophilia cohort (Caroline Sabin, Christine Lee), UK Register of HIV Seroconverters (Kholoud Porter, Anne Johnson, Andrew Phillips, Abdel Babiker, Janet Darbyshire, Valerie Delpech), University College London (Deenan Pillay), University of Oxford (Harold Jaffe).
![Estimates of rates and 95% CI of (a) AIDS-defining event (ADE), (b) serious ADE, (c) death, before (circle) and after (square) initiation of combination antiretroviral therapy (cART), using a Poisson regression model adjusting for age and exposure category/sex (hollow symbols), or adjusting for age, exposure category/sex and current HIV RNA strata (solid symbols), with an interaction term between the cART indicator and CD4 strata](TOAIDJ-2-3_F1){#F1}

###### Characteristics of Patients Before and After the Initiation of Combination Antiretroviral Therapy (cART)

----------------------------------------------------------  ---------------  ---------------
                                                             ART-Naïve        cART
Baseline\*
Number of patients                                           7317             6376
Age (yrs), median (IQR°)                                     33 (28-39)       35 (30-41)
Exposure category/sex (%)
  Sex between men                                            4074 (56)        3269 (51)
  Male: injecting drug use                                   685 (9)          703 (11)
  Female: injecting drug use                                 348 (5)          396 (6)
  Male: sex between men and women                            698 (9)          616 (10)
  Female: sex between men and women                          1180 (16)        1059 (17)
  Other/not known                                            332 (5)          333 (5)
Time (yrs) from seroconversion to baseline, median (IQR)     1.1 (0.5-3.3)    4.6 (1.8-7.9)
CD4 (cells/mm3), median (IQR)                                477 (340-648)    310 (201-447)
HIV RNA (log10 copies/ml), median (IQR)                      4.5 (3.9-5.1)    4.6 (3.8-5.1)
Follow-up
Number of Person-Years                                       12 297           28 864
Duration of follow-up (yrs), median (IQR)                    0.8 (0.2-2.5)    4.6 (2.1-6.9)
Number of CD4 measurements, median (IQR)                     3 (2-7)          13 (6-23)
Number of HIV RNA measurements, median (IQR)                 3 (1-6)          12 (5-22)
Cumulative time spent in CD4 strata, years (%)
  \>650                                                      2981 (24)        7357 (25)
  500-650                                                    2643 (21)        5406 (19)
  350-500                                                    3520 (29)        6403 (22)
  200-350                                                    2040 (17)        5004 (17)
  \<200                                                      458 (4)          2654 (9)
  missing                                                    653 (5)          2039 (7)
----------------------------------------------------------  ---------------  ---------------

\* 3911 patients contributed to both categories of follow-up.
Baseline characteristics are measured at the first visit for \"ART-naïve\" and at treatment initiation for \"cART\" ^°^IQR : interquartile range ###### Crude Incidence Rates (95% Confidence Intervals) Per 100 Person-Years (PY), Number with Event and PY of Follow-Up in Brackets \[\], According to Current CD4 Cell Count (Cells/mm3) Before and After the Initiation of Combination Antiretroviral Therapy (cART) ------------------- ------------------ ------------------ ---------------   CD4 ART-Naïves cART ADE^°^ \>650 0.7 (0.4-1.0) 0.4 (0.3-0.6) \[20/2967\] \[31/7207\] 500-649 0.5 (0.2-0.7) 0.6 (0.4-0.8) \[12/2633\] \[30/5258\] 350-499 1.2 (0.8-1.5) 0.7 (0.5-1.0) \[41/3494\] \[47/6201\] 200-349 2.6 (1.8-3.2) 1.4 (1.1-1.8) \[51/1997\] \[69/4734\] \<200 21.8 (17.3-26.2) 11.7 (10.3-13.2) \[92/422\] \[260/2214\] Missing 1.7 (0.7-2.8) 3.1 (2.3-3.8) \[11/625\] \[61/1980\] Serious ADE \>650 0.3 (0.1-0.5) 0.3 (0.2-0.4) \[10/2977\] \[19/7266\] 500-649 0.3 (0.1-0.5) 0.3 (0.2-0.5) \[7/2640\] \[16/5321\] 350-499 0.6 (0.3-0.8) 0.5 (0.3-0.6) \[20/3512\] \[30/6280\] 200-349 1.6 (1.1-2.2) 1.0 (0.7-1.3) \[33/2020\] \[47/4831\] \<200 16.1 (12.3-19.9) 7.8 (6.7-8.9) \[71/440\] \[185/2366\] Missing 0.8 (0.1-1.5) 1.9 (1.3-2.8) \[5/639\] \[38/2001\] Death \>650 0.5 (0.2-0.8) 0.3 (0.2-0.5) \[15/2981\] \[24/7357\] 500-649 0.1 (0.0-0.2) 0.4 (0.2-0.5) \[3/2643\] \[19/5406\] 350-499 0.5 (0.3-0.7) 0.6 (0.4-0.8) \[18/3520\] \[38/6403\] 200-349 0.5 (0.2-0.8) 0.9 (0.7-1.2) \[11/2040\] \[47/5004\] \<200 4.6 (2.6-6.5) 5.9 (5.0-6.9) \[21/458\] \[158/2654\] Missing 4.9 (3.2-6.6) 3.6 (2.8-4.5) \[32/653\] \[74/2039\] ADE/death \>650 1.1 (0.7-1.5) 0.7 (0.5-0.9) \[33/2967\] \[52/7207\] 500-649 0.6 (0.3-0.8) 0.9 (0.6-1.1) \[15/2633\] \[46/5258\] 350-499 1.6 (1.2-2.0) 1.3 (1.0-1.6) \[56/3494\] \[82/6201\] 200-349 3.1 (2.3-3.8) 2.2 (1.8-2.6) \[61/1997\] \[105/4734\] \<200 23.2 (18.6-27.8) 14.6 (13.0-16.2) \[98/422\] \[324/2214\] Missing 6.2 (4.3-8.2) 5.2 (4.2-6.2) \[39/625\] \[104/1980\] Serious ADE/death \>650 0.8 (0.5-1.1) 0.6 (0.4-0.8) \[23/2977\] \[42/7266\] 500-649 0.4 (0.1-0.6) 0.6 (0.4-0.8) \[10/2640\] \[32/5321\] 350-499 1.1 (0.7-1.4) 1.0 (0.8-1.3) \[38/3512\] \[65/6280\] 200-349 2.1 (1.5-2.8) 1.8 (1.4-2.2) \[43/2020\] \[86/4831\] \<200 19.1 (15.0-23.2) 11.5 (10.2-12.9) \[84/440\] \[273/2366\] Missing 5.5 (3.7-7.3) 4.6 (3.7-5.6) \[35/639\] \[93/2001\] ------------------- ------------------ ------------------ --------------- ^°^ADE : AIDS defining event. 
###### Effect of Combination Antiretroviral Therapy (cART) on Event Rates --AIDS Defining Event (ADE), Serious ADE, or Death-- Within CD4 Cell Strata: Relative Rate (95% Confidence Interval) of Clinical Progression for ART-Naïve Follow-Up Compared to cART Follow-Up

-------------  --------------------------  ------------------  ------------------
               CD4 Strata (cells/mm^3^)    Model 1             Model 2
                                           RR (95% CI)         RR (95% CI)
ADE            \>650                       1.60 (0.91-2.81)    1.08 (0.62-1.92)
               500-650                     0.82 (0.42-1.60)    0.55 (0.28-1.09)
               350-500                     1.58 (1.04-2.40)    1.04 (0.68-1.60)
               200-350                     1.78 (1.24-2.56)    1.15 (0.79-1.66)
               \<200                       1.85 (1.46-2.35)    1.35 (1.06-1.72)
Serious ADE    \>650                       1.37 (0.64-2.95)    0.93 (0.43-2.03)
               500-650                     0.91 (0.38-2.22)    0.62 (0.25-1.53)
               350-500                     1.21 (0.69-2.14)    0.80 (0.45-1.42)
               200-350                     1.68 (1.07-2.62)    1.07 (0.68-1.69)
               \<200                       2.06 (1.57-2.71)    1.48 (1.12-1.95)
Death          \>650                       1.78 (0.93-3.41)    1.79 (0.93-3.43)
               500-650                     0.38 (0.11-1.30)    0.37 (0.11-1.25)
               350-500                     1.02 (0.58-1.79)    0.95 (0.53-1.68)
               200-350                     0.67 (0.35-1.28)    0.61 (0.31-1.19)
               \<200                       0.80 (0.51-1.26)    0.79 (0.50-1.25)
-------------  --------------------------  ------------------  ------------------

Estimates of event rates using a Poisson regression model adjusting for age and exposure category/sex (Model 1), or for age, exposure category/sex and current HIV RNA strata (Model 2), with an interaction term between the cART indicator and CD4 strata.
Mid
[ 0.619607843137254, 39.5, 24.25 ]
Questions to Ask Before You Buy a Pet Health Insurance Plan

How can I find a reputable provider of pet insurance policies?

Asking your vet to recommend a plan is a good first step. They're not permitted to sell pet insurance, Sullivan says, so you don't have to worry about them pushing "their" plan. She also tells WebMD that your vet will likely recommend a plan that other clients have had success with. No single organization, as yet, sets policy or standards for pet health insurance, but plans are regulated state by state by the state attorney general's office. Sullivan says you can call your state attorney general's office and ask if any complaints have been filed against the company or companies you are considering. You can also ask others with the plan to tell you about their experiences.

What should I look for when shopping for a pet insurance policy?

It sounds obvious, but try to fit the policy to your pet's needs and your own. If you can easily handle routine vaccination expenses for one pet, you may not need a wellness coverage policy. But if you have four dogs or cats, such a plan might be cost-effective. If you have questions after reading the marketing material, call the company and ask what is covered and what isn't. If you have more than one pet, Sullivan suggests you ask if you can get a group rate. Two dogs in a single household might get a group rate, but it's less likely to be given to a dog and a cat under the same roof.

When should I buy pet medical insurance?

"Better late than never is one approach," Klingborg says. "However, it would make sense to look into it if you are bringing a new animal home."

"A lot of these companies focus on wellness care," Klingborg says. "So they will often provide very good reimbursement for the vaccination series and for spaying and neutering. All of those [services] are associated with fewer animal [health] problems in the future."

If you wait, a health condition in an older dog or cat might rule out coverage. Sullivan, for instance, can't get coverage for her 16-year-old Lab, who has heart disease and a history of ear and knee problems. You might ask your vet if your particular breed has a tendency for certain health problems, especially expensive or complicated problems, Tait says. "Saint Bernards and German Shepherds are prone to orthopedic problems," he says. "Boxers are prone to heart problems. Little dogs, like pugs, often have breathing problems."
Mid
[ 0.6260869565217391, 27, 16.125 ]
The fallacy of 'free' trade is more than a fallacy of 'free' trade

Now that Donald Trump has begun a trade war that has, among other things, prompted Harley Davidson to shift some production overseas to avoid European Union tariffs,[1] it would seem that at least some so-called "free traders" may be rethinking. It's a bit overdue: controversy in the U.S. over the North American Free Trade Agreement and the TransPacific Partnership turned largely on job losses that followed NAFTA.[2] Ana Campoy cites Catherine Novelli, writing that "[n]o one really wants to get rid of NAFTA, or trade. What they want is a job that allows them to support their family and live with dignity."[3] And, for a decade following the 2003 Doha round on trade, Megan McArdle writes in retrospect, "it was free-traders who were fighting a holding action, earnestly debating the merits of better-than-nothing bilateral or regional trade agreements. Then came Brexit. And Trump. Suddenly we're no longer even thinking about minor advances; we're thinking about how to manage the retreat."[4]

It's too soon at least to point even to a stream of self-criticism; this is more like a couple of drops out of a spigot. But Campoy looks to Zen Buddhism, writing that "globalists" (advocates of economic globalization) need to "let[] go of notions of how things should be, to instead accept things as they are."[5] She needn't have gone quite so far, either in antiquity or in culture: she quotes sixth-century Zen master Seng-ts'an, but she's in fact talking about the naturalistic fallacy:

[David] Hume himself drew the distinction, in a famous passage, between judgments about how things are and judgments about how things ought to be. Normative judgments naturally come with views about what one ought to think, do, or feel. And the Positivist picture is often thought to be Humean in part because Hume insisted that the distinction between "is" and "ought" was, as he said, "of the last consequence." Like desires, oughts are intrinsically action guiding, in a way that is isn't. And so, in the familiar slogan, "you can't get an ought from an is." Since we are often tempted to move from what is to what ought to be, this move, like many moves philosophers think illicit, has a disparaging name: we call it the naturalistic fallacy.[6]

I'm more skeptical. As a critical theorist, I look at the words "free trade" and immediately demand to know, 'free' for whom? to do what? to whom? I see the inherent inequity of an exchange system: Max Weber labeled it "the most elemental economic fact" that the market inherently favors whoever has the greater power to say no, that is, to decline a deal, or to hold out for a better one. Further, he explained, the benefits and handicaps that accrue from each transaction are cumulative. "Other things being equal," Weber wrote,

the mode of distribution monopolizes the opportunities for profitable deals for all those who, provided with goods, do not necessarily have to exchange them. It increases, at least generally, their power in the price struggle with those who, being propertyless, have nothing to offer but their labor or the resulting products, and who are compelled to get rid of these products in order to subsist at all.[7]

Ask anyone who works for tips: who is least generous? They'll tell you it's the rich: the filthier they are, the stingier they are.
From the robber barons of the industrial revolution to today's neoliberals, the refrain is always the same: to be "competitive," 'we' (meaning all of us) must be "efficient," that is, 'we' (meaning the rich) must extract the maximum at the least possible cost, cruelly devaluing human beings, society, and the environment along the way.

McArdle wants to treat the situation today as unique, blaming China:

The analytical mistake was underestimating the effect that China's accession to the WTO would have on domestic industries in the rich world. When workers complained about trade displacement, we free-traders pointed out that trade creates jobs as well as destroys them, leaving workers generally better off. That's usually true. But China was a special case. Most trade liberalization occurs slowly, giving workers time to adjust, but when trade barriers to Chinese goods fell, manufacturing workers in the 37 nations of the Organization for Economic Cooperation and Development were suddenly exposed to competition from millions of low-wage workers. Recent research by economists David H. Autor, David Dorn and Gordon H. Hanson suggests that the "China shock" destroyed jobs faster than they could be created.[8]

Campoy is a little smarter, writing that "[t]o undo the damages from trade—and the backlash they are generating—free traders have to start by acknowledging their connections to the people who are bearing the costs of free-trade policies."[9] But these connections are not merely economic. The problem here is more fundamentally one of arrogance: most obviously, of assuming that it's okay to treat people like machines, but also of elites (political, economic, and academic) substituting their own notions of what's best, the same notions that feather their own nests, for the lived experience of the subjects they rule.

Author: benfell
David Benfell holds a Ph.D. in Human Science from Saybrook University. He earned a M.A. in Speech Communication from CSU East Bay in 2009 and has studied at California Institute of Integral Studies. He is an anarchist, a vegetarian ecofeminist, a naturist, and a Taoist.
Mid
[ 0.540229885057471, 29.375, 25 ]
Q: Disambiguating the "clustering" and "cluster" tags

clustering and cluster on Stack Overflow are quite ambiguous. I'll cite the computer-related terms from the Wikipedia disambiguation page for Clustering:

- A result of cluster analysis. (Type 1)
- An algorithm for cluster analysis, a method for statistical data analysis. (Type 1)
- Cluster (computing), the technique of linking many computers together to act like a single computer. (Type 2)
- Data cluster, an allocation of contiguous storage in databases and file systems. (Type 3)
- In hash tables, the mapping of keys to nearby slots. (Type 3)
- The formation of clusters of linked nodes in a network, measured by the clustering coefficient. (Type 1)

There seem to be three popular meanings: various types related to cluster analysis (1), cluster computing (2), and data storage/databases (3). There are a number of related tags on Stack Overflow:

- clustering 681 (? probably mostly Type 1)
- cluster 475 (? probably mostly Type 2)
- cluster-computing 120 (Type 2)
- dataclustering 42 (Type 1)
- clustered-index 132 (Type 3)
- clustered 18 (Type 3?)
- clusters 16 (? probably mostly Type 2)
- database-cluster 7 (mixture of Types 2 and 3?)

In my opinion these tags would be more useful on SO if we actually did not have "cluster" and "clustering" tags (which are inherently used for both meanings); typing "cluster" would instead bring up the suggestions [cluster-analysis] and [cluster-computing]. In particular, as most questions use these meanings, those tags should show up at the beginning of the suggestion list. What do you think?

A: dataclustering already has a tag wiki describing it as cluster-analysis, so we should probably merge dataclustering into cluster-analysis. Now that we have a cluster-analysis tag, we can make clustering and data-clustering synonyms of cluster-analysis. We can make cluster and clusters synonyms of cluster-computing, and we can make clustered a synonym of clustered-index. And, of course, fix up all the tag wikis.
Mid
[ 0.6136919315403421, 31.375, 19.75 ]
Thursday, September 6, 2018

Six Years in Shanghai...Too Many Adventures to Count

Yep! Sightseeing on our initial visit: Yu Gardens in August (boiling!)

The six years in Shanghai have flown by, full of new friends, goodbyes, travels, fun of all kinds, new jobs and exciting work, and the ups and downs of life in China. I'm so happy I documented my travels so well at the beginning and so disappointed I've stopped. But, on the other hand, I'd rather be soaking up all life has to offer than writing about it.

Recently, I was a guest on the Expat Rewind podcast. The host, Stephanie, is one of those new friends I've been lucky to meet in Shanghai. She attends the Podcast Brunch Club I run and we're also in book clubs and other groups together (there are so many more of these now than 6 years ago!). On her podcast, Stephanie asks people to read something they wrote online in their first year abroad and reflect back on it. I chose my post "The Ever Present ____ of China". You can hear my interview with Stephanie on that episode.

This really got me looking back at the blog, thinking I should pick it back up, and generally reflecting on my adventures. I don't even know where to begin in covering everything I've experienced during this time. I've made great friends, visited almost every country in Asia, traveled around China's main tourist spots (and a few less visited), explored countless lanes in Shanghai, learned quite a bit about China and Shanghai, eaten at some of the most amazing restaurants in the world, and experienced all the ups and downs of daily life in Shanghai.

I always describe Shanghai as vibrant and dynamic. I can't pick two better words.

Vibrant... There is never a lack of things to do, people to meet and places to explore. I've really seen those opportunities (for a foreigner, in particular) blossom in the past few years. Scroll through meetup.com and you can find city walks, hiking groups, art groups, book clubs and something for nearly any interest. Each weekend an art group I'm part of goes on outings to various galleries and museums, never running out of new places to see. There are countless travel groups if you want to take anything from a day trip to a nearby village to a sojourn to Tibet and Everest Base Camp with others.

This international city with over 25 million people has a pulse all its own. I live at its very heart, not far from East Nanjing pedestrian street, a neon-lined shopping street that's always crowded except at about 4-5 AM. People's Square, my neighborhood, is the city center and was the racing grounds back in the day when foreigners sat at the nearby hotels/clubs watching the races with their tea and cocktails. You can still see the array of art deco architecture from that time, mixed in with busy streets and double-decker tourist buses. On weekends, you can barely walk through the park as it's so packed with elderly people advertising their children at the "marriage market". And old-school, lively Shanghai shopping takes place one street behind my home. Older people doing their daily shopping come out early to find tonight's dinner among the tubs of live seafood, small vegetable shops and butchers. Street food vendors offer starchy, oily goodies to get you going for the day. Colors, sounds, and a solid mix of old and new abound. One street looks like the future, the other a step into a China past that is slowly disappearing.
Within one street you see sleek hotels, office towers and luxury cars on display, and the bicycle repairman who's camped out to repair the more traditional means of transport.

Dynamic... Leaving for vacation means coming back to a changed city. You might find your favorite new restaurant has closed, or perhaps the entire block of homes nearby is gone. You'll have 10-12 new restaurants to try, though, and plenty of new sites to see. Even long-held traditions and culture change at a pace you don't typically see elsewhere. Adoption of technology happens so fast, it skips over 3 or 4 iterations seen elsewhere. Rules change too fast to keep up. You always have to assume they'll be different, ask lots of questions and persist.

The way the government can implement change astounds. We overlook a massive elevated highway, and it used to echo with honking from dawn till midnight. I even said in my old blog post that this would never change. Never say never here. They started levying big fines for cars honking and made that problem disappear. Scooters, unfortunately, don't abide by those rules, so it's still far from silent.

Smoking is another impressive rule change. They had half-heartedly banned indoor smoking before, but then got serious about it. Importantly, they created significant fines for the businesses so they'd serve as enforcers rather than complicit rule breakers. Poof...the indoor smoking went up in smoke (for the most part). Like so many things, this is an example of how China has shown me the subtle pros and cons of different ways. It's highly complex governing the world's largest population. I'm not condoning all of its behavior by any means, but living here does teach you not to judge so quickly. Or with your own cultural lens, without taking the time to observe and learn.

I'm glad to be back to blogging, though I won't promise to keep it up. But I'm hoping to do a couple more reflection posts before we leave. And, perhaps, to document more of the trips which I've missed writing about here (there are tons of photos on Facebook!). In the meantime, there are always more Shanghai adventures.

Visiting the "Hidden Library" recently in Shanghai, a Ming and Qing era home which has not been remodeled
High
[ 0.669322709163346, 31.5, 15.5625 ]
Q: do sequelize associations do anything on the DB side

I've been using sequelize a lot in recent projects and I'm curious about what happens under the hood for associations vs migrations. For example, when I generate 2 models:

user = { id, name }

and

post = { id, name }

and then I generate a migration to add the associated columns:

module.exports = {
  up: (queryInterface, Sequelize) => {
    return queryInterface.addColumn(
      'posts',
      'userId', // name of the key we're adding
      {
        type: Sequelize.UUID,
        references: {
          model: 'users', // name of Target model
          key: 'id', // key in Target model that we're referencing
        },
        onUpdate: 'CASCADE',
        onDelete: 'SET NULL',
      }
    );
  },
  down: (queryInterface, Sequelize) => {
    return queryInterface.removeColumn(
      'posts', // name of Source model
      'userId' // key we want to remove
    );
  }
};

what does the associate method in the model do if the migration above adds the actual userId column to the posts table? Example of an associate method in a model:

module.exports = (sequelize, DataTypes) => {
  const post = sequelize.define('post', {
    name: DataTypes.TEXT
  }, {});
  post.associate = function(models) {
    post.belongsTo(models.user);
  };
  return post;
};

Which raises a bigger question: if the associate method ends up creating the actual foreign key column in the db, is an intermediate migration (like the one shown above, which creates the foreign key columns) necessary to create the foreign key column?

A: TL;DR: Sequelize associations do not do anything on the DB side, meaning they can't create tables, add columns, add constraints, etc.

Disclaimer: I might not have covered all the benefits/differences of both in this answer; this is just an abstract.

1) Here is how I differentiate the Model from the Migration (based on functionality):

The Migration creates tables, adds constraints, etc. on the DB.

The Model makes it easier for you as a developer to interact with the table that corresponds to that Model on the DB. For example, a User model helps you interact with the Users table without the need to write SQL queries.

2) The associate methods give you two special powers, lazy loading and eager loading, which both spare you the headache of doing JOINs manually through raw SQL queries. So yeah, again: the Model spares you the headache of writing raw SQL queries yourself.
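A minimal sketch of the lazy and eager loading the answer mentions, using the user/post models from the question. The db object and the './models' index path are my assumptions about a conventional project layout, not part of the original thread:

const db = require('./models'); // hypothetical models index that wires up associate()

async function eagerLoad() {
  // Eager loading: a single query JOINs posts with their associated user,
  // courtesy of post.belongsTo(models.user).
  const posts = await db.post.findAll({ include: db.user });
  posts.forEach(p => console.log(p.name, 'written by', p.user && p.user.name));
}

async function lazyLoad() {
  // Lazy loading: fetch the post first, then its user in a second query.
  const post = await db.post.findOne();
  const author = await post.getUser(); // accessor generated by belongsTo
  console.log(author.name);
}

Either way, Sequelize only issues SELECTs here; the userId column and its foreign key constraint still have to come from the migration.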
Low
[ 0.49224806201550303, 31.75, 32.75 ]
Q: UITableView within a scrollview not working

I'm a newbie to the iOS SDK. I've created a sample app. In that app I'm using a tableview, and on tapping any cell in the table view I push a ScrollViewViewController (a scrollView with a pageControl) onto it. In ScrollViewViewController I load the views, each of which has a tabBarController. In the tabBar there are 5 tabBarItems, and the second tabBarItem corresponds to a viewController which has a tableView as a subview.

Here is the problem: on the second tabBarItem, the tableView scrolls fine on the first page of the scrollView. But once I swipe to the next page in the scrollView, and on that page I select the second tabBarItem (which loads a tableView), that tableView does not scroll.

Please help me. I've been stuck here for more than a month :( Thanks in advance.

A: This approach isn't really recommended. The problem is that UITableView inherits from UIScrollView. It's actually written in the UIWebView documentation: "Important: You should not embed UIWebView or UITableView objects in UIScrollView objects. If you do so, unexpected behavior can result because touch events for the two objects can be mixed up and wrongly handled." http://developer.apple.com/library/ios/#documentation/uikit/reference/UIWebView_Class/Reference/Reference.html
Low
[ 0.5045454545454541, 27.75, 27.25 ]
Preventing hospital-acquired venous thromboembolism: Improving patient safety with interdisciplinary teamwork, quality improvement analytics, and data transparency. Hospital-acquired venous thromboembolism (HA-VTE) is a potentially preventable cause of morbidity and mortality. Despite high rates of venous thromboembolism (VTE) prophylaxis in accordance with an institutional guideline, VTE remains the most common hospital-acquired condition in our institution. To improve the safety of all hospitalized patients, examine current VTE prevention practices, identify opportunities for improvement, and decrease rates of HA-VTE. Pre/post assessment. Urban academic tertiary referral center, level 1 trauma center, safety net hospital; all patients. We formed a multidisciplinary VTE task force to review all HA-VTE events, assess prevention practices relative to evidence-based institutional guidelines, and identify improvement opportunities. The task force developed an electronic tool to facilitate efficient VTE event review and designed decision-support and reporting tools, now integrated into the electronic health record, to bring optimal VTE prevention practices to the point of care. Performance is shared transparently across the institution. Harborview benchmarks process and outcome performance, including patient safety indicators and core measures, against hospitals nationally using Hospital Compare and Vizient data. Our program has resulted in >90% guideline-adherent VTE prevention and zero preventable HA-VTEs. Initiatives have resulted in a 15% decrease in HA-VTE and a 21% reduction in postoperative VTE. Keys to success include the multidisciplinary approach, clinical roles of task force members, senior leadership support, and use of quality improvement analytics for retrospective review, prospective reporting, and performance transparency. Ongoing task force collaboration with frontline providers is critical to sustained improvements. Journal of Hospital Medicine 2016;11:S38-S43. © 2016 Society of Hospital Medicine.
High
[ 0.764705882352941, 30.875, 9.5 ]
<?xml version="1.0"?> <package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd"> <metadata> <id>Owin</id> <version>1.0</version> <title>OWIN</title> <authors>OWIN startup components contributors</authors> <owners>OWIN startup components contributors</owners> <licenseUrl>https://github.com/owin-contrib/owin-hosting/blob/master/LICENSE.txt</licenseUrl> <projectUrl>https://github.com/owin-contrib/owin-hosting/</projectUrl> <requireLicenseAcceptance>false</requireLicenseAcceptance> <description>OWIN IAppBuilder startup interface</description> <tags>OWIN</tags> </metadata> </package>
Low
[ 0.5263157894736841, 30, 27 ]
Liver Life Walk Maine 2015 | June 28, 2015

One Step. One Walk. One Future...

…to a future without liver disease. This is the goal the volunteers and staff of the American Liver Foundation® work toward every day. You can join them and make a difference as a participant in the Liver Life Walk®. Your participation will keep us moving forward in the fight against one of America's fastest-growing public health concerns: liver disease.

Ann was born August 20, 2013. When she was about two months old, Ann was diagnosed with biliary atresia. Biliary atresia is a disease which affects only infants and prevents the flow of bile from the liver, causing a toxic back-up. Ann underwent major surgery in an attempt to correct the affected bile ducts. The procedure made no improvement; in fact, Ann became very sick with multiple infections and had to undergo further surgeries following the initial procedure. On December 16, 2013, a week after Ann was placed on the list, she received the life-saving gift of a new liver. She was not even four months old. Ann is back to being the happy baby her parents and big brother welcomed into the world just over a year ago. Ann is a survivor and a true champion. Ann's parents are so thankful for the unconditional support from their family and friends and the hard work from all the doctors who helped save Ann's life. Ann's parents love getting to spend every day with their beautiful, thriving baby girl.
Mid
[ 0.5622317596566521, 32.75, 25.5 ]
The application generally relates to load-bearing supports. The application relates more specifically to load-bearing columns constructed of multiple stacked drums with a resilient core member surrounded by filler material.

Various devices disclosed in the prior art are designed and used to provide support to a mine roof. Underground mining results in removal of material from the interior of a mine, thereby leaving unsupported passageways of various sizes within the mine. The lack of support in such passageways may cause mine roof buckling and/or collapse. Thus, it has been desirable to provide support to mine roofs to prevent, delay, or control collapse thereof.

In both underground mining and areas of seismic activity, supports must be engineered to withstand enormous forces propagating through the earth. Building and bridge structures may include modified foundations designed to isolate the superstructure from major ground motion during an earthquake. Such supports for building structures are intended to avoid the transmission of high seismic forces. Bridges and building structures which are located in an earthquake zone are capable of being damaged or destroyed by seismic forces. In general, bridge structures may be constructed with bearings between the bridge's deck or superstructure and the bridge supporting columns to permit relative movement between the two. It is also known to provide damping for the movement of the superstructure relative to the supports upon these bearings; however, the permitted relative movement is not large, and furthermore it is not always preferred to attempt to hold a superstructure in a position around a neutral point with respect to the supports.

In underground mining applications, supports of aerated concrete in a hollow tube have been used to permit a support to yield axially in a controlled manner that prevents sudden collapse of an underground mine roof. Such supports yield axially as the aerated concrete within the product is crushed, maintaining support of a load as they yield.

An oak wood post having a length of 6.5 feet and a diameter of 6 inches will have a slenderness (height to width) ratio of 26. Such a post will have a maximum axial load capacity of about 16,000 lbs. For a post formed from spruce, the maximum safe axial load handling capability for a post that is 6.5 feet in length and 6 inches in diameter is about 13,600 pounds. In addition, when a wood post yields by kneeling or buckling, such yielding will result in catastrophic failure, in which the post can no longer support the load. Because of the obvious problem associated with such catastrophic failure of posts, various mine props have been developed in the art for supporting the roof of an underground mine. Such mine props have included various configurations of wood beams encased in metal housings, and complex hydraulically controlled prop devices. Such props, however, do not allow for controlled axial yielding while preventing sideways buckling or kneeling in a simple, lightweight prop that can be hand-carried by a user.

U.S. Pat. No. 5,308,196 to Frederick discloses a prior art mine roof support comprising a container that is placed between the mine roof and the mine floor and filled with a load-bearing material. In instances where a support is compressed, whether due to seismic forces or geological forces, the support generally is incapable of rebounding when the load is reduced or removed.
What is needed is a support that can compress under extreme loads and rebound to maintain contact with the load, and which satisfies one or more of these needs or provides other advantageous features. Other features and advantages will be made apparent from the present specification. The teachings disclosed extend to those embodiments that fall within the scope of the claims, regardless of whether they accomplish one or more of the aforementioned needs.
Mid
[ 0.6269315673289181, 35.5, 21.125 ]
Menu Search for: Zoombinis Help the Zoombinis find a new home in a puzzle-filled journey. PC Release: October 28, 2015 By Ian Coppock My review of Star Wars: DroidWorks led me to realize that there’s a hole in my reviews. The overwhelming amount of content I have is fit only for mature audiences, but especially as the age of the average gamer rises, I forget that many of you have kids. It’s never been explicitly asked of me, but I’d like to continue this week’s theme of edutainment with another review of a child-friendly game. Whether you’re looking to start out a child on video games, or find something fun to do together, Zoombinis is the way to go. ____________________ Zoombinis is an isometric puzzle game loaded with logical conundrums. Originally released in 1996, the game was known by the much clunkier title of Logical Journey of the Zoombinis. The version that I’m reviewing now, simply known as Zoombinis, is the one and the same game updated with newer graphics and fit to run on modern systems. This new version was released on Steam last fall. The titular Zoombinis are a race of small, blueberry-looking things whose home gets conquered by the evil Bloats. The little creatures band together to escape their home island and reach the mainland, where they hope to find a new haven. It’s up to the player to guide them inland in small batches, solving puzzles to reach the mythical Zoombiniville. Solving puzzles is the only way to free the Zoombinis. Zoombinis come in dozens of designer options. You can have your Zoombinis get around on roller skates or propellers, and customize them with different accessories, noses and hairstyles. Customizing your Zoombini is no mere matter of cosmetics; it’s actually one of the core mechanics of the game. In each of the puzzles between you and Zoombiniville, each Zoombini’s feature corresponds with a feature of the puzzle. Zoombinis with propellers, for example, might be able to make it across a certain bridge, but Zoombinis with roller skates have to find another path. The preferences in each puzzle are purely arbitrary, and players have to logically deduce which Zoombini features are acceptable for which path. A few other puzzles deal with deduction in a different way, like figuring out which toppings to put on a grumpy troll’s pizza. The pizza trolls were my nemeses in elementary school. The further inland you get, the more difficult the puzzles become. It starts out pretty simply, with a pair of stone cliffs that will sneeze on certain Zoombinis who walk on their bridges. The Zoombinis also suffer encounters with hostile wildlife, grumpy pizza addicts and obsessive-compulsive ferrymen. Most puzzles contain a few different paths that only certain Zoombinis can tread safely. Experimenting with different paths is the only way to deduce which Zoombinis can go where, but be careful; make too many mistakes and your Zoombinis will start getting punted all the way back to the start of the game. For the Zoombinis who make it, though, Zoombiniville is a pretty bitching place. As more Zoombinis immigrate here, you can build a new town replete with pizza parlors and swimming pools. The more Zoombinis you can escort, the bigger the town becomes. Aw, look at the little tree houses! Each puzzle, even the ones at the beginning, become more difficult the more Zoombinis you bring in. The infamous pizza troll puzzle, where you have to discern a grumpy highwayman’s favorite toppings, grows from one troll, to two, to three, as you progress through the game. 
The puzzles later in the journey become similarly difficult, until even an adult player might have a tough time getting every Zoombini to Zoombiniville. Players must also keep an eye on their Zoombinis’ physical traits. Early on, it’s easy to create a bunch of Zoombinis with similar characteristics, but the game deliberately prevents you from making the entire group identical. Eventually you’ll be stuck escorting a bargain bin of misfit Zoombinis who were snipped off the backs of a dozen other expeditions. This, when combined with the puzzles’ escalating difficulty, makes Zoombinis challenging for anyone. It takes balls to brave that abyss on roller skates or propellers, let alone feet. I appreciate that the developers of this Zoombinis update kept the game faithful to its original content. Too often, kids’ games released these days focus less on educating a child and more on distracting them. Contemporary kids’ games seem to focus on bright colors and loud noises instead of substance and subtlety. Zoombinis is truly a game of a bygone era, towering above its peers in terms of the logic lessons it has to offer, and the charm of its content. Nothing in the game has been dumbed down for a contemporary audience, which is outstanding. The graphics and interface have received hefty refits, but that’s about all that’s changed since I played this game back in the day. Talking rocks. We’re screwed. Though the graphics of Zoombinis have been polished up while retaining their cartoony charm, the audio has not been touched up at all. I’m glad they didn’t re-record the maniacal narrator or the little sounds the Zoombinis make, but 90s video game sound design was not great. Every sound effect and bit of music sounds dulled down, and you can still hear the heavy static from whatever toaster they used to record this. It will sound nostalgic to the adults among you, but it might make today’s tech-savvy children scream in terror. Apart from the sound design, there’s really nothing wrong with the rest of Zoombinis’ production. The game is very tightly wound around the concept of logical deduction, and neither the original version I played in the 3rd grade nor the updated one I played last week had any bugs or glitches. I miss the days when day-one glitches were the exception instead of the norm. If you’re a gamer out there who has a small child, why should you consider Zoombinis? Well, if teaching your child about the wonderful world of logical deduction ain’t enough reason, Zoombinis is a great game because it teaches its players how to think critically. Each deduction puzzle revolves around this theme and gets tougher the more you go in. The charm of the Zoombinis and their little world serves as the catalyst for your kid’s interest. It’s also a game that they can play themselves or with you. That’s about it for tonight, folks. Zoombinis is not a narrative masterpiece or the latest masterwork of some indie story studio, but it’s the best edutainment game I’ve ever played, and I played a fair share back in the day. The modern Steam remake retains all of the original game’s charm and challenge, but your computer won’t have an aneurysm trying to run it. I highly recommend Zoombinis, especially if you’re looking for a place for your kid to start their own video gaming journey. Give it a go and see how well you do making pizza for trolls. Thank you for reading! My next review will be posted in a few days. You can follow Art as Games on Twitter @IanLayneCoppock, or friend me at username Art as Games on Steam. 
Feel free to leave a comment or email me at [email protected] with a game that you’d like to see reviewed, though bear in mind that I only review PC games.
Mid
[ 0.611253196930946, 29.875, 19 ]
The movie is based on a Roman (novel) with the same title by Lothar-Günther Buchheim. Meanwhile there are four versions of the movie, which are partly quite different. The most famous version is the first one, the "cinema version" from 1981, which was also nominiert (nominated) for an Oscar. In the USA this version was shown in 1982. In 1985 a version for TV was made. This one is much longer (almost 5 hours) than the first version. In 1997 the Director's Cut was published. This version is totally remastered and has much better sound and images than the original one. And last but not least, in 2004 the TV version was also released on DVD. This one is remastered as well, but the Director's Cut is still better in quality.

Well, the action takes place in 1941 during World War 2. German U-Boote (submarines) have the mission to sink Handelsschiffe (merchant ships) that supply England with important goods. But the battle gets harder and harder because the merchant ships are accompanied by destroyers. After a heavy drinking party in La Rochelle (France), the submarine U 96 also has to set sail. Kriegsberichterstatter (war correspondent) Lt. Werner (Herbert Grönemeyer) is also aboard. He will soon revise his romantic notion of a dangerous voyage with enemy contact. At first there is no enemy contact at all, so the crew begins to get bored and partly starts mobbing Lt. Werner. But the situation changes. A torpedo attack on a lone Zerstörer (destroyer) fails, and the U 96 is attacked with Wasserbomben (depth charges) for the first time. After this, heavy storms appear and the submarine has to dive from time to time, because it's hard to stay on course on the surface and it's becoming impossible to determine their own position. After meeting another German crew, they realize that other ships are off track as well, so there are big gaps in the Überwachungskette (supervisory chain). Eventually, U 96 approaches an enemy convoy, sinks two enemy ships and seriously damages a third one. The submarine is again heavily attacked with depth charges but can escape knapp (narrowly). After surfacing, they see the crew of a damaged ship burning and jumping into the water. But the captain refuses to help the enemy crew, so every member of the U 96 who observed this is deeply disappointed, and the crew's morale sinks.

Back in La Rochelle, the U 96 gets a new Einsatzbefehl (mission order). The crew is supposed to go to La Spezia in the Mediterranean Sea. Therefore they have to pass the Strait of Gibraltar, which is crowded with enemy battleships. The crew tries to pass with a trick, but being attacked by Flugzeuge (airplanes) and battleships, the submarine is forced to dive again. It gets out of control and sinks down to the seabed at a depth of 280 meters. The crew struggles to survive. Countless leaks have to be stopped and the boat has to be fixed before there is even a chance to surface again. After more than 15 hours of fighting against time the boat finally reaches the surface again. The enemy thinks that they are all dead, so they can escape unbemerkt (unseen) back to La Rochelle. But while disembarking in La Rochelle, most of the crew are killed during a Luftangriff (air attack).

I have to admit that it's been years since I saw the movie, but I still remember that it was totally spannend (suspenseful), and during the research for this article I really got keen on seeing the Director's Cut. So hopefully there will be a chance to see it soon.
My name is Jan and I live in the south-west of Germany. I work as a project manager at a company that creates digital media (first of all, internet-related things). I have been doing this job for over a decade, so I'm quite familiar with the web and its tools. Though nowadays almost every school kid is. But that's one of the main reasons why there are virtually no limits on the internet anymore, and why it can be used for all imaginable types of things. For example, learning languages! And that's where we are at the moment.

I first got in touch with Transparent Language when my family and I used to live in France a couple of years ago. I was just taking a break from work, and by coincidence I produced some cultural videos in French. A few months later the whole blogging thing came up and I was lucky to be a part of it. So now my (second) job is to feed you with information, exercises, vocabulary, grammar and stories about Germany and the German language. Being a passionate videographer, I'm trying to do this more and more through videos. If you have any wishes or requests for topics that should be treated here, please don't hesitate to contact me via the comment field. I'm open to your suggestions (as long as they are not too individual) and will try to satisfy your needs.

Comments:

A better translation for "die Wasserbombe" is "depth charge" rather than water balloon. In English, the term water balloon refers to a small inflatable toy balloon that is filled with water instead of air. Children toss water balloons at one another hoping they burst on contact to soak the intended target. It's all in good fun.

"The Boot" is without a doubt one of the best German movies ever made AND one of the best WWII movies as well. I much prefer such films, where the events are shown from the Germans' point of view. And not just Herbert Grönemeyer does a fantastic job in the movie, but also Jürgen Prochnow, together with the unforgettable Hubertus Bengsch and Uwe Ochsenknecht 🙂 What makes the movie really special is that the crew talks in different German dialects (so it is especially worth watching the movie in its original language :))

(So, what I wanted to say is that "Das Boot" was a bit too difficult for me when I wanted to practice German for my language exam, because there are way too many technical terms in the movie. But it's just my personal point of view, and if it helps anyone – like Randy – then it's great 🙂 And I just wonder if we could read posts about German TV series as well, e.g. "Cobra 11" – I don't know if there are such posts already, but if there aren't, I'd be glad to read about them as well :))

Thanks to everyone who had comments about the movie Das Boot. I agree that it had some technical words that I did not always understand. However, it is the same when I am talking to Germans: the longer the conversation, the greater the possibility that some words will be "technical" or more difficult than what is spoken in general daily situations. Each time is a good opportunity to improve your German. I can recommend another good movie available on DVD: "Nowhere in Africa", winner of the 2002 Academy Award for Best Foreign Language Film. The story is interesting and the spoken German was easy to understand. I liked it very much and bought the DVD. One of the best for improving your German. Randy

I saw Das Boot with English subtitles many years ago and enjoyed it very much. In the end it only proves that war is nothing to admire, but something to feel sorry about: that we human beings had to kill each other for ideals that maybe were not worthy.
Life without war is always better. Humanity is learning slowly; in fact, it has been a long time now without another world war.

I first saw this movie with subtitles when it came to the U.S. And you're right, it IS an excellent movie. It is also the only German movie I have seen. I also believe it shows pretty accurately what life aboard a submersible was like then. It portrayed the desperation and hopelessness grippingly. Even though I took 7 years of French, I always spoke German (3 years) more easily. There is a lot of minor conversation aboard the sub that was not translated. At the time my German was still fresh enough that I could understand it all. I enjoyed the Platt dialects as well. Also, as I have served time aboard Navy ships, I enjoyed how similar the conversations were to some I had had in my native English in a similar environment. We aren't really all that different… Now (28 years later) my German is failing grammatically and vocabulary-wise. I have found some German cousins on FBook that I chat with, but they chuckle at my measly attempts. I know I'm struggling when they say, "Was meinst du??" Uh oh. At least they are still talking to me, so it must not have been that bad…

As for the technical words, hey. It's a German movie in German. Germans are collectively a technical bunch, and the main characters live inside a submersible machine. Look at how Germans design things. Besides, Germans seldom name anything; they describe it. The French name things. In German everything is in order. To speak it you grab your spanners and wrenches and construct yourself some Genuine German. In French, it is all blended together and poured out for your perusal with a little heat added for emotive flavor. Maybe the first German could have built a proper lesson plan to help the first Frenchman with his spelling lesson…

I enjoyed the latest comments about German-language movies and learning the language. I wanted to recommend some additional movies that I liked and that also helped me to improve my German skills. Hope they will help others too!
1. Hanussen (Klaus Maria Brandauer)
2. Colonel Redl (Oberst Redl, also Brandauer)
3. Die Ehe der Maria Braun (Hanna Schygulla), also called The Marriage of Maria Braun
4. Sophie Scholl – The Final Days (Die letzten Tage)
5. The White Rose (Die weiße Rose) with Lena Stolze
6. Der Untergang (Downfall)
ALL of these were, in my opinion, very good and helped me improve my German. For a long time I was a member of a video rental company in America called "German Language Video Center". They have a large selection of comedy, mystery, war, classic and other titles. Watch movies and learn! Lots of luck. Randy
Mid
[ 0.5668202764976951, 30.75, 23.5 ]
extern int foo1b(void);            /* declared here, defined in another translation unit */
int foo3(void) { return foo1b(); } /* forwarding wrapper that simply calls foo1b() */
Low
[ 0.48325358851674605, 25.25, 27 ]
Astrophysical instrumentation

Projects

The instrumentation to detect and analyze light in the infrared range of the spectrum is one of the fields of knowledge of the Instrumentation Division. The design and construction of infrared cameras and spectrographs requires high-vacuum and cryogenic technologies, involving mainly the specialties of mechanical engineering, optics and electronics. The integration and operation of mechanisms and devices in a cryogenic environment is especially complex and requires highly specialized knowledge and equipment. The Instrumentation Division has a long history of international participation in this type of instrumentation.

EMIR - Multi-object Infrared Spectrograph
EMIR is a wide-field camera and intermediate-resolution near-infrared spectrograph. It is a cryogenic multi-object spectrograph installed and in operation on the GTC telescope. A cryogenic robotic reconfigurable slit system makes it possible to obtain spectra from up to 50 objects simultaneously.

HARMONI is one of the three ELT (European Extremely Large Telescope) first-light scientific instruments. It is a visible and near-infrared (0.47 to 2.45 µm) integral field spectrograph, providing the ELT's core spectroscopic capability over a range of resolving powers from R (≡λ/Δλ) ~4000 to R~20000.

FRIDA (inFRared Imager and Dissector for Adaptive optics) is an integral field spectrograph in the near infrared, also with an imaging mode, which will be installed on the GTC telescope. It will use the adaptive optics system of GTC (GTCAO) to observe with very high spatial resolution and spectral resolutions of up to 30,000.

NIRPS is a next-generation, near-infrared spectrograph that uses adaptive optics and is fed via a fiber link. It is a compact cryogenic Echelle spectrograph, capable of operating alone or in combination with the HARPS instrument. It will have a spectral resolution of 100,000 or 75,000, and will be installed at the La Silla observatory (Chile).

MIRADAS is an intermediate-resolution infrared spectrograph for the GTC telescope. It will operate in the infrared range of 1 to 2.5 microns with a spectral resolution of 20,000. It is a multi-object spectrograph capable of observing up to 20 objects simultaneously, by means of a robot with 20 arms that can patrol a 5-arcminute field.

GRIS is the spectrograph developed by the IAC and installed on the German telescope GREGOR at the Teide Observatory. It is dedicated to research in solar spectropolarimetry. It is being continuously improved to expand its scientific capabilities and to demonstrate technologies for the EST.
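A quick worked example may help interpret these resolving powers (this is an illustrative calculation, not a figure taken from any instrument's documentation; the wavelength is chosen only for the example). The resolving power R ≡ λ/Δλ relates the observing wavelength λ to the smallest wavelength interval Δλ the spectrograph can separate. For an instrument operating at R ~20000 near λ = 1.6 µm (inside HARMONI's stated 0.47-2.45 µm range):

Δλ = λ / R = 1.6 µm / 20000 = 8 × 10⁻⁵ µm = 0.08 nm

So at that setting, two spectral features closer together than roughly 0.08 nm would blend into one. By the same arithmetic, a spectrograph at R = 100,000 (as quoted for NIRPS) would resolve features about five times finer at the same wavelength.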
Mid
[ 0.6328125, 30.375, 17.625 ]
Introduction {#Sec1}
============

The process of cranial neural tube closure (NTC) creates the basic morphological scaffold for the central nervous system. Defects in this critical process result in lethal cranial neural tube defects (NTDs), most commonly expressed in humans as anencephaly. In humans, NTDs, including spinal defects such as spina bifida and craniorachischisis, occur in approximately one out of 1,000 births worldwide. NTDs have been studied intensively, both epidemiologically in humans and experimentally in animal models, including frogs, chickens, and rodents \[[@CR1]--[@CR4]\]. Although dietary folic acid fortification or supplementation efforts have been effective in preventing NTDs in human populations, little is known about how this works \[[@CR5]--[@CR7]\]. Understanding the basis of normal and abnormal NTC is not only fascinating from a biological perspective but also has important clinical relevance.

There are many excellent reviews on NTC mechanisms in mammals (see \[[@CR2]--[@CR4], [@CR8], [@CR9]\]). Herein, we summarize previous and recent studies addressing the molecular and cellular mechanisms of cranial NTC in amniotes such as birds and mammals. It is now obvious that many signaling pathways and morphogenetic processes are evolutionarily conserved between birds and mammals, although many differences exist as well. Because experimental manipulation to dissect molecular and cellular pathways is more feasible in chicken than in mouse, studies on the morphogenetic mechanisms using chicken, as well as those using mice, greatly help to increase our understanding of mammalian cranial NTC. We will also discuss the questions and concepts that could be useful in further understanding various NTD mutant phenotypes and in developing approaches for future studies of cranial NTC.

Mechanisms of cranial NTC {#Sec28}
=========================

Tissue movement in mammalian cranial NTC {#Sec29}
----------------------------------------

Cranial NTC in mammals, as in other vertebrates, begins after neural induction, which distinguishes the neural plate from the adjacent surface ectoderm (Fig. [1](#Fig1){ref-type="fig"}a), and is achieved through sequential changes in the morphology of the neural plate, as follows \[[@CR2], [@CR10]\].

Fig. 1 Morphological changes of the neural plate into the neural tube. After neural induction (**a**), the neural plate bends at the MHP (**b**) and is elevated to form the neural fold (**c**). Subsequently, flipping of the edges (*asterisks*) and bending at the DLHP occur (**d**), resulting in apposition and fusion of the edges (**e**). Remodeling takes place to separate the neuroectoderm and surface ectoderm (**f**). Neuroectoderm (neuroepithelium): *pink*. Non-neural ectoderm (surface ectoderm): *green*. Boundary region within non-neural ectoderm: *orange*. Notochord: *yellow*.

*Elevation and bending:* The neural plate changes its morphology by bending in two phases, each initiated at "hinge points" \[[@CR2]\]. The plate begins as a largely horizontal, although slightly convex, dorsal neuroectodermal field (Fig. [1](#Fig1){ref-type="fig"}b). The first morphological change is the bending of the plate at the midline, which forms the medial hinge point (MHP) (Fig. [1](#Fig1){ref-type="fig"}b, arrow), thus dividing the neural plate into bilaterally symmetric regions.
These lateral regions are then elevated by intrinsic neuroectodermal cell movements, and possibly by the extrinsic expansion of the underlying cranial mesenchyme as well, to create the vertical, concave walls of neuroepithelium that make up the neural folds (Fig. [1](#Fig1){ref-type="fig"}c). At the same time, the neural plate elongates rostrocaudally through convergent extension and cell division (see the later section "[Neurulation and body axis elongation through convergent extension, the PCP pathway, and oriented cell division](#Sec5){ref-type="sec"}").

In the second phase, the neural folds bend at paired dorsolateral hinge points (DLHP) (Fig. [1](#Fig1){ref-type="fig"}d). The exact location varies along the rostrocaudal axis.

*Apposition and fusion:* Once the neural folds are elevated and have bent at both the MHP and DLHP, the tips of the neural folds are flipped (Fig. [1](#Fig1){ref-type="fig"}d, asterisks) and can be apposed (Fig. [1](#Fig1){ref-type="fig"}e). In apposition, the neural folds meet at the dorsal midline, after which the epithelium fuses by "zipping" or "buttoning-up" to form the neural tube.

*Remodeling:* Once the tube is closed, the dorsal midline is remodeled to separate the inner neuroectoderm, or neuroepithelium, from the outer non-neural ectoderm (the surface ectoderm or future epidermis) (Fig. [1](#Fig1){ref-type="fig"}f).

The above basic processes are commonly observed among vertebrates except fish, which form neural keels before neural tube formation, but the mode and mechanism of cranial NTC appear most complicated in amniotes, especially in mammals \[[@CR11]\]. In most mouse strains, the closure process described above is initiated at several points along the neural tube, at different developmental stages \[[@CR8], [@CR12]--[@CR17]\]. At E8.5, when embryos have 6--7 pairs of somites (somite stage \[ss\] 6--7), the tips of the neural ridges typically have met and fused at the midline of the hindbrain/cervical boundary (Fig. [2](#Fig2){ref-type="fig"}a, b, shown by an asterisk (\*)), and neural tube fusion proceeds both rostrally (toward the hindbrain) and caudally (toward the trunk) (summarized in \[[@CR10], [@CR13]\]). Closure initiated from this point is termed closure 1 (Fig. [2](#Fig2){ref-type="fig"}a, b). Around ss 10--13, the neural ridges meet and fuse at the forebrain/midbrain boundary (FMB), initiating closure 2, which also proceeds bi-directionally from the contact point (Fig. [2](#Fig2){ref-type="fig"}a--c, shown by a cross (†)). A third fusion initiation point begins at the rostral end of the neural plate. This closure 3 proceeds caudally and meets the rostrally directed wave of closure 2 to seal the anterior neuropore (ANP) (Fig. [2](#Fig2){ref-type="fig"}a, b, d, shown by a hash (\#)). The caudally directed closure 2 meets the rostrally directed closure 1 (which is also sometimes referred to in the literature as closure 4 \[[@CR14]--[@CR16]\]) to seal the midbrain-hindbrain neuropore (MHNP) (Fig. [2](#Fig2){ref-type="fig"}b, e). Analogous multiple closure sites are also observed in other mammals, including humans, and also in birds \[[@CR8], [@CR18]--[@CR20]\].

Fig. 2 Multiple closures in cranial NTC of mouse embryos. **a** Bi-directional closure 1 occurs from the cervical region (*asterisks*) before embryonic turning in an E8.5 ICR embryo. **b** Schematic representation of multiple closures in an E8.75 ICR embryo. The MHNP is closed by caudal closure 2 and rostral closure 1 (closure 4), and the ANP by caudal closure 2 and closure 3. *Asterisks*: closure 1 start site.
*Crosses*: bi-directional closure 2 start site. *Hash*: closure 3 start site. Directions of the closures are shown by *red arrows*. **c** Frontal views of closure 2 at the MHNP and ANP. **d** Ventral view of closure 3 at the ANP. **e** Dorsal views of closure 1(4) and 2 at the MHNP. Unclosed regions are colored *purple*.

Any disturbance in the dynamic, sequential events of cranial NTC can cause cranial NTDs \[[@CR10]\]. In particular, failure in closure 2 often causes exencephaly. The exencephalic brain often grows well in utero, but eventually undergoes neurodegeneration, and the ultimate anencephalic phenotype ensues \[[@CR8], [@CR21], [@CR22]\]. Defective closure 1 between the midbrain and lower spine causes craniorachischisis, in which the neural tube is open along the entire axis of the body secondary to a complete failure of the neural folds to elevate and fuse. A partial failure of closure 1 to close the thoracic or lumbosacral region, or its re-opening, causes the common human birth defect spina bifida.

Morphogens affecting the position of bending {#Sec3}
--------------------------------------------

Neural fold bending at the MHP and DLHP is an essential step in cranial NTC. However, the precise mechanism(s) by which the bending position is determined at the molecular and cellular levels has remained unclear. A recent study using chicken embryos revealed how the MHP is determined \[[@CR23]\]. Within the neural plate, a two-dimensional canonical BMP activity gradient exists, which results in low and pulsed BMP activity at the MHP. Disturbing this gradient by overexpressing BMP signaling antagonists (e.g., Noggin) can induce ectopic hinge-point formation in the more lateral neural plate, and conversely, overexpression of a constitutively active form of BMP receptor IA suppresses MHP formation. Thus, BMP blockage is necessary and sufficient for MHP formation in the chicken cranial region. Because BMP blockage does not affect the expression of *Shh*, or of *phoxA2*, one of the ventral neural plate markers, the study suggests that the effect of BMP blockage on MHP formation is likely independent of Shh. How the BMP activity gradient is formed still remains unclear. The study also proposed that BMP attenuation induces neural plate bending via apical constriction, possibly through endocytosis of the apical protein Par3 and N-cadherin \[[@CR23], [@CR24]\] (Fig. [3](#Fig3){ref-type="fig"}a). Further studies are required to examine whether a similar BMP gradient is important for mammalian NTC. Interestingly, some BMP signaling mutants, including *Noggin*, exhibit NTDs in mice \[[@CR25]--[@CR27]\].

Fig. 3 Mechanisms of bending at the MHP in the cranial region identified in chicken embryos. **a** Signals involved in MHP formation, and mechanisms of their actions in the chicken cranial region. **b** PCP signaling links convergent extension with neural plate bending via oriented apical constriction in the chicken cranial region. Oriented apical constriction along the mediolateral (*M*--*L*) axis within neuroepithelial cells (actin fibers are shown in *red*) couples elongation of the neural plate along the anteroposterior (*A*--*P*) axis with its bending along the *M*--*L* axis.

Compared to the MHP, it seems more difficult to understand how the position of DLHP formation is determined in the cranial region. This is because the position of the DLHP shifts during NTC and differs among species, and there are no known DLHP-specific molecular markers at present.
The elevation and bending mechanisms of the DLHP have been well studied in the mouse spinal region, where the structure is relatively simple and therefore more amenable to analysis. Precise observations of the relationships between gene expression patterns and DLHP formation in the neural folds suggested that integrative actions between Shh, BMPs, and the BMP antagonist Noggin regulate the formation of the DLHP \[[@CR27], [@CR28]\]. Spina bifida and exencephaly are seen in mice overexpressing *Shh* or lacking *Noggin*, suggesting that a similar regulatory interaction likely operates in the cranial region, where bending at the DLHP is also a prominent event during neurulation \[[@CR25]--[@CR27]\]. As in the MHP, the ultimate mechanism by which these signals actually cause the neural plate to bend remains to be revealed.

### Do growth factors regulate neural plate bending directly or through neural plate patterning? {#Sec4}

With regard to the position of bending, in addition to morphogenetic movements, we should also consider dorsoventral (D--V) patterning of the neural folds. It is well known that the above-mentioned growth factors, including Shh, are crucial for D--V patterning of the neural tube. The nature of neural progenitors in the neural tube is specified gradually during development by combinatorial expression of transcription factors, which is generated by morphogen gradients: Shh as the ventralizing factor, and Wnts and possibly BMPs as the dorsalizing factors \[[@CR29], [@CR30]\] (Fig. [4](#Fig4){ref-type="fig"}). D--V patterning starts at the neural plate stage, when the MHP begins to form, since Shh emanating from the notochord is already inducing a ventral identity (Nkx6.1-positive cells) in the medial region of the neural plate, and continues to act as a morphogen to progressively pattern the neural folds, at least in mouse \[[@CR29], [@CR31], [@CR32]\].

Fig. 4 Schematic illustration of events occurring in the dorsal neural folds during cranial NTC. Cellular behaviors and molecular mechanisms are shown in *blue* and *black fonts*, respectively. Neuroectoderm (neuroepithelium): *pink*. Non-neural ectoderm (surface ectoderm): *green*. Boundary region within non-neural ectoderm: *orange*. Boundary cells mediating fusion at the tips: *red*. Cranial neural crest cells (CNC): *purple*. Head mesenchyme: *light blue*. Apoptotic dying cells: *gray*. A cell undergoing division is shown in *yellow* in the surface ectoderm (*left*).

In the mouse cranial region, if the dorsal neural folds are ventralized by either overactive Shh signaling or the suppression of dorsalizing signals such as Wnt, the consequence is often a deformed neural tube lacking DLHP formation, which results in cranial NTDs \[[@CR26], [@CR32]--[@CR36]\]. Likewise, mice deficient for transcription factors that are expressed in the dorsal neural folds, including *paired box 3* (*Pax3*), *Zic2*, *Zic5*, and *sal*-*like 2* (*Sall2*), also exhibit cranial NTDs \[[@CR27], [@CR37]--[@CR40]\]. However, it appears that even if the neural folds are dorsalized by loss of Shh signaling or enhanced Wnt signaling, the DLHP forms and NTC is completed \[[@CR28], [@CR35], [@CR41]\]. These lines of evidence suggest that specification of the dorsal neural folds is crucial for success in cranial NTC (Fig. [4](#Fig4){ref-type="fig"}). This raises several questions: What downstream factors characterize the site of hinge point formation? Is there a specific border that marks DLHP formation along the dorsoventral (or mediolateral) axis?
In other words, is there a combination of transcription factors or a crosstalk of signaling activities that determines where hinge points form? Answering these questions will require the precise characterization of the position of hinge point formation along the D--V and rostrocaudal axes in the cranial neural folds.

Neurulation and body axis elongation through convergent extension, the PCP pathway, and oriented cell division {#Sec5}
--------------------------------------------------------------------------------------------------------------

The cellular movement of convergent extension occurs during gastrulation and neurulation in vertebrates \[[@CR42]\]. Convergent extension leads to the rostrocaudal extension of the body axis, driven by polarized cell rearrangement, including lateral-to-medial cell displacement within the tissue and cell intercalation at the midline. Convergent extension is governed by evolutionarily conserved planar cell polarity (PCP) signaling, which was originally identified as a regulator of cell polarity within the plane of the wing epithelium in *Drosophila*. Defective PCP signaling causes NTDs in vertebrates \[[@CR43]--[@CR45]\]. A severe NTD, craniorachischisis, is found in several mouse PCP-signaling mutants, including *loop*-*tail* (*Lp*) \[*Vangl2*\] mutants, *crash* \[*Celsr1*\] mutants, *circletail* \[*Scribble1*\] mutants, *dishevelled 1* (*Dvl1*) and *Dvl2* double mutants, and *frizzled 3* (*Fz3*) and *Fz6* double mutants \[[@CR45]--[@CR50]\]. In these PCP-signaling mutants, the neural plate and the underlying notochord fail to elongate rostrocaudally, owing to ineffective cell intercalation at the midline. As a result, the neural plate widens, hampering the apposition and contact of the neural folds at the midline and eventually leading to NTDs \[[@CR43], [@CR50]\].

In addition to convergent extension, there is likely another factor contributing to body axis elongation: cell division. In *Lp* mutants, defective elongation in the midline is seen mainly in the caudal notochordal region and to a lesser extent in the rostral region \[[@CR51]\]. This suggests that rostrocaudal elongation in the most anterior neural plate is relatively independent of convergent extension, and may instead result primarily from extensive longitudinally oriented mitoses occurring in the midline \[[@CR51]--[@CR54]\]. Whereas amphibians and fish do not significantly increase their embryonic cell numbers during NTC, higher vertebrates such as birds and mammals substantially increase the cellular population of the neural tissues \[[@CR52], [@CR55]\]. Thus, it is plausible to include cell division among the potential mechanisms involved in cranial NTC in birds and mammals.

Mechanics of neural plate bending: links between the PCP pathway and apical constriction {#Sec6}
----------------------------------------------------------------------------------------

Among the cellular mechanisms that bend the neuroepithelial sheet, the contraction of subapical actin microfilaments in neuroepithelial cells is the most-studied intrinsic NTC mechanism \[[@CR56]--[@CR59]\]. Actin microfilaments (F-actin) accumulate to form a meshwork at the apical cortex, which then contracts, reducing the apex of the neuroepithelial cells during NTC. The contraction is driven by the molecular motor myosin. Several studies have shown that disrupting actomyosin with chemicals causes cranial NTDs \[[@CR58], [@CR60]--[@CR62]\].
Similarly, mice deficient for regulators of the cytoskeleton present with exencephaly (*Abl1/Abl2*, *n*-*Cofilin*, *Marcks*, *Mena*, *Mlp*, *Shroom*, *Palladin*, and *Vinculin*) \[[@CR63]--[@CR73]\]. In contrast, NTC in the spinal region does not appear to require actomyosin \[[@CR58], [@CR60]\].

During NTC, the neural plate bends only along the mediolateral axis. Such polarized neural plate bending implies a polarized cellular contraction; otherwise, the neural plate would bend only radially. A recent study using chicken embryos revealed that PCP signaling directly links apical constriction to convergent extension, promoting the polarized mediolateral bending of the neural plate (Fig. [3](#Fig3){ref-type="fig"}b) \[[@CR74]\]. During the bending process, Celsr1, a vertebrate homolog of the *Drosophila* gene *Flamingo* (one of the core PCP members) \[[@CR49]\], concentrates in adherens junctions (AJs) oriented along the mediolateral axis of the neural plate, together with Dvl, DAAM1, and PDZRhoGEF, which together upregulate Rho kinase. This causes actomyosin-dependent, planar-polarized contraction, which promotes the simultaneous apical constriction and midline convergence of the neuroepithelial cells (Fig. [3](#Fig3){ref-type="fig"}b). This system ensures that neural plate bending and body axis elongation are well coordinated \[[@CR74], [@CR75]\]. A similar mechanism may also operate in mammalian NTC.

Besides these intrinsic neural plate mechanisms, NTC is also affected by extrinsic factors from surrounding tissues, the earliest of which is Shh emanating from the notochord. Signals from the head mesenchyme and non-neural surface ectoderm also shape morphogenetic events during cranial NTC.

Head mesenchyme's role in closure {#Sec7}
---------------------------------

The cranial neural plate is surrounded mainly by the head mesenchyme, which originates from the primary mesenchyme, the earliest group of cells ingressing upon gastrulation \[[@CR76]\]. The head mesenchyme possibly affects cranial NTC, because mutant embryos lacking genes expressed in the head mesenchyme, e.g., *twist homolog 1* (*Twist*), *cartilage homeo protein 1* (*Cart1*), *aristaless*-*like homeobox 3* (*Alx3*), or the ubiquitously expressed *HECT domain containing 1* (*Hectd1*), often have exencephaly. It is thought that this is due to defective DLHP formation and neural fold elevation in the forebrain and midbrain, along with abnormal head mesenchyme density around the neural folds \[[@CR77]--[@CR80]\]. The density of the head mesenchyme decreases in the *Twist* or *Cart1* mutant, but increases in the *Hectd1* mutant. Thus, proper head mesenchymal cell behavior is likely required for cranial NTC. Understanding how the head mesenchyme affects the formation of the neural folds will require future study.

Non-neural surface ectoderm: a supporting player {#Sec8}
------------------------------------------------

An effect of the non-neural surface ectoderm on cranial NTC was first demonstrated in urodele amphibians and birds \[[@CR81], [@CR82]\]. In chicken embryos, medially directed expansion of the non-neural surface ectoderm is observed only in the cranial region, not the spinal region, and surgically removing the tissue prevents DLHP formation. Thus, as it expands, the non-neural surface ectoderm may physically force the neural plate to bend (Fig. [4](#Fig4){ref-type="fig"}) \[[@CR82]--[@CR84]\].
Since a small, narrow boundary region of the non-neural surface ectoderm adjacent to the neural plate is both sufficient and necessary to induce bending at the DLHP in the head region in chickens, and in the lower spinal region in mice \[[@CR28], [@CR84]\], another possibility arises, not mutually exclusive with the first: bending at the DLHP may be mediated by inductive interactions between the neural folds and the adjacent non-neural surface ectoderm (Fig. [4](#Fig4){ref-type="fig"}). In fact, as mentioned in the previous chapter, BMPs from the non-neural surface ectoderm induce expression of the BMP antagonist *Noggin* in the tips of the neural folds, and this antagonism of BMP signaling is necessary and sufficient to form the DLHP in the lower spinal region in mice \[[@CR27]\].

The importance of the non-neural surface ectodermal cells for NTC is emphasized by several findings, one of which comes from a functional analysis of the *grainyhead*-*like* (*Grhl*) gene family. The Grhl family genes encode transcription factors related to the product of the *Drosophila* gene *grainyhead* (*grh*), which is essential for epidermal differentiation and wound healing in the fly \[[@CR85], [@CR86]\]. Loss of the *Grhl2* or *Grhl3* genes, which are specifically expressed in the non-neural surface ectoderm, causes NTDs \[[@CR87]--[@CR90]\]. Mutants of Grhl family genes interact with several of the PCP-signaling mutants, and exhibit PCP-like defects both in fly and in mice \[[@CR91], [@CR92]\]. A series of studies indicates that the *Grhl* genes are indispensable for the proper development of non-neural epithelial tissues in mice \[[@CR88], [@CR90], [@CR92]--[@CR96]\]. Thus, the non-neural epithelial properties of the non-neural surface ectoderm defined by the *Grhl* genes are considered to be essential for successful cranial NTC (Fig. [4](#Fig4){ref-type="fig"}).

Another finding supporting the importance of the non-neural surface ectoderm in cranial NTC is that mouse embryos lacking *protease*-*activated receptor 1* (*Par1*) and *Par2* develop exencephaly and spina bifida \[[@CR97]\]. Both *Par1* and *Par2* are expressed in the non-neural surface ectoderm. *Par2* expression is restricted to the cells surrounding the neuropore, and possibly to boundary cells (Fig. [4](#Fig4){ref-type="fig"}, see below). Matriptase, a membrane-tethered protease that is activated by hepsin and prostasin, in turn activates Par2. Par2's downstream signals include the G proteins Gi/z and the small GTPase Rac1, as shown by the finding that conditionally ablating these genes in *Grhl3*-expressing cells causes NTDs \[[@CR97]\]. Thus, NTC requires local protease signaling in cells at the edge of the non-neural surface ectoderm (Fig. [4](#Fig4){ref-type="fig"}).

Zipping and fusion: the non-neural ectoderm boundary plays a key role {#Sec9}
---------------------------------------------------------------------

Cells at the edge of the non-neural surface ectoderm mediate zipping and fusion to seal the midbrain and hindbrain. Classic transmission electron microscopy studies, as well as recent live-imaging studies, showed that at the border of the mesencephalon and rhombencephalon, non-neural surface ectoderm cells overlying the neural folds make the initial contact in sealing the neuropore \[[@CR98]--[@CR100]\].
These non-neural surface ectoderm cells differ from the underlying neuroectoderm and the adjacent non-neural surface ectoderm in both morphology and location; they have a bipolar shape and are aligned along the rostrocaudal axis like a chain, whereas adjacent surface ectoderm cells are polygonal \[[@CR99]\]. These "boundary cells" are brought into proximity and then into contact by "zipping" at the rhombencephalon (rostral closure 1 and caudal closure 2) (Fig. [4](#Fig4){ref-type="fig"}) \[[@CR99], [@CR100]\]. Within the mesencephalon of cultured embryos, the boundary cells protrude from the epithelial layer on opposing sides of the neural folds, extend long cellular processes toward each other, and then form contacts between the juxtaposed folds \[[@CR99]\]. This "buttoning-up" closure eventually resolves into a continuous closure. This observation raises the possibility that the boundary cells, which are at the tips of the non-neural ectodermal cells covering the neural fold edge, are sufficient to complete fusion if they are appropriately juxtaposed before being zipped. Testing this idea will require an experimental innovation that allows cells from both sides to be forced into contact before the normal time of closure, while keeping the embryo and closure intact, whether in utero or in a culture system. If this approach could be achieved, it would be interesting to examine whether or not the edge boundary cells can achieve fusion in various NTD mutants, such as the PCP or *Grhl2/3* mutants. This would help to determine whether NTDs arise solely from defective elevation and bending, or from the fusion process and its maintenance as well.

Molecules that mediate cell--cell interaction and fusion {#Sec10}
--------------------------------------------------------

Compared to the mechanisms of elevation and apposition, little is known about the molecules that mediate the cell-to-cell interactions responsible for neural fold fusion \[[@CR2], [@CR101]\]. The subtypes of classic cadherins, which are cell-adhesion molecules, differ between the neuroectoderm (N-cadherin^+^) and the non-neural surface ectoderm (E-cadherin^+^), and subtype switching from E-cadherin to N-cadherin in the neuroepithelium occurs during NTC \[[@CR102], [@CR103]\]. Deletion of *N*-*cadherin* results in increased cell death in the cranial neural folds during NTC \[[@CR104]\]. Removal of *N*-*cadherin* specifically from the dorsal edges of the neural folds (Wnt1^+^) results in exencephaly, as well as cardiac defects caused by defects in the cardiac neural crest cells \[[@CR104]\]. Addition of blocking antibodies against N-cadherin or antisense oligonucleotides against E-cadherin also disrupts cranial NTC in chicken and rat \[[@CR105], [@CR106]\]. These results suggest that the proper regulation of these classic cadherins is indispensable for cranial NTC.

Mice carrying hypomorphic alleles of p38-interacting protein (p38IP) (*drey*) exhibit exencephaly or spina bifida \[[@CR107]\]. p38IP and p38 MAPK activation are required for the downregulation of E-cadherin in the mesoderm during gastrulation. It might be possible that the downregulation of E-cadherin in the neuroepithelium during normal NTC is also mediated by a p38-dependent pathway. This regulation appears to be independent of Snail, a transcription factor that is a well-known regulator of the switching of these cadherins in the epithelial--mesenchymal transition during gastrulation \[[@CR103], [@CR108]\].
The mutually exclusive expression of these cadherins is likely based on negative-feedback regulation, as suppressing *E*-*cadherin* mRNA in cultured non-neural epithelial cell lines leads to the compensatory upregulation of N-cadherin, which is not normally expressed in those cells \[[@CR109]\]. In *Grhl2* mutant mice, which exhibit cranial NTDs as mentioned above, *E*-*cadherin* mRNA and protein are decreased in the epidermis, but N-cadherin protein is increased \[[@CR88], [@CR90]\]. Other evidence of epithelial dysregulation in *Grhl2* mutants includes decreased expression of the tight junction protein claudin-4. Since E-cadherin appears to be expressed in the non-neural ectodermal boundary cells that directly mediate zipping and fusion \[[@CR90]\] (and our unpublished observation), a precise characterization of the behaviors and dynamics of these cells may shed light on the role of these classic adhesion molecules during NTC (Fig. [4](#Fig4){ref-type="fig"}).

Another cell--cell interaction system, the Eph-ephrin system, is also important for fusion (Fig. [4](#Fig4){ref-type="fig"}). Eph tyrosine kinase receptors and their membrane-bound ephrin ligands participate in several developmental processes, including repelling axonal growth cones and promoting cell migration. The cranial neural tube fails to close in a small percentage of mice deficient in *ephrin*-*A5* or its receptor, *EphA7* \[[@CR110]\]. *EphA7* and *ephrin*-*A5* are strongly expressed along the edge of the neural fold. *EphA7* has three splice-variant transcripts, all of which are expressed in the neural folds. Two of the splice variants encode a truncated form of EphA7 that lacks the tyrosine kinase domain, and these variants enhance adhesion to cells expressing *ephrin*-*A5* in vitro. These lines of evidence suggest that in the neural folds, EphA7 and ephrin-A5 presumably act as a cell adhesion signal \[[@CR110]\]. The importance of the Eph-ephrin system in fusion has also been reported in spinal NTC; blocking EphA activity in whole-embryo culture delays NTC at the posterior neuropore, without disturbing neural plate elevation or DLHP formation \[[@CR111]\].

Boundary regions and cranial neural crest cells {#Sec11}
-----------------------------------------------

Cranial neural crest cells (CNCs) are generated at the dorsal edge of the neural folds (the boundary regions) (Fig. [4](#Fig4){ref-type="fig"}). Failure of the CNCs to develop or emigrate is often observed in cases of exencephaly \[[@CR3]\], although the mechanism by which defects in CNCs lead to exencephaly is unclear. In mice, CNCs in the midbrain and hindbrain begin detaching from the edge of the neural folds and start migrating well before NTC is complete \[[@CR112], [@CR113]\]. A recent study reported that the non-neural surface ectoderm (*Wnt1*-*Cre*^+^/E-cad^+^/PDGFRa^+^) in the cranial boundary regions produces CNC-like cells (Fig. [4](#Fig4){ref-type="fig"}) \[[@CR114]\]. The "metablast" discussed in that study is likely a CNC subpopulation previously considered to arise only from the neuroectoderm \[[@CR114]\]. Because the properties of the boundary regions are important for successful closure, it would be interesting to examine whether CNC-defective mutants that disrupt cranial NTC also have defects in the boundary regions.

Thus, it has become evident that cranial NTDs can be caused by disrupted signaling or cellular events in the boundary regions (Fig. [4](#Fig4){ref-type="fig"}), including CNC emigration and cell death, as discussed below.
Programmed cell death in the boundary regions {#Sec12}
---------------------------------------------

### Apoptosis {#Sec13}

Programmed cell death, especially apoptosis, was observed early in the study of NTC \[[@CR12], [@CR98], [@CR115]\]. Apoptosis, which is prominent during development, is propagated through signaling cascades that eventually converge on and activate cysteine proteinases, the caspases, which ultimately cause cell death through cleavage of their substrate proteins \[[@CR116], [@CR117]\]. At the boundary of the rhombencephalon and mesencephalon, extensive apoptosis is observed---both in the non-neural surface ectoderm and the neuroepithelium---before the neural folds are apposed and fused (Fig. [4](#Fig4){ref-type="fig"}). Because this pattern coincides with CNC generation, it has long been assumed that apoptosis contributes to CNC development, although its role has not been clearly determined \[[@CR118]\]. Apoptosis is also extensive at the anterior neural ridge (ANR), which is the boundary region of the most rostral prosencephalic region, although CNCs do not originate in this region. The role of apoptosis in the ANR is not yet known.

Mice lacking intrinsic apoptotic pathway genes (*apaf-1*, *caspase-3*, or *caspase-9*), mice harboring a mutant form of *cytochrome*-*c* that cannot activate the apoptotic pathway but is intact for electron transport, and double-knockout mice for the JNK1/JNK2 genes exhibit cranial NTDs, including exencephaly \[[@CR119]--[@CR123]\]. These results indicate that the regulation of apoptosis is involved in successful cranial NTC. Although many of the boundary cells responsible for fusion undergo apoptosis, inhibiting apoptosis does not affect the fusion process itself \[[@CR100], [@CR118]\]. Recently, live-imaging analysis revealed that in the absence of apoptosis mediated by caspase activation, DLHP formation and the flipping of the neural ridge are perturbed in the MHNP, thus delaying cranial NTC \[[@CR100]\]. This suggests that apoptosis mediated by caspase activation promotes the smooth progression of neural plate morphogenesis during cranial NTC. It is not yet clear how apoptosis mediated by caspase activation (which occurs mainly in the boundary regions) achieves this, nor whether apoptosis acts permissively or instructively on the progression of NTC. Apoptosis mediated by caspases may instructively help to generate forces that promote epithelial sheet morphogenesis, as shown in other model organisms and in cell culture systems \[[@CR124]--[@CR126]\]. Determining this conclusively in mice would require new tools to inhibit or induce caspase activation and apoptosis, with precise control over region and timing. It is also worth investigating whether apoptosis and caspase activation in the boundary regions act on the adjacent neuroepithelium and surface ectoderm by releasing signaling molecules such as growth factors, small-molecule hormones, and fatty acids \[[@CR127]--[@CR132]\].

Apoptosis occurs continuously from the beginning through to the final step of NTC, including the entire remodeling process in which the neuroectoderm and the outer non-neural ectoderm are separated and arranged to make a rigid neural tube. However, the contribution of apoptosis to this remodeling process is still unclear, as it is in other tissue-fusion processes in which extensive apoptosis is observed \[[@CR101], [@CR133]\].
In cultured mouse embryos, it was reported that chemically inhibiting apoptosis does not affect the fusion or the separation of the neuroectoderm and epidermis \[[@CR118]\], suggesting that apoptosis is dispensable for fusion and separation.

### Non-apoptotic cell death and autophagy {#Sec14}

Nevertheless, it is not yet clear whether cell death itself is non-essential in the remodeling process, because even in apoptosis-deficient embryos, alternative forms of cell death (non-apoptotic cell death) can occur \[[@CR134], [@CR135]\]. To clarify the impact of programmed cell death itself on NTC, it is necessary to determine whether non-apoptotic cell death, such as caspase-independent cell death or autophagic cell death, occurs in these embryos during NTC, and what actually causes the cells to die in the process.

Cell death is often accompanied by autophagy \[[@CR136]\]. The role of autophagy during cranial NTC remains to be elucidated. Mice deficient for *ambra1*, which is necessary for beclin1-dependent autophagosome formation during murine development, exhibit exencephaly and spina bifida \[[@CR137]\]. Although macroautophagy is mainly mediated by Atg5 or Atg7, mutant mice deficient for those genes do not show any apparent developmental defects in NTC \[[@CR138], [@CR139]\]. Thus, NTC does not require Atg5/Atg7-dependent autophagy but does require the recently identified beclin1-mediated alternative autophagic pathway \[[@CR140]\]. Interestingly, *ambra1* KO embryos showed increased Shh signaling in the neural tube, suggesting that there might be an interaction between the regulation of Shh signaling and the Ambra1 protein, and that this may be the cause of NTDs in these embryos \[[@CR137], [@CR141]\]. Further studies will be needed to reveal the complex interplay between cell death, autophagy, and cell differentiation programs during cranial NTC.

Remodeling and the integrity of the neural tube and epidermis {#Sec15}
-------------------------------------------------------------

Little is known about the cellular and molecular mechanisms of remodeling in the midline after fusion (Fig. [1](#Fig1){ref-type="fig"}f) \[[@CR101]\]. During remodeling, dynamic cellular behaviors such as cell rearrangement, cell mixing, cell proliferation, and cell death are expected and indeed observed, as mentioned above---but not precisely understood. Thus, it is difficult at present to assess the significance and normal characteristics of this remodeling. Detailed studies of these cellular behaviors are needed to determine the precise remodeling mechanisms that, if disrupted, would cause the neural tube to reopen, an event that results in exencephaly or spina bifida. In fact, reopening of the neural tube might be considered a remodeling failure. The tumor suppressor gene *neurofibromatosis type 2* (*Nf2*) likely contributes to the remodeling steps that prevent the neural tube from reopening: the *Nf2* gene product regulates the assembly of apico-lateral junctional complexes in the neuroepithelium \[[@CR142]\]. Eliminating *Nf2* specifically in the developing neuroepithelium from E8.5 does not affect the initial fusion process, but the cranial neural tube reopens after E9.5, causing exencephaly. Thus, establishing the proper cell--cell adhesion structures during remodeling seems to be important for keeping the neural tube closed, although this concept has not been directly tested by experimental manipulation of the remodeling process.
Genetic background affects susceptibility {#Sec16}
-----------------------------------------

The penetrance of exencephalic phenotypes in the presence of genetic or environmental perturbation can vary according to the mouse genetic background \[[@CR9], [@CR143], [@CR144]\]. For instance, many knockout NTD mice maintained on a 129-dominant background exhibit exencephaly, while those on a C57BL/6 background do not---although the opposite has been reported in other cases (see \[[@CR9]\]). Mice that differ in NTD phenotype according to their genetic background include mutants or knockouts for transcription factors such as *p53, Cart1, Sall2*, and *splotch* (*Sp*^*2H*^) (*pax3*) \[[@CR17], [@CR40], [@CR78], [@CR145]\], apoptosis regulators such as *caspase*-*3* and *apaf*-*1* \[[@CR146]--[@CR148]\], growth factor signaling (*Noggin*) \[[@CR25], [@CR26]\], cellular trafficking \[[@CR149], [@CR150]\], and chromatin modifiers (*Jumonji*) \[[@CR151], [@CR152]\]. Presumably, multifactorial causes underlie the phenotypic differences and penetrance in these cases.

Interestingly, the mode of closure 2 appears to affect susceptibility to exencephaly under certain kinds of genetic or environmental perturbation of cranial NTC \[[@CR15], [@CR17]\]. Although the point where closure 2 begins is usually at the forebrain-midbrain boundary (FMB) (Fig. [2](#Fig2){ref-type="fig"}b, c), this location varies among mouse strains \[[@CR10], [@CR15]--[@CR17]\]. The SELH/Bc strain, for example, likely lacks closure 2 \[[@CR15]\]; the forebrain is sealed only by closure 1 and closure 3, and this strain has a spontaneous exencephaly rate of about 20 %. This is not the only example. Closure 2 begins caudal to the FMB in the DBA/2 strain, and rostral to the FMB in the NZW strain \[[@CR17]\]. Interestingly, a *Sp*^*2H*^ (*pax3*) mutation introduced into the DBA/2 background is less susceptible to exencephaly, but the susceptibility increases in the NZW background \[[@CR17]\]. These data have prompted the suggestion that, as the starting site of closure 2 becomes more rostral, susceptibility to exencephaly increases in the presence of genetic perturbation or teratogenic agents \[[@CR14], [@CR17], [@CR153]\]. It is not yet clear what causes the differences in closure 2 position, or how a more rostral site increases the risk of exencephaly.

A kinetic view of cranial NTC: Is there a closure deadline? {#Sec17}
-----------------------------------------------------------

A more rostrally positioned closure 2 increases the length of the MHNP that must be sealed by closures 1 and 2. This presumably lengthens the time it takes to close the MHNP, thus delaying the completion of NTC. We performed a live-imaging observation of delayed closure and perturbed neural fold movement in the absence of apoptosis. On the basis of the results, we proposed a working model of a deadline for cranial NTC. This "developmental time window" hypothesis holds that forces counteracting closure are generated and eventually surpass those promoting closure as embryonic brain development proceeds. In normal development, cranial NTC is completed before the counteracting forces become strong enough to interfere with closure (Fig. [5](#Fig5){ref-type="fig"}) \[[@CR100]\]. However, if the progress of closure is delayed, whether due to genetic, environmental, or physical disturbances, cranial NTC fails---or the closure reopens, as observed in a live-imaging analysis \[[@CR100]\]---due to the stronger counteracting forces.
This model explains why a disturbance in the NTC process would more severely impact mice with a rostrally positioned closure 2 or with delayed closure. The penetrance of NTDs varies in mice harboring mutations known to cause them. Viewing NTC kinetically from this model may explain the variable phenotypic penetrance.

Fig. 5 Developmental time-window model for cranial NTC. NTC must be completed by a hypothetical developmental deadline (about somite stage 20), when forces incompatible with NTC may arise. Any perturbation of the NTC program could delay NTC. Even when closure is delayed, the embryo can develop without NTDs, as long as NTC can be completed before the deadline (shown as "rescue from delay"). However, if closure is not completed by the deadline, cranial NTC ends in failure to close at the MHNP, resulting in cranial NTDs such as exencephaly

Live imaging of cranial NTC with functional reporters for cellular or signaling activities {#Sec18}
------------------------------------------------------------------------------------------

Causal relationships between genetic mutations and NTDs have been identified by developing hundreds of mutant mouse models \[[@CR4], [@CR9]\]. In many cases, the linkage between a genetic mutation and the consequent NTD has not been clarified, since the complexity and dynamics of cranial NTC cannot be captured solely by static methods. Recently developed live-imaging analysis allows for more precise investigations. Using this approach, researchers have revealed how PCP signaling, cytoskeletal dynamics, or cellular behavior acts on neural tube morphogenesis in various animal models, including ascidians, fish, amphibians, and birds, whose embryos are more accessible for such analyses than are mammalian embryos \[[@CR36], [@CR43], [@CR74], [@CR154]--[@CR157]\]. Such analyses will also help us to understand basic mechanisms of mammalian cranial NTC, which is much more complex than that of non-mammalian vertebrates \[[@CR11]\]. Furthermore, live-imaging analysis with functional reporters for biological signals or molecules now allows us to visualize and monitor the activities or behaviors of signaling molecules within living cells. Although it has long been difficult to generate transgenic mice that stably express genetically encoded fluorescent reporters monitoring biological signaling activities, several groups have succeeded in generating them \[[@CR100], [@CR158]--[@CR160]\]. With new fast-scanning confocal microscopy methods for high-resolution observation of living embryos, it has now become possible to observe real-time cell signaling during mammalian cranial NTC \[[@CR99], [@CR100]\]. This has revealed unexpected, differential modes of apoptosis occurring during NTC \[[@CR100]\]. It would be interesting to visualize neural plate morphogenesis and the specific signaling pathways responsible for closure. Further development of these new methods of analysis will allow us to gain new insights into the mechanics and dynamics of cranial NTC and the etiology of NTDs.

Nutrition, metabolism, and epigenetic regulation {#Sec25}
------------------------------------------------

We have summarized factors contributing to the process of cranial NTC, focusing on neural plate morphogenesis and cell--cell/tissue--tissue interactions.
In addition to these embryonic mechanisms of NTC, we want to conclude by mentioning two other aspects that are important when considering the etiology of human NTDs: the contribution of maternal nutritional factors including folic acid, and epigenetic regulation \[[@CR2], [@CR7], [@CR161]\]. Among maternal nutrient factors, the preventive effects of folic acid on human NTD risk have been well established, and several countries mandate folic acid fortification of the grain supply \[[@CR5], [@CR6]\]. However, how folic acid contributes to normal NTC or prevents NTDs is not well known. Six genes have so far been identified as responsible for folate transport in mammals: the glycosyl-phosphatidylinositol-anchored folate receptors (*Folr1*, *Folr2*, *Folr3*, and *Folr4*) \[[@CR162], [@CR163]\], the bidirectional *reduced folate carrier 1* (*RFC1*; also known as *SLC19A1*) \[[@CR164]\], and the *proton-coupled folate transporter* (*PCFT*) \[[@CR165]\]. Mice deficient for either *Folr1* or *RFC1* exhibit severe growth retardation and embryonic lethality before the beginning of NTC \[[@CR162], [@CR166]\]. Supplementing pregnant mothers with a high amount of folates allows those mutant embryos to survive to birth, suggesting that folate transport from amniotic fluid to embryos is essential for embryonic growth \[[@CR166], [@CR167]\]. Embryos from mothers supplemented with modest levels of folates are rescued from early embryonic lethality but still exhibit NTDs \[[@CR168]\]. Interestingly, *Folr1* is strongly expressed in the dorsal tips of the neural folds during NTC, implying that developmental events in those regions, including neural crest generation, may require a high amount of available folates \[[@CR169], [@CR170]\]. These lines of evidence together suggest that an adequate amount of folates available to the embryo is a crucial factor throughout developmental stages from gastrulation to neurulation. However, it is not yet clear how this is related to the preventive effects of folates on NTDs. There are both folate-sensitive and folate-resistant NTD mouse models. Furthermore, recent studies suggest that excessive folic acid intake is deleterious to several NTD mouse mutants, and even to normal embryogenesis \[[@CR171], [@CR172]\]. These findings have led to general concerns about unintended consequences of folic acid supplementation \[[@CR5], [@CR6]\]. Folate availability affects one-carbon metabolism, which supplies, as its name suggests, the one-carbon groups required for de novo synthesis of purines and thymidylate, and for methylation \[[@CR7]\]. Indeed, the availability of folate impacts both nucleotide synthesis and DNA methylation \[[@CR173]--[@CR175]\]. Interestingly, deficient DNA methylation leads to cranial NTDs \[[@CR176], [@CR177]\]. Such epigenetic regulation by methylation may also be involved in the higher rate of exencephaly seen in female embryos \[[@CR177], [@CR178]\]. A recent study reported that loss-of-function mutations in the glycine-cleavage system, which is an important component of folate one-carbon metabolism in mitochondria, predispose to NTDs in humans and in mice \[[@CR179]\]. This suggests that functional folate one-carbon metabolism itself is crucial for NTC. How folate metabolism and epigenetic regulation fit into the developmental NTC mechanism remains to be determined in future studies.

Conclusions and perspective {#Sec20}
===========================

Cranial NTC is a fascinating, dynamic process that is crucial to the development of a functional brain.
In this review, we have attempted to clarify the mechanisms of cranial NTC. Normal developmental programs required for cranial NTC include neural plate patterning, signaling systems responsible for tissue movement or fusion, and mechanisms responsible for the coordination of cell division, cell differentiation, and cell death. By examining these developmental programs, it will be possible to understand the mechanical and kinetic aspects of closure that may largely affect the occurrence of cranial NTDs. Newly emerging techniques, including functional live-imaging analysis, now allow researchers to analyze the interactions between signaling activity and morphological changes in detail in various model organisms, including mice. With these tools, it may be possible to determine precisely when and how mutations disrupt normal developmental programs and produce NTDs. This knowledge may also help us to understand the action of teratogenic drugs and to find ways to prevent NTDs \[[@CR144]\].

We apologize to the colleagues whose works have not been cited or have been cited only indirectly because of space limitations. We are grateful to K. Nonomura, N. Shinotsuka, A. Shindo, H. Mitchell, and R. Finnell for critical reading of the manuscript. This work was supported by grants from the Japanese Ministry of Education, Science, Sports, Culture, and Technology (to Y.Y. and M.M.).
Mid
[ 0.625, 28.125, 16.875 ]
I like game development. Really, I do. But I find sitting in front of a computer and programming in C/C++ all day very boring. So, here's what I would like to do... I would like to write my own compiler using C that would allow me to write games using a custom BASIC-like language. Of course, it wouldn't be exactly like BASIC. It would include classes, inline assembly, etc. This same approach was taken by Naughty Dog for their Jak and Daxter series of games. They created a custom LISP variant called GOAL (Game Oriented Assembly Lisp). I know that I would need to do what I originally said I didn't want to do (sitting in front of a computer all day and programming in C/C++), but I'm willing to do that if it means I can use a powerful BASIC variant to program my games. So, what exactly should I start with? Should I use Bison to parse the language? Does anyone know of a good compiler design tutorial? Is this even a good idea?

You'll spend more time debugging your compiler than you will be making games, in my opinion.

You're right. But, this new language would be available for everyone to use. So my relentless debugging would help a lot of newbies who want to get into game development (yes... alright... I'm talking about myself here)

Community projects are great, and if you're truly dedicated to writing this language, then perhaps it has a chance. Just keep in mind that the freely-available-software world doesn't need another project that's just going to get orphaned. That said, have you looked at any of the other 'starter' programming languages targeted at game programming? BlitzBasic? DarkBasic? PyGame? If not, I highly suggest you do and see if any of those fill your needs. If they don't, and you decide to go ahead with your own language, I suggest filling a niche that they don't, otherwise you might find it harder to get a community together when you'll be competing against these larger, established "easy-to-program" languages. Disclaimer: I've never used any of those, but we do get a lot of posts from people who do, who are mostly newbies.

I've never tried BlitzBasic, but I do have some comments on DarkBasic and PyGame: For me, DarkBASIC seems limiting. I don't know why, but when I open up that ugly IDE, I just get the feeling that I'm in a box. With my language, I want it to be simple enough for a newbie, but powerful enough for the average programmer who's had a few years of experience. PyGame is nice, but Python is not my ideal language for writing games. I don't like that just anyone can edit the source. Sure, it might be OK for some games that I just let people do what they want with, but Python isn't for commercial games. I could use py2exe, but the programs it puts out are for Windows only and tend to be very large. My main design hurdle right now is the decision on usability. Should the language be designed for games alone? Or should I include other functions/libraries that make it suitable for everyday application development? To me, designing it for game programming only would be slightly easier. But allowing general application development would make it available to a wider range of programmers.

If you want to program games, why waste at the minimum a year of your time not programming games? The more powerful you make the language, without 'limiting' it as you say, the more complexity you induce, which in turn makes it harder to use, which in turn doesn't make it newbie friendly. What is so boring about C/C++ that would make your language such a joy to use?
I could go on, but I think I nailed a few key points there. I don't want to be a downer on your idea, but it just seems far-fetched, and not really worth your time.

Under no circumstances do I think it's a colossal waste of time! Writing a compiler and virtual machine was the most fun coding I have ever done, and I encourage everybody to at least try it out! Especially in these times where every new game seems to have some form of scripting... In response to 1 and 2: the Peroxide script tutorials are where I learned the trade. The language is not the most complex, but it does work, and the framework is easy to expand. His 'hello world' example looks like this:

// A 'hack' to print a string from script.. send it via a 'setint' on the player as the name..
void printLog(string msg)
{
    setint(TYPE_PLY, msg, 2, 0);
}

program helloWorld trigger_on_init
{
    printLog("Hello world from PxdScript!");
}

and my expanded test script-code looks like this (built on his framework): This took about two months to do, and that was while building the engine on the side. I do not agree that making a language too powerful will make it complex, on the contrary. Making a powerful script language is all about removing the low-level stuff from the user, and making simpler high-level interfaces that they can use instead. A good example of this is UnrealScript's networking architecture, where instead of having to write code for transferring data to and from the server, they simply mark what is to be sent, and what conditions have to be met for the transfer to occur. The compiler code to do this doesn't even have to be complex, just flag class-members as stated in the scripts, and have the runtime send it. I too would like to know what is so boring about C++? I use it all the time, and I love it!

[1] No, but I have to start somewhere. >_> [2] No, but again, I have to start somewhere. [3] Because, as I said, I don't like to write games in C. I like to write other stuff in C, just not games. [4] Well, I'm not going to make it completely easy. That would just be another BASIC clone. I think classes, garbage collection, inline assembly, etc. would be a good addition to the traditional BASIC language. But, newbies wouldn't have to use those features if they didn't want to. By "limiting", I was speaking in terms of the feature set. [5] When I'm writing a game, I like to think from the design side. C/C++ is just not the language for me when I try to make a game. If I had something simple, yet powerful, I'm sure making games would be a lot easier for me. About my time... honestly, I have a ton of it. I've been thinking for the last 3 days about something I should work on. A compiler finally popped into my head and I decided to give it a try. Besides, who says you have to use it? Or anyone else? I'm really making it for my benefit because I think it would help me in the long run. If other people like it and want to use it, more power to them. When you say it's a waste of time, you're speaking from your point of view. But, here's the thing... you're not making the compiler. I am. So, it's really up to me whether it's a waste of time or not.

Half of the things I end up coding turn into wastes of time. That said, they're wicked learning experiences. I wrote my own compiler tools for some work-related language stuff, and it was a *pain*. That said, it was super interesting, and it helped me understand why gcc/cl do the things they do, and it was very rewarding. I won't rain on your parade, but just know that it *can* be very difficult.
I do wish you luck however. You could join an existing open-source BASIC compiler project like Mattathias Basic, which is a freeware sequel to AMOS Basic on the old Amiga computers and an open-source counterpart to DarkBasic. We are aiming to make it cross-platform compliant so that it will still generate code for the next-generation PowerPC-based Amigas as well as for PC. If you're interested you might like to join the mailing list and read up on the progress first. If you're looking for something closed-source that will only work on Linux, Mac, and PC, you're better off looking into buying BlitzMax instead. It's object-oriented like Mattathias but broke the backward-compatibility mold with their Amiga version to better support the PC.

A custom language doesn't have to take a ton of time. I used Flex++ and Bison++ to generate my own compiler, and with no previous compiler-building experience I had my own language in less than 3 months' time.

[4] Well, I'm not going to make it completely easy. That would just be another BASIC clone. I think classes, garbage collection, inline assembly, etc. would be a good addition to the traditional BASIC language. But, newbies wouldn't have to use those features if they didn't want to. By "limiting", I was speaking in terms of the feature set.

Agreed, beginner developers would not need to use the more advanced features. But to create anything really deep, you would need to use those features. You can make it easier to interface with low-level operations, agreed. I just don't see why you don't create a C++ class to do that for you, instead of an entirely new language. But, as you said, you don't like using C to program games in.

@cypher543 [5] When I'm writing a game, I like to think from the design side. C/C++ is just not the language for me when I try to make a game. If I had something simple, yet powerful, I'm sure making games would be a lot easier for me.

From what you are saying, it sounds as if you are an experienced programmer, especially if you are willing to tackle creating a new language (with the feature set you have described). So, why are you finding it so difficult programming in C/C++ for games? What makes a game so much harder than other apps you have written? The design side should be very code non-specific. You should never even talk about what language you are going to use in your game design. The eventual implementation of your design will have tons to do with C++, not the creation of the design itself.

@cypher543 About my time... honestly, I have a ton of it. I've been thinking for the last 3 days about something I should work on. A compiler finally popped into my head and I decided to give it a try. Besides, who says you have to use it? Or anyone else? I'm really making it for my benefit because I think it would help me in the long run. If other people like it and want to use it, more power to them. When you say it's a waste of time, you're speaking from your point of view. But, here's the thing... you're not making the compiler. I am. So, it's really up to me whether it's a waste of time or not.

Agreed, it is your time, you can do with it as you please. But you also asked on a forum what people thought of your idea and your time. Don't ask if you don't want answers. I don't think I was being rude in saying I thought it was a waste of time given your target goals, which is game programming, not tools programming (from the information you provided).
If you indeed do feel that you are more interested in programming a language, then go for it; it sounds like a great challenge, and a blast. But if your goal is to program games? Then no, this is not worth your time.

@Kenneth Gorking I do not agree that making a language too powerful will make it complex, on the contrary. Making a powerful script language is all about removing the low-level stuff from the user, and making simpler high-level interfaces that they can use instead.

From what I gathered he wasn't talking about a simple scripting language, he was talking about a fully featured programming language; his feature set seemed to indicate that as well. As I said above, yeah, you can reduce the complexity by making higher-level interfaces to low-level stuff, but in the end, you're going to need to do more than basic reads and writes, especially when it comes to games programming. Like inline assembly, he wants that, how is that easier in his proposed language than in C++? Cypher, in the end, it comes down to, do you want to program games right now, or later? If you want to program games right now, then this is a waste of your time, as you won't be programming games for quite a while (going under the assumption this is a fully featured language, not just a limited-feature-set scripting language). If you can wait, and you really like the idea of your proposed language, then go for it, it sounds like it would be a good time.

They created a custom LISP variant called GOAL (Game Oriented Assembly Lisp).

I'd just like to add that creating a variant of Lisp is a whole different ball-game than creating a completely different language compiler. Search around for LISP and "Domain Specific Languages". Lisp programmers tend to morph the language to suit their needs, rather than create a whole new language. I'm not trying to dissuade you, but I don't think the Jak and Daxter guys went to the trouble of a lexer and parser. They probably used the Lisp compiler and directly edited the code tree with macros.

I don't think I was being rude in saying I thought it was a waste of time given your target goals, which is game programming, not tools programming (from the information you provided).

I'm sorry if I sounded annoyed. I didn't think you were being rude, at all. My target goal isn't really game programming, it's making game programming easier for myself and for people like me who tend to not get the whole process. Anyway, I think what I'm going to do is just write the basic compiler, then expand it with an OpenGL library. That way, people can write their own engines for it. Maybe I'll work on my own, that way newbies have something to work with. I know this is going to take a while, but I'm so completely bored right now, and I need something exciting to do.

I would say it would be well worth your time then, since you seem to want to focus more on a game language, rather than game development itself. Before you begin, it would be good to check out what some of the other game programming scripting languages do, and check on their forums to see what people do/don't like. That would give you a good idea of possibly where to start when implementing a feature set.

If you want to start somewhere you should buy a few good books on the subject; there is no substitute for this, as online resources are generally sparse and/or poor. Tutorials are not enough, you need to understand theory as well; I suggest starting here.
Lastly do yourself a favour and use a more appropriate programming language for this, such as SML, O'Caml, or Haskell.

snk_kid: you keep amazing me with good links about language ponderings.. I had almost given up on finding those.. there really isn't much good stuff (or it's hard to find) dealing with "new" languages.. thx

Once you've got your head wrapped around these concepts the rest will just be learning syntax, which is a one-day job.

@cypher543 Besides, I think Flex/Bison look very cool, and I doubt they generate lexers and grammar for SML, O'Caml, or Haskell.

Each of those languages has its own versions of lex & yacc (which is what flex & bison descend from). C is just awful for compiler development even when you have tools to help out. Don't let the fact that GCC is written in C make you think otherwise, because it really is a bad idea, and it doesn't mean you can't write a GCC front-end in any language, because you can, so it's all moot. The first place to start when creating a custom language (IMO) is the dragon book (its proper name is "Compilers: Principles, Techniques and Tools"). It is the classical reference, and doesn't cover new or really advanced stuff. You might want to look at "Advanced Compiler Design and Implementation" by Muchnick, "Engineering a Compiler" by Cooper, and "Programming Language Pragmatics" by Scott, but I don't know these books. Then there is a lot of stuff on the net.
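For anyone who wants to see how small the first stage of such a compiler can be, here is a minimal tokenizer sketch in Python for a made-up BASIC-like language. The token set and keywords are invented purely for illustration (they are not taken from BlitzBasic, DarkBasic, or PxdScript), but this is essentially the structure a lex/flex specification generates for you:

import re

# Toy token definitions for an invented BASIC-like language.
TOKEN_SPEC = [
    ("NUMBER",  r"\d+(\.\d+)?"),
    ("STRING",  r"\"[^\"]*\""),
    ("IDENT",   r"[A-Za-z_][A-Za-z0-9_]*"),
    ("OP",      r"[+\-*/=<>(),]"),
    ("NEWLINE", r"\n"),
    ("SKIP",    r"[ \t]+"),
]
MASTER_RE = re.compile("|".join("(?P<%s>%s)" % pair for pair in TOKEN_SPEC))
KEYWORDS = {"print", "if", "then", "end"}   # hypothetical keyword set

def tokenize(source):
    """Yield (kind, text) pairs; keywords are split out from identifiers."""
    for match in MASTER_RE.finditer(source):
        kind, text = match.lastgroup, match.group()
        if kind == "SKIP":
            continue
        if kind == "IDENT" and text.lower() in KEYWORDS:
            kind = "KEYWORD"
        yield kind, text

for token in tokenize('print "Hello world"\n'):
    print(token)

Feed the resulting token stream into a hand-written recursive-descent parser or a Bison/yacc grammar and you have the front half of the compiler; the books above cover the rest.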
Low
[ 0.51171875, 32.75, 31.25 ]
Q: Create Google Cloud instance with custom FreeBSD ISO

I want to create a new Google Cloud instance with a HardenedBSD ISO. HardenedBSD is a FreeBSD-based OS. I checked the public documentation at https://cloud.google.com/compute/docs/images/import-existing-image but I couldn't see FreeBSD in the supported OS section. Is there a way to do that?

A: FreeBSD works pretty well in GCE, and the procedure for uploading a custom image or making your own is quite easy (I would say even easier than with AWS), so chances are high that the same applies to HardenedBSD. The only "trick" is that after you have your raw disk you need to package the image with GNU tar before uploading it:

gtar -cSzf freebsd.tar.gz disk.raw

To create the disk.raw I use this script (root on ZFS): https://github.com/fabrik-red/images/blob/master/fabrik.sh. To read more about the procedure you could check: https://fabrik.red/post/google/

For testing or getting an idea, you could try FreeBSD 12.0: https://github.com/fabrik-red/images/releases/download/12.0/disk.tar.gz
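As a follow-up, once you have the freebsd.tar.gz produced by gtar, two more steps remain: stage it in a Cloud Storage bucket and register it as a custom image. Below is a minimal Python sketch of those steps using the google-cloud-storage and google-api-python-client libraries; the project, bucket, and image names are placeholders. (The same can be done from the command line with gsutil cp and gcloud compute images create --source-uri=...)

from google.cloud import storage
from googleapiclient import discovery

PROJECT = "my-project"       # placeholder
BUCKET = "my-image-bucket"   # placeholder

# Step 1: upload the tarball (the gtar'd disk.raw) to a bucket.
client = storage.Client(project=PROJECT)
blob = client.bucket(BUCKET).blob("freebsd.tar.gz")
blob.upload_from_filename("freebsd.tar.gz")

# Step 2: register a custom image whose rawDisk source is that object.
compute = discovery.build("compute", "v1")
operation = compute.images().insert(
    project=PROJECT,
    body={
        "name": "hardenedbsd-custom",  # placeholder image name
        "rawDisk": {"source": "https://storage.googleapis.com/%s/freebsd.tar.gz" % BUCKET},
    },
).execute()
print("image insert operation:", operation["name"])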
High
[ 0.698140200286123, 30.5, 13.1875 ]
--- abstract: 'We use time-dependent, axisymmetric, hydrodynamic simulations to study the linear stability of the stalled, spherical accretion shock that arises in the post-bounce phase of core-collapse supernovae. We show that this accretion shock is stable to radial modes, with decay rates and oscillation frequencies in close agreement with the linear stability analysis of Houck and Chevalier. For non-spherical perturbations we find that the $l=1$ mode is always unstable for parameters appropriate to core-collapse supernovae. We also find that the $l=2$ mode is unstable, but typically has a growth rate smaller than that for $l=1$. Furthermore, the $l=1$ mode is the only mode found to transition into a nonlinear stage in our simulations. This result provides a possible explanation for the dominance of an $l=1$ ’sloshing’ mode seen in many two-dimensional simulations of core-collapse supernovae.' author: - 'John M. Blondin' - Anthony Mezzacappa title: The Spherical Accretion Shock Instability in the Linear Regime --- Introduction ============ The modern paradigm for core-collapse supernovae includes a critical phase between stellar core bounce and explosion that is characterized by a stalled accretion shock, during which time neutrino heating is believed to reenergize, or at least play a critical role in reenergizing, the stalled shock \[[@bhf95; @mezzacappaetal98; @rj00; @liebendoerferetal01; @burasetal03; @fw04]\]. (For a review, see [@mezzacappa05].) This phase is expected to last of order a few hundred milliseconds. The past decade has seen significant interest in the multidimensional dynamics of this post-bounce accretion phase. Most two-dimensional supernova simulations exhibit strong turbulent motions below the stalled accretion shock \[[@hbc92; @mwm93; @herantetal94; @bhf95; @mezzacappaetal98; @burasetal03; @fw04]\]. In the past, this turbulent flow was attributed to convection driven by the intense neutrino flux emerging from the proto-neutron star at the center of the explosion. However, @bmd03 [hereafter Paper I] showed that the stalled accretion shock [*itself*]{} may be dynamically unstable. By using steady-state accretion shock models constructed to reflect the conditions in the post-bounce stellar core during the neutrino heating phase (as was shown in ) but characterized by flat or positive entropy gradients, and as such convectively stable, @bmd03 were able to isolate the dynamical behavior of the post-bounce accretion shock [*per se*]{}. They found that small nonspherical perturbations to the spherical accretion shock lead to rapid growth of turbulence behind the shock, as well as to rapid growth in the asymmetry of the initially spherical shock. This spherical accretion shock instability, or “SASI," is dominated by low-order modes, and is independent of any convective instability. Clearly, once the shock wave is distorted from spherical symmetry, the non-radial flow beneath it is no longer defined solely by neutrino-driven convection. The fluid flow beneath the shock is, at least initially, a complex superposition of flows generated by convection and by the SASI-distorted shock. Once the shock wave is distorted, it will deflect radially infalling material passing through it, leading to highly nonradial flow beneath it. With time, the fluid flow beneath the shock may in fact be determined by the SASI and not by convection. 
Instabilities such as neutrino-driven convection may be important only at early times in aiding the neutrino heating \[[@herantetal94; @bhf95; @mezzacappaetal98; @burasetal03; @fw04]\] and in setting the shock standoff radius while the explosion is initiated. The standoff radius will in turn determine the time scale over which the SASI may develop.

@janka01 provides a qualitative description of this post-bounce accretion phase in terms of a simple hydrodynamic model, with the intent of providing an analytic model that can be used to investigate the conditions necessary for a successful supernova shock. In this picture the post-bounce phase is described by a standing accretion shock with outer core material raining down on the shock at roughly half the free-fall velocity. After traversing the shock, this gas decelerates and gradually settles onto the surface of the nascent neutron star. The pressure of this post-shock gas is dominated by electron-positron pairs and radiation, and as such can be modeled as a $\gamma=4/3$ gas. The approximation of a hydrostatic atmosphere immediately below the accretion shock then yields the result that the gas density increases as $r^{-3}$ and the gas pressure increases as $r^{-4}$ with decreasing radius behind the shock. Deeper within this settling region, the gas pressure becomes dominated by non-relativistic nucleons and the temperature becomes roughly constant due to neutrino emission. The flow below this transition radius can thus be approximated by an isothermal hydrostatic atmosphere. The steady nature of this accretion shock and post-shock settling solution is maintained by a balance between fresh matter accreting through the standing shock, and dense matter cooling via neutrinos and condensing onto the surface of the nascent neutron star \[[@chev89; @janka01]\].

This model of core-collapse supernovae described by @janka01 is similar to the analytic models presented by @hc92 [hereafter HC] to investigate spherical accretion flows onto compact objects. These latter models assume the flow can be treated as an ideal gas with a single effective adiabatic index, $\gamma$, and a cooling function with a prescribed power-law dependence on the local density and temperature of the gas. Using a linear stability analysis, HC showed that extended shocks (where the shock radius, $R_s$, is much larger than the stellar radius, $r_*$) are unstable to radial oscillations. For $\gamma =4/3$ this critical radius was $R_s \sim 20 r_*$ or larger, depending on the cooling parameters. This is larger than is expected in the post-bounce accretion phase of core-collapse supernovae, suggesting that the stalled supernova shock is stable to radial perturbations. HC examined one case with $\gamma =5/3$ for the stability of nonradial modes, and found that only the lowest-order nonradial mode ($l=1$ in terms of Legendre polynomials) was unstable. However, the growth rate for the $l=1$ mode was slower than that for the radial ($l=0$) mode. They did not present results for nonradial modes for any case with $\gamma =4/3$.

The focus of this paper is to develop a deeper understanding of the SASI by focusing on its development in the linear regime and in the transition from linear to nonlinear behavior, and to supplement through such analyses the numerical findings in Paper I. In so doing, ties to and extensions of the linear stability analyses of HC can be made as well, which in part serve to validate the findings made with our multidimensional hydrodynamics code.
We begin by describing the model in Section 2 and the numerical method with which we evolve the time-dependent flow in Section 3. The results from one-dimensional models are reported in Section 4, along with the corresponding results from HC. In Section 5 we present two-dimensional simulations, quantifying the SASI growth rate as a function of the Legendre wave number, $l$, and illustrating the physical origin of the $l=1$ instability.

Spherical Accretion Shock
=========================

We begin with the model of the post-bounce accretion shock and "settling" flow beneath it presented in Paper I. As we now discuss, this model is a limiting case of the model presented in [@janka01] and is defined following the prescription outlined in the earlier work by HC. In our model, we assume we have an ideal gas equation of state, a cooling function, and a hard reflecting boundary at the surface of the accreting compact object. We make the additional assumption that the gas in the postshock region is radiation-dominated—that is, that we have a single adiabatic index. This is appropriate for conditions in the postbounce stellar core at a time when explosion is initiated, during which time we expect an extended heating region overlying a thin cooling region above the proto-neutron star surface. Of course, the equilibrium radius of the accretion shock is determined by the magnitude of the cooling in this cooling layer: stronger cooling leads to a shock radius closer to the inner reflecting boundary, while weaker cooling results in a shock with a large stand-off distance from the accreting object. While one could include a neutrino heating term in such a model [@bg93], this would possibly introduce convection in the dynamics, complicating the analysis. Because our goal is to separate the effects of shock-driven turbulence from thermally-driven convection, we do not include any heating term in our models.

The time-evolution of the flow is given by the Euler equations for an ideal gas described by a velocity, $\bf u$, a mass density, $\rho$, and an isotropic thermal pressure, $p$: $$\partial_t \rho + \nabla \cdot \rho {\bf u} = 0$$ $$\partial_t \rho {\bf u} + \nabla \cdot \rho {\bf uu} + \nabla p = -\rho{GM}/{r^2}$$ $$\partial_t \rho{\cal E} + \nabla \cdot (\rho{\cal E}{\bf u} + p{\bf u}) = -{\cal L}$$ where the total energy per gram is given by ${\cal E} = \frac{1}{2}{\bf u}^2 + e - GM/r$, the internal energy, $e$, is given by the equation of state: $\rho e = p/(\gamma -1)$, and $M$ is the mass of the accreting star. Following HC, the cooling term is parameterized by two power-law exponents: $${\cal L} = A \rho^{\beta-\alpha} p^\alpha .$$ Using the parameterization of effective neutrino cooling provided by @janka01, with $\dot E \propto \rho T^6$, and assuming $P\propto T^4$, we arrive at values of $\alpha = 3/2$ and $\beta=5/2$ for our model.

Following Paper I, we normalize the problem such that $GM=0.5$, $\dot M=4\pi$, and the equilibrium accretion shock is at $R_s=1$. Note that this results in the same normalization for density, velocity, and pressure as used in HC. Assuming free-fall velocity just above the accretion shock, the immediate post-shock values are then given by $$u_s = \frac{\gamma - 1}{\gamma +1}, \rho_s = \frac{\gamma + 1}{\gamma -1}, p_s = \frac{2}{\gamma +1}.$$ The equilibrium solutions are obtained by integrating the Euler equations inward from the shock front until $u=0$, corresponding to the surface of the accreting star.
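The inward integration just described is straightforward to reproduce. The following Python fragment is an illustrative sketch (not the actual VH-1 setup): it integrates the steady-state Euler equations from the shock at $R_s=1$ toward the star, using the normalization and post-shock values above, and stops where the inflow has stagnated, which locates $r_*$. The cooling normalization $A$ is a placeholder value chosen only for demonstration.

```python
# Sketch: steady-state accretion-shock structure for gamma = 4/3.
import numpy as np
from scipy.integrate import solve_ivp

GAMMA, GM = 4.0 / 3.0, 0.5   # normalization of this section
ALPHA, BETA = 1.5, 2.5       # cooling-law exponents
A = 10.0                     # cooling normalization (placeholder; sets r_*)

def rhs(r, y):
    u, p = y                          # u < 0 for inflow
    rho = -1.0 / (u * r**2)           # continuity with Mdot = 4*pi
    L = A * rho**(BETA - ALPHA) * p**ALPHA
    # Solve the momentum and entropy equations jointly for du/dr, dp/dr.
    num = -rho * GM / r**2 + (GAMMA - 1.0) * L / u + 2.0 * GAMMA * p / r
    den = rho * u - GAMMA * p / u
    du = num / den
    dp = -rho * GM / r**2 - rho * u * du
    return [du, dp]

# Post-shock values at R_s = 1 from the strong-shock jump conditions above.
u_s = -(GAMMA - 1.0) / (GAMMA + 1.0)
p_s = 2.0 / (GAMMA + 1.0)

def stalled(r, y):                    # stop once |u| falls to 1% of |u_s|
    return abs(y[0]) - 0.01 * abs(u_s)
stalled.terminal = True

sol = solve_ivp(rhs, (1.0, 1e-3), [u_s, p_s], events=stalled,
                rtol=1e-8, atol=1e-10)
print("r_* ~", sol.t_events[0][0] if sol.t_events[0].size else "not reached")
```

Varying the placeholder value of $A$ moves the stagnation radius, consistent with the behavior described above: stronger cooling gives a smaller stand-off distance between shock and star.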
These equilibrium solutions are shown in Figure \[fig:analytic\] for three different values of $r_*$, and hence three different stellar radii relative to the normalized shock radius.

![Equilibrium solutions for the spherical accretion shock model of a core-collapse supernova are shown for different values of the radius of the proto-neutron star ($r_*$ = 0.04, 0.2, 0.5) relative to the shock radius. The velocity and entropy are integrated inwards from the accretion shock at $R_s=1$ to the stellar surface at $r_*$.[]{data-label="fig:analytic"}](f1)

The entropy profiles shown in Figure \[fig:analytic\] illustrate the regime of strong neutrino cooling in this supernova model. In the absence of cooling the flow would remain isentropic; thus the drop in entropy seen in Figure \[fig:analytic\] reflects the local efficiency of cooling, which is predominantly confined to a thin layer at the surface of the accreting star. Furthermore, as the stellar radius recedes from the shock front (a more extended shock region), this cooling layer becomes progressively thinner.

There are two characteristics of our models that warrant further discussion, particularly as they compare to past supernova models: (1) The mass accretion rate through the shock is assumed to be constant. In reality, the accretion rate decreases with time owing to the decreasing density and velocity in the preshock gas. Therefore, the results we present here would be enhanced if the drop-off in mass accretion rate were included. (2) We assume a high Mach number for the shock (anything $\sim3$ or larger would be sufficiently large to be consistent with our models). This assumption is consistent with the profiles found in supernova models (e.g., see [@mezzacappaetal01]).

Numerical Model
===============

As described in Paper I in more detail, we use the time-dependent hydrodynamics code VH-1 (<http://wonka.physics.ncsu.edu/pub/VH-1/>) to study the dynamics of a spherical accretion shock in both one and two dimensions. A critical aspect of this numerical implementation is the use of dissipation to maintain a smooth flow in the absence of any perturbations. These models are described by four parameters, $\gamma$, $\alpha$, $\beta$, and $r_*$ (or $A$). Note that although the parameters $A$ and $r_*$ are equivalent, they are not related by a closed-form expression. We therefore numerically search for the value of $A$ that produces a steady shock at a radius of unity for a given inner reflecting boundary at a radius of $r_*$. This steady-state solution is then mapped onto a radial grid extending from $r_*$ to $2 R_s$ to initialize the numerical simulations. The shock front is smoothed over two numerical zones to minimize spurious waves at the start of the simulation. The numerical resolution required for an accurate solution depends on the parameters of the problem, namely the scale height of the cooling region. We found appropriate resolutions empirically by evolving one-dimensional simulations at various resolutions and comparing the time-evolution of the shock radius. We used grids with 300 to 450 radial zones, including a small increase (never more than 1%) in the radial width of the zones to maximize resolution near $r_*$, where rapid cooling generates strong gradients. The two-dimensional simulations use the same number of zones as used in the radial direction to cover the polar angle (a limitation of the parallel algorithm used by VH-1) from $0$ to $\pi$, assuming axisymmetry about the polar axis.
The fluid variables at the outer boundary are held fixed at values appropriate for highly supersonic free fall at a constant mass accretion rate, consistent with the analytic standing accretion shock model. Reflecting boundary conditions were implemented at the inner boundary, $r_*$. For the two-dimensional simulations, reflecting boundary conditions were applied at the polar boundaries and the tangential velocity was initialized to zero everywhere in the computational domain.

Code Verification
=================

The results of HC provide an opportunity to verify our numerical code on a time-dependent flow problem of direct relevance to core-collapse supernovae. Their linear stability analysis shows that spherical accretion shocks are unstable to growing oscillations in the shock radius (for the fundamental mode) if the shock is relatively extended. To confirm this stability analysis and to verify our numerical code, we have run simulations matching the parameterization for post-supernova fallback used in HC. They considered the problem of fallback onto the nascent neutron star on the time scale of hours to days following the supernova explosion. In this case the gas is optically thick and radiation-dominated ($p\propto T^4$, $\gamma=4/3$, as in the present post-bounce model). Near the surface of the accreting neutron star, the gas is losing energy through neutrino cooling with a negligible density dependence but a strong temperature dependence ($\dot E \propto T^{10}$). These properties are approximated in the present model by considering the parameter values $\alpha=\beta=2.5$ [@chev89]. We note that the fall-back solutions used by HC and the post-bounce solutions described in this paper are very similar.

![The real and imaginary parts of the growth rate, $\omega$, as a function of the shock height, $R_s/r_*$, for the fall-back model. Results are shown for the linear stability analysis of HC (solid line) and for the 1D numerical simulations described in this paper (dashed line). Note that the values from HC have been transformed to the units used in this paper. []{data-label="fig:omega"}](f2)

We perturb the equilibrium solutions by dropping an over-dense shell onto the shock, which compresses and pressurizes the shock region. The overpressure drives the shock back outwards, and in addition a strong pressure wave rebounds off the stellar surface and drives the accretion shock out even faster. This sets up an oscillation of the shock region on the sound-crossing timescale. For parameters typical of core-collapse supernovae this oscillation is damped, as it was in the adiabatic models of Paper I. For very extended shocks this oscillation is overstable; for the fall-back case this critical radius is $r_*\approx 0.05$. To extract a complex growth rate from these simulations we fit the simulation data for the shock radius, $R_s(t)$, to an analytic function of the form $$R_s(t) = R_0 + R_1e^{\omega_r t}\sin(\omega_i t + \delta).$$ Using least-squares fitting we obtain the real and imaginary parts of the growth rate, $\omega$. The results from several simulations of the fall-back model are shown in Figure \[fig:omega\], together with the linear growth rates derived by HC. There is only a limited range in $R_s/r_*$ where the linear analysis and the numerical simulations overlap, but within that overlap the agreement is remarkably good. Simulations of very extended shocks ($r_* \ll R_s$) become computationally expensive because of the large dynamic range in spatial coverage and the extremely short time scale.
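As an illustration of the least-squares procedure just described, the growth rate can be extracted from a sampled shock-radius history with a sketch like the following; the synthetic data and initial parameter guesses are placeholders, not values from our simulations.

```python
# Sketch: least-squares fit of the analytic form above to R_s(t).
import numpy as np
from scipy.optimize import curve_fit

def shock_radius(t, R0, R1, omega_r, omega_i, delta):
    return R0 + R1 * np.exp(omega_r * t) * np.sin(omega_i * t + delta)

# In practice t_sim and Rs_sim come from the 1D simulation output;
# here they are synthetic placeholders.
t_sim = np.linspace(0.0, 50.0, 2000)
Rs_sim = shock_radius(t_sim, 1.0, 0.02, -0.05, 0.7, 0.3)

guess = [1.0, 0.01, 0.0, 1.0, 0.0]   # R0, R1, omega_r, omega_i, delta
params, cov = curve_fit(shock_radius, t_sim, Rs_sim, p0=guess)
print("omega_r = %.4f, omega_i = %.4f" % (params[2], params[3]))
```

A damped oscillation simply yields a negative $\omega_r$, as in the synthetic example above, while an overstable one yields a positive value.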
As $r_*$ becomes much smaller than $R_s$, the region of strong cooling (for $\gamma = 4/3$) shrinks to a small fraction of $r_*$. This forces the use of a high-resolution spatial grid near $r_*$, which in turn requires a large number of computational zones and a very small time step due to the Courant condition: $\Delta t < \Delta r / c_s$. As an extreme example, to simulate a model with $r_* = 0.01$ required 200,000,000 timesteps. These simulations do two important things: they confirm the linear stability analysis of HC, and they validate our time-dependent numerical model.

Linear Evolution of the SASI
============================

We performed a series of two-dimensional axisymmetric simulations following the same procedure outlined above for one dimension, down to the same radial gridding for a given model. We experimented with a variety of ways to perturb the equilibrium solution with the goal of exciting a single mode (in terms of spherical harmonics) with as little power in other modes as possible. This goal was best achieved when using density enhancements in the preshock gas, as had been used in 1D. These density variations were typically between 0.1% and 1%; large enough to excite the instability but small enough that the perturbations could grow in amplitude by more than an order of magnitude while still remaining small. This extended regime of linear growth facilitated an accurate measurement of the linear growth rate. As in Paper I, all of the two-dimensional simulations were unstable. Here we attempt to quantify the growth rate of this instability in the linear regime as a function of the wave number.

To quantify the importance of the various modes affecting the stability of a spherical accretion shock, the evolution was tracked using Legendre polynomials. Again, after trying several methods, we found the best approach was to first integrate the amplitude in a given harmonic for a fixed radius, $$G(r) = \int A(r,\theta) P_l(\cos\theta) d\cos\theta$$ and then integrate the power for that harmonic over radius: $$Power = 2\pi\int \left[G(r)\right]^2 r^2 dr$$ where $A(r,\theta)$ represents some local quantity affected by the perturbed flow, e.g., entropy or pressure, and $P_l$ is the Legendre polynomial of order $l$.

![The growth of the SASI in a simulation with $r_* = 0.2$ is quantified here by the power in the perturbed entropy for the $l=1$ mode (solid line). The best fit to this growth curve is shown as a dashed line. The SASI becomes nonlinear at a time around $t\approx 60$, as shown by the deviation of $\langle R_s\rangle$ from unity.[]{data-label="fig:sasigrowth"}](f3)

An example of the linear growth of the SASI is shown in Figure \[fig:sasigrowth\] for $r_*=0.2$. These simulations exhibit a well-defined regime of exponential growth spanning at least an order of magnitude in amplitude. The beginning of a simulation is typically marked by a complex pattern of waves, but given an appropriate initial perturbation, a single mode soon dominates the evolution. To provide guidance on the relevance of the linear regime, we also show the angle-averaged shock radius throughout the evolution. During the linear regime, when perturbations to the spherical accretion flow are small, we expect the shock to remain nearly stationary. Once the average shock radius begins to deviate substantially from unity, the SASI has entered the non-linear regime. As in the analysis of the one-dimensional simulations, we can fit these growth curves from two-dimensional simulations with an exponentially growing sinusoid.
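For concreteness, the mode-power diagnostic defined above can be evaluated from a two-dimensional snapshot with a few lines of NumPy. The following is an illustrative sketch only; the grids and the snapshot array are placeholders standing in for actual simulation output.

```python
# Sketch: power in Legendre mode l from a 2D snapshot A(r, theta).
import numpy as np
from numpy.polynomial.legendre import legval

def mode_power(A, r, mu, l):
    """A: (nr, ntheta) perturbed quantity; r: radii; mu = cos(theta)."""
    coeffs = np.zeros(l + 1)
    coeffs[l] = 1.0
    Pl = legval(mu, coeffs)                        # P_l(cos theta)
    G = np.trapz(A * Pl, mu, axis=1)               # G(r): integral over cos(theta)
    return 2.0 * np.pi * np.trapz(G**2 * r**2, r)  # power: integral over radius

# Placeholder snapshot: small random entropy perturbations.
r = np.linspace(0.2, 1.0, 300)
mu = np.linspace(-1.0, 1.0, 300)
A = 1e-3 * np.random.randn(r.size, mu.size)
print("l=1 power:", mode_power(A, r, mu, 1))
```

Evaluating this power at each output time produces the growth curves analyzed here.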
In this case, we are fitting the power, not the radius: $$F(t) = F_1e^{2\omega_r t}\sin^2(\omega_i t + \delta).$$ The fitted frequencies are shown in Figure \[fig:omega2d\] for four different values of $r_*$ (0.5, 0.3, 0.2, and 0.1). We did not attempt simulations for smaller values because they would have required an extremely long integration, and the results for the $r_* = 0.1$ model were sufficiently noisy that we did not expect to be able to extract clean growth rates from more extended models. Note, however, that the growth rate of the $l=1$ mode appears to be decreasing for very extended shocks and that the $l=0$ mode becomes unstable for $r_* < .05$. We do not consider values of $r_*$ larger than 0.5 because this would not be consistent with our fundamental starting assumption of having conditions near explosion and a postshock gas described by a single adiabatic index. Under such conditions, we would have a large heating region dominated by radiation and a thin cooling layer at its base. A larger value of $r_*$ would imply a larger cooling region and, generally, a postshock region composed of gases with different adiabatic indices. ![The real and imaginary parts of the growth rate, $\omega$, as a function of the shock radius relative to the stellar radius, $R_s/r_*$ for three different axisymmetric modes, $l=0$, 1, and 2. []{data-label="fig:omega2d"}](f4) We could not isolate modes with values of $l>2$, nor could we adequately measure the growth of the $l=2$ mode in the most extended models with $r_*=0.1$. The results for the spherically symmetric mode ($l=0$) are taken from the one-dimensional simulations. As expected, the frequency of oscillation is a monotonically increasing function of the wavenumber. The growth rate, however, is not. In all cases we found that $l=1$ is the most unstable, and is always unstable. ![The growth of the SASI in a simulation excited with the $l=2$ mode (dashed line) but dominated at late times by the $l=1$ mode (solid line). The SASI becomes nonlinear at a time around $t\approx 120$, once the $l=1$ mode becomes dominant.[]{data-label="fig:saturate"}](f5) At late times the evolution is always dominated by the $l=1$ mode. In fact, this is the only mode that we have observed to reach a nonlinear stage. We show in Figure \[fig:saturate\] the growth of the different modes in a simulation for which we carefully excited the $l=2$ mode and not the $l=1$ mode. While the $l=2$ mode grows substantially over the course of a dozen oscillations, it stops growing before reaching the nonlinear stage. In contrast, the $l=1$ mode grows up out of the noise and becomes nonlinear at a time of about $t\approx 120$. We speculate that the linear growth of the $l=2$ mode stalls because power in that mode is lost to the rapidly growing $l=1$ mode. Note that the power in $l=1$ is very chaotic during this episode when the $l=2$ growth stalls. In fact, a comparison of Figures \[fig:sasigrowth\] and \[fig:saturate\] shows that the overall growth rate for $l=1$ is steeper in this latter model than observed for a simulation with only an $l=1$ mode excited. While it might be possible for the $l=2$ mode to reach a nonlinear stage if the $l=1$ mode was completely suppressed, one would not expect such an artificial situation to happen in Nature. To understand the physical origin of the SASI, we first note that the oscillation frequency of the SASI corresponds roughly to the time it takes a sound wave to cross the spherical accretion shock cavity. 
For the model with $r_*=0.2$, a sound wave travels from $R_s$ to $r_*$ in a time of 1.51. Neglecting the path around the stellar surface at $r_*$, a sound wave would travel back and forth across the spherical cavity in a characteristic time $\tau_s \approx 6$, giving an oscillation frequency of $\omega_i \approx 1$. Note that a more realistic path around the central star would give a longer travel time, both because of the longer path length and because the sound speed is slower at larger radii. Thus, one would expect a frequency somewhat smaller than unity, in agreement with the results shown in Figure \[fig:omega2d\]. In contrast, the advection time for a parcel of gas to drift in from $R_s$ to $r_*$ is 14.3 for this same model. Therefore, any small perturbations in advected quantities (e.g., $u_\theta$, vorticity, entropy) are seen to drift inwards on a time scale much longer than the characteristic time scale of the SASI. ![An example of the perturbed variables taken from the simulation excited with an $l=1$ mode. The entropy is shown in the top row and pressure in the bottom row. In each case blue represents positive deviations from equilibrium and white represents negative. Time evolves to the right over one period of oscillation; the first and last images represent the same phase. An mpeg animation of this evolution is available on line. []{data-label="fig:2dvar"}](f6){width="6.5in"} This difference between the propagation of sound waves and the slower drift of advected perturbations can be seen in Figure \[fig:2dvar\], where we show the time evolution of two flow quantities. In the case of entropy, which serves as a marker of fluid elements when the gas is adiabatic (which is the case all but near the accreting surface), the perturbations advect radially inward and fade away into the low-entropy gas of the cooling layer. These perturbations are generated at the shock and advect inwards with the accretion flow. As such, they have no means of directly influencing the accretion shock. Furthermore, there is no evidence of pressure perturbations at small radii that might represent acoustic waves originating from these flow perturbations as they advect inward—i.e., of a vortical–acoustic feedback. In contrast, the bottom row in Figure \[fig:2dvar\] shows something that resembles a standing wave pattern rather than features drifting radially inward with the flow. For example, the region of high pressure does not propagate across the equator of the shock, but rather fades in one hemisphere only to grow in the low pressure region in the opposite hemisphere. Based on these observations, we conclude that the SASI is the result of a growing standing pressure wave oscillating inside the cavity of the spherical accretion shock. ![The effect of changing the shock position on the post-shock pressure profile is illustrated here with two different equilibrium solutions. The dashed line shows the immediate post-shock pressure as a function of shock location. Thus in each case the pressure begins on this line at the position of the shock and varies approximately as $r^{-4}$ inward. If the shock is displaced outward by a small amount, from a radius of 1.0 to a radius of 1.1, the pressure of the inflowing gas behind the perturbed shock will start off at a lower value, but it will be higher at each radius below 1.0 than it is for the inflowing gas behind the original equilibrium shock. 
This positive feedback drives the growth of the acoustic waves, creating the SASI.[]{data-label="fig:presprofiles"}](f7) The origin of the growth of this standing wave can be traced to the response of the post-shock pressure to changes in the shock radius. If the pressure in one hemisphere becomes slightly higher than equilibrium, it will push the spherical accretion shock outwards. Because the preshock ram pressure drops with increasing radius (as $r^{-2.5}$), shown by the dashed line in Figure \[fig:presprofiles\], the outward shock displacement leads to a smaller pressure immediately behind the shock. However, given that the postshock pressure increases with decreasing radius as $r^{-4}$, the postshock pressure for the perturbed shock will be greater than the postshock pressure for the unperturbed shock at each radius below the radius of the original unperturbed shock. This is illustrated with the two cases shown in Figure \[fig:presprofiles\]. The postshock pressure immediately behind the shock for the perturbed shock at radius 1.1 is lower than the postshock pressure immediately behind the unperturbed shock at radius 1.0 given the decrease in the preshock ram pressure given by the dashed line. However, at each radius below 1.0 the postshock pressure for the perturbed shock is higher than the postshock pressure for the unperturbed shock. There is an additional effect on the immediate post-shock pressure due to the change in shock velocity. As the shock is being pushed outward, the local shock velocity is larger than for a stationary shock at that radius, leading to a slightly higher post-shock pressure. Note that the change in pressure due to shock velocity is out of phase with respect to the change in pressure due to shock displacement, with the former peaking as the shock is moving outward, and the latter peaking at a phase $\pi/2$ later when the shock has reached its maximum extent. Nonetheless, both effects act to amplify the pressure variation of the standing wave. For the observed frequencies of the $l=1$ mode of the SASI, the effect of changing shock velocity is a few times smaller than the change in ram pressure due to shock displacement. The linear phase of the SASI is characterized by a nearly spherical accretion shock and approximately radial post-shock flow. Once the amplitude of the standing pressure wave becomes large enough to significantly break the spherical symmetry of the accretion shock, the SASI enters the non-linear phase. In this phase the radially infalling gas above the shock strikes the shock surface at an oblique angle, generating strong, non-radial post-shock flow. This transition from the linear to non-linear phase is illustrated in Figure \[fig:transition\]. The effect of a distorted shock on the post-shock flow is quite dramatic even for the relatively slight changes in shock position shown in the second frame of Figure \[fig:transition\]. Although the accretion shock is nearly spherical in this frame, it is significantly displaced upward. As a result, the post-shock flow is no longer radial, and in some regions is almost entirely tangential to the radial direction. As a consequence of this non-radial flow, perturbations generated on one side of the shock can be advected across the interior and over to the other side of the accretion cavity. For example, the second frame in Figure \[fig:transition\] shows a shell of high-entropy gas (shown in blue) in the upper hemisphere being advected around the central star and toward the lower hemisphere. 
![The transition from the linear to the non-linear regime is illustrated with these two images spanning one oscillation period. The color shows positive (blue) and negative (yellow/white) deviations from the equilibrium value of gas entropy, and the lines represent streamlines integrated along the instantaneous velocity field. The transition to the nonlinear regime is largely characterized by the transition from radial to nonradial flow. An mpeg animation of this transition is available on line. []{data-label="fig:transition"}](f8){width="6.5in"}

Discussion and Conclusion
=========================

The linear stability analysis presented here has illuminated the underlying physical origin of the SASI instability first presented in Paper I. The SASI is not the result of a vortical–acoustic feedback [@foglizzo02] seen in other contexts, as thought previously (Paper I). Rather, it is the result of a growing standing acoustic wave in the spherical cavity bounded below by the surface of the compact object and above by the accretion shock. This is our primary finding. In addition, our techniques for exciting different modes in isolation, first defined in Paper I and refined here, and for defining and extracting the growth rate of these modes in the linear regime, have confirmed that the SASI is an $l=1$ instability, first proposed in Paper I. This result provides a possible explanation for the large-scale structure seen in recent two-dimensional supernova simulations performed on numerical grids covering a full $\pi$ in angle [@jankaetal041]. Our results clearly show that the $l=2$ mode is unstable, but we have not observed this mode becoming nonlinear. Rather, the amplitude of the $l=1$ mode always overtakes that of the $l=2$ mode during the linear regime. In the two-dimensional case considered here, it is apparent that, in the linear regime, power is transferred from the $l=2$ to the $l=1$ mode with time. Under near-explosive conditions at the onset of a core-collapse supernova (large neutrino heating region, thin neutrino cooling region), our results strongly suggest the SASI will develop. Moreover, the SASI has been confirmed in a two-dimensional model by [@jankaetal042] that included neutrino transport and suppressed neutrino-driven convection in order to isolate, as we have done here, postshock flow induced by the SASI versus postshock flow induced by convection. And the development of an obvious $l=1$ mode in the explosion of an 11 M$_\odot$ progenitor in a simulation performed by the same group on a 180-degree angular grid, without any such suppression, is strong evidence for the SASI in a complete model that attains explosive conditions [@jankaetal041]. Moreover, given that this very same model did not explode when a 90-degree grid was used [@burasetal03], one must consider that the SASI will play an important role in the explosion mechanism [*per se*]{}, as proposed in [@bmd03], not just in defining gross characteristics of the explosion. Generally speaking, we have affirmed the discovery of the SASI in core-collapse supernovae and deepened our understanding of its origin and development. Relatively small-amplitude perturbations, whether they originate from inhomogeneities in the infalling gas, aspherical pressure waves from the interior region, or perturbations in the postshock velocity field, can excite perturbations in the standing accretion shock that lead to vigorous turbulence and large-amplitude variations in the shape and position of the shock front.
In addition, we have confirmed a previously published linear stability analysis for spherically symmetric modes, providing a critical test of our numerical hydrodynamic algorithm in the context of core-collapse supernovae.

This work is supported by a SciDAC grant from the U.S. DOE High Energy, Nuclear Physics, and Advanced Scientific Computing Research Programs. A.M. is supported at the Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725. We thank the Center for Computational Sciences at ORNL for their generous support of computing resources.

Blondin, J. M., Mezzacappa, A., & DeMarino, C. 2003, ApJ, 584, 971
Buras, R., Rampp, M., Janka, H.-Th., & Kifonidis, K. 2003, PhRvL, 90, 1101
Burrows, A. & Goshy, J. 1993, ApJ, 416, L75
Burrows, A., Hayes, J., & Fryxell, B. A. 1995, ApJ, 450, 830
Chevalier, R. A. 1989, ApJ, 346, 847
Foglizzo, T. 2002, A&A, 392, 353
Fryer, C. L. & Warren, M. S. 2004, ApJ, 601, 391
Herant, M., Benz, W., & Colgate, S. A. 1992, ApJ, 395, 642
Herant, M., et al. 1994, ApJ, 435, 339
Houck, J. C. & Chevalier, R. A. 1992, ApJ, 395, 592
Janka, H.-T. 2001, A&A, 368, 527
Janka, H.-T., Buras, R., Kitaura Joyanes, F. S., Marek, A., & Rampp, M. 2004, astro-ph/0405289
Janka, H.-T., Scheck, L., Kifonidis, K., Müller, E., & Plewa, T. 2004, astro-ph/0408439
Janka, H.-Th. & Müller, E. 1996, A&A, 296, 167
Liebendoerfer, M., et al. 2001, PhRvD, 63, 3004
Mezzacappa, A. 2005, ARNPS, in press
Mezzacappa, A., et al. 1998, ApJ, 495, 911
Mezzacappa, A., et al. 2001, PhRvL, 86, 1935
Miller, D. S., Wilson, J. R., & Mayle, R. W. 1993, ApJ, 415, 278
Rampp, M. & Janka, H.-Th. 2000, ApJ, 539, L33
The Use of Play in Speech and Occupational Therapy

Sensory Processing Disorder and speech impairment affect millions of children in the United States. Sensory Processing Disorder (SPD) affects a child's development, leading to difficulties with "detecting, modulating, interpreting, and/or organizing sensory stimuli" (Miller, Nielsen & Schoen, 2012, p. 804). Furthermore, these children may find it difficult to self-regulate their behavior. Speech impairment is typically described in terms of speech sound disorders (SSD), which involve a child having difficulty communicating or correctly producing their native language (Brumbaugh, Smit, Nippold & Marinellie, 2013). Brumbaugh et al. (2013) also found that these children were likely to develop a poor self-image, which provides even more incentive to find effective therapies. Furthermore, children with SPD and SSD are likely to have other behavioral disorders such as Autism Spectrum Disorder (ASD) or Attention Deficit Hyperactivity Disorder (ADHD) (Carr, Agnihotri, & Keightley, 2010; Cheung & Siu, 2009).

Occupational therapy is often used to treat SPD, and speech therapy to treat SSD. Occupational therapists may employ treatments such as the sensory integration approach or the Sensory Integrative Treatment Protocol, which have shown promising results in improving sensory integration in children (Case-Smith & Bryan, 1999; Paul et al., 2003). Speech therapists use play therapy, as it has been proven effective in helping children improve their speech as well as in helping children with autism (who tend to be seen in speech therapy) learn to interact with other children (Danger & Landreth, 2005). The interactive activities used in play therapy have been shown to improve multiple behavioral disorders, including SPD. This was the motivation behind creating an interactive game for children to play during therapy sessions. Although there are proven tasks and activities that help children improve upon the developmental delays arising from their behavioral disorders, there has been little research on a formal game that can be used in therapy.

After research and brainstorming, the interactive game developed in this project became known as Hands Up, Speak Up! The inspiration for the game was Cranium, an entertaining and interactive board game. Melissa Quinn, a teacher in a specialty classroom, and Nancy Koppl, a speech therapist, served as consultants for the game and allowed the children in their classrooms at C.L. Smith Elementary School to take part in the pilot of the game. Ms. Koppl recommended the use of the 80% rule as a main goal of the game, as this rule encourages learning and builds a child's confidence. The 80% rule states that children should complete the task correctly 80% of the time; if the child is under that mark, the task should be made easier, and if the child is over it, the task should be made more difficult (a simple sketch of this rule appears below). The target audience for the interactive game was elementary school students in speech or occupational therapy with multiple behavioral disorders (SPD, SSD, ASD, etc.). The game consists of five sections: Act Up, Build Up, Speak Up, Hands Up, and Community, all aimed at benefiting children in speech or occupational therapy. During the pilot of the game, which consisted of four rounds, one of the creators played the game with the children while the other observed. The 12 children ranged from first to fourth grade and were all part of Ms. Quinn's specialty classroom.
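As a rough illustration, the 80% rule described above amounts to a simple feedback loop on task difficulty. The sketch below is a hypothetical implementation written for this summary; the tolerance band, step size, and integer difficulty levels are assumptions, not part of the rule as stated by Ms. Koppl.

```python
# Track a child's recent success rate on a task and nudge the difficulty
# toward the level where roughly 80% of attempts succeed.

def adjust_difficulty(difficulty, recent_results, target=0.80, band=0.05):
    """Return an updated difficulty level (1 = easiest) given recent pass/fail results."""
    if not recent_results:
        return difficulty
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate < target - band:   # well under 80%: make the task easier
        return max(1, difficulty - 1)
    if success_rate > target + band:   # well over 80%: make the task harder
        return difficulty + 1
    return difficulty                  # near 80%: stay put

# Example: 6 of the last 10 attempts succeeded (60%), so difficulty drops.
print(adjust_difficulty(3, [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]))  # -> 2
```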
Modifications made to the game after the pilot were the addition of a game master (a therapist or trained adult who could provide help during the game and scaffold the tasks to fit each child's needs) and beginning the game with a Community game for increased engagement. After these modifications were made, a second pilot was conducted, and it demonstrated that these changes were helpful in increasing interest and engagement. In the future, it would be worthwhile to assess whether Hands Up, Speak Up! produces statistically significant improvements in children's fine motor skills, gross motor skills, articulation, or expressive vocabulary.
A man on horseback watches a flock of dark-colored chickens in a yard at the North Dakota State Penitentiary in Bismarck. The chicken house is visible in the background, as is a flock of white chickens.

Four women, a boy, and two dogs stand in front of a wooden house in Hettinger County. A barn, a sod building, and large haystacks are visible in the background, along with many chickens. A plowed field is visible in the foreground.

Three people sit against a white house on the Conrad Iverson farm. Several chickens stand in front of an outbuilding. A hay bale and a cart are parked next to the outbuilding. A plowed field and other buildings are visible in the distance.

Two small boys identified as Hans Walker Jr. and Melvin stand outside a log home on the Fort Berthold Indian Reservation. One of the boys sits in a wheeled walker chair. A chicken is visible in the background.

Three men identified as Ger. Kline, P. Derby, and Hartley Meyers sit in a wagon pulled by a team of horses somewhere in North Dakota. Two of the men are holding guns, and a dog is also visible in the back of the wagon next to one...

David Herndon, Henry Hutchinson, and their hunting dog pose with the fruits of a recent hunt. Their bag includes 2 hawks, 17 sharptail grouse, 1 jack rabbit, gulls, a weasel, and about 30 prairie chickens.

By the time of Sanford's service, the Army was providing more nutritious food for soldiers. His notes about meals indicate that vegetables (usually potatoes and onions) and fruits (plum duff and apples) were regularly served at post meals. At...

Hand-colored postcard showing a sod house with an addition and wheat fields in the background. A farmer, his wife, and four children are standing in the foreground with the farmer's tools: plow, wagon, chickens.
/** * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.tez.mapreduce.hadoop; import java.io.File; import java.util.HashMap; import java.util.Map; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.mapred.JobConf; import org.apache.hadoop.mapreduce.MRJobConfig; import org.apache.hadoop.util.Shell; import org.apache.hadoop.yarn.api.ApplicationConstants.Environment; import org.apache.hadoop.yarn.api.records.Resource; import org.apache.tez.dag.api.TezConstants; import org.apache.tez.runtime.library.api.TezRuntimeConfiguration; import org.junit.Assert; import org.junit.Test; public class TestMRHelpers { private Configuration createConfForJavaOptsTest() { Configuration conf = new Configuration(false); conf.set(MRJobConfig.MAPRED_MAP_ADMIN_JAVA_OPTS, "fooMapAdminOpts"); conf.set(MRJobConfig.MAP_JAVA_OPTS, "fooMapJavaOpts"); conf.set(MRJobConfig.MAP_LOG_LEVEL, "FATAL"); conf.set(MRJobConfig.MAPRED_REDUCE_ADMIN_JAVA_OPTS, "fooReduceAdminOpts"); conf.set(MRJobConfig.REDUCE_JAVA_OPTS, "fooReduceJavaOpts"); conf.set(MRJobConfig.REDUCE_LOG_LEVEL, "TRACE"); return conf; } @Test(timeout = 5000) public void testMapJavaOptions() { Configuration conf = createConfForJavaOptsTest(); String opts = MRHelpers.getJavaOptsForMRMapper(conf); Assert.assertTrue(opts.contains("fooMapAdminOpts")); Assert.assertTrue(opts.contains(" fooMapJavaOpts ")); Assert.assertFalse(opts.contains("fooReduceAdminOpts ")); Assert.assertFalse(opts.contains(" fooReduceJavaOpts ")); Assert.assertTrue(opts.indexOf("fooMapAdminOpts") < opts.indexOf("fooMapJavaOpts")); Assert.assertTrue(opts.contains(" -D" + TezConstants.TEZ_ROOT_LOGGER_NAME + "=FATAL")); Assert.assertFalse(opts.contains(" -D" + TezConstants.TEZ_ROOT_LOGGER_NAME + "=TRACE")); } @Test(timeout = 5000) public void testReduceJavaOptions() { Configuration conf = createConfForJavaOptsTest(); String opts = MRHelpers.getJavaOptsForMRReducer(conf); Assert.assertFalse(opts.contains("fooMapAdminOpts")); Assert.assertFalse(opts.contains(" fooMapJavaOpts ")); Assert.assertTrue(opts.contains("fooReduceAdminOpts")); Assert.assertTrue(opts.contains(" fooReduceJavaOpts ")); Assert.assertTrue(opts.indexOf("fooReduceAdminOpts") < opts.indexOf("fooReduceJavaOpts")); Assert.assertFalse(opts.contains(" -D" + TezConstants.TEZ_ROOT_LOGGER_NAME + "=FATAL")); Assert.assertTrue(opts.contains(" -D" + TezConstants.TEZ_ROOT_LOGGER_NAME + "=TRACE")); } @Test(timeout = 5000) public void testContainerResourceConstruction() { JobConf conf = new JobConf(new Configuration()); Resource mapResource = MRHelpers.getResourceForMRMapper(conf); Resource reduceResource = MRHelpers.getResourceForMRReducer(conf); Assert.assertEquals(MRJobConfig.DEFAULT_MAP_CPU_VCORES, mapResource.getVirtualCores()); Assert.assertEquals(MRJobConfig.DEFAULT_MAP_MEMORY_MB, 
mapResource.getMemory()); Assert.assertEquals(MRJobConfig.DEFAULT_REDUCE_CPU_VCORES, reduceResource.getVirtualCores()); Assert.assertEquals(MRJobConfig.DEFAULT_REDUCE_MEMORY_MB, reduceResource.getMemory()); conf.setInt(MRJobConfig.MAP_CPU_VCORES, 2); conf.setInt(MRJobConfig.MAP_MEMORY_MB, 123); conf.setInt(MRJobConfig.REDUCE_CPU_VCORES, 20); conf.setInt(MRJobConfig.REDUCE_MEMORY_MB, 1234); mapResource = MRHelpers.getResourceForMRMapper(conf); reduceResource = MRHelpers.getResourceForMRReducer(conf); Assert.assertEquals(2, mapResource.getVirtualCores()); Assert.assertEquals(123, mapResource.getMemory()); Assert.assertEquals(20, reduceResource.getVirtualCores()); Assert.assertEquals(1234, reduceResource.getMemory()); } private Configuration setupConfigForMREnvTest() { JobConf conf = new JobConf(new Configuration()); conf.set(MRJobConfig.MAP_ENV, "foo=map1,bar=map2"); conf.set(MRJobConfig.REDUCE_ENV, "foo=red1,bar=red2"); conf.set(MRJobConfig.MAP_LOG_LEVEL, "TRACE"); conf.set(MRJobConfig.REDUCE_LOG_LEVEL, "FATAL"); final String mapredAdminUserEnv = Shell.WINDOWS ? "PATH=%PATH%" + File.pathSeparator + "%TEZ_ADMIN_ENV%\\bin": "LD_LIBRARY_PATH=$TEZ_ADMIN_ENV_TEST/lib/native"; conf.set(MRJobConfig.MAPRED_ADMIN_USER_ENV, mapredAdminUserEnv); return conf; } private void testCommonEnvSettingsForMRTasks(Map<String, String> env) { Assert.assertTrue(env.containsKey("foo")); Assert.assertTrue(env.containsKey("bar")); Assert.assertTrue(env.containsKey(Environment.LD_LIBRARY_PATH.name())); Assert.assertTrue(env.containsKey(Environment.SHELL.name())); Assert.assertTrue(env.containsKey("HADOOP_ROOT_LOGGER")); /* On non-windows platform ensure that LD_LIBRARY_PATH is being set and PWD is present. * on windows platform LD_LIBRARY_PATH is not applicable. check the PATH is being appended * by the user setting (ex user may set HADOOP_HOME\\bin. */ if (!Shell.WINDOWS) { Assert.assertEquals("$PWD:$TEZ_ADMIN_ENV_TEST/lib/native", env.get(Environment.LD_LIBRARY_PATH.name())); } else { Assert.assertTrue(env.get(Environment.PATH.name()).contains(";%TEZ_ADMIN_ENV%\\bin")); } // TEZ-273 will reinstate this or similar. 
// for (String val : YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH) { // Assert.assertTrue(env.get(Environment.CLASSPATH.name()).contains(val)); // } // Assert.assertTrue(0 == // env.get(Environment.CLASSPATH.name()).indexOf(Environment.PWD.$())); } @Test(timeout = 5000) public void testMREnvSetupForMap() { Configuration conf = setupConfigForMREnvTest(); Map<String, String> env = new HashMap<String, String>(); MRHelpers.updateEnvBasedOnMRTaskEnv(conf, env, true); testCommonEnvSettingsForMRTasks(env); Assert.assertEquals("map1", env.get("foo")); Assert.assertEquals("map2", env.get("bar")); } @Test(timeout = 5000) public void testMREnvSetupForReduce() { Configuration conf = setupConfigForMREnvTest(); Map<String, String> env = new HashMap<String, String>(); MRHelpers.updateEnvBasedOnMRTaskEnv(conf, env, false); testCommonEnvSettingsForMRTasks(env); Assert.assertEquals("red1", env.get("foo")); Assert.assertEquals("red2", env.get("bar")); } @Test(timeout = 5000) public void testMRAMJavaOpts() { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_ADMIN_COMMAND_OPTS, " -Dadminfoobar "); conf.set(MRJobConfig.MR_AM_COMMAND_OPTS, " -Duserfoo "); String opts = MRHelpers.getJavaOptsForMRAM(conf); Assert.assertEquals("-Dadminfoobar -Duserfoo", opts); } @Test(timeout = 5000) public void testMRAMEnvironmentSetup() { Configuration conf = new Configuration(); conf.set(MRJobConfig.MR_AM_ADMIN_USER_ENV, "foo=bar,admin1=foo1"); conf.set(MRJobConfig.MR_AM_ENV, "foo=bar2,user=foo2"); Map<String, String> env = new HashMap<String, String>(); MRHelpers.updateEnvBasedOnMRAMEnv(conf, env); Assert.assertEquals("foo1", env.get("admin1")); Assert.assertEquals("foo2", env.get("user")); Assert.assertEquals(("bar" + File.pathSeparator + "bar2"), env.get("foo")); } @Test(timeout = 5000) public void testTranslateMRConfToTez() { Configuration conf = new Configuration(false); conf.setLong(TezRuntimeConfiguration.TEZ_RUNTIME_IO_SORT_MB, 1000); conf.setLong(org.apache.tez.mapreduce.hadoop.MRJobConfig.IO_SORT_MB, 500); Configuration conf1 = new Configuration(conf); MRHelpers.translateMRConfToTez(conf1); Assert.assertNull(conf1.get(org.apache.tez.mapreduce.hadoop.MRJobConfig.IO_SORT_MB)); Assert.assertEquals(1000, conf1.getLong(TezRuntimeConfiguration.TEZ_RUNTIME_IO_SORT_MB, 0)); Configuration conf2 = new Configuration(conf); MRHelpers.translateMRConfToTez(conf2, true); Assert.assertNull(conf2.get(org.apache.tez.mapreduce.hadoop.MRJobConfig.IO_SORT_MB)); Assert.assertEquals(1000, conf2.getLong(TezRuntimeConfiguration.TEZ_RUNTIME_IO_SORT_MB, 0)); Configuration conf3 = new Configuration(conf); MRHelpers.translateMRConfToTez(conf3, false); Assert.assertNull(conf3.get(org.apache.tez.mapreduce.hadoop.MRJobConfig.IO_SORT_MB)); Assert.assertEquals(500, conf3.getLong(TezRuntimeConfiguration.TEZ_RUNTIME_IO_SORT_MB, 0)); } }
[Neuromyelopathy in the population of Noir-marron of Saint-Laurent du Maroni in French Guiana].

The neurological observations reported here were collected at the André Bouron Hospital of Saint-Laurent du Maroni and at the General Hospital of Cayenne over a period of five years. All patients belonged to the "Noir Marron" ethnic group and lived in the area of Saint-Laurent. There were six women and four men, aged 15-35 years. Neurological symptoms were either isolated or associated with involvement of other organs. Neurological manifestations included retrobulbar optic neuropathy, spastic paraparesis, sensory ataxia, and cerebellar ataxia; psychiatric symptoms were also observed. The other systems affected were cardiovascular, digestive, cutaneous, or endocrine (thyroid). The diet consisted mainly of cassava. Thiamin deficiency was observed several times. Improvement of neurological deficits following thiamin administration points towards thiamin deficiency as an etiological factor. The ethnological specificity of the Saint-Laurent area may explain why such neurological manifestations have not been observed in the rest of the department.
652 N.E.2d 1167 (1995) 273 Ill. App.3d 866 210 Ill.Dec. 191 Robert F. SMITH, as Independent Adm'r of the Estate of Barbara A. Smith, Deceased, Plaintiff-Appellant, v. Craig T. HARAN and Judy M. Haran, Defendants-Appellees. No. 1-94-0624. Appellate Court of Illinois, First District, Second Division. June 27, 1995. Rehearing Denied July 28, 1995. *1169 John W. Turner, Chicago, for appellant. Di Monte Schostok & Lizak, Park Ridge (Andrew D. Werth, of counsel), for appellee. Justice DiVITO delivered the opinion of the court: Robert F. Smith, as independent administrator of the estate of Barbara A. Smith, deceased (the Estate), brought this action against Craig Haran and Judy Haran (the Harans) to collect on an instrument (Instrument) signed by them. The circuit court found that the Instrument was not negotiable and that the Estate was not a holder in due course. Following a bench trial, the court ruled that the Instrument failed to create an enforceable contract because it contained no consideration. The issues on appeal include whether (1) the circuit court erred in finding that the Instrument did not contain a "promise or order to pay" or words of equivalent import; (2) the provisions in Article 3 of the Uniform Commercial Code—Commercial Paper (UCC), (Ill.Rev.Stat.1985, ch. 26, par. 3-101 et seq. (now, as amended, 810 ILCS 5/3-101 et seq. (West 1992))) create a rebuttable presumption that consideration was given in exchange for the Instrument; (3) the circuit court erred in excluding the testimony of the Harans pursuant to the Dead-Man's Act (Ill. Rev.Stat.1985, ch. 110, par. 8-201 (now 735 ILCS 5/8-201 (West 1992))); and (4) the evidence rebuts the presumption of consideration. For reasons that follow, we reverse the judgment of the circuit court and remand the cause for a new trial. Prior to trial, the circuit court ruled that the Instrument was not negotiable because it did not contain an unconditional promise to pay and it was not payable to order or to bearer. The trial proceeded under fundamental contract principles, not under the provisions of the UCC. At trial, Robert Smith testified that his mother, Barbara Smith (decedent), died in December 1991. About one month after her death, Robert, as administrator of his mother's estate, found the Instrument in her wall safe. The Instrument provides: "Nov. 13, 1986 Mrs. Barbarba [sic] Smith 315 Kenilworth Prospect Heights, II PROMISSORY NOTE WE—Craig T. Haran & Judy M. Haran owners of the property and house located & described below, 1833 N. Hicks Road Palatine, II 60074 Legal Descriped [sic] as— The North 125 feet of the South 1690 Feet of the West 390 feet of the SouthEast quarter of Section 2, township 42 North, Range 10 East of the Third Principal Meridan [sic ], in Cook County, Illinois— Also Described as Lot 13 in Kliens subdivision of Part of the SouthEast quarter of Section 2, Township 42 North, Range 10 East of the Third Principal Meridian, according to the Plat thereof recorded October 11, 1949 as Doucment [sic] No. 14651080, in Cook County Illinois. We collaterize this note of $125,000 using our above property and house. Note to be paid back within 12 months of above date with 10% interest. /s/ Craig T. Haran /s/ Judy M. Haran." Other valuable items found in the safe included a deed to a home that she owned, insurance *1170 papers, and jewelry. At the time of the death of Robert's father in February 1985, decedent, who was very good about keeping records, had between $300,000 and $400,000 cash in her home. 
Craig Haran testified that he first met decedent in 1976 when she hired his construction company to build an addition to her home. They became good friends, and Craig kept in contact with decedent up until a couple of weeks before her death. Judy Haran prepared the Instrument, and she and Craig signed it. About 30 days after Craig delivered the Instrument to decedent, he brought her a land survey of the property described in the note. Craig denied ever transacting business with decedent and stated, "We were about to, but we didn't." He also testified that he never received a demand for payment of the Instrument until after decedent's death. The Instrument was not recorded until January 10, 1992, about a month after decedent passed away. Objections to the Harans' remaining testimony relating to their transaction with decedent were sustained pursuant to the Illinois Dead-Man's Act (Ill.Rev.Stat.1985, ch. 110, par. 8-201 (now 735 ILCS 5/8-201 (West 1992))). Gerald Rintz testified that he is a construction consultant and a good friend of Craig. Rintz attended two meetings with decedent and Craig in the summer of 1986 regarding the proposed development of industrial commercial condominiums. After the first meeting, Rintz looked at some potential sites for the development and presented this information in their second meeting. Decedent then appeared to have "cooled off on the idea." Rintz was not aware of any deal being consummated between decedent and Craig. He estimated that the cost to start up one of these ventures in 1986 would have been between $300,000 and $400,000. Rintz left Illinois and went to San Diego in early October 1986. In a written opinion, the circuit court ruled that the Instrument failed to create an enforceable contract because it lacked any consideration. The court noted that there was no paper trail indicating an exchange of cash or deposit of funds, no evidence that the venture was ever started, and no testimony from bankers or attorneys who would ordinarily have been involved in such a deal. The Estate appealed. I The Estate first contends that the circuit court erred in concluding that the Instrument did not contain a "promise to pay." One of the conditions required for negotiability is that an instrument "contain an unconditional promise or order to pay a sum certain in money." (Ill.Rev.Stat.1985, ch. 26, par. 3-104(1)(b).) Section 3-102(1)(c) defines "promise" as "an undertaking to pay and must be more than an acknowledgement of an obligation." (Ill.Rev.Stat.1985, ch. 26, par. 3-102(1)(c).) The Illinois Code Comment to section 3-102(1)(c) states: "This paragraph is a restatement of Illinois case law. In Hibbard v. Holloway, 13 Ill.App. 101 (1st Dist.1883), the court said that either words `promise to pay' or words of equivalent import must be used. In Weston v. Myers, 33 Ill. 424 (1864), `Good for 50 cents' was held sufficient." Ill.Ann. Stat., ch. 26, par. 3-102, UCC Comment, at 9 (Smith-Hurd 1963). Generally, a court of review will not disturb a circuit court's findings unless they are manifestly against the weight of the evidence. (Northern Illinois Medical Center v. Home State Bank (1985), 136 Ill.App.3d 129, 142, 90 Ill.Dec. 802, 812, 482 N.E.2d 1085, 1095.) Construction and legal effect of an instrument, however, raise a question of law, and a court of review may review these conclusions under a de novo standard of review. Northern Illinois Medical Center, 136 Ill. App.3d at 142, 90 Ill.Dec. at 812, 482 N.E.2d at 1095; Naylor v. Kindred (1993), 250 Ill. 
App.3d 997, 1003, 189 Ill.Dec. 552, 556, 620 N.E.2d 520, 524. In this case, the relevant parts of the Instrument are as follows: the name "Mrs. Barbarba [sic] Smith," her address, and the heading "Promissory Note" listed at the top of the Instrument; the phrases "We collaterize this note of $125,000.00 * * *" and "Note to be paid back within 12 months * * *"; *1171 and, importantly, the signatures of the Harans. The term "note" is used three times in this Instrument, which was prepared by Judy and signed by each of the Harans. "Note" is defined as "[a]n instrument containing an express and absolute promise of signer (i.e. maker) to pay to a specified person or order, or bearer, a definite sum of money at a specified time." (Emphasis added.) (Black's Law Dictionary 1060 (6th ed. 1990).) Although the mere use of the term "note" does not, by itself, turn a piece of paper into a note, its repeated use in the Instrument here is a factor to consider in determining whether it contains a promise to pay. The Harans are chargeable with knowing the common meaning of a word they chose to use. See Symanski v. First National Bank (1993), 242 Ill.App.3d 391, 396, 182 Ill.Dec. 455, 458, 609 N.E.2d 989, 992, appeal denied (1993), 151 Ill.2d 578, 186 Ill. Dec. 395, 616 N.E.2d 348 (instrument will be most strongly construed against the party who prepared it); Johnstowne Centre Partnership v. Chin (1983), 99 Ill.2d 284, 287, 76 Ill.Dec. 80, 81, 458 N.E.2d 480, 481 (document's meaning must be determined from words or language used). There appears to be some tension between the definition of "promise" in section 3-102(1)(c) and the Illinois comment to that same section. Specifically, it is difficult to discern how "Good for 50 cents" is "more than an acknowledgement of an obligation." Nevertheless, the legislature's intent to codify the holding in Weston must be given effect. (See Antunes v. Sookhakitch (1992), 146 Ill.2d 477, 484, 167 Ill.Dec. 981, 984, 588 N.E.2d 1111, 1114.) Because there is no meaningful difference between "Good for 50 cents" and "Note [of $125,000] to be paid back within 12 months," we conclude that the Instrument bears a promise to pay. This conclusion is consistent with decisions from other jurisdictions that construe the same UCC provisions. For example, in Fejta v. Werner Enterprises, Inc. (La.App.1982), 412 So.2d 155, 157, cert. denied (La.1982), 415 So.2d 953, plaintiff brought suit on an alleged promissory note, which provided: "Promissory Note Werner Enterprises, Inc. by resolution and signature acknowledges that a debt of $8000.00 is owed to Mr. Stan Fejta (Fejta Construction Company) regarding the construction of `Pontchartrain Plaza,' 1930 West End Park. This note is payable at maturity on or before May 19, 1979, plus 10% (percent) interest. Date: April 4, 1979" The writing was followed by signatures of the parties, as well as signatures of two witnesses. Defendant argued that the note did not bear an unconditional promise to pay and was merely an acknowledgement of a preexisting debt. The court rejected that argument, stating that the "word `promise' is not sacramental in a promissory note." (Fejta, 412 So.2d at 158 (on denial of rehearing), quoting De Rouin v. Hinphy (La.App.1968), 209 So.2d 352.) Citing section 3-102(1)(c) of the UCC, the Fejta court noted that "although some of the instrument's language indicates it is merely a recognition that a debt exists, examination of the entire writing convinces us that it is a written promise." (Fejta, 412 So.2d at 157.) 
The court further observed that the styling of the instrument as "Promissory Note" and the language "note is payable at maturity" supported its interpretation. Fejta, 412 So.2d at 157-58. Similarly, in Mauricio v. Mendez (Tex.Ct. App.1987), 723 S.W.2d 296, 297 (emphasis in original) plaintiff sued to recover on the following note: "10-9-84 To Whom it may Concern Equipment sold to Jose Mendez or Carolina S. Mendez From Paul Mauricio Amount 9373.00 down payment 1000.00 _______ Balance due— 8373.00 There will be no intrest [sic] charged until 10-9-85. Intrest will be at the rate of 12% per year Mr. & Mrs. Mendez will pay as much as possible per month Minimum amount will be $500.00 per month Seller /s/ Paul Mauricio buyer /s/ Jose Mendez S." Citing the pertinent sections of the UCC, the court concluded: "The written agreement contains an unconditional promise to pay *1172 plaintiff at least a certain sum of money each month. It is, therefore, in the form of a note." Mauricio, 723 S.W.2d at 298. Furthermore, numerous pre-UCC cases have held that no particular words of promise are required in a promissory note as long as there can be deduced a promise to pay. See, e.g., De Rouin v. Hinphy (La.App.1968), 209 So.2d 352, 353-54, cert. denied (1968), 252 La. 465, 211 So.2d 330 (finding that the words "I have this day borrowed * * * $12,100 to be paid on demand" constitute a promise to pay); McDonald v. Hanahan (1952), 328 Mass. 539, 540-41, 105 N.E.2d 240, 241-42 (holding that the words "Rec. of * * * [$500] as a loan, payments arrangements to follow at later date" create a promissory note); In re Nellis' Will (1926), 126 Misc. 638, 214 N.Y.S. 378, 380 ("A statement that a person has borrowed the sum of $2,000, `which is subject to and payable on demand,' imports a promise to pay"). The foregoing discussion persuades us that the circuit court erred in finding that the Instrument did not contain a promise to pay or words of equivalent import. II The Estate's next contention is that it is entitled to a rebuttable presumption that decedent gave consideration for the Instrument. Section 3-805[1] provides that article 3 of the UCC "applies to any instrument whose terms do not preclude transfer and which is otherwise negotiable within this Article but which is not payable to order or to bearer, except that there can be no holder in due course of such an instrument." (Ill.Rev.Stat. 1985, ch. 26, par. 3-805.) In order for an instrument to be considered negotiable, it must: "(a) be signed by the maker or drawer; and (b) contain an unconditional promise or order to pay a sum certain in money and no other promise, order, obligation or power given by the maker or drawer except as authorized by [Article 3 of the UCC]; and (c) be payable on demand or at a definite time; and (d) be payable to order or to bearer." Ill. Rev.Stat.1985, ch. 26, par. 3-104(1). If an instrument meets the requirements of section 3-805, the following provisions contained in article 3 apply to the instrument. Section 3-307 states that "[w]hen signatures are admitted or established, production of the instrument entitles a holder to recover on it unless the defendant establishes a defense." (Ill.Rev.Stat.1985, ch. 26, par. 3-307(2).) Furthermore, one who does not have the rights of a holder in due course takes the instrument subject to the defense of want or failure of consideration. Ill.Rev. Stat.1985, ch. 26, par. 3-306(c). In this case, the Instrument is not payable to order or to bearer, but is otherwise negotiable. 
It is signed by the makers (the Harans); contains an unconditional promise to pay a sum certain in money ($125,000) and no other promise, order, obligation or power given by the maker; and is payable at a definite time (within 12 months of November 13, 1986). (See Ill.Rev.Stat.1985, ch. 26 par. 3-109(1)(a).) Because the terms of the Instrument do not preclude transfer, it falls under section 3-805 and is governed by article 3 of the UCC. Pursuant to section 3-307(2), the Estate is entitled to recover on the Instrument unless the Harans establish a defense since the Harans admit that they signed it. Under section 3-805, the Estate cannot be considered a holder in due course *1173 of the Instrument because it is not payable to order or to bearer. The Estate thus takes the Instrument subject to the defenses in section 3-306, including the defense of want of consideration. Ill.Rev.Stat.1985, ch. 26, par. 3-306(c). Therefore, the Estate should recover on the Instrument unless the Harans establish that no consideration was given.[2] III The Estate next asserts that the circuit court was correct in barring the Harans' testimony concerning their dealings with decedent pursuant to the Dead-Man's Act (Act) (Ill.Rev.Stat.1985, ch. 110, par. 8-201 (now 735 ILCS 5/8-201 (West 1992))). The Harans submit that the Act is intended to be used as a shield to protect estates from fraudulent claims, but cannot be used as a sword to prevent the opposing party from presenting a legitimate defense. Alternatively, they claim that their testimony that decedent never gave them $125,000 is outside of the scope of the Act because it does not relate to an "event which took place in the presence of the deceased," as required by the Act. Section 8-201 states in pertinent part: "In the trial of any action in which any party sues or defends as the representative of a deceased person * * *, no adverse party or person directly interested in the action shall be allowed to testify on his or her own behalf to any conversation with the deceased * * * or to any event which took place in the presence of the deceased * * *." (Ill.Rev.Stat.1985, ch. 110, par. 8-201 (now 735 ILCS 5/8-201 (West 1992)).) The goals of the Act are to protect decedents' estates from fraudulent claims and to equalize the position of the parties in regard to the giving of testimony. (Fleming v. Fleming (1980), 85 Ill.App.3d 532, 538, 40 Ill.Dec. 676, 680, 406 N.E.2d 879, 883.) The Act bars only that evidence which decedent could have refuted. (Rerack v. Lally (1992), 241 Ill.App.3d 692, 695, 182 Ill.Dec. 193, 196, 609 N.E.2d 727, 730, appeal denied (1993), 151 Ill.2d 577, 186 Ill.Dec. 393, 616 N.E.2d 346.) The circuit court's evidentiary ruling is a matter of discretion and will not be reversed absent a clear abuse of that discretion. In re Estate of Hoover (1993), 155 Ill.2d 402, 420, 185 Ill.Dec. 866, 874, 615 N.E.2d 736, 744. Here, the Harans' first argument that the Act may not be used as a sword is without merit. The Act specifically states that it pertains to any action in which the representative of the deceased person "sues or defends." Therefore, the Act contemplates actions, such as this one, where the representative of the decedent "sues" to protect the interests of the estate. See Hartman v. Townsend (1988), 169 Ill.App.3d 111, 119 Ill.Dec. 731, 523 N.E.2d 199 (using the Dead-Man's Act to exclude testimony even though the suit was initiated by the estate of decedent to recover money allegedly owed to the estate). 
The Harans' next contention is that their testimony that decedent never gave them money should not have been excluded. They claim that this nonevent could not have taken place "in the presence of the deceased" and therefore is not covered by the Act. "The word `event' as ordinarily used and understood refers to a `happening or occurrence.'" (Manning v. Mock (1983), 119 Ill.App.3d 788, 799, 75 Ill.Dec. 453, 459, 457 N.E.2d 447, 453, citing Webster's New World Dictionary 485 (2d coll. ed. 1976).) In Hartman (169 Ill.App.3d at 116-17, 119 Ill.Dec. at 734, 523 N.E.2d at 202), the court construed the term "event," as used in the Dead-Man's Act. There, the executor of decedent's estate sued defendant, seeking the return of $20,000 paid by decedent to defendant. The testimony of plaintiff's witnesses suggested *1174 that decedent made a bad investment in a motel owned by defendant. Defendant's theory was that the money was paid to his wife, who allegedly had lived with and had been employed by decedent. At trial, defendant was permitted to testify that no other person had ever shared the ownership of the motel with him. Defendant and his wife also testified, over objection, that she at one time lived with decedent. Concerning defendant's testimony that no other persons had an ownership interest in the motel, the appellate court held that the "negative" testimony was not an "event" which took place in the presence of decedent. It also held that the testimony that defendant's wife resided with decedent was not an event within the meaning of the Act. It stated: "[P]erhaps the act of `moving in together' could be correctly termed an `event,' but the continued relationship * * * over some period of time * * * is more of a `status' than a `happening' or an `occurrence.'" 169 Ill.App.3d at 117, 119 Ill.Dec. at 734, 523 N.E.2d at 202. Similarly, in Rerack (241 Ill.App.3d at 695, 182 Ill.Dec. at 196, 609 N.E.2d at 730), the court rejected an "overly broad" construction of the term "event." In that case, a vehicle driven by decedent struck the back of plaintiff's car, which had already come to a complete stop. At trial, plaintiff was not permitted to testify to the overall mechanical condition of his car, to the weather conditions at the time of the accident, that his vehicle was stopped for two minutes, that his foot was on the brake pedal of his car continuously, that he heard no sound prior to the accident's impact, and that he observed damage to the rear of his vehicle the day after the occurrence. The reviewing court held that although plaintiff was properly barred from testifying with regard to the collision itself, none of the precluded testimony above reasonably could be said to have occurred during the event, which it concluded was the accident. Further, the testimony did not relate to an occurrence in the "presence" of decedent. In this case, as in Hartman and Rerack, decedent's failure to give the Harans money does not qualify as an "event" under the Act. If the testimony was to be that decedent did indeed give them money at some specific point in time, that would clearly qualify as an event. But decedent's failure to give the Harans money at any point in time cannot be so characterized. This is similar to the finding in Hartman that negative testimony is not an event that took place in the presence of the decedent. (Hartman, 169 Ill.App.3d at 116, 119 Ill.Dec. at 734, 523 N.E.2d at 202.) The Harans' proposed testimony is similarly negative. 
Alternatively, these facts require a finding that the bar of the Dead-Man's Act has been waived. The note in question here, to be where it was found, must have been given to the deceased (an event in her presence) and, since it is unrealistic to assume that it was merely given to her without any communication whatsoever, there must have been conversation about it. Indeed, the Estate relies entirely on inferences—on the existence of the instrument, its having been retained by decedent, and its having been kept by her in a special place—as evidence that consideration was given. In Hoem v. Zia (1994), 159 Ill.2d 193, 201 Ill.Dec. 47, 636 N.E.2d 479, our supreme court dealt with an analogous situation. In that medical malpractice case, the plaintiff's expert was allowed to interpret and, according to the supreme court, "put his gloss" on the notes of the defendant treating doctor. This was done in order to show that the treating doctor failed to recognize his now-deceased patient's clear signs of a prior heart attack and clear warnings of an impending heart attack, and thus failed to initiate a program of cardiac diagnosis and treatment. In concluding that the expert's testimony constituted a waiver of the bar of the Dead-Man's Act, the supreme court said: "The purpose of the Dead-Man's Act is to remove the temptation to the survivor to a transaction to testify falsely and to equalize the positions of the parties in regard to the giving of testimony. (M. Graham, Cleary & Graham's Handbook of Illinois Evidence § 606.1, at 314-15 (5th ed. 1990).) In this case, allowing the representative of the deceased to introduce her version of why [the deceased] went to [the treating doctor], without giving an *1175 equal opportunity to [the treating doctor], would not advance the policy behind the Act. Under these circumstances, we find it fundamentally unfair to deny [the treating doctor] an opportunity to explain his view of what happened. Left unchallenged, [the expert's] comments would have remained with the jury as the only testimony regarding the conversation between [the treating doctor] and [the deceased]." (Hoem, 159 Ill.2d at 201-02, 201 Ill.Dec. at 51, 636 N.E.2d at 483.) Those words apply with equal force to the instant case. While the goals of the Dead-Man's Act are to protect decedents' estates from fraudulent claims and to equalize the position of the parties in regard to the giving of testimony (Fleming, 85 Ill.App.3d at 538, 40 Ill.Dec. at 680, 406 N.E.2d at 883), "the Act is not designed to disadvantage the living." (In re Estate of Justus (1993), 243 Ill.App.3d 737, 740, 183 Ill.Dec. 832, 834, 612 N.E.2d 89, 91, appeal denied (1993), 152 Ill.2d 560, 190 Ill. Dec. 890, 622 N.E.2d 1207.) If we were to find that the Act bars the Harans' testimony in this case, however, we would be doing exactly that. In effect we would be finding that there is a rebuttable presumption that decedent gave the Harans $125,000 on the one hand, and then we would prevent them from rebutting the presumption on the other. We find the circuit court's barring of the Harans' testimony to have been an abuse of discretion. Because there is room for disagreement in this area (see, for example, the dissent to this opinion) and because the Act generates so much controversy and litigation, many commentators have suggested that the time has come for the legislature to repeal or modify the Dead-Man's Act, as have more than half the States. (See, e.g., Kahn, Repeal of Dead Man's Act Advocated, 55 IL B.J.
430 (1967); Barnard, The Dead Man's Act Rears Its Ugly Head Again, 72 IL B.J. 420 (1984); Barnard, The Dead-Man's Act Is Alive and Well, 83 IL B.J. 248 (1995).) For the reasons given, however, we conclude that the Act does not bar the Harans' testimony in this case. IV The Estate's final contention is that the Harans failed to rebut the presumption of consideration. Because we reverse and remand on other grounds, and because at a new trial they will have a new opportunity to rebut the presumption, we need not reach this issue. The judgment of the circuit court is reversed and the cause remanded for a new trial. Reversed and remanded for a new trial. McCORMICK, J., concurs. HARTMAN, J., concurs in part and dissents in part. Justice HARTMAN, concurring in part and dissenting in part: Because I would hold, on remand, the Harans should not be permitted to testify directly that decedent never gave them money in exchange for the promissory note, I respectfully dissent from that part of the majority's opinion which holds to the contrary. There are other methods available to defendants to prove their case, if such they have, without standing the Dead Man's Act (Act) on its head, as the majority's disposition accomplishes. Both Hartman and Rerack, discussed in the majority opinion, are entirely distinguishable from the instant facts. The "negative" testimony or "nonevent" in Hartman, that no other persons had an ownership interest in the motel, is not a distinct "happening" or "occurrence" that could have taken place in the "presence" of decedent. In contrast, the disputed fact in this case, whether or not decedent ever gave the Harans the money, is a distinct happening or occurrence which, if true, would have taken place in the presence of decedent. Similarly, in Rerack, the excluded testimony did not relate to an occurrence in the presence of decedent but to the condition of plaintiff's car or whether his foot was on the brake pedal, happenings or occurrences that did not take place in decedent's presence, who was occupying a different car. In In re Estate of Osborn (1992), 234 Ill. App.3d 651, 175 Ill.Dec. 315, 599 N.E.2d *1176 1329, the court properly applied the Act, as this court should do in the instant case. There, two daughters, in a suit to contest the will of their deceased mother, filed affidavits asserting that decedent never discussed her will or estate during visits by the daughters at the hospital where she had been staying. The circuit court struck these statements as violative of the Act. In affirming, the appellate court stated: "a statement that a particular subject was never discussed violates the statutory prohibition against testifying to any conversation with the deceased." Osborn, 234 Ill.App.3d at 659, 175 Ill.Dec. 315, 599 N.E.2d 1329. It is clear that where an executor sues a defendant to recover on a note, the defendant may not testify as to payments made to the deceased. (See Karlos v. Pappas (1954), 3 Ill.App.2d 281, 121 N.E.2d 611 (abstract of opinion).) There is no valid reason to depart from this rule in this case, where the Harans claim that decedent never paid them the money. It is evidence decedent could have refuted if she had been alive to testify, and it relates to an event, or the absence of one, that would have taken place in her presence. The outcome the Harans seek, upon which the majority stamps its imprimatur, places the parties on unequal footing, a result precluded by the Act and one rejected by the court in Osborn and Pappas. 
Numerous courts in other jurisdictions have similarly held, under comparable "Dead Man's" statutes, that testimony asserting the deceased did not do a certain act is equivalent, for the purposes of the Act, to testimony that he did that act. See, e.g., In re Estate of Mason v. Mason (1986), 289 S.C. 273, 279-80, 346 S.E.2d 28, 33; Bauer v. Riggs (Tex.Ct.App.1983), 649 S.W.2d 347, 350; Stebnow v. Goss (Fla.App.1964), 165 So.2d 251, 255 n. 8; Martin v. Shaen (1946), 26 Wash.2d 346, 351-54, 173 P.2d 968, 971-72.) None of the legal articles cited in the majority opinion discuss whether an interested party may testify to an event that did not occur in the presence of the deceased. Moreover, legal scholars do not unanimously favor the repeal of the Dead-Man's Act. (See, e.g., Hunter, The Dead Man's Act Must Be Retained, 55 Ill. B.J. 512 (1967).) Nevertheless, any action taken to repeal the Dead-Man's Act must originate from the legislature, not from this court. In sum, the law is irrefutable: testimony that one did not do a certain act is equivalent, for purposes of the Act, to testimony that he or she did the act and is prohibited. Similarly, the majority's conclusion that the Estate waived the protection of the Dead-Man's Act is unsupported by the evidence. This issue is raised and ruled on by the majority in this appeal; it was never raised by the parties in this appeal with good reason. Assuming, arguendo, that the issue of waiver was properly before the court, the majority has applied it erroneously here in order to achieve the result. The exception provides: "If any person testifies on behalf of the representative to any conversation with the deceased or person under legal disability or to any event which took place in the presence of the deceased or person under legal disability, any adverse party or interested person, if otherwise competent, may testify concerning the same conversation or event." Ill.Rev.Stat.1985, ch. 110, par. 8-201(a) (now 735 ILCS 5/8-201(a) (West 1992)). At trial in this case, the Estate introduced the Instrument into evidence during the testimony of Robert Smith. He testified that he and his two sisters discovered the Instrument and other valuables in decedent's wall safe about a month after she died. The Estate offered no other testimony describing, interpreting, translating or relating to the Instrument. As the Estate argued during the trial, it "assiduously avoided" offering further testimony so as not to open the door for rebuttal under section 8-201(a). The majority's ruling here allows the Harans to open the door themselves and to submit impermissible testimony. The case relied upon by the majority, Hoem v. Zia (1994), 159 Ill.2d 193, 201 Ill. Dec. 47, 636 N.E.2d 479, is clearly distinguishable and has no conceivable application to the instant situation. In Hoem, plaintiff's medical expert read to the jury the defendant *1177 doctor's medical notes, describing the deceased patient's complaints to the doctor and eventually rendering the opinion that the doctor should have recognized the fatal symptoms. The supreme court found that the expert was doing more than merely "interpreting" or "translating" the doctor's note for the benefit of the jury. (Hoem, 159 Ill.2d at 201, 201 Ill.Dec. 47, 636 N.E.2d 479.) Instead, the expert put his "gloss on the notes," "insinuating" that the doctor should have treated the patient's complaints differently. (Hoem, 159 Ill.2d at 201, 201 Ill.Dec. 47, 636 N.E.2d 479.) 
Because plaintiff was allowed to introduce her version of why the deceased visited the doctor, the supreme court understandably concluded that the doctor should have been permitted to explain his view of what happened. (Hoem, 159 Ill.2d at 202, 201 Ill.Dec. 47, 636 N.E.2d 479.) Nothing even remotely resembling the events in Hoem took place in the case at bar. Here, the Estate offered no testimony or any other evidence to interpret or translate the contents of the Instrument, much less put its "gloss" upon it or "insinuate" anything beyond the bare instrument. Rather, the Estate simply laid the proper foundation and introduced the Instrument into evidence. The Harans were in no way disadvantaged, as the doctor would have been in Hoem, because neither side should be permitted to interpret the Instrument or present evidence regarding actions or nonactions relating to it. The Dead-Man's Act does not entirely preclude the Harans from defending or rebutting the presumption of consideration in the retrial of the case. They may, if they can, produce such evidence as income tax or bank records detailing their business ventures that they have entered into; a list of investors with whom they have joined, showing amounts contributed, which may demonstrate the omission of decedent, and convince the trier of fact that no deal was ever consummated between the parties; and bank deposits or withdrawals from both parties, which may shed some light on whether any funds were exchanged for the Instrument. Disinterested third parties, such as lawyers or accountants, involved in the proposed venture may similarly testify that the condominium project never commenced. The creative work of lawyers in the case can find additional evidence, which is not barred by the Act, to rebut the presumption of consideration. To sanction the Harans' direct testimony that no money was ever exchanged, however, clearly and impermissibly defeats the purposes of the Dead-Man's Act and judicially repeals its provisions. NOTES [1] Section 3-805 was repealed by Public Act 87-582, § 2, effective January 1, 1992. The Estate claims that because the instant facts took place prior to January 1, 1992, section 3-805 still applies to this case because the amendatory act diminishes substantive rights and, therefore, should apply prospectively only. Statutory amendments that are substantive in nature rather than procedural are prospective in application. (Johnson v. Johnson (1993), 244 Ill.App.3d 518, 526, 185 Ill.Dec. 214, 220, 614 N.E.2d 348, 354.) Although there is no provision in the revised article 3 that is identical to former section 3-805, new section 3-104 provides for a similar section, but applies only to checks. (810 ILCS Ann. 5/3-104(c) UCC Comment 2 (Smith-Hurd 1993).) Because the amendatory act diminishes substantive rights formerly available in section 3-805, it must be applied prospectively. Accordingly, former section 3-805 is applicable to this case. [2] Because of this conclusion that the Estate is entitled to a rebuttable presumption of consideration, there is no need to address the Estate's argument that a presumption of consideration is created under section 3 of "An Act to revise the law in relation to promissory notes * * *" (Ill. Rev.Stat.1985, ch. 17, par. 601 (formerly Ill.Rev. Stat.1979, ch. 98, par. 1)) (now 815 ILCS 105/3 (West 1992)). See Ill.Ann.Stat., ch. 26, par. 3-805, UCC Comment, at 425 (Smith-Hurd 1963) (explaining that section 601 has virtually the same effect as section 3-805).
The U.K. is set to leave the European Union on January 31, but this will not be the end of the Brexit process. Both sides of the English Channel will be entering detailed negotiations on their future relationship from that point on. Failure to reach a second deal by the end of 2020 would still mean higher costs and barriers when trading goods and services. CNBC takes a look at the main Brexit dates in the new year.

Mid-January – European lawmakers meet for the first time in 2020 and are expected to green-light the Withdrawal Agreement – the document that outlines how the U.K. should leave the European Union. These 541 pages were approved by the House of Commons earlier this month and are under further scrutiny in Parliament.

January 31 – The U.K. is set to officially leave the European Union at 11 p.m. London time. A transition period will begin from that moment onwards. This means that nothing will change for businesses and citizens. However, the U.K. government will lose its voting rights in Brussels, EU law will still be applicable in U.K. territory, and the British government will be able to conclude trade deals with other countries around the world during this period. The aim of the transition period is to allow both sides to put together a second deal on their future relationship. This includes new trade arrangements as well as agreements on security and data sharing; aviation standards; supplies of electricity and regulation of medicines.

February 25 – European ministers are scheduled to meet in Brussels. This could be the moment when they approve a new negotiating mandate for Michel Barnier, who has been heading the Brexit process from the European side since the U.K.'s official request to leave the EU in 2017. This means that talks on the future relationship could start in late February or early March.

June – An EU-U.K. summit is expected to take place. At this point both sides will have to decide whether they can finalize their new trade relationship by the end of 2020. Prime Minister Boris Johnson has said that he does not want to prolong the transition, and he has legislated against further delays to the Brexit process.
Viewers in Britain and the United States have been clamoring for the return of the critically acclaimed BBC series "Sherlock," which debuts Jan. 1 with a special episode set in Victorian times. But where else in the world might the British broadcaster find viewers for the contemporary interpretation of Sir Arthur Conan Doyle's fictional detective? To sniff out clues, BBC Worldwide has retained Parrot Analytics, a New Zealand firm that uses artificial intelligence and data science to evaluate global demand for TV shows. "Parrot suggests very very strong global demand, including in Germany, China, India and Singapore," said BBC Worldwide Executive Vice President David Boyle. "What's most interesting are the countries with the highest demand but where we haven't seen it come through in previous deals."

Since the advent of television, programmers have struggled with measuring the audiences they already have, let alone predicting where they might be in the future. The industry's traditional approach to estimating audience size — TV ratings — doesn't count viewing across multiple screens, distributors and markets around the world. Measurement firms have been scrambling to fill in the gaps. The dominant player, Nielsen, plans to introduce a new total audience measurement for the U.S. early next year that includes online and mobile viewing. It also has partnered with Twitter to develop a separate rating that reflects social media conversations about TV shows. Specialized research firms such as Fizziology plug into Twitter, Facebook, Tumblr, Instagram and blogs to give Hollywood studios insights into online conversations about movies.

Parrot takes a different — and, it argues, more comprehensive — approach to evaluating interest in TV shows in markets around the globe. It creates a measurement called a "demand rating" that reflects interest in a TV show as expressed across photo-sharing sites like Instagram, online video sites like YouTube, social media platforms like Facebook, file-sharing sites and fan and critic blogs. "If I want to express my demand for a piece of content, say, 'House of Cards,' I can stream it on Netflix or I can watch clips on YouTube or [post comments] to microblogging sites like Reddit, where 200 million people discuss TV content," said Parrot Chief Executive Wared Seger. "You look at all of this and essentially you now have a truly ubiquitous measure that tells you how much demand there is for a piece of content."

Parrot's technology, developed by a team of data scientists and entertainment executives pulled from Sony Pictures, MGM Studios, the MIT Media Lab and Pukeko Pictures, uses pattern identification and contextual techniques to synthesize petabytes of data from 249 countries into meaningful information. The technology weighs viewer sentiment, evaluating just how obsessed people are with a show ("Liking" "Orange Is the New Black" on Facebook is less of a sign of true fandom than blogging about it). "Not all fans are equal," Seger said. "Some will talk about it, advocate for it. Others will be passive consumers who drop off after the third episode. Our demand metric takes that into account."

Parrot's demand rating is intended to help buyers and sellers of TV programs, such as BBC Worldwide and the New Zealand-based streaming service Lightbox, focus their global distribution efforts and inform programming decisions, like shifting the time or day a show airs when its TV ratings don't line up with projected demand.
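The description above suggests a weighted aggregation of engagement signals, with high-effort actions counting for more than passive ones. The toy sketch below illustrates only that general idea; the signal names and weights are invented for the example, and Parrot's actual model, inputs, and weighting are proprietary.

```python
# Weight each engagement signal by how much effort it implies before
# summing into a single demand score: "not all fans are equal."

ENGAGEMENT_WEIGHTS = {
    "streamed_full_episode": 5.0,
    "blogged_about_show": 4.0,
    "shared_clip": 2.0,
    "liked_on_facebook": 0.5,
}

def demand_score(signal_counts):
    """Combine per-signal engagement counts into one demand figure."""
    return sum(ENGAGEMENT_WEIGHTS.get(signal, 0.0) * count
               for signal, count in signal_counts.items())

# Example: a thousand passive likes still rank below far fewer high-effort actions.
casual = {"liked_on_facebook": 1000}
devoted = {"streamed_full_episode": 80, "blogged_about_show": 30}
print(demand_score(casual), demand_score(devoted))   # 500.0 vs 520.0
```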
Parrot is in active discussions with other studios, networks and streaming services, according to a source familiar with the matter. The British broadcaster conducted extensive testing with a number of BBC Worldwide shows in a number of countries before agreeing to work with the nascent company. “It took me six months of working through that detail and testing it and trying it out to feel confident enough to showcase it and promote it and advocate for it throughout BBC Worldwide,” Boyle said. “Most people don’t make time to properly investigate things like this and so they walk away too soon when they can’t get quick wins.” Boyle is a believer in the power of data. He said the BBC’s consumer research in South Korea suggested strong demand for “Doctor Who,” the long-running series featuring an alien time-traveler who moves through space and time in the Tardis, a spaceship that resembles the blue police boxes that were ubiquitous when the series launched in 1963. Regional teams were skeptical. As a test of viewer interest, BBC Worldwide included Seoul in a 2014 “Doctor Who” promotional world tour that invited fans to snap selfies in the Tardis and buy tickets to meet the actor portraying the Doctor, Peter Capaldi, and the actress who plays his companion, Jenna Coleman. Some 50,000 people signed up in minutes for a chance to purchase the 4,000 available tickets, Boyle said. This proof of concept for data-driven insights set the stage for BBC Worldwide’s more recent work with Parrot, helping it evaluate the 200-plus markets where it functions as a studio, distributor or broadcaster. Boyle said Parrot’s data helped bring one unidentified broadcaster back to the bargaining table, after conversations had gone cold. The data indicated strong demand in the country — and the program has been successful for the network. Another Parrot insight is causing BBC Worldwide to rethink its distribution strategy for another show whose traditional TV ratings are down but which still enjoys strong demand among online viewers. “It provides new ways to understand this kind of stuff,” Boyle said. “What these guys do is bigger, better, more scalable than research we could possibly do — by orders of magnitude.”
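To make the weighting idea concrete, here is a minimal sketch of how a sentiment-weighted demand score could be computed. Everything in it is hypothetical: the signal names, the weights, and the `demand_rating` function are invented for illustration and are not Parrot's proprietary model, which the company does not publish.

```python
# Hypothetical illustration of a weighted "demand rating".
# Signal types and weights are invented; Parrot's real model is proprietary.

# Weight engaged behaviour (blogging) above passive signals (a "like"),
# echoing the idea that "not all fans are equal".
SIGNAL_WEIGHTS = {
    "facebook_like": 0.2,
    "youtube_clip_view": 0.5,
    "stream": 1.0,
    "blog_post": 2.0,
}

def demand_rating(signal_counts):
    """Combine raw per-signal counts into a single demand score."""
    return sum(SIGNAL_WEIGHTS.get(signal, 0.0) * count
               for signal, count in signal_counts.items())

# Two markets with the same total activity but different engagement mixes:
market_a = {"facebook_like": 9000, "stream": 500, "blog_post": 500}
market_b = {"facebook_like": 500, "stream": 500, "blog_post": 9000}

print(demand_rating(market_a))  # 3300.0
print(demand_rating(market_b))  # 18600.0 -> far "hotter" despite equal volume
```

The point of the toy example is only that equal raw volume can hide very different levels of fan commitment, which is what a sentiment-weighted metric is meant to surface.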
Mid
[ 0.6425120772946861, 33.25, 18.5 ]
Spanrde Professional Table Tennis Shoes Spandre £41.91 Save £13.08 Quantity Spanrde Unisex Professional Table Tennis Shoes: breathable, anti-slip table tennis shoes available in two colours, red or blue, and in US sizes 6-10. The shoes have an anti-slip, fold-resistant TPR sole, so if they are bent they spring back into their original shape. High-quality PU fabric gives them a fine, exquisite texture. They are wear-resistant, waterproof and easy to clean, with breathable holes in the toe, heel and sole. About sizing: the toe box may be narrow, so if your feet are wide or thick, consider choosing a half size or a full size bigger. For example, if your foot length is 235 mm, it is better to choose the 240 mm size. Please check the size chart in the listed pictures to make sure you receive the correct size.
High
[ 0.679558011049723, 30.75, 14.5 ]
/* * Copyright 2017 the original author or authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.gradle.integtests.composite import org.gradle.integtests.fixtures.build.BuildTestFile import spock.lang.Ignore @Ignore("No longer works with parallel execution of included builds. See gradle/composite-builds#114") class CompositeBuildDestroyablesIntegrationTest extends AbstractCompositeBuildIntegrationTest { BuildTestFile buildB def setup() { buildB = multiProjectBuild("buildB", ['b1', 'b2']) { buildFile << """ allprojects { apply plugin: 'java' version "2.0" repositories { maven { url "${mavenRepo.uri}" } } } """ } includedBuilds << buildB } def "clean build and build clean work reliably with composite build"() { given: dependency "org.test:buildB:1.0" buildA.buildFile << """ clean { dependsOn gradle.includedBuild('buildB').task(':clean') } """ when: args "--parallel" execute(buildA, "clean", "build") then: result.assertTaskOrder(':buildB:clean', ':buildB:compileJava', ':clean', ':compileJava') when: args "--parallel" execute(buildA, "build", "clean") then: result.assertTaskOrder(':buildB:compileJava', ':compileJava', ':buildB:clean', ':clean') } }
Mid
[ 0.6021505376344081, 35, 23.125 ]
The EU ruins Christmas with another painful proposal to cap bonuses Just when you thought you could relax and enjoy Christmas, safe in the knowledge that the European Union had postponed its decision on capping investment banking bonuses until at least March 2013, disaster has struck: the European Parliament has brought forward its bonus vote and produced a particularly harsh proposal for the eventual cap. The Economic and Monetary Affairs Committee, a committee of the European Parliament with responsibility for the regulation of the financial system, today met with a negotiating team from the EU Council in an attempt to reach a compromise on the proposed bonus rules. The compromise they reached is a particularly harsh one: bonuses will be restricted to 100% of salaries, unless over 75% of a bank’s shareholders say otherwise. If 75%+ of shareholders do say otherwise, bonuses may be increased – to 200% of salaries and no more. If the new compromise comes to pass, it could be a disaster for some investment bankers in London. As our chart here shows, banks like Barclays pay their ‘approved staff’ (the staff that would be impacted by the bonus rules) bonuses equal to 400% of salaries on average. Fortunately, this may not be the end of the matter. Alex Beidas, a Linklaters employee incentives lawyer, points out that the European Parliament still has to vote on the Committee’s compromise. If that vote passes, the proposal then has to be agreed to by the European Council. Neither the Council nor the Parliament has so far given any indication that it will accept restrictions as stringent as those proposed today. However, it’s possible that they are becoming more hard-line in their approach to bonuses. “This latest proposal is more restrictive than I had anticipated,” says Beidas. “But the people in the Committee have tried to find a solution which they hope will make it through the Parliament and the Council,” she adds. The proposal is due to be voted on in Parliament between January 14th and January 17th. If approved, it will then need to be confirmed by the Council. And if confirmed by the Council, it will need to be voted into individual law in nation states. The good news is that it’s unlikely therefore to be voted into law in time to affect the 2012 bonus round.
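The practical bite of the cap is simple arithmetic. A minimal sketch, using a hypothetical £200,000 base salary (an invented figure) together with the 400%-of-salary average bonus cited above for Barclays' approved staff:

```python
# Illustrative arithmetic for the proposed EU bonus cap.
# The salary figure is invented; the 400% average is the Barclays figure
# cited above for "approved staff".

salary = 200_000                 # hypothetical base salary in GBP
current_bonus = 4.00 * salary    # 400% of salary, the cited Barclays average

cap_default = 1.00 * salary      # 100% cap without shareholder approval
cap_approved = 2.00 * salary     # 200% cap if 75%+ of shareholders agree

print(f"Current bonus:          £{current_bonus:,.0f}")  # £800,000
print(f"Capped (default):       £{cap_default:,.0f}")    # £200,000
print(f"Capped (with approval): £{cap_approved:,.0f}")   # £400,000
```

Even in the most permissive case, a banker on this illustrative package would see the bonus halved, which is why the proposal is described as particularly harsh for London.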
Low
[ 0.495780590717299, 29.375, 29.875 ]
Introduction ============ Extremity casts are frequently applied for routine immobilization for many acute fractures. The period of immobilization varies according to the patient and the fracture. For example, a non-operatively treated tibial fracture is rarely immobilized for longer than six months. Total contact casting has been used in the treatment of Charcot\'s neuropathy for periods of up to one year \[[@B1]\]. We report a case of a below knee cast removal after 28 months. Case presentation ================= When she was 40 years old, a Caucasian woman underwent bunion surgery for pain whilst ambulating. The wounds healed without complication but she went on to develop mechanical allodynia, intermittent swelling and a bluish discoloration of the foot, consistent with a diagnosis of type 1 complex regional pain syndrome. She received many different treatments for continued pain over the subsequent years. Drug therapies using pregabalin, strong opiates and epidural analgesia were not fully successful and she was offered a below knee cast as a temporizing measure. There was no pre-existing psychiatric diagnosis but the patient developed a psychological dependence upon this cast. She was reluctant to have it removed, believing that her pain remained inadequately treated. She failed to attend several appointments at the pain clinic. When she did return, the anesthetists asked for orthopaedic assistance to remove her cast. By this point she was 45 years old and had spent the previous 28 months in the same below knee cast. She was no longer taking regular analgesia but was unable to tolerate anyone touching her leg and therefore received a general anesthetic to facilitate the cast removal. The cast was found to be intact, despite having been worn for such a long period. This can be explained by the fact that she had been using crutches and the plaster was reinforced with a heel stirrup. The resin surface was filthy (Figure [1](#F1){ref-type="fig"}). The toes were swollen and erythematous with thick scales in the web spaces; the toenails showed evidence of onychocryptosis and onychogryphosis and had not been cut. The deep cotton bandages were intact but appeared soiled on removal of the cast. The exposed leg was covered in thick yellow skin scales (Figure [2](#F2){ref-type="fig"}) which were easily exfoliated by hand (Figure [3](#F3){ref-type="fig"}). There were no significant areas of skin loss with integument intact over bony protuberances. Dense heel callosities were removed with a sharp blade. Closer inspection of the skin surface revealed small pitted ulcers 1-2 mm in diameter replacing the normal skin pores. Healthy pink granulation tissue was seen at the base of these ulcers which appeared clean and were not infected (Figure [4](#F4){ref-type="fig"}). They did not bleed on palpation and required no dressing. Some superficial telangiectasia were also noted on the anterior aspect of the ankle joint which were not present elsewhere on her limbs. There was no change in skin pigmentation. The leg circumferences were reduced by 5.5 cm at the calf and 1.5 cm at the ankle when compared to the normal leg. Passive dorsiflexion was symmetrically zero degrees. Passive plantar flexion was 30° in the cast leg and 40° in the normal leg. Her passive knee movements were normal. Doppler ultrasound showed good flow at the dorsalis pedis and posterior tibial pulses. Swabs, skin and toenails sent at time of the removal of the cast showed no growth of any organisms or fungal species. 
![**Photograph of below knee cast prior to removal**.](1752-1947-5-74-1){#F1} ![**Photograph demonstrating the appearance of leg after cast removal**.](1752-1947-5-74-2){#F2} ![**Photograph showing yellow scales being exfoliated by hand**.](1752-1947-5-74-3){#F3} ![**Photograph showing small skin pits with pink granulation tissue following removal of scales**.](1752-1947-5-74-4){#F4} She was later reviewed in the pain clinic. Her skin was healthy but her allodynia remained symptomatic. At this stage she was reluctant to pursue any further treatment. Discussion ========== Cast immobilization is a routine orthopedic treatment which is administered for short periods of time in order to limit its complications. Total contact casts are used for longer time periods but are changed quite often in order to monitor for complications \[[@B1]\]. A patient found to have been wearing the same cast for 28 months is extremely rare and there have been no previous cases reported in the literature. Patients who are known to wear casts occasionally fail to attend for cast removal. In this scenario an awareness of the extent of potential complications is useful for this less compliant patient group. Halanski and Noonan \[[@B2]\], reviewing plaster cast complications, describe joint stiffness, muscle atrophy, cartilage degradation, ligament weakening and disuse osteoporosis. Joint stiffness was present in this case but was relatively insubstantial with only 10° of relative reduction in passive plantar flexion. This finding suggests that any stiffness observed after cast removal may be attributable to capsular stretch pain. Muscle atrophy as a consequence of cast immobilization has been described \[[@B3]\] and was observed in this case where the leg circumference was substantially reduced. Research has attributed this change to an increase in both the resting inorganic phosphate concentration in skeletal muscle \[[@B4]\] and a change in the neural command of muscle contraction \[[@B5]\] with immobilization. Skin complications have been described following plaster cast immobilization. Ulceration occurs where there is insufficient padding over bony protuberances and excoriation is known to occur particularly in casts worn by children which have become soiled \[[@B6]\]. One case describes skin atrophy and hyperpigmentation thought to be a variant of stasis dermatitis \[[@B7]\]. In this case the skin under the dense scales was relatively healthy. The small and regularly distributed pitted ulcers occurred where each individual skin pore had become blocked. The tissue at the base of these pits was healthy. Conclusion ========== Prolonged cast immobilization is extremely rare and occurs in non-compliant patients. This case demonstrates muscle atrophy which was anticipated. The stiffness of the ankle joint was not marked. Skin changes were minor with no substantial areas of ulceration or stasis dermatitis. Where patients choose to remain in their cast for a prolonged duration the complications may only be minor. Competing interests =================== The authors declare that they have no competing interests. Consent ======= Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal. Authors\' contributions ======================= CFY and DWE reviewed the patient and performed the operation. SE researched previous case reports and evidence.
HI documented and described the findings. HI, SE and DWE contributed to the writing of the manuscript. All authors reviewed and approved the final manuscript.
Mid
[ 0.612813370473537, 27.5, 17.375 ]
About The Ilve PDWI90E3SS The Roma 90cm Twin Induction Range Cooker hosts two ILVE multifunction ovens side by side with a full induction hob top and a full-width storage drawer. Both ovens have cool-to-touch, triple-glazed oven doors and stainless steel handles. The Roma 90cm Twin is available in a selection of different colours. The left-hand main oven is a 53.4 litre multifunction oven with 9 functions, including Quick Start, which heats the oven to 200°C in only 6 minutes. The right-hand oven has a 30.8 litre capacity and features the useful rotisserie function, which cooks the perfect roast chicken and lamb. Both ovens on the Roma 90cm Twin range cooker come with ILVE's unique E3 precision temperature control, which means the oven temperature can be set via the digital display to anywhere between 30°C and 300°C.
Mid
[ 0.553106212424849, 34.5, 27.875 ]
# Add Stateful Widgets! ## Goals - Make the CategoryRoute and ConverterRoute StatefulWidgets. Visually, nothing has changed. ## Steps 1. Fill out the TODOs in `category_route.dart` and `converter_route.dart` using the specs below. ## Specs - CategoryRoute is a StatefulWidget. - ConverterRoute is a StatefulWidget. - Inside the CategoryRoute, the list of Categories is saved as part of the State. ## Screenshots ### Start and Solution (visually the same) <img src='../../screenshots/05_stateful_widgets.png' width='350'><img src='../../screenshots/05_stateful_widgets_2.png' width='350'>
Mid
[ 0.565982404692082, 24.125, 18.5 ]
The embodiments described herein relate generally to an arc chute assembly for a circuit breaker, and more particularly, to methods and systems used to distribute gas pressure formed within a circuit breaker. The capability of circuit breakers for current-interruption can be dependent, in part, upon the ability to extinguish the arc that is generated when the breaker contacts open. Even though the contacts separate, current can continue to flow through the ionized gases formed by vaporization of the contacts and surrounding materials. Circuit breakers require expedient and efficient cooling of the arc to facilitate effective current interruption. Circuit breakers include sub-poles that are located in arc chutes. The arc chutes are configured to extinguish the arc that is produced when the breaker is tripped and the contacts of the breaker are rapidly opened. Typically, each arc chute is associated with a single phase, for example, one phase of a 3-phase power distribution system. Conventional arc chutes include a series of metallic plates that are configured in a spaced apart relationship and held in place by dielectric side panels. When the contacts of the breaker are opened, the resulting arc is driven to the metallic plates of the arc chute where the arc is then extinguished by the plates. The metallic plates increase the arc voltage in the circuit breaker to produce a current-limiting effect thereby providing downstream protection. Each sub-pole for the current path of the circuit breaker includes an arc chute. The sub-poles are electrically connected in parallel and separated inside the circuit breaker by a divider wall. Due to component variations, one sub-pole may experience a higher pressure than the other sub-pole when the breaker is tripped. While increasing the volume of gas generated during current-interruption and enhancing current flow aids in extinguishing the arc, the increased volume of gas increases pressure within the sub-poles, and therefore, on the arc chute and the circuit breaker housing. In some cases, the sub-pole that is exposed to the higher pressure may experience damage to the housing walls and the arc chute which may limit the current-interruption capability of the circuit breaker.
Mid
[ 0.546168958742632, 34.75, 28.875 ]
Association Between Serum LDL-C and ApoB and SYNTAX Score in Patients With Stable Coronary Artery Disease. The aim of this study was to examine the relationship between low-density lipoprotein cholesterol (LDL-C) and apolipoprotein (Apo) B levels and the SYNergy between percutaneous coronary intervention with TAXus and cardiac surgery (SYNTAX) score (SS) in patients with stable angina pectoris. We enrolled 594 patients who were suspected to have coronary heart disease (CHD) and who underwent coronary angiography. Patients were divided into 4 groups based on the SS: normal (SS = 0, n = 154), low SS (SS ≤ 22, n = 210), intermediate SS (22 < SS < 32, n = 122), and high SS (SS ≥ 33, n = 63). Positive correlations between lipoprotein (a), LDL-C, ApoB, total cholesterol, and SS were significant ( r = 0.132, 0.632, 0.599, and 0.313, respectively; P < .01), whereas high-density lipoprotein cholesterol (HDL-C), ApoA1, and ApoA1/ApoB levels showed a significant negative correlation ( r = -0.29, -0.344, and -0.561, respectively; P < .01). Multivariate linear regression analysis revealed that LDL-C, ApoB, ApoA1/ApoB, fibrinogen (Fg), and HDL-C levels had an effect on SS (standardized regression coefficients were 0.41, 0.29, -0.12, 0.08, and -0.09, respectively; P < .05). In conclusion, LDL-C, ApoB, ApoA1/ApoB, Fg, and HDL-C levels affected the SS and were predictors of CHD complexity.
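The analysis pipeline reported here, pairwise correlations followed by multivariate linear regression with standardized coefficients, can be sketched with standard scientific-Python tools. Everything below is illustrative: the arrays are fabricated placeholders rather than the study's patient data, the variable names are invented, and Pearson's r is assumed since the abstract does not name the correlation estimator.

```python
# Sketch of the reported workflow: correlation of a lipid measure with the
# SYNTAX score, then regression on z-scored predictors to obtain
# standardized (beta) coefficients. All data are fabricated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 594  # the study enrolled 594 patients

ldl_c = rng.normal(3.0, 0.8, n)            # mmol/L, hypothetical values
apob = rng.normal(0.9, 0.2, n)             # g/L, hypothetical values
syntax = 8 * ldl_c + rng.normal(0, 5, n)   # fabricated association

# Pairwise correlation (the paper reports r = 0.632, P < .01 for LDL-C)
r, p = stats.pearsonr(ldl_c, syntax)

# Regression on z-scored predictors yields standardized coefficients
# directly comparable to the paper's (e.g. 0.41 for LDL-C).
X = np.column_stack([np.ones(n), stats.zscore(ldl_c), stats.zscore(apob)])
beta, *_ = np.linalg.lstsq(X, stats.zscore(syntax), rcond=None)

print(r, p)        # correlation and its p-value
print(beta[1:])    # standardized betas for LDL-C and ApoB
```

Standardizing both sides before fitting is what makes coefficients for variables on different scales (mmol/L, g/L, ratios) comparable, which is presumably why the paper reports standardized regression coefficients.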
High
[ 0.6802721088435371, 31.25, 14.6875 ]
Q: How to give a "Empty ListView Message" when there is no data source My application have a ListView with GridLayout. I am now trying to introduce ListView groups into my application. Say, the data source would usually have Group1, Group2 and Group3. I would like to display all 3 groups all the time regardless there is element in it or not. When there is no element in a group, I want to display a "empty group" message under the group title. I think the default way that WinRT handles it is not to display the empty group and it makes a lot of sense in many scenarios. To do this, I know that I maybe able to add a dummy item to the list view when there is no data, but this is kind of hacky. So, is there a better way to do this? A: Just bind your ListView to a collection of Group objects (where Group is a class you define and Group1, Group2 and Group3 are such Group objects). In addition to Group level properties (such as a Title), have a Group contain a collection of Item objects. In the ListView's datatemplate, use another ListView to show the Item elements for each Group. Be careful though, the nesting of GridViews will result in nested ScrollViewers. You would want to remove the ScrollViewer from the inner GridViews by changing their control template.
High
[ 0.6998444790046651, 28.125, 12.0625 ]
Nail Polish: No7: Stay Perfect — 300 Stand Back This lovely gradient from fuchsia pink to purple was created with Madeline Poole’s technique. I used W7 — 78 Fuchsia as a base for my gradient. Then I applied Rimmel: 60 Seconds — 810 Blue My Mind to the tips.
Mid
[ 0.544052863436123, 30.875, 25.875 ]
Effectiveness of using sialang honey on wound bed preparation in diabetic foot ulcer. The aim of this study is to determine the effectiveness of sialang honey on wound bed preparation in diabetic foot ulcers. The study design was a quasi-experiment with a one-group pretest-posttest approach. The sampling technique used was consecutive sampling, in which respondents were selected based on predetermined criteria. The instrument used in this study was the wound bed score, and the measurement results were analyzed using the Wilcoxon test at a 95% confidence level. The average wound bed score was 2.75 before the intervention and rose to 9.25 after the intervention, on a scale of 0-16. The Wilcoxon test obtained a p-value of 0.011, leading to the conclusion that sialang honey had a significant effect on wound bed preparation in diabetic foot ulcers. Statistically, honey can aid wound bed preparation in diabetic foot ulcers.
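The paired pre/post design analysed with a Wilcoxon signed-rank test can be reproduced with standard tools. A minimal sketch, assuming paired wound bed scores per patient; the scores below are fabricated placeholders chosen only so their means match the reported 2.75 and 9.25, not the study's actual data:

```python
# Sketch of the study's analysis: Wilcoxon signed-rank test on paired
# pre/post wound bed scores (0-16 scale). Data are fabricated placeholders
# whose means match the reported 2.75 (pre) and 9.25 (post).
from scipy.stats import wilcoxon

pre  = [3, 2, 3, 2, 4, 2, 3, 3]     # hypothetical scores before intervention
post = [9, 8, 10, 9, 11, 8, 9, 10]  # hypothetical scores after intervention

stat, p_value = wilcoxon(pre, post)
print(f"statistic={stat}, p={p_value:.4f}")
# The study itself reports p = 0.011 at a 95% confidence level, i.e. a
# statistically significant improvement in wound bed preparation.
```

The signed-rank test fits here because the outcome is an ordinal score measured twice on the same patients, so a paired nonparametric test is more appropriate than a t-test.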
High
[ 0.6666666666666661, 32, 16 ]
/* * Copyright (c) 2019 Contributors to the Eclipse Foundation * * See the NOTICE file(s) distributed with this work for additional * information regarding copyright ownership. * * This program and the accompanying materials are made available under the * terms of the Eclipse Public License 2.0 which is available at * http://www.eclipse.org/legal/epl-2.0 * * SPDX-License-Identifier: EPL-2.0 */ package org.eclipse.ditto.signals.commands.cleanup; import static org.eclipse.ditto.json.assertions.DittoJsonAssertions.assertThat; import static org.mutabilitydetector.unittesting.AllowedReason.provided; import static org.mutabilitydetector.unittesting.MutabilityAssert.assertInstancesOf; import static org.mutabilitydetector.unittesting.MutabilityMatchers.areImmutable; import org.eclipse.ditto.json.JsonObject; import org.eclipse.ditto.model.base.entity.id.DefaultEntityId; import org.eclipse.ditto.model.base.entity.id.EntityId; import org.eclipse.ditto.model.base.headers.DittoHeaders; import org.eclipse.ditto.signals.commands.base.Command; import org.junit.Test; import nl.jqno.equalsverifier.EqualsVerifier; /** * Unit test for {@link CleanupPersistence} command. */ public class CleanupPersistenceTest { private static final EntityId ID = DefaultEntityId.of("thing:eclipse:ditto"); private static final JsonObject KNOWN_JSON = JsonObject.newBuilder() .set(Command.JsonFields.TYPE, CleanupPersistence.TYPE) .set(CleanupCommand.JsonFields.ENTITY_ID, ID.toString()) .build(); private static final DittoHeaders HEADERS = DittoHeaders.newBuilder().correlationId("123").build(); @Test public void assertImmutability() { assertInstancesOf(CleanupPersistence.class, areImmutable(), provided(EntityId.class).isAlsoImmutable()); } @Test public void testHashCodeAndEquals() { EqualsVerifier.forClass(CleanupPersistence.class) .usingGetClass() .withRedefinedSuperclass() .verify(); } @Test public void toJsonReturnsExpected() { final JsonObject jsonObject = CleanupPersistence.of(ID, DittoHeaders.empty()).toJson(); assertThat(jsonObject).isEqualTo(KNOWN_JSON); } @Test public void fromJsonReturnsExpected() { final CleanupPersistence commandFromJson = CleanupPersistence.fromJson(KNOWN_JSON, HEADERS); final CleanupPersistence expectedCommand = CleanupPersistence.of(ID, HEADERS); assertThat(commandFromJson).isEqualTo(expectedCommand); } }
Mid
[ 0.623015873015873, 39.25, 23.75 ]
AJ's Cleaning & Restoration Description AJ'S Cleaning & Restoration is focused on providing high-quality service at an affordable rate to Des Moines and the surrounding areas. We understand that when looking for a company to clean your carpets, tile & grout, or any other surface, there are tons of options, and that is why I personally guarantee that AJ's Cleaning & Restoration is the right choice. AJ's Cleaning & Restoration is a subsidiary of AJ's Handyman, which allows us to be your "one-stop shop," so to speak, for home repair, cleaning, and general upkeep.
Mid
[ 0.5603112840466921, 36, 28.25 ]
A police officer places flowers at the entrance of Masjid Al Noor mosque in Christchurch, New Zealand, March 17, 2019. REUTERS/Jorge Silva WELLINGTON (Reuters) - The death toll in the New Zealand mosque shootings has risen to 50 after investigators found another body at one of the mosques, New Zealand Police Commissioner Mike Bush said on Sunday. “It is with sadness that I advise that number of people who died in this event has now risen to 50. As of last night we were able to take all of the victims from both of those scenes. In doing so we were able to locate a further victim,” Bush told a media conference. The number of injured after the attack was also 50, he said.
Low
[ 0.48856548856548804, 29.375, 30.75 ]
Fabrication of a one-dimensional array of nanopores horizontally aligned on a Si substrate. A one-dimensional array of nanopores horizontally aligned on a silicon substrate was successfully fabricated by anodic aluminum oxidation (AAO) using a modified two-step procedure. SEM pictures show clear nanostructures of well-aligned one-dimensional nanopore arrays without cracks at the interfaces of the sandwiched structures. The processes are compatible with the planar silicon integrated circuit processing technology, promising for applications in nanoelectronics. The formation mechanism of a single nanopore array on Si substrates was also discussed.
High
[ 0.716119828815977, 31.375, 12.4375 ]
import { KeyValue } from './common'; import { EditorTheme } from './editor-theme'; interface StyleFunction { ({ theme }: { theme: EditorTheme }): KeyValue | void; } export interface EditorStyleConfig { wrapper?: StyleFunction; editor?: StyleFunction; toolbar?: { top?: StyleFunction; inline?: StyleFunction; }; separator?: StyleFunction; spinner?: StyleFunction; select?: { wrapper?: StyleFunction; label?: StyleFunction; menu?: StyleFunction; option?: StyleFunction; }; popup?: { wrapper?: StyleFunction; arrowTop?: StyleFunction; arrowBottom?: StyleFunction; }; modal?: { wrapper?: StyleFunction; title?: StyleFunction; header?: StyleFunction; main?: StyleFunction; }; input?: { wrapper?: StyleFunction; input?: StyleFunction; }; icon?: StyleFunction; button?: { toolbar?: StyleFunction; primary?: StyleFunction; }; table?: { table?: StyleFunction; cellMenu?: { wrapper?: StyleFunction; icon?: StyleFunction; option?: StyleFunction; }; }; } export interface EditorStyle extends EditorStyleConfig { constants: EditorTheme; }
Mid
[ 0.6496163682864451, 31.75, 17.125 ]
Former Trenton Mayor Douglas H. Palmer said Monday he was named an honorary co-chair of Jackson’s transition team. The mayor from 1990 to 2010 said his title is “more ceremonial than anything.” “If they ask for input or how did things work before, I’d be glad to give it to them,” he said. “I look at it as an honor more.” Palmer endorsed Jackson during the mayoral campaign, though attacks on Jackson often lumped the two together. Jackson was criticized by opponent Paul Perez, whom he defeated in last Tuesday’s runoff election by approximately 1,150 votes, for being a continuation of the city’s failed administrations during the past 24 years. Palmer said he met with Jackson’s transition team on Saturday and discussed the relationship between the state and the city. “I talked about how we need a strong relationship to work with the Christie administration and the county and how to bring that about,” said Palmer, who recently moved to Princeton from Yardley, Pa. “I gave my opinions about how that should happen. (MIDJersey Chamber of Commerce CEO) Bob Prunetti was in the meeting too, a good example of how he and I work together.” Palmer also had a role on Jersey City Mayor Steven Fulop’s transition team where he attended daily meetings and compiled reports. But Palmer said his stint with Jackson is much different. “I’m not day-to-day,” he said. “I’m not telling people what to do. If asked, I’ll give my input, but I’m very confident in the team the mayor-elect is putting together.” The former city mayor said State Sen. Shirley Turner (D-Mercer/Hunterdon), Assemblywoman Bonnie Watson Coleman (D-Mercer/Hunterdon), who recently won the Democratic primary for U.S. Rep. Rush Holt’s seat in Congress, and Mercer County Executive Brian Hughes were also selected as honorary co-chairs of Jackson’s transition team. “I’m just delighted to be asked to help move Trenton forward and to help the Jackson administration in any way that I can, particularly as it relates on the state level,” Turner said Monday. “I think it just reinforces my belief that he is the best candidate that was running because he has a real dynamic, very bright, talented professional team to work with him on his transition.” The longtime 15th district legislator said she had conversations with Jackson throughout the campaign. “I’ve said to him each time we talked that Trenton has many, many challenges and we agreed that most of those challenges are priorities,” Turner said. “We’re all very supportive. He has a huge load on his shoulders.” Turner is confident in Jackson. “He has really hit the ground running and he hasn’t had much time,” she said. “I’m just confident that he is going to do an outstanding job in terms of Trenton moving forward and getting Trenton back on the right track.” Jackson, a former city public works director under Palmer’s administration, said Monday that a press release of his complete transition team will be made public at some point this week. “We’re in the mode for transition right now so we’re prepared to move on July 1 expeditiously,” he said. “That’s the main thing, trying to make sure we can gather information.” Jackson said he will be meeting with acting Mayor George Muschal on Wednesday to “go over a few items and keep the ball rolling.” “He’s been excellent in regard to communication and allowing access,” the mayor-elect said.
Mid
[ 0.625570776255707, 34.25, 20.5 ]
The present invention relates to an improvement in an agricultural implement such as, for example, a front attachment of an agricultural harvester and to a hat-like conveying element for the improved agricultural implement. For that matter, the invention is not limited to agricultural implements designed as front attachments. Agricultural implements are known from the prior art to comprise a first, central, and fixed implement section and multiple second implement sections which adjoin the first implement section on both sides thereof. The second implement sections are displaceable relative to the first implement section. When the agricultural implement is a front attachment for an agricultural harvester, such as, for example, a maize front attachment for a forage harvester, the central first implement section is also referred to as the central trough and the displaceable second implement sections adjacent thereto are also referred to as arms. The invention is not limited to agricultural implements designed as front attachments. The agricultural implement also can be a tillage device or a haying machine such as, for example, a swather. EP 1 464 214 B1 makes known a harvester comprising a front attachment which has a central first implement section and two second implement sections on each of the two sides thereof. The inner second implement sections are each displaceable, with respect to the first implement section, about a pivot axis, wherein the outer second implement sections, which adjoin the inner second implement sections opposite the first implement section, are likewise displaceable about pivot axes with respect to the particular inner second implement section. The implement sections of the implement are displaceable with respect to one another between a working position and a transport position in such a way that the second implement sections adjacent to the fixed first implement section extend toward one another, in the transport position, so as to form an obtuse angle or in the manner of a roof. A further front attachment of an agricultural harvester is known from EP 2 111 740 B1. This front attachment also comprises a first, central, and fixed implement section and second implement sections which adjoin the first implement section on both sides thereof and are each displaceable, specifically in such a way that the inner second implement sections are each displaceable, with respect to the first implement section, about a pivot axis, and the outer second implement sections, which adjoin the particular inner second implement section opposite the first implement section, are each displaceable about a pivot axis relative to the particular inner second implement section. The implement sections are asymmetrically displaceable, specifically in such a way that, in the transport position, all the working implements are disposed in four layers which extend parallel to one another. It is known from the prior art that each implement section of such an agricultural implement comprises mowing mechanisms and/or conveying mechanisms. Mowing mechanisms are used for cutting and conveying the crop, whereas the conveying mechanisms are used exclusively for conveying crop which already has been cut. In the transport position of the implement, the mowing mechanisms and/or conveying mechanisms of implement sections, which are displaceable relative to one another, must not collide with one another.
The dimensions of the mowing mechanisms and/or the conveying mechanisms are therefore limited in order to ensure a compact transport position.
Low
[ 0.533333333333333, 33, 28.875 ]
The development of a clinical syndrome of asymptomatic pancreatitis and eosinophilia after treatment with clozapine in schizophrenia: implications for clinical care, recognition and management. Clozapine, the first atypical antipsychotic, is indicated for the treatment of therapy-resistant schizophrenia. It needs to be monitored closely because of its well-known potential side-effects, especially agranulocytosis. We present a case of a middle-aged woman with chronic schizophrenia, who was treated with clozapine and developed a clinical syndrome of asymptomatic pancreatitis and eosinophilia during the fifth week of treatment. Asymptomatic pancreatitis has rarely been reported up to now and is not recognized as a typical side-effect of clozapine. In our opinion, pancreatic enzymes should be monitored especially in the first 6 weeks of clozapine treatment.
High
[ 0.658602150537634, 30.625, 15.875 ]
Residents of Ocracoke and vendors will be given toll-exempt status until the Hatteras ferry channel resumes operations. The division is monitoring traffic and will make additional changes if needed. Operations at the Hatteras-Ocracoke ferry route were suspended Jan. 18 until further notice, due to the ferry channel at markers #9 and #10 becoming completely shoaled over and impassable. The channel falls under the jurisdiction of the U.S. Army Corps of Engineers, which has hired a contractor to dredge the channel. Work is under way, but it could take several weeks of dredging before the channel is safe for ferry operations to resume.
Low
[ 0.474088291746641, 30.875, 34.25 ]
Anybody looking for a pair of rear brake rotors? They are drilled and slotted. I bought them off WS6store back in May and still haven't put them on yet. And I changed my mind on them. So if you know anybody looking for some let me know. They fit 98-02 I would imagine, LS1. Bought them for like $100. They are brand new still in the shipping box they were sent to me in lol. Only have been taken out once for me to take a look at them. If you are interested make me a reasonable offer and I'll pay the shipping.
Low
[ 0.477777777777777, 26.875, 29.375 ]
Influence of donor/recipient HLA-matching on outcome and recurrence of hepatitis C after liver transplantation. The aim of this study was to analyze the effect of human leukocyte antigen (HLA) matching on outcome, severity of recurrent hepatitis C and risk of rejection in hepatitis C positive patients after liver transplantation (LT). In a retrospective analysis, 165 liver transplants in patients positive for hepatitis C virus (HCV) with complete donor/recipient HLA typing were reviewed for recurrence of HCV and outcome. Follow-up ranged from 1 to 158 months (median, 74.5 months). Immunosuppression consisted of either cyclosporine-A- or tacrolimus-based quadruple induction therapy including an interleukin-2 receptor antagonist. Protocol liver biopsies were performed after 1, 3, 5, 7, and 10 years and staged according to the Scheuer scoring system. The overall 1-, 5-, and 10-year graft survival figures were 81.8%, 69.1% and 62%, respectively. There was no correlation in the study population between the number of HLA mismatches and graft survival. The number of rejection episodes increased significantly in patients with more HLA mismatches (P < 0.05). In contrast, fibrosis progression was significantly faster in patients with 0-5 HLA mismatches compared to patients with a complete HLA mismatch. In conclusion, HLA matching did not influence graft survival in patients after LT for end-stage HCV infection; however, despite fewer rejection episodes, fibrosis progression increased within the first year after LT in patients with fewer HLA mismatches.
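The group comparison behind a finding such as "rejection episodes increased significantly in patients with more HLA mismatches (P < 0.05)" is typically a simple two-group test. A minimal, hypothetical sketch follows; all counts are fabricated for illustration, and the choice of a Mann-Whitney U test is an assumption, since the abstract does not state which test was used:

```python
# Sketch of the kind of comparison reported: rejection-episode counts in
# patients with fewer (0-5) vs. a complete set of HLA mismatches.
# All counts below are fabricated placeholders, not the study's data, and
# the test choice (Mann-Whitney U) is assumed, not stated in the abstract.
from scipy.stats import mannwhitneyu

rejections_low_mm  = [0, 1, 0, 0, 1, 0, 2, 0, 1, 0]  # 0-5 mismatches
rejections_full_mm = [1, 2, 1, 3, 2, 1, 2, 2, 1, 3]  # complete mismatch

stat, p = mannwhitneyu(rejections_low_mm, rejections_full_mm,
                       alternative="two-sided")
print(f"U={stat}, p={p:.4f}")  # p < 0.05 would mirror the reported direction
```

A nonparametric test is a reasonable default here because episode counts are small, discrete, and unlikely to be normally distributed.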
Mid
[ 0.644549763033175, 34, 18.75 ]
“It was very, very frightening, I can tell you that,” Margie Grilley recalled after watching her neighbor’s house catch fire. “I would look up and it was just coming down, like heavy snow or rain. It was very scary.” Someone living on the other side of White Bear Lake spotted a fire on the back porch of a home and called 911. Firefighters described the scene they pulled up to as a rainstorm of ashes and burning pieces of wood flying through the air. Within 45 minutes, the house was gone. The people who lived there were out of town, only to return Friday morning. They told WCCO off-camera that there’s nothing they can do but move forward. The realities of the fire are just setting in for Mike Nightingale and his family. “It didn’t really register at first, and then I saw my neighbor’s house,” he said. A firefighter woke him up saying he and his two young kids needed to get out. “The kids didn’t want me to leave them alone, and I wanted to see what was going on,” he said. The dry conditions and gusting winds pushed the flames from his neighbor’s home onto his. While smoke and fire ruined the outside, water destroyed the inside. But, he said, all is not lost. “We’re finding some of the things in the house; kind of was a little more uplifting,” Nightingale said. “A lot of the pictures on the walls were still there. A lot of sentimental stuff was still there.” Neighbors said they used their hoses to water down their yards so the embers wouldn’t spark fires at their homes. Fire investigators say they’re not sure how the first fire started.
Mid
[ 0.607538802660753, 34.25, 22.125 ]
Massively Overthinking: What would you want out of a World of Warcraft 2? Let’s pretend for a moment that the reason Blizzard appears to be dropping the ball with World of Warcraft right now is that it’s working on World of Warcraft 2 for announcement at some point once Classic is out the door. It’s pure speculation, but fun speculation, and really, wouldn’t you be shocked to find out Blizzard isn’t incubating a new Warcraft game? So that’s our topic for Massively Overthinking for the week: What would you want to see from WoW 2? And maybe more importantly, what don’t you want to see in WoW 2? Andrew Ross (@dengarsw): In WoW 2, I’d like to see something like The Secret World or PlanetSide 2. Leveling skills instead of your character, factions but the ability to play the game together, open world, varied gameplay, plenty of tools to teleport players to the field, flying vehicular combat… and probably some kind of mobile app companion. Maybe a pedometer with some mini-game to level up pets or earn mounts. Oh, and player housing. Seriously, Blizz, you wanted fishing like Animal Crossing but couldn’t get it to work, so maybe try housing like Animal Crossing – just something fun but light. What I wouldn’t want to see is a huge push for raiding. Scalable content would be nice, no doubt, but a living world would be nicer. The lore writers can only do so much. Let players play out scenarios that actually change the way the game’s developed. Bring back the RP in MMORPG! Brianna Royce (@nbrianna, blog): When World of Warcraft was first announced, I pretty much shrugged – I had no affinity or affection for the franchise, so to me it was just another MMO among the many dozens that were in production back then. Fast-forward a few years and I could see its potential and had a few years of good memories to build on. I really loved the Wrath period. I loved the Pandaria period. I don’t even completely hate the Burning Crusade and Vanilla periods. Along the way, the way was lost. I perceived that “way” as inclusivity, for want of a better word – the designers back then seemed to recognize that it had a giant, unwieldy, messy playerbase with lots of playstyles, and it tried, in varying degrees, to cater to all of them rather than pacify the majority in favor of catering to a small but loud raiding minority. So I suppose in WoW 2 I’d want to see the “way” restored. I don’t really care about the lore that much, but I’d want to see a more robust themepark (elaborate quest lines, please) with higher-end graphics and sandbox elements (like farms and housing) that aren’t discarded every expansion, PvP that isn’t sidelined, crafting and economy with some heft, and plenty of content and rewards for small and large group PvE. Ideally, I’d like it to be fully cross-platform too – yes, console, PC, and mobile. I don’t want WoW 2 to be untrue to WoW or Blizzard. I just want it to be its best self, and WoW isn’t that right now and hasn’t been for a long time, no matter that hope buoys its box sales every other year. The reboot wouldn’t be my favorite MMORPG of all time because I will probably always be a crafter-centric everythingbox gamer at heart, but it’d be a clean slate and fresh start for everyone and something I’d put a lot of time into. Chris Neal (@wolfyseyes, blog): I feel like I’ve mentioned this before either in chat with the MOP staff or in some other chat somewhere else, but I would love to see WoW 2 done like a Kingdoms of Amalur: Reckoning.
I’m talking a more freeform combat system and level progression, a large open world, and maybe some all-new narrative beats that hit the story’s reset button. Failing that, I also wouldn’t hate it if they tried a few things that EverQuest Next wanted to do in terms of its exploration and sandbox proclivities. Or maybe a combination of both. Eliot Lefebvre (@Eliot_Lefebvre, blog): Wow, is it really already time for a sequel to World of Warcraft? The answer, obviously, is “of course not.” It was time for a sequel a long while ago; now we’re into that point of the running time wherein the original has gone on for too long and you just cannot seem to get to a concluding point. But that’s not the actual question; it’s about what I’d want to see in such a sequel. Bear in mind, we’re going afield for this one. I think there’s actually a lot of space for this as a potential thing taking place after this particular period of Azeroth’s history, but I think that the best route to go starts with removing the factional limitations. That doesn’t mean removing factions, though; it means further splintering, anchoring around some of the biggest racial factions. Orcs, Trolls, Tauren, Goblins, Humans, Dwarves, Blood Elves, Night Elves… instead of having an overarching pact between multiple different groups, these groups pursue their own objectives, a bit more closely related to the now-classic strategy games insofar as they aren’t strictly allied or opposed in all instances. Moreover, since there are so many variants we see on races, I’d like to see things go a bit further by allowing you to apply templates to the various playable races. You’re not just confined to being a Death Knight; you can be an Orcish Revenant, or a Fel Tauren, or a Lightforged Night Elf, and so forth. Every race has a few different variants if they’re not outright universal. Of course, that’s partly at odds with the idea of Hero classes, but that’s also part of the point. No, I’m not proposing a classless system; quite the opposite because Warcraft loves its classes. It even has multiple odd classes that have their own unique tricks, and while WoW has tried to combine a lot of those different identities into shared classes, I’d rather see them just get split back up again. You choose one of the now-common nine “core” classes, but that simply determines which of the game’s many, many different subclasses you can pick up. For example, let’s say that you make yourself a Lightforged Blood Elf Paladin. Sure, that makes sense. This means you start as a Blood Knight and get access to those abilities as you level up. But your Blood Knight level isn’t the same as your overall level; by the time you’re level 20 Blood Knight, you’ve learned everything unique to Blood Knights and you still have, say, 40 levels to go to the level cap. So where do you go? Why, you head down to Stormwind, you negotiate with the trainers, and you start your training as a Level 1 Silver Hand Paladin. This is meant as a combination of both D&D-style prestige classes and Job System-style artistry. As you level, you equip individual “careers” as level blocks, so (for example) your Blood Elf ends up with Blood Knight, Silver Hand Paladin, and Sunwalker as three 20-level blocks. But what if you want to learn, say, how to be a Spellbreaker? You pick that up and level it, enjoying the benefits of the level cap while also gaining experience and improving your Spellbreaker. And not every career has 20 levels; some have 10, some have 5.
You can custom-tune what you’ve got set up, and you can always pick up something new. As a result, you never need to introduce new classes, but every class has lots of options for what can be accessed, and you can restrict entry into any given career based on core class. (For example, a Paladin can learn to be a Footman or a Grunt or a Spellbreaker, but not how to be a Pyromancer or a Demonologist.) Add in some good level syncing, some class-based customization to abilities and interplay (so, say, a Warrior and Paladin might train the same careers but have wildly different gameplay), a decent set of questing and level scaling lessons learned from The Elder Scrolls Online, and some dungeon lessons learned from Final Fantasy XIV? Yeah, I’d play that. What would I actually expect? Tab-targeting Overwatch without an ounce of irony. Justin Olivetti (@Sypster, blog): This is a tough question because I guarantee you that no two fans would agree on this. Some would merely want an engine upgrade, while others would advocate for a top-to-bottom revamp. While I think that any sequel would be ill-advised, as there is no way that Blizzard could top itself (and thus be saddled with a sequel that was less desirable and less popular than the original), the only way that such a project could achieve decent success would be to present a new type of MMORPG, different from what Blizzard has been running for so long. Too similar invites comparisons, while a radically different game would at least be its own animal. What that animal would be… I honestly don’t know. Virtual reality, sandbox, mobile integration, time traveling, integration with World of Warcraft – it could utilize any of these. But no matter how different a WoW 2 would be, it would still need to have that World of Warcraft feel that is the hallmark of the MMO. Casually accessible, stylistic, and polished to a fault.

I don't see the need of WoW2. WoW has dated very well, it looks way better than many MMOs that came after it. Anything they could do with WoW2 they can do in WoW. But if they ever do it, I want it to be built around VR.

Reader Veldan: Are you kidding? It looks outdated af. Every big MMO from after 2010 looks better.

Reader Oleg Chebeneev: WoW looks amazing for 15 years old and there aren't many post-2010 MMOs that look better.

Reader Danny Smith: Start with a timeskip and losing the faction war sports team mentality horseshit. Just take the whole Alliance vs Horde thing out back and put it down. Personally I would like to see something a bit closer in setting to Fable 2 after Fable. The first was swords and sorcery and isolated towns connected by traders on foot, but the second was large cities with cobblestone streets, townhouses and factories with steam trains and boats connecting places much faster. I mean there has to come a point after years of the explorers guild researching Ulduar that they go ‘hey that train thing seems like a useful idea instead of goats and kodos right?’. In the long term it feels like the natural progression for the setting. Many races like orcs, humans, dwarves, gnomes and goblins would naturally follow a path to nationwide industrialisation and going beyond steampower as a whole. But then does that cause some divide between them with races like night elves and tauren?
Do you skip forward on Azeroth 200 years and Gilneas is a repopulated metropolis of industry with factories belching smog into the air, while their former night elf allies become more withdrawn as their connection to nature makes them seem more distant and alien than even the draenei to future generations? Or does it cause a schism in their societies themselves? Would some Tauren fully reject the advancement of technology and stick to their nomadic tribal roots while others, if you pardon an unintentional cowboy pun, make the south of Kalimdor their own wild west where a tauren with a repeater rifle hunting outlaws for bounties is more common than a cowman in beads and feathers praying to the earth mother? The only real ‘war’ Warcraft as an MMO ever had between the factions was the whole orcs vs night elves deal in the forests of northern Kalimdor. Thrall for no sensible reason decided to settle the Horde capital in a dustbowl shithole with no resources so they need the lumber, but the night elves see the logging as an abhorrent affront to nature, and a natural clash occurs without the need of Sylvanas-level moustache-twirling bullshit. I think a big time skip and a worldwide industrial revolution occurring would be the thing to spark far more understandable conflict on Azeroth without needing to turn it into comic book fan fiction where character personalities do 180s. Maybe actually mix up the races' ideals again. Maybe humans are becoming a little too greedy and expansionist and more Ashvane than Proudmoore in nature after being on top for so long? Maybe a couple centuries after kicking the habit and having their Sunwell back, the blood elves became far more isolationist again? Or night elves' numbers dwindled to such a degree they are no longer an organised people but extremely technophobic woodland recluses that people barely see, and exist more as a fantasy version of hit-and-run eco-terrorists that see orcs and humans working together as one big chain of industry as some Andrew Ryan-tier nightmare waiting to happen? Or Blizzard just resets the world with some action combat and pushes the red vs blue meme even harder and injects esports and more minibuys, if we are realistic about something they will never make anyway :p

Reader Tony McSherry: It’s fairly obvious from some of the comments that a few people haven’t played WoW in a long time. For me and a lot of the other millions that still play, WoW is the best it’s ever been and will continue to improve. WoW is like a gorgeous, multi-faceted gemstone that constantly gets new facets and polish. It’s going to lose players constantly as they change personally or decide the latest changes represent the last straw. Like Eliot, you can pull out your jeweller’s loupe and search for flaws, which of course it has, but you miss out on enjoying its beauty and depth. It also gains new players, which most commenters seem to forget. Often the attrition rate is higher than recruitment and it may eventually fail or reach a steady state, but that’s still a while in the future. I played WoW Classic the other day and it was a lesson in how much it’s improved. Running my Druid through the Barrens was nostalgic, but the quests soon depressed me. I think I walked over the body of Mankrik’s wife a few times before I noticed her (no sparkles around the quest items) and harvesting boar tusks for no apparent reason from NPCs at the same level or higher soon had me logging off permanently.
How did I find Mankrik’s wife given there are no directions and she’s pretty far away from Crossroads? Well I’d been through that long ago and if I remember correctly, I got the answer from a WoW database in the end, as wandering around aimlessly has never been fun for me. No problem with people who want to enjoy Classic, but it’s too much like masochism for me. My typical daily hour or two in WoW gives me lots of choices. I can do the main story quests, side quests, world quests, world bosses, warfronts, island expeditions, rare NPCs and bosses, invasions, PvP battlegrounds, arenas, world PvP and even dungeons and raids. So far in the current expansion, I’ve done no dungeons or PvP and only LFRed a few times, but unlike earlier days, my two main characters are well geared and effective. The quests are many and varied and I’ll make choices based on my preferences for the day. Do I want mini games like battle pets and the Tortollan series (gotta save those baby turtles) or sending my followers off, or do I want vehicle quests, or am I driven by a need for a drop or rep or Azerite or a mount or pet etc., or do I just want a big boss to kill? Most importantly, it offers short-term and long-term goals. WoW’s tech is solid. With phasing, the results of my actions can be permanent. With shared realms, there’s always a group a button click away or there’s enough other players to take down a difficult NPC or boss without grouping, and it’s virtually seamless. The total number of players may have dropped, but it seems more populated than ever. They’ve also solved the PvP/PvE problems, so the only way I can be flagged for PvP is to travel through the wrong area or flag myself. All classes are finally effective at PvE and I play as a druid healer all the time. I could go on, but this post is too long already. Reading through the comments you can see everyone wants different things – sandbox/no sandbox, better graphics/not important, action combat etc., etc. WoW’s not perfect and never will be, but it does keep changing and, in my opinion, improving. I don’t think there will ever be a WoW 2, just WoW with better graphics, some VR and more stories, quests and whatever Blizzard dreams up in the future, and I’m happy with that. It doesn’t scratch my itch for a space sim MMO, which was why I was a backer of the space sim that should not be named. Luckily, I have the Dual Universe alpha for that, which is about all I can say until the NDA is lifted ;-)

Reader Lucky Jinx: Well, it definitely should not be a sandbox, for that’s not WoW, and I like sandboxes as a concept. I don’t really have a proper answer for this because Blizzard has always been pretty good at taking existing ideas and improving upon them, so WoW 2... damn, I just don’t know. Maybe it’s just not meant to be? Whatever they’d do, it would be underwhelming compared to their previous success.

Reader Chosenxeno: WildStar was WoW 2. /thread

Reader Lucky Jinx: Nope.

Reader Nate Woodard: No. It definitely wasn’t. And that’s why it’s dead.

Reader fansid: Challenge from the beginning.
Reader Corey Evans The staff responses are disheartening and, quite frankly, the reason MMOs have been stuck in a rut for the last 10 years. Jeez, y'all. You can dream up any kind of concept you want, and all you guys go for in your imagined best version of WoW 2 is marginal, iterative improvements? This ain't Madden 19 into 20, for Pete's sake. This is your imagined turning of one of the most important video games of the last two decades into another potentially all-decade sequel. "I'm not mad, just disappointed." =p For my part, I would say: – Making WoW 2 look like what the WoW trailers look like, for a start. Where a giant, eight-foot-tall orc swinging a four-foot club would actually be a visceral, impactful experience. Where animations have weight and the thing just feels more modern. It doesn't need to be slow and clunky like Dark Souls, but making it look like the hack-n-slash-happy Korean MMOs isn't it, either. – Giving players unique, meaningful gameplay without having to replay static content. This could be SO FREAKING EASY to do in a minimum viable way without expending much effort. Instead of some named boss that respawns in the same place because the dungeon is cool or whatever, how about more generic 'boss-style mobs' with all the same tricks and mechanics, but in random places throughout the world? That way YOU ALONE (or your group) would be the one to save the Westshire farm from the rampaging whatever-it-is. That way you get a quest from a farmer – "Hey, my farm is being ravaged. Can you help me out?" – and you can have a more personal, if not grandiose, impact on the world. If you die, the farm is not saved, but if you survive, the farm is no longer under attack (for a time, since it's a dangerous world). Reader Nate Woodard Dude, if that's all you want, go play GW2. Trust me. It's very underwhelming. But, like you said, Blizzard is the king of taking someone else's ideas and improving upon them, so who knows?! Maybe they could actually pull it off. Reader Anstalt A sandbox. WoW is the only MMO with enough of a playerbase to be able to make a success of a sandbox. We've never, ever had a AAA sandbox MMO in the West, and it's about time someone did one. In order to have a positive effect on the genre, the first one really needs to be a success so that other devs won't be put off, so WoW 2 is the natural candidate. Beyond that, my personal preference is for horizontal progression (so that we're not building artificial barriers between players, so that we can play together more easily), a deep combat system (so that I'm not bored within a few hours of playing, as happens in all action combat systems) and objective-based world PvP (because large-scale PvP fighting over keeps and objectives is my favourite activity… but it needs to be consensual, and no full looting, please!). It will never happen; I dislike Blizzard and they don't make games for people like me. But you never know. Reader Utakata Just to note (as I've seen this mentioned a few times), if WoW 2 were released it would be presumed the graphics engine would be much more advanced, both visually and functionally, and hopefully much more optimized than the current live game.
So I am not sure why folks are going on about "graphics updates" when this would most likely go without saying. Therefore, this should be the least of our worries, while content and game mechanics should be our biggest concern, IMO. Reader Matt Comstock True! It's why it's at the bottom of my list ;) But I suppose it could have been left off entirely.
Mid
[ 0.5456570155902001, 30.625, 25.5 ]
The absorption of iron, calcium, phosphorus, magnesium, copper and zinc in the jejunum-ileum of control and iron-deficient rats. The effects of iron deficiency on the absorption of different dietary sources of iron were studied, together with the interactions between iron, calcium, phosphorus, magnesium, copper and zinc in the jejunum-ileum of control and iron-deficient rats. In this study, three perfusion solutions containing different iron sources were used: ferric citrate, haemoglobin, and equal parts of ferric citrate and haemoglobin. In addition, the same perfusion solutions were used with and without 2,4-dinitrophenol, an inhibitor of oxidative phosphorylation. Iron absorption in anaemic rats was greater than in the controls, except after perfusion with solutions containing haemoglobin. The absorption of calcium, copper and zinc in iron-deficient animals was not significantly affected, while the absorption of phosphorus and magnesium increased with respect to animals in the control group. After perfusion with solutions containing haemoglobin, the absorption values of calcium, copper and zinc were lower than after ferric citrate in both groups (control and iron-deficient rats).
Mid
[ 0.6270022883295191, 34.25, 20.375 ]
Mechanical dyssynchrony or myocardial shortening as MRI predictor of response to biventricular pacing? To investigate whether mechanical dyssynchrony (regional timing differences) or heterogeneity (regional strain differences) in myocardial function should be used to predict the response to cardiac resynchronization therapy (CRT). Baseline mechanical function was studied with MRI in 29 patients with chronic heart failure. Using myocardial tagging, two mechanical dyssynchrony parameters were defined: the standard deviation (SD) in onset time (T_onset) and in time to first peak (T_peak,first) of circumferential shortening. Electrical dyssynchrony was described by QRS width. Further, two heterogeneity parameters were defined: the coefficient of variation (CV) in end-systolic strain and the difference between peak septal and lateral strain (DiffSLpeakCS). The relative increase in the maximum rate of left ventricular pressure rise (dP/dt_max) quantified the acute response to CRT. The heterogeneity parameters correlated better with acute response (CV: r = 0.58; DiffSLpeakCS: r = 0.63; P < 0.005) than the mechanical dyssynchrony parameters (SD(T_onset): r = 0.36; SD(T_peak,first): r = 0.47; P = 0.01), but similarly to electrical dyssynchrony (r = 0.62, P < 0.001). When a heterogeneity parameter was combined with electrical dyssynchrony, the correlation increased (r > 0.70, P_incr < 0.05). Regional heterogeneity in myocardial shortening correlates better with response to CRT than mechanical dyssynchrony, but should be combined with electrical dyssynchrony to improve prediction of response beyond the prediction from electrical dyssynchrony only.
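To make the parameter definitions concrete, here is a minimal sketch of how such indices could be computed once per-segment onset times and strain values have been extracted from the tagged MRI. This is illustrative only, not the authors' analysis pipeline; the Segment shape and the septal/lateral grouping below are simplifying assumptions:

// Illustrative computation of the dyssynchrony/heterogeneity indices
// defined in the abstract. Inputs are assumed to be pre-extracted per
// myocardial segment; field names and units are hypothetical.

interface Segment {
  onsetTimeMs: number;        // T_onset of circumferential shortening
  endSystolicStrain: number;  // end-systolic circumferential strain
  peakStrain: number;         // peak circumferential strain
  wall: "septal" | "lateral" | "other";
}

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
const sd = (xs: number[]) => {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
};

// Mechanical dyssynchrony: SD of onset times across segments.
function sdOnset(segments: Segment[]): number {
  return sd(segments.map((s) => s.onsetTimeMs));
}

// Heterogeneity: coefficient of variation of end-systolic strain.
function cvEndSystolicStrain(segments: Segment[]): number {
  const strains = segments.map((s) => s.endSystolicStrain);
  return sd(strains) / Math.abs(mean(strains));
}

// Heterogeneity: difference between mean peak septal and lateral strain
// (assumes at least one segment per wall).
function diffSeptalLateralPeak(segments: Segment[]): number {
  const peak = (wall: Segment["wall"]) =>
    mean(segments.filter((s) => s.wall === wall).map((s) => s.peakStrain));
  return peak("septal") - peak("lateral");
}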
High
[ 0.690958164642375, 32, 14.3125 ]
WASHINGTON — The Obama administration is ending the policy that granted residency to Cubans who arrived in the United States without visas. That's according to a senior administration official, who said the policy change was effective immediately. The official said the U.S. and Cuba have spent several months negotiating the change, including an agreement from Cuba to allow those turned away from the U.S. to return. The move comes about a week before President Barack Obama leaves office and is likely the last major change he will make to his overhaul of the U.S. relationship with Cuba. The official insisted on anonymity in order to detail the policy ahead of an official announcement.
Low
[ 0.5331010452961671, 38.25, 33.5 ]
/* tslint:disable no-unused-expression */

// Global styles

/*
  By default, this file does two things:

  1. Importing `styles.global.scss` will tell Webpack to generate a
     `main.css` which is automatically included along with our SSR / initial
     HTML. This is for processing CSS through the SASS/LESS -> PostCSS
     pipeline.

  2. It exports a global styles template which is used by Emotion to
     generate styles that apply to all pages.
*/

// ----------------------------------------------------------------------------
// IMPORTS

/* NPM */
import { css } from "@emotion/core";

/* Local */

// Import global SASS styles that you want to be rendered into the
// resulting `main.css` file included with the initial render. If you don't
// want a CSS file to be generated, you can comment out this line
import "./styles.global.scss";

// ----------------------------------------------------------------------------

// Global styles to apply
export default css`
  /* Make all <h1> tags orange */
  h1 {
    background-color: orange;
  }
`;
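As a usage illustration (not part of the original file): the exported template would typically be attached to the page with Emotion's Global component, which the same @emotion/core package provides. The App component and file name below are hypothetical:

// app.tsx (hypothetical entry component)
import React from "react";
import { Global } from "@emotion/core";

import globalStyles from "./styles"; // the template exported above

const App: React.FC = () => (
  <>
    {/* Injects the global rules (e.g. the orange <h1>) into every page */}
    <Global styles={globalStyles} />
    <h1>Hello, world</h1>
  </>
);

export default App;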
Low
[ 0.508163265306122, 31.125, 30.125 ]
Here's a thought experiment: Imagine you're running a major cable TV network and your fastest-growing distributor (and largest, by number of subscribers) offers to license your content for approximately $300 million each year, a sum that is about 10 times the amount it has been paying under the current deal struck less than 3 years ago. The new deal would have a very material impact on your P&L as your company's operating income last year was about $400 million. Seems like a pretty tough offer to turn down, right? However, there are certain catches. First, this distributor is considered a disruptive competitor by all of your other long-time distributors (who collectively paid you about $1.3 billion last year). If you proceed with this new deal, you're concerned that these other distributors may retaliate by paying you less when they renew their deals in the future. Second, this distributor wants a degree of exclusivity that limits your ability to make incremental deals with companies it deems as competitive. Third, key suppliers of your content have escalation clauses that entitle them to incremental payments if you proceed with this new deal, which would in turn erode your margins. And last, but not least, the manner in which this distributor wants to compensate you would alter the way you are positioned in the market - from a "premium" to a "basic" channel - consequently risking a perception that your content will be irreparably devalued by consumers and other distributors. Got all that? If so, then you grasp the quandary that Starz's executive team found itself in as it evaluated a huge license renewal offer from Netflix. Last Thursday Starz announced its decision, choosing to rebuff Netflix's rich offer, at least for now. But as the math below shows, combined with what I've learned from individuals familiar with Starz's economics, Netflix's putative $300 million/year offer was far more than Starz could generate otherwise, making its decision to walk away all the more difficult. In its statement, Starz said its "decision is a result of our strategy to protect the premium nature of our brand by preserving the appropriate pricing and packaging of our exclusive and highly valuable content." In other words, Starz wanted more than just big money from Netflix, it also wanted Netflix to change how it offered Starz content to its subscribers: from being part of an all-inclusive monthly fee (as it has been, starting at $8/month) to being available incrementally on a higher-level, more expensive tier. Starz's hope was that by aligning Netflix with how pay-TV providers offer Starz on various premium tiers, the above concerns would be ameliorated. Premium tiers for networks like Starz, HBO and EPIX are important to the studios who supply these networks because they're a cornerstone of the "windowing" model that allows the studios to monetize their films multiple sequential times. In fact, Sony and Disney already had provisions in their agreements with Starz that would trigger as Netflix grew bigger (this is the reason Sony's movies were pulled from Netflix a few months ago). The problem with premium tiers is that consumers have to be persuaded to subscribe to them, an increasingly difficult challenge as basic services have grown ever more expensive. Between the sluggish economy, proliferating ways of accessing films on various devices and historically high churn rates, being on a premium tier means that only a relatively small percentage of addressable homes actually become paying subscribers. 
To help understand how much revenue Starz may have made from being on a newly-created premium tier from Netflix, I've created 3 scenarios below based on potential tier prices of $7/mo, $10/mo and $15/mo at 4 different penetration levels, 2.5%, 5%, 10% and 20%. I assume Starz would keep 50% of the revenue generated, which is probably generous. Note that only 2 of these scenarios - full 20% penetration, at $10/mo and $15/mo - generate revenue for Starz that exceeds the $300 million Netflix was offering to keep Starz available to all of its streaming subscribers. And the likelihood of achieving these levels is practically zero, since they would entail Netflix more than doubling the total monthly fees these subscribers would be paying, significantly dampening interest. In fact, it's virtually inconceivable Netflix would even go down the premium tier path in the first place, since it would mean pulling out and charging more for prized content that has been available as part of its core service. Think about the uproar caused by Netflix's recent move to charge separately for DVD and streaming and you get a sense of the revolt that would ensue. Stepping back, the $300 million would have paid Starz the equivalent of about $.83 per Netflix subscriber per month, assuming an average 30 million Netflix streaming subscribers in 2012. That amount would likely place Starz in the lower part of the top 10 of basic cable networks, in a range with USA Network for example. But this would be only about 20-25% of what Starz gets from other pay-TV operators that distribute it as a premium network. The fundamental incompatibility of Starz being treated as a basic network by Netflix, while trying to maintain its position and value as a premium network by all other pay-TV distributors is where this renewal deal broke down. Despite the fact that Starz subscribers have actually nudged higher since the original Netflix deal went live, and that there's no tangible proof that it has actually caused any cord-shaving or cord-cutting, pay-TV operators have made no secret of their unhappiness that Starz's content is available at a far lower price point via Netflix. A big new multi-year renewal deal would have raised all kinds of risks for Starz. In particular, how Starz's 3 largest pay-TV distributors, Comcast, DirecTV and DISH Network, which together accounted for 56% or $744 million of Starz's 2010 revenue, might react to a new Netflix deal was no doubt upper-most for Starz's management. What if, when it was time for them to renew, they each cut their offers by 25%? That alone would mean almost $190 million in lost revenue by Starz. And if other distributors were emboldened to play hardball, the full $300 million in gains from Netflix could quickly evaporate. Then there's the ire of Sony and Disney Starz would have to contend with, plus the higher payouts to them that would be triggered. And finally there's the opportunity cost of not being able to pursue incremental deals with Amazon, Wal-Mart/Vudu, Apple and others due to some degree of exclusivity that Netflix was requiring. In this context, although the $300 million from Netflix was almost certainly more than a premium tiered approach would generate for Starz, by insisting on it as a renewal condition, Starz was clearly more focused on the multiple risks to its existing ecosystem of partners than on the prospect of guaranteed additional income. 
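The scenario table the article refers to did not survive in this copy, but the arithmetic is fully specified in the text: monthly tier price x penetration x 12 months x the assumed 50% Starz share, applied to the ~30 million average streaming subscribers cited above. Here is a small sketch that reproduces the grid (illustrative, not VideoNuze's own spreadsheet):

// Reproduces the premium-tier revenue scenarios described in the article.
// All inputs come straight from the text: 30M average streaming subscribers,
// a 50% revenue share to Starz, and the $300M/year offer as the benchmark.

const SUBSCRIBERS = 30_000_000;
const STARZ_SHARE = 0.5;
const OFFER = 300_000_000;

const prices = [7, 10, 15];                   // $/month tier prices
const penetrations = [0.025, 0.05, 0.1, 0.2]; // share of subscribers who opt in

for (const price of prices) {
  for (const pen of penetrations) {
    const annual = SUBSCRIBERS * pen * price * 12 * STARZ_SHARE;
    const beats = annual > OFFER ? " <-- exceeds the $300M offer" : "";
    console.log(
      `$${price}/mo at ${(pen * 100).toFixed(1)}% penetration: ` +
      `$${(annual / 1e6).toFixed(0)}M/yr${beats}`
    );
  }
}
// Only $10/mo and $15/mo at 20% penetration clear $300M ($360M and $540M),
// matching the article's claim that just two scenarios beat the offer.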
With the current deal not expiring for another 6 months, there's still the possibility of a resolution between Starz and Netflix. Like many other content providers these days, Starz is walking a tightrope trying to balance rich new online opportunities with the value of its traditional partners.
Mid
[ 0.5477178423236511, 33, 27.25 ]
/*
 * The MIT License (MIT)
 *
 * Copyright (c) 2018 Nathan Osman
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 */

#include <QSocketNotifier>

#ifdef Q_OS_UNIX
#  include <sys/signal.h>
#  include <sys/socket.h>
#  include <signal.h>
#  include <unistd.h>
#endif

#include <nitroshare/signalnotifier.h>

#include "signalnotifier_p.h"

#ifdef Q_OS_UNIX

// Socket pair used to trigger the signal ("self-pipe trick": the
// async-signal-safe handler writes a byte, which the Qt event loop
// then picks up through a QSocketNotifier)
int socketPair[2];

// Signal handler that writes data to the socket
void signalHandler(int)
{
    char c = 0;
    write(socketPair[0], &c, sizeof(c));
}

#endif

SignalNotifierPrivate::SignalNotifierPrivate(SignalNotifier *parent)
    : QObject(parent),
      q(parent)
{
#ifdef Q_OS_UNIX
    // Create the socket pair, install handlers for SIGINT & SIGTERM, and
    // emit SignalNotifier::signal whenever the read end becomes readable
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, socketPair) == 0) {
        struct sigaction action = {};
        action.sa_handler = signalHandler;
        sigemptyset(&action.sa_mask);
        action.sa_flags = SA_RESTART;
        if (sigaction(SIGINT, &action, 0) == 0 &&
                sigaction(SIGTERM, &action, 0) == 0) {
            connect(
                new QSocketNotifier(socketPair[1], QSocketNotifier::Read, this),
                &QSocketNotifier::activated,
                q,
                &SignalNotifier::signal
            );
        }
    }
#endif
}

// TODO: cleanup during destructor

SignalNotifier::SignalNotifier(QObject *parent)
    : QObject(parent),
      d(new SignalNotifierPrivate(this))
{
}
Mid
[ 0.544464609800362, 37.5, 31.375 ]
External leg corrections in the unitarity method
Ruth Britto, Edoardo Mirabella
Institut de Physique Théorique, CEA-Saclay, F-91191, Gif-sur-Yvette cedex, France

Acknowledgments

We are grateful to Leandro Almeida for collaborating in the early stages of this project. We thank Zvi Bern and David Kosower for insightful comments and feedback on a draft of this manuscript, and Alexander Ochirov for corrections to the first version. We acknowledge Simon Badger and Harald Ita for useful discussions. This research was supported in part by the National Science Foundation under Grant No. NSF PHY05-51164; we thank the KITP for its hospitality. R.B. is supported by the Agence Nationale de la Recherche under grant ANR-09-CEXC-009-01. E.M. is supported by the European Research Council under Advanced Investigator Grant ERC-AdG-228301.
High
[ 0.670520231213872, 29, 14.25 ]
Dubai's opera house will be multi-purpose An opera house to be built in Downtown Dubai will feature multi-purpose technology that can convert it into an exhibition space or banquet hall. According to the National, the design will allow for 900 of the house's 2,000 seats to be removed via hydraulic technology - located underneath the facility when not in use - and put in storage out of the way to open up more space. The news came during Cityscape Global, which featured Theatre Projects Consultants alongside the other two firms designing the project. Director of Theatre Projects Consultants Tom Davis likened the system to a school gym, where bleachers fold away, but said it is much more advanced. He said: "What we do is use a series of mechanisms that collapse into each other. It's a process called spiralling. The seats are compressed into a cassette at the bottom and then into a column. It has been used before in a number of venues, but the challenge for us is to do it on this scale and to world standards." The 60,000 sq m opera house is also designed and shaped to resemble a traditional sailing vessel. It is set to open in 2015 and was originally announced last year by Dubai's leader Sheikh Mohammed bin Rashid to form the focal point of a 500-acre culture area in Downtown Dubai, which aims to attract more high-end tourists to the city. Its potential use for both entertainment and business represents a larger, growing dynamic within the emirate. It is well known for both, with other locations such as the Dubai World Trade Centre hosting a variety of each. Within less than a month, the centre has recently featured Dubai Music Week and Game 13. It is also set to host Gitex Technology Week on October 23rd. The design of the culture district was also advised by Mirage Leisure and Development. Dene Murphy, the company's founder, said big shows often need two years' notice before booking, with a number of acts already in discussions as a result. He said promoters need to see the opera house as standing on the world circuit.
Mid
[ 0.6406570841889111, 39, 21.875 ]
Wednesday, December 03, 2008 THE TURKISH CONSTITUTION AND THE FACADE OF DEMOCRACY, PART 3 "And as the elections get closer, the following messages will be delivered secretly: 'If we can weaken DTP and militarily control PKK, we will recognize your rights without facing any clashes by convincing the general staff.'" ~ Mithat Sancar, Ankara University. Below is the third and final part of the interview Taraf's Neşe Düzel conducted with Ankara University's Mithat Sancar. Part 1 can be found here and Part 2 is here. ND: Why did AKP become so hawkish on the Kurdish question and come to see military operations as the only solution? Can it remain the only party this way? MS: Someone had convinced AKP that with such hawkish strategies it could finish DTP and weaken PKK. AKP thinks that the more hawkish it becomes, the more powerful it will become in the Southeast. And as the elections get closer, the following messages will be delivered secretly: "If we can weaken DTP and militarily control PKK, we will recognize your rights without facing any clashes by convincing the general staff." In the election, they will field candidates who will signal that they have not given up the policies they promised. In this way, AKP is planning to annihilate the Kurdish political movement. There is a thesis that if AKP loses the Southeast, the only bridge between the Turkish east and west will collapse. ND: What kind of thesis is this? MS: This is very dangerous, because this thesis rests on an understanding that ignores the importance of political representation through one's own identity. Yet the demand for representation through one's own identity is very important for a democratic solution to ethnic conflict. The existence of DTP, or a party like it, will reinforce Kurdish unity with Turkey because it will make Kurds feel they are represented. If DTP is annihilated, the possibility of speaking out [with] Kurdish identity and the feeling of political representation will weaken. And it is at that point that the dissolution of Kurds from the country and from the state will begin. DTP's loss will not make the problem easier. On the contrary, it will make the solution more difficult. ND: In AKP's cadres, it's as if there's a chauvinistic discourse. The national defense minister said that if the Rom and Armenians hadn't gone, we could not be the nation-state we are today. How do you comment on this speech? MS: In a word: I read his speech with great fear. The mentality which approves of the Armenian Genocide, and confesses as much, wants to base its problem-solving on homogeneity. This mentality cannot solve its Kurdish and non-Muslim problems. On the contrary, it will produce more violence and more pain on the Kurdish question. ND: Does the minister advocate the Armenian forced migration? MS: Clearly, he says the forced migration was necessary, that it had to be. My main concern is this: there wasn't any criticism from either the government or the prime minister about Vecdi Gönül's words. ND: Where does he find this courage? MS: This is the problem. AKP has been pulled by opposing tides since 2005. The further it gets from its democratization goal, the more deeply nationalistic AKP is revealed [to be].
Today the nationalistic vein in AKP is greater than ever because, to date, there were three main forces that kept AKP from being a statist, right-wing party: the EU, the democrats inside AKP, and the democrats outside it. Today the AKP administration excludes all three. ND: Is Turkey sliding into racism? MS: In terms of political culture and daily life, Turkey is becoming a society with a powerful racist vein. Everyday racism has increased a lot in Turkey. I mean that racism has become normal. The disasters created by the normalization of racism can be understood by looking at examples around the world and the pains that result [from racism]. This racism destroys the base of democracies, and it makes it easier in Turkey to polarize society and then drag people to the point where they can slaughter each other. In fact, in the last year, torture incidents in Turkey, deaths in prison and in detention, and shootings on the streets have all increased. All these incidents are related to each other. ND: Has Turkey become militarized? MS: The pace of the last five months is not a civilian pace; it is a militarized pace. The things we are living through today are signs that we are being dragged into militarism, nationalism, authoritarianism. If the prime minister maintains this hawkish discourse, Turkey will polarize more deeply and move farther away from democracy. Nationalism, authoritarianism and racism will envelop all parts of Turkey. Turkey will become a ghetto. However, we must not forget that there are democrats in Turkey whose power is not proportional to their numbers. ND: How can they be effective? MS: They are effective because Turkey has a powerful conscience. I trust in Turkey's conscience. And let's not forget that there are also democratic people in AKP. By the way, there was a little something a few days ago on Southern Kurdish reaction to the Status of Forces Agreement (SOFA) in Iraq. From IPS via Alternet: "Kurdish leaders have very fervently talked about approving the agreement and have appeared to be like the number one attorneys for this deal," Nawshirwan Mustafa, a former deputy to Iraqi President Jalal Talabani, wrote in Sbeiy, a Kurdish news website he founded. Mustafa resigned from Talabani's Patriotic Union of Kurdistan after disagreements over the party's management style. "They [Kurdish leaders] have thought they should unconditionally support whatever America does and consider it as good."
Mid
[ 0.5400843881856541, 32, 27.25 ]
Media The Secret Ingredient is Silicon Feedback from The Battery Power Show Sep 24, 2018 The Battery Show in Novi, Michigan, the largest battery show in America, was attended by over 8,000 industry professionals and 600 vendors. One clear take-away: the electric vehicle trend is large and gaining momentum. Countries including France, Norway, the Netherlands, Germany and India have put regulations in place to ban combustion vehicles, spurring rapid growth. The overall theme of The Battery Show was improving battery technology, and a clear example of this was the prevalent discussion of how to use silicon to get the greatest gain in power capacity (watt-hours). Coretec met with several companies eager to evaluate and leverage CHS' value in making their own advanced Li-ion batteries. As a logical next step, Coretec will work with them to cement plans to evaluate CHS as a premier Si precursor to improve the performance of their next-generation Li-ion batteries. The Technical Conference was also well attended, including Dr. Elgammal's presentation entitled "Tunable Syntheses of Advanced Silicon Anodes Using Cyclohexasilane," which was very well received by attendees. In fact, many of those in the room had immediate follow-up questions for Dr. Elgammal and Michael Kraft. Overall, the experience expanded Coretec's potential customer base, positioned the technology very well as a premier Si precursor, and identified several strategic partners to help Coretec with R&D as well as distribution and customer service. The growth of the EV sector noted above, combined with the extensive growth of electric mobility, drives huge demand for lithium-ion batteries. According to some market reports, the lithium-ion battery market will reach $100-140B by 2026-2027 with a 17% CAGR, making it one of the largest and fastest-growing markets worldwide. Billions of dollars are invested annually to improve the current Li-ion battery, which has reached 90% of its theoretical energy storage capacity. With the largest improvement in energy capacity coming from adding silicon, the potential exists for a 300% improvement.
High
[ 0.6599496221662461, 32.75, 16.875 ]
BHEL commissions 600-MW thermal plant in Telangana Bharat Heavy Electricals Limited (BHEL) has commissioned a 600 MW coal-based thermal power plant in Telangana. The unit has been commissioned at the upcoming 2x600 MW Singareni Thermal Power Project (TPP) located in Adilabad district. The project is developed by Singareni Collieries Company. This is the second thermal plant commissioned by BHEL in Telangana within three months. Earlier, in December 2015, BHEL had commissioned a 600 MW thermal power plant at Kakatiya. The second unit of Singareni TPP is in an advanced stage of construction. Each BHEL-built 600 MW set comprises a four-cylinder turbine designed in-house, demonstrating BHEL's engineering prowess. So far, the company has contracted 21 sets of 600 MW each, of which 15 have been commissioned.
Mid
[ 0.6343283582089551, 31.875, 18.375 ]
% Options for packages loaded elsewhere
\PassOptionsToPackage{unicode}{hyperref}
\PassOptionsToPackage{hyphens}{url}
%
\documentclass[a4paper]{ubarticle}
\usepackage{graphicx}
\usepackage{amssymb,amsmath}
\usepackage{parskip}
\usepackage{tabu}
\usepackage{longtable}
\makeatother
\usepackage{seqsplit}
\usepackage{hyperref}
\hypersetup{
  pdftitle={DPPPT API},
  hidelinks,
  pdfcreator={LaTeX via pandoc}}
\urlstyle{same} % disable monospaced font for URLs
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
  \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{-\maxdimen} % remove section numbering

\title{DPPPT API}
\date{\today}
\author{pepp-pt}

\begin{document}

\begin{titlepage}
%\includegraphics[width=7cm]{ubique-logo.png}
\hspace{4.3cm}
{\raggedleft \textbf{DP3T} \\
%\hspace{11.5cm} Niederdorfstrasse 77 \\
% 8001 Zürich \\
\vspace{0.3cm}
\par}
\vspace{3cm}
{\Huge {{title}} \par}
\vspace{1.5cm}
{\huge Documentation \par}
\vspace{3cm}
{\large \today}
\end{titlepage}
\thispagestyle{empty}
\clearpage

\tableofcontents
\clearpage

\include{introduction}

\part{Web Service}

\subsection{Introduction}
A test implementation is hosted on: {{host}}.

This part of the documentation deals with the different API endpoints. Examples for fields are put into the models section~\ref{sec:Models} to increase readability. Every request lists all possible status codes and the reason for each status code.

{{#requests}}
{{> request }}
{{/requests}}

\part{Models}
\label{sec:Models}
All models used by the endpoints are described here. For every field we give examples, to give an overview of what the backend expects.

{{#schemas}}
{{> schema }}
{{/schemas}}

\end{document}
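A note on how this file is consumed: the {{title}}, {{#requests}}, and {{> request }} tags mean it is a Mustache template that must be rendered into plain LaTeX before compilation. The sketch below is illustrative only (the actual DP3T documentation generator is not shown here); the file names, partials, and view fields are assumptions inferred from the tags above, using the mustache.js package:

// render.ts (hypothetical) - fills in the LaTeX/Mustache template above.
// Assumes esModuleInterop; run with: npm install mustache && npx ts-node render.ts
import { readFileSync, writeFileSync } from "fs";
import Mustache from "mustache";

const template = readFileSync("documentation.mustache.tex", "utf8");

// Partials referenced by {{> request }} and {{> schema }} in the template
// (file names are assumptions).
const partials = {
  request: readFileSync("request.mustache.tex", "utf8"),
  schema: readFileSync("schema.mustache.tex", "utf8"),
};

// Minimal view matching the template's tags; real data would come from a
// description of the web service's endpoints and models.
const view = {
  title: "DPPPT API",
  host: "https://example.org", // placeholder host
  requests: [{ path: "/v1/exposed", method: "GET" }],
  schemas: [{ name: "ExposeeRequest" }],
};

writeFileSync("documentation.tex", Mustache.render(template, view, partials));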
Low
[ 0.529411764705882, 33.75, 30 ]
Thursday, July 29, 2010 One day in the summer he was climbing a steep path toward a busy crossroads when, in an absent-minded daydream experienced by anyone walking familiar streets with only boredom and solitude to share, he saw among the cars and pedestrians the profile of a long-lost friend. It has to be a daydream, he thought, identical to the one back then, when they were still close. He had glimpsed her on the same street, except that time walking towards him and beside someone else. A shock enough for him to take refuge in the darkness of an adjacent shop, reeling from the revelation and hoping that neither had looked ahead, only to discover a few seconds later, as they passed, that it wasn't them at all, merely everyday strangers. So, he thought, it has to be an illusion, only now there is no need to hide. It wasn't her. And what if it had been? No matter how well-attuned one is to the light, I think, life remains hooded by such fictions. They shadow the mundane present; stories overwritten mostly, sometimes flaring for a few hours, sometimes days, sometimes branded for years in a synaptic loop. Together they form a consciousness veiled by invention, hallucination and stupefaction. A good reason not to live, I think, if the alternative were not so much worse: a life exposed to the light. And then she turned toward me and smiled. We chatted for five minutes, perhaps more. At the time, I knew it wasn't much. We exchanged small talk about work and health, queries about long-lost mutual friends and about our current activities; the usual stuff. It really wasn't much. We carried on to our destinations in opposing directions, slightly diverted, nothing more. What happened next is the source of what is written here. Next is five months later, in the cocoon of winter. Late one night I began writing at the top of a blank page. Sometimes it is necessary to turn white to black. I began writing about that chance meeting. As I could not quite remember the conversation, I let the pen find the words. Except, rather than the words themselves, what emerged were memories of the physical shifts and gestures between us; the awkward corners and delicate pauses. Spaces grew around the words and resonated with the past we shared. Was this the person I had met, or even the person I knew back then? The questions staggered me. Was I imagining it? After one and half pages, the writing ended. I've written nothing like it since and perhaps never will. [Moving beyond fiction] This summer, the one and a half pages of notes became a fetish for me, offering the possibility of a more elemental form of writing, one which dissolves well-attuned habit and reveals an alternative life; not, that is, a different life but the one waiting to be discovered. Why else would a few hundred words scratched out in a brief, forgotten time stir me while all the intricate ideas, elaborate plans and laborious executions leave me blank and disconnected? On what does the appearance of its alternative depend? Chance alone it would seem. While it would not be presumptuous to dismiss such writing as occasional autobiographical digressions carrying its charge in the singular impact it has on the writer, this would obscure what needs to be isolated as unique to writing. But how can it be maintained or codified into a public form? I was reminded of these questions as Geoff Dyer and Lee Siegel added to the surge of voices condemning the worldly disappointments of contemporary fiction and instead advocating creative non-fiction. 
Both arguments rest on the notion of the novel as a means of narrating events in the empirical world and of engaging readers with company, information and meaning. The novel may be the apotheosis of "characterisation, observation and narrative drive" but now it has a more worldly equal. Given the examples offered, it's no wonder the war reportage Dyer celebrates appears more vital, exciting and relevant, while Siegel's call (couched in tabloid sneer) for literary fiction to be more commercial and realistic in order to "illumine the ordinary events of ordinary lives" also seems fair if we assume that war and peace are the poles between which real life spins; a roadside bomb and a divorce spraying shrapnel into flesh and spirit. So how can the writing that stirs me – haphazard, unworldly – respond to these rousing condemnations? First, we have to recognise the limits of the prevailing distinction between fiction and non-fiction. Both Dyer and Siegel appeal to cultural relevance to justify the relegation of contemporary literary fiction. For one, the fact that war is "the big story of our times – the al-Qaida attacks on New York and the Pentagon, and the subsequent wars in Iraq and Afghanistan" means that "long-form reporting ... has left the novel looking superfluous" while, for the other, a fast-tracked biography of Barack Obama is "overflowing with sharp character portraits," has "keen evocations of American places" and is, moreover, "a ripping narrative". Both cite novels from the past as exemplary and now impossible because, according to Dyer, "the time has passed" when "human stories contained within historical events ... could only be assimilated and comprehended when they had been processed by a novel". Siegel makes perhaps the more telling observation that novel writing has become a profession rather than a vocation, thereby producing novels in which "carefulness ... cautiousness [and] professionalism" are considered desirable literary attributes rather than "existential urgency and intensity". Certainly these latter qualities are to the fore in war reportage and in Janet Malcolm's article cited by Siegel but, from the evidence supplied, they appear to rely on familiar techniques of genre fiction. The old-fashioned quest to tell a story drives these books rather than anything beyond themselves – a connection, for instance, with what escapes the rhetoric of style and technique. Indeed, Dyer says one war book is "like a traditional third-person novel" giving "the chaos of events ... narrative shape" with "scrupulous observation and phrasing" spiced with "damaged lyricism" (a soldier's ruptured skull echoes an earlier description of the moon). In raising voices against the new type of book, Dyer first offers the straw man of an unnamed "fiction lobby" who say it's "too soon to tell" if the novel is out-dated, and, second, the fact-checking culture of the New Yorker. Deviation from the latter – a "willingness to digress" from strict factual accuracy – seems to be the only border war reportage shares with fiction. That's it. The "new" form involves arguing for this small creative licence. It is only right at the end of the essay that Dyer introduces a more challenging voice of opposition in the form of Martin Amis' thirty-year-old critique: Amis claims that the non-fiction novel, as practised by Mailer and Capote, lacks "moral imagination. Moral artistry. The facts cannot be arranged to give them moral point. There can be no art without moral point. 
When the reading experience is over, you are left, simply, with murder – and with the human messiness and futility that attends all death." The essay is an old one, and the point can now be seen to contain its own limitation and, by extension, refutation. We are moving beyond the non-fiction novel to different kinds of narrative art, different forms of cognition. Loaded with moral and political point, narrative has been recalibrated to record, honour and protest the latest, historically specific instance of futility and mess. Dyer is right that Amis adds little weight to counter his argument. Moral point is an inevitable consequence of all writing. However, its lightness may be deceptive because Dyer's dismissal appeals only to taste over judgement and immediacy over vintage. I would suggest Amis is wrong, instead, only because the new books seek to eliminate art (if not craft), or at least our perception of it, and this would be, in its ambition, the very height of art and, thereby, the height of morality, even if it is a studied amorality. "Art always throws off the appearance of art" according to Adrian Leverkühn in Thomas Mann's Doctor Faustus. We should remember who helped guarantee his art throw off its appearances. Despite the respectable ambition, it's hard to see from what has been presented how these invariably America-focussed war books are in any way "moving beyond" anything or in any way different from the careful, cautious and professional fictions of which Lee Siegel is so contemptuous. Their "existential urgency and intensity" emerges first from their subject matter and then, more significantly, from a fiercely limited perspective. Death lurks around each corner; soldiers can live or die in the next sentence. Unlike in the novel, the author here has no control over life and death. In this – perhaps paradoxical – way, war reporting has erased chance from writing. Paradoxical because, while chance fills the lives of the soldiers, it is erased in the telling: everything is necessary, already written in nature. This is of course a particularly thrilling reading experience – the illusion of extreme chance while one is safely removed, at rest with a book. While these narratives may appear to represent a "different form of cognition", it is merely a symptom of the triumph of genre. The only essential difference Dyer offers between it and Kathryn Bigelow's fictional confection The Hurt Locker is in its credibility. One we know is fiction because it is presented as such even if, in the telling, we are persuaded to believe, while the other is presented as truth "with multiple layers of dreadful, unresolved irony". But how can we know which is credible and which not without first having been convinced of non-fiction? We have, after all, not been soldiers in the front line. The issue is one of trusting the sincerity of the author. What is happening here is the familiar trajectory of a loss of suspension of disbelief followed by a knowing cynicism eager to be seduced again. Except, the only thing that has changed is that the magical force of fiction has been renewed elsewhere, in light disguise. Behold, the Emperor has new clothes. [A realm beyond light] I began this piece with a run-of-the-mill story of memory and imagination skewing an ordinary experience and then how its reconstruction in words changed the perspective, enabling the writer to loosen the self's armour of habit, perhaps opening it to danger, perhaps to relief. 
While I recognise its banal nature, I think it offers an insight into a more worthwhile, time-independent distinction between fiction and non-fiction or, better still, between formal adventure and storytelling. Next I need to describe more respectable examples. It is well known that after initially resisting the idea, Henry James used a notebook to develop plots for stories and novels. Often these ideas were taken from anecdotes heard in drawing rooms or salons. The Turn of the Screw is a famous example, taken from an outline given by the Archbishop of Canterbury who himself was only relating a story told by "a lady who had no art of relation". The notebook repeats the outline and adds that the story "is all obscure and imperfect" yet recognises "a suggestion of strangely gruesome effect in it". Such obscurity may have convinced a lesser writer to abandon the story or to develop it to the point where it becomes a familiar ghost story. Indeed, the editors of the notebooks insist this latter project is all the writer intended. Of course James does neither. He makes the decisive move to have the story told "by an outside spectator, observer", in this case the governess who takes a job in the house where the events unfold. But why decisive? In his short essay on the notebooks, Maurice Blanchot uses The Turn of the Screw as an example to show how the development in the notebook of the obscure and imperfect aspects of the story led to its unique qualities. By deciding to place a step between the narration and the events in the form of the governess' letter, the plot of the story becomes the lucidity and obscurity of the governess' experience. James uses the distance between the real words and the real world to create the ambiguity of the children's innocence ("one of his most cruel effects"): ... an innocence which is pure of the evil it contains; the art of perfect dissimulation which enables the children to conceal this evil from honest folk amongst whom they live, an evil which is perhaps an innocence that becomes evil in the proximity of such folk, the incorruptible innocence they oppose to the true evil of adults. [from The Sirens' Song, trans. Sacha Rabinovitch] The complexity of this ambiguity may be easily correlated to the narration of writers embedded in an occupying army among the ghostly, recalcitrant servants of Afghanistan. The governess becomes the imperial force invading an alien land, seeing danger and evil everywhere except in itself. In fiction, however, the reader is astute enough to recognise the governess may not be reliable: It is she who talks about [ghosts], drawing them into the imprecise space of narration – that unreal beyond where everything is apparition, slippery, evasive, present and absent – that symbol of a lurking Evil which is, according to Graham Greene, James' subject matter and is perhaps only the satanic core of all fiction. The implications of these observations are that what we think of as "plot" undergoes a change. From first being considered merely the thrilling sequence of empirical events orchestrated by a masterful author, plot is now the coercive presence of narration itself: "a presence seeking to penetrate the heart of the story where [the governess] is an intruder, an outsider forcing her way in, distorting the mystery, perhaps creating it, perhaps discovering it, but certainly breaking in, destroying it and only revealing the ambiguity which conceals it." 
The plot of The Turn of the Screw then is "quite simply James' talent", that is: the art of stalking a secret which, as in so many of his books, the narration creates and which is not only a real secret – some event, thought or fact which might come to light – nor a simple case of intellectual duplicity, but something which evades elucidation because it belongs to a realm beyond light. James is a peculiar case in literary history because his fiction was written at the height of the Victorian era and then over the cusp of the outbreak of Modernism. He lived in an era, as Blanchot says, "when novels were not written by Mallarmé, but by Flaubert and Maupassant". Except Mallarmé was writing then and, as Peter Brooks has made clear, James was infected by his time amongst the radical artists of Paris, however long the virus lay dormant. The attractive question for us is: are we living through a similar shift toward "different kinds of narrative art"? If there are indeed "different forms of cognition", then Geoff Dyer and, most prominently, David Shields in his Reality Hunger, are merely outriders for a new literary epoch. [Masters of war] Of course, it could be that they're just unwitting conservative backsliders unable any longer to tolerate the perennial challenge of the imagination. But how would this manifest? Perhaps in one of the most notable aspects in Dyer's piece about war reportage: its circumspect phrasing; this passage in particular: August sees the publication of Jim Frederick's Black Hearts, which investigates the disintegration, under intolerable pressure, of a platoon of American soldiers of the 502nd Infantry Regiment in Iraq's "triangle of death" in 2005-06, culminating in the rape and murder of a 14-year-old Iraqi girl and the execution of her family by four members of the platoon. The focus according to this summary is on the soldiers exposed to "intolerable pressure" rather than the monstrosity of their crime; that is, it is much like the bigger picture of the war according to official inquiries and polite opinion: a tragic procedural error for want of better management planning. One or two of the commenters to The Guardian's website where Dyer's essay appeared have already pointed out the warzones featured in these books are "home" to many and books about them are conspicuous by their absence. What is their experience, what "intolerable pressure" leads them to defend their sovereign lands with armed and political resistance, just as they did with our enthusiastic support less than thirty years before? Isn't this also "the big story of our times"? The nearest we seem to get is a brief mention of Lawrence Wright's The Looming Tower in which "complex and developing individuals" bear "the weight of larger historical drives or circumstances". Except this is a history book, hardly a radically new kind of narrative art. So, then the issue becomes: how might a writer begin to approach "the enemy" with anything like the same embedded empathy as displayed for those with the "acronym-intensive argot ... worldview of the USMC"? It's a huge, intractable issue, perhaps necessarily so. For Dyer, however, "the biggest question mark about this [epic, ongoing, multi-volume work in progress] concerns the way in which it is illustrated". [Shadows and shimmerings] In these eye-level narratives, the moral point that Martin Amis sees as missing is the moral point precisely. Their evasion is as necessary to the books as it is to the military action itself. 
In their forensic attention to detail and narrative drive, they match the military's unflinching prosecution of executive orders. I'm reminded here of the standard efficiency and disinterested perception of Maximilien Aue, the narrator of Jonathan Littell's The Kindly Ones, as he pursues military orders. The reader of this book is forced to confront the contradiction of a deeply cultured and vigilant man with whom we are compelled to identify who also takes an active role in mass killings. Aue is aware that this is part of a necessary career path, even if he claims to find it unpalatable. Whilst massacring perceived enemies, he claims the alibi of the search for knowledge. (His modern, real life equivalent may be found in a character like General Stanley McChrystal for whom audiobooks replace Aue's Plato and Aeschylus). As The Kindly Ones is a novel, the narrative is able to implicate itself in its evasions by opening onto the consequences; the writing done by evasion. It has this in common with The Turn of the Screw: a luminosity terrifying for the shadows it casts. The ambiguity of knowledge and ignorance, innocence and guilt, good and evil, plays its part in some of the great novels that Geoff Dyer, David Shields and others regard as supplanted by non-fiction. As I've described, Blanchot argues that the plot of The Turn of the Screw is the very stalking of a secret elucidated in a realm beyond light which, because it is thereby also beyond darkness, still irradiates each sentence. It means the story is potentially as evanescent as the ghosts haunting the governess. It takes a special writer to follow the chimera shimmering on the horizon without losing touch with it or his readers. Blanchot aligns such fragility with the plot of Kafka's The Trial, and includes a perfect formulation of the novel's soul: "The story of a man pursued by his own conscience as though by some invisible judge before whom, precisely because he is invisible, he cannot justify himself" which, he concedes, "can hardly be said to constitute a story, let alone a novel", yet it is for Kafka the essence of his life: "a guilt whose weight is overwhelming because it is the shadow cast by innocence." Writing manifests both innocence and guilt; "a sweet and wonderful reward" he tells Max Brod in a letter, "the reward for serving the devil". Joseph K's adventures are then comical and disturbing in equal measure because the narrative moves between darkness and light without him or the reader being able to judge which is which. It is a story borne on its own anxiety for solace and closure. So, with this in mind, we can wonder again how non-fiction war reportage might partake of the apparently unique power of fiction to countenance the turning of the screw. Perhaps, if the genre in which they are constructed is, as Dyer explains, determined by a culture of magazine journalism in which "current events", the "big story of our times" and "characterisation, observation and narrative drive" replace the shimmering and shadows, it is a literary oxymoron and thereby inconceivable. If it isn't, then asking the question, merely wondering aloud, is perhaps the first step on the path. Writing of the kind I have raised in contrast to war reportage also seems contingent on breaking certain silences and privacies. It often emerges from morbid isolation uncongenial to the security of public discourse to which Dyer and Siegel appeal. 
While we know about Franz Kafka's loneliness, Blanchot notes that James was like all artists in that he profoundly mistrusted himself – "Our doubt is our passion and our passion is our task" – and wished only to be able to let go in order to enter that realm beyond light. Instead, distance became the necessary passion and in turn it generated the narration which enabled the cruel effects of The Turn of the Screw. It is also an effect of the "essential loneliness" James expressed in his letter to Morton Fullerton: a loneliness "deeper about me ... than anything else: deeper than my genius, deeper than my 'discipline,' deeper than my pride, deeper, above all, than the deep countermining of art." Yet he also appreciated his notebook as a magical arena in which chance rather than facts and experience enters the creative process. As he wrote in his little book, his became "the deciphering pen", and he experienced what Blanchot calls "the pure indeterminacy of a work"; a time full of possibility and hope; perhaps even an end to loneliness. For some, however, writing which enters and maintains itself in an abandoned space disturbed only by ghosts can no longer be justified. They turn instead to narratives "recalibrated" to accept the rewards underwritten by empire. (We should remember that Brod was not the only person who would refuse his request, as Kafka surely knew. His girlfriend Dora Diamant, with whom Kafka lived in Berlin, away at last from the claws of Prague, retained twenty of his notebooks and held them for nine years after his death. His instructions were followed only when, in 1933, they were confiscated by the Gestapo; a fact that should put into perspective the moral ambivalence commonly attached to Brod's actions.) The sad story of Esther Hoffe's legacy may at least help us to appreciate what it means for a work to be lost. Imagine the non-existence of The Judgment and The Trial. If you find it impossible, try imagining their existence. Thursday, July 08, 2010 Harvard University's website offers for download in PDF a conversation between Christie McDonald and Leslie Morris about the acquisition of the proofs of Maurice Blanchot's L’Entretien infini [The Infinite Conversation] by Harvard's Houghton Library: They were described by the seller: "[these] may be the only remaining materials reasonably describable as 'manuscripts' to have been preserved from among his effects at his death in 2003, and it was only by chance that these survived. They were salvaged from the rubbish-bin by the husband of Blanchot’s long-time housekeeper." All were priced accordingly. An appealing story, and sure to whet the collector's appetite with its claim of extreme rarity, a 'last chance' to own a piece of one of France's most important literary theorists. Was it true? You can read the answer yourself. It includes a description of the content of the book itself: L’Entretien infini is a book largely constituted from work written between 1958 and 1969. The book crosses disciplines (literary criticism, philosophy, and political thought) and genres, presenting a series of fragmentary dialogues (with anonymous interlocutors), meditations, and complex arguments. It is widely considered his theoretical masterpiece, and the proofs bear witness to the reformulations of Blanchot’s thought during this period: his continuing search for a form through which to express them. The conversation also reveals the minor, indirect influence a certain literary blog had in starting the process of acquiring the manuscripts. 
(Link via Charlotte Mandell).

Tuesday, July 06, 2010

I suspected once that any human life, however intricate and full it might be, consisted in reality of one moment: the moment when a man knows for all time who he is.

Borges in Other Inquisitions.

On June 29th, when I posted similar quotations from Beckett and Blanchot, I had a sense of déjà vu. Hadn't I raised the coincidence before? As it happens, yes, on December 29th, 2003 at In Writing, a short-lived shared blog that no longer exists.

There is one aspect of Kierkegaard's work that can never be taken over and carried forward, either by philosophers or theologians, and that is his incommunicable existence.

Paul Ricoeur, in Kierkegaard: A Critical Reader

Perhaps the moment is the moment one knows that moment can never be communicated.

Everyone would agree that the story of [Kierkegaard's] existence constitutes something quite unique in the history of thought: the dandy from Copenhagen, with his bizarre engagement to Regine, the devastating critique of Bishop Mynster, the unfortunate victim of the Corsair, the sick man dying in the public hospital – none of these characters can be repeated, or even correctly understood. But of course the same applies to any other existence as well. But the case of Kierkegaard is exceptional all the same: no one else has ever transposed autobiography into personal myth as he did. By means of his identifications with Abraham, Job, Ahasuerus and several other fantastical characters, he elaborated a kind of fictive personality which conceals and dissimulates his real existence. And this poetic character – like a character in fiction, or the hero of a Shakespearean tragedy – can never be situated within the framework or landscape of ordinary communication.

Looking at the In Writing archive, I note with surprise that the 69,000 words attributed to me were written in a single year. Looking a little closer I notice quotations I have reused since with the same violence of appropriation. To try to resist this, I've copied quotations as they occur to me, taken from desultory scanning of my notebooks.

Of course, what is offered to our philosophical understanding, and withheld as well, is a character, a hero, created by his own writings; an author, the creature of his works, an existing individual who has de-realized himself and thus avoided capture by any known discipline. He does not even fit in with his own 'stages on life's way'. He was not enough of a seducer, a Don Juan, to be an aesthete. Nor did he succeed with his life of ethics: he was unmarried and childless, and he did not earn his living by his profession, so he was excluded from the ethical existence described by Judge Wilhelm in Either/Or. But if Kierkegaard failed to live either an aesthetic or an ethical life, what then of religion, in this sense? Surely the Christianity he described is so extreme that no one could possibly practise it. The subjective thinker before God, the pure contemporary of Christ, suffering crucifixion with Him, without church, without tradition, and without ritual, can only exist outside of history.

To refer then to Kierkegaard as "the father of Existentialism" is a means of entering history and avoiding what makes his work exceptional.

We remained in the station on a wooden bench. We spent the night, and I left before him. Even now I find it really astonishing and very moving. It was a kind of madness, idiocy, to travel from Munich to the Jura to pass a few hours of the night with me.
It was utterly inhuman to sit next to a being whom you sense desires you so much and not even to have been touched. Above all, I thought, I must be very careful with everything I say to him because he understands things in quite an alarming way, in an absolute way.

Gabrielle Buffet-Picabia in Calvin Tomkins' Duchamp

Since that year, my two fellow In Writing bloggers have published five books, with two more forthcoming.

You cannot refute Kierkegaard: you must simply read him, consider, and then get on with your work – but with your eyes fixed upon the exception.

From blog to book suggests progress; from Hell to Purgatory. A book promises a system in its unity as a book between two covers. In contrast, literary blogging is a frenetic, damnable repetition. Each week, the same stories are raised by the same people; the same people are compelled to reply with the same futile points and refutations.

[There are] considerable similarities between Glenn Gould's musical views and Thomas Bernhard's prose style. [Both] artists appreciated the fugal nature of Baroque music, which mixes without dissolving the differences between two, three and even four distinct voices.

Mark Anderson, afterword to The Loser

It has to be said that the avalanche of new books is also a crushing repetition; a hell of sorts. This isn't simply a hysterical comparison. Teodolinda Barolini has explained how, as in the experience of life, Dante's Inferno is a narrative journey "predicated on a principle of sequentiality, on encounters that occur one by one ... in which each new event displaces the one that precedes it" and indeed that the entire Commedia is "informed by a poetics of the new". A poetics of the new is different from the new itself in that Dante seeks to untangle himself from the unseen loop. The pursuit of the new is, however, as Dante discovered after leaving his dark wood, a form of despair. WH Auden recognised the possibility that the gates of Hell are always standing wide open and that the damned are perfectly free to leave whenever they like, only they don't because it would mean admitting that the gates are indeed open and that another life is possible. They are addicted to their present existence in which suffering defines who they are and each moment of agony is a new event (necessarily so). Except, it isn't so straightforward. Barolini points out that the angels in Heaven remember nothing and nothing new ever occludes their sight. They remember nothing because they can see everything: "they have no need of memory / since they do not possess divided thought". Heaven can be hellishly boring.

It would be very unjust to say that you deserted me, but that I was deserted, and sometimes terribly so, is true.

Kafka, Diaries 1922.

What I notice in the quotations so far is their focus on an escape from history, from change: Borges' moment; Kierkegaard's existence without, his ineffable exception; Duchamp's "madness" and absolute understanding; the condition of the angels. I'm reminded of Cioran on Beckett: "He does not live in time but parallel to time". Then there's Kafka's sentence which does not seem to fit. So would Paradise be the book with no memory, with a vision uninterrupted, unable to be distracted by the infernally new? Perhaps. But such a book is already the outside, an exception to the book. The same problem of writing remains.
After my death no one will find even the least information in my papers (this is my consolation) about what has really filled my life; find the inscription in my innermost being which explains everything and what, more often than not, makes what the world would call trifles into, for me, events of immense importance, and which I too consider of no significance once I take away the secret note which explains it.

Kierkegaard, Papers & Journals.
Mid
[ 0.5383022774327121, 32.5, 27.875 ]
An Iranian website calling the Holocaust "the great lie" and depicting an alternative version of events in Jewish history in cartoon form has been launched. The site is reportedly financed by a cultural foundation, is not government affiliated, and is based mainly on a book of cartoons first published in 2008.

The site's creators say that they intend to show the world that the Holocaust has been entirely fabricated by the Jews, who not only invented it but have used it to their advantage ever since. The book's preface begins by saying that its purpose is "to denounce the conspicuous lie of the planned murder of 6 million Jews during the Second World War allegedly called 'Holocaust.'" It continues, calling the Holocaust: "The lie by which the Palestine occupier Zionists have justified their occupying of Palestine and lots of their crimes for years."

Turning the pages of the book by clicking on icons depicting a swastika, the reader is told 'a story' of the plight of the Jewish people, from before the Second World War onwards. Each page has text accompanied by a cartoon, and uses derogatory, offensive and blatantly anti-Semitic imagery and language. It describes how the Jews 'fabricated' the existence of the gas chambers, using them to garner the world's sympathy with a view to receiving money for the trials they had endured, saying: "Later on, the devoted Jew artists rebuilt a number of gas chambers and crematories in other parts of the world by using their mental and imagination power." It also accuses the 'Zionist Jews' of storing enough atomic weapons to "destroy the entire world."

Several years ago Iranian President Mahmoud Ahmadinejad famously made headlines after calling the Holocaust 'a fairytale'.

Yad Vashem commented Thursday, saying that the launch of the website is the latest salvo emanating from Iran that denies the facts of the Holocaust and attempts to influence those who are ignorant of history. "The vulgar and cynical approach of the website, a combination of Holocaust denial and distortion, illustrated with antisemitic caricatures, further illustrates Iran's disregard for reality and truth vis-à-vis the Holocaust, Jews and Israel," the Yad Vashem comment continued.
Low
[ 0.505791505791505, 32.75, 32 ]
Background
==========

The prevalence of dementia worldwide is growing significantly, with the majority of persons with dementia dying in nursing homes \[[@B1],[@B2]\]. The provision of high-quality end-of-life care for nursing home residents with dementia is therefore essential \[[@B3]-[@B6]\]. However, the literature reports numerous shortcomings in end-of-life care for dementia, suffering of residents and unfulfilled needs of families \[[@B6]\]. For example, an Italian study reported high levels of pressure ulcers, burdensome interventions such as tube- and PEG-feeding, psychotropic drugs and poor decision-making in the last month of life of nursing home residents with dementia \[[@B7]\]. Despite some encouraging trends from The Netherlands and the U.S. regarding improved symptom management in dementia \[[@B8]-[@B10]\], improvement of end-of-life care for dementia remains a research priority \[[@B11]\].

Systematic assessment of care performance compared against professional targets or standards (hereafter referred to as audit and feedback) is widely used as a strategy to improve professional care practice \[[@B12]\]. In the nursing home setting, there are indications that audit and feedback using cumulative quality-of-care scores based on a group of patients may improve nursing home care in general \[[@B13]-[@B15]\], including nursing home care for residents with dementia \[[@B16]\]. In the US, audit and feedback is already structurally applied to improve hospice and palliative care services (Family Evaluation of Hospice Care Survey of the National Hospice and Palliative Care Organization, \[[@B17]\]). The literature suggests that audit and feedback is more effective when accompanied by either active interventions (such as educational outreach, or integration within an overall quality improvement framework) or passive interventions (such as publication of performance), with active interventions generally being more successful than passive interventions \[[@B15],[@B18]-[@B20]\].

So far, only audit and feedback strategies using cumulative scores relating to the care performance of care teams have been reported in the literature (e.g., Zuidgeest et al. \[[@B21]\]). However, this audit and feedback strategy is time-consuming due to the administrative tasks involved, which potentially creates barriers for nursing homes to use audit and feedback for care quality improvement. A feedback strategy based on discussing evaluations at the patient level is therefore an appealing, and possibly less time-consuming, alternative design. Such patient-specific audit and feedback also allows individual care workers to relate the feedback more directly to their own care performance. Due to a lack of studies that directly compare different strategies of audit and feedback, evidence for the effectiveness of different audit and feedback strategies is limited \[[@B15],[@B19]\], and this includes the nursing home setting. Moreover, the influence of the organizational context on audit and feedback and its implementation has not been addressed. More generally, earlier work in the area of evidence-based clinical practices in health care organizations found three organizational elements to influence implementation processes of evidence-based clinical practices: active leadership, process adaptation and involvement of management structures and processes \[[@B22]\].
Implementation of guidelines is affected by the specific characteristics of the guidelines, the target group, and the social or environmental context \[[@B23]\]. The aim of the Feedback on End-of-Life care in dementia (FOLlow-up) project is to assess the effect of the implementation of two audit and feedback strategies on the quality of care and quality of dying of nursing home residents with dementia: a generic feedback strategy using cumulative care performance scores generated by a feedback program, and a patient-specific strategy. Effects of implementation are assessed with the End-of-Life in Dementia -- Satisfaction With Care (EOLD-SWC) scale and the End-of-Life in Dementia -- Comfort Assessment in Dying (EOLD-CAD) scale \[[@B24]\]. Families evaluate and provide feedback on the quality of end-of-life care and the quality of dying of nursing home residents with dementia, as families' perceptions are intrinsically valuable in palliative care \[[@B25]\]. These instruments had the best psychometric properties and feasibility for use among bereaved family members \[[@B26]-[@B28]\]. Further, this study improves our understanding of the facilitators and barriers to implementation of audit and feedback in the nursing home setting, and of its effectiveness in improving care, using a mixed-method process evaluation.

Methods
=======

Study design
------------

The effects of active implementation of the EOLD-instruments are tested using a Randomized Controlled Trial (RCT) design. Nursing homes are randomized into three groups. Two intervention groups implement the EOLD-instruments according to the generic or the patient-specific feedback strategy, and a control group is created to control for changes that occur over time in the nursing home setting (2005--2010) independently of feedback on quality of care \[[@B9]\].

Setting and study population
----------------------------

Participating nursing homes implement the EOLD-SWC and EOLD-CAD instruments on psychogeriatric wards (almost all residents have dementia, and patients generally stay there until death). A specially trained elderly care physician employed by the nursing home is responsible for the care, including the residents' last stage of life \[[@B29]-[@B31]\]. The study population comprises family caregivers (i.e., the main contact person) of nursing home residents with dementia who died on a psychogeriatric ward. Families of residents who stayed at least 16 days of the last month of their life in the nursing home are eligible to provide written feedback. Further, potential respondents need to be able to read Dutch. The nursing home invites the family member most involved in care during the last month (usually the same person throughout admission) to provide feedback.

Power analyses and recruitment of nursing homes
-----------------------------------------------

The power analyses were based on a minimum number of family assessments needed to generate feedback; from there, we calculated the number of facilities in each group, from which followed a minimum and average number of beds per facility. For the cumulative feedback strategy, a minimum of 10 to 15 feedback reports is required to generate reliable total EOLD-SWC and EOLD-CAD scores and compare them with national means, and we assumed an average total of 30 feedback reports for the complete data collection period. Further, the minimum relevant difference to be detected on the EOLD instruments before and after implementation of the feedback was 3 points. Based on three previous Dutch studies using the EOLD-SWC and EOLD-CAD instruments, we assumed an intraclass correlation coefficient of 0.07 for the EOLD-CAD and 0.01 for the EOLD-SWC \[[@B9]\]. Additionally, when taking into account a significance level (alpha) of 0.05 and a power (beta) of 0.80, a minimum of five nursing homes per intervention group is needed. Based on a rate of 55% for eligibility and response, each participating nursing home needs to have a minimum of 22 decedents with dementia per year, and the average across facilities should amount to 33. Assuming a quarter of the nursing home residents die each year \[[@B32]\], the minimum number of psychogeriatric beds of participating nursing homes is 88, and the average over all facilities should amount to 132.
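These bed and questionnaire figures follow arithmetically from the stated assumptions, as the short sketch below re-derives. It is illustrative only: the function names are ours, and the inputs are the figures quoted above, not new data.

    # Back-of-the-envelope check of the recruitment figures quoted above.
    # All inputs come from the text; nothing here is part of the study software.

    ANNUAL_DEATH_RATE = 0.25   # assumed share of psychogeriatric residents dying per year
    RESPONSE_RATE = 0.55       # assumed eligibility-and-response rate
    COLLECTION_MONTHS = 20     # total data collection period

    def beds_needed(decedents_per_year: float) -> float:
        """Psychogeriatric beds needed to yield the given number of deaths per year."""
        return decedents_per_year / ANNUAL_DEATH_RATE

    def expected_reports(decedents_per_year: float) -> float:
        """Expected completed family questionnaires over the collection period."""
        return decedents_per_year * (COLLECTION_MONTHS / 12) * RESPONSE_RATE

    for label, decedents in [("minimum", 22), ("average", 33)]:
        print(f"{label}: {beds_needed(decedents):.0f} beds, "
              f"~{expected_reports(decedents):.0f} questionnaires over 20 months")
    # minimum: 88 beds, ~20 questionnaires over 20 months
    # average: 132 beds, ~30 questionnaires over 20 months

Note that the average facility size of 132 beds reproduces the assumed average of roughly 30 feedback reports per home over the full collection period.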
Nursing homes meeting the criterion of the availability of a minimum of around 88 psychogeriatric care beds have been recruited from all over the country. Nursing homes that were planning an organizational change that might affect the study's outcomes were excluded from participation. Fifty-six nursing homes with the required number of psychogeriatric beds located throughout the country have been approached to be involved in the study. Of the approached nursing homes, two could not participate due to the exclusion criteria. A total of 18 nursing homes agreed to participate in the study (recruitment rate: 32%). The most common reasons not to participate were lack of time, organizational changes or staff shortage, and nursing homes not having end-of-life care quality improvement as their current priority.

Randomisation
-------------

Based on the variability in factors potentially affecting resident outcome and family satisfaction with care as reported in the literature (reviewed by Van der Steen, 2013 \[[@B32]\]), three groups were matched to ensure similar distributions with regard to the following characteristics: size, geographic location, religious affiliation and the availability of a palliative care unit, since a spill-over effect of hospice services on residents who were not on hospice has been noted. Subsequently, the three groups were randomly assigned to one of the two intervention groups or the control group.
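The random step of this matched design can be sketched in a few lines. The grouping below is hypothetical (the protocol does not describe the matching algorithm, which in practice may be done by inspection); only the random allocation of the three pre-matched groups to the three study arms is shown.

    import random

    # Three groups of six homes each, pre-matched on size, geographic location,
    # religious affiliation and availability of a palliative care unit.
    # Home identifiers are placeholders, not the participating facilities.
    matched_groups = {
        "group_1": ["home_01", "home_04", "home_07", "home_10", "home_13", "home_16"],
        "group_2": ["home_02", "home_05", "home_08", "home_11", "home_14", "home_17"],
        "group_3": ["home_03", "home_06", "home_09", "home_12", "home_15", "home_18"],
    }

    arms = ["generic feedback", "patient-specific feedback", "control"]

    random.seed(2012)      # any fixed seed makes the allocation reproducible
    random.shuffle(arms)   # the single random step: which group gets which arm

    allocation = dict(zip(matched_groups, arms))
    print(allocation)
    # e.g. {'group_1': 'patient-specific feedback', 'group_2': 'control',
    #       'group_3': 'generic feedback'}

Matching first and randomizing second keeps the arms comparable on characteristics known to affect outcomes, while leaving the arm assignment itself to chance.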
The intervention
----------------

### Theoretical framework and hypotheses

The FOLlow-up project aims at changing the behavior of professional caregivers at different levels in the nursing home through the implementation of the EOLD-instruments in nursing home practice (Figure [1](#F1){ref-type="fig"}). We hypothesize that informing nursing homes of their cumulative EOLD-scores using the generic feedback strategy, linked to identified care deficits, will motivate nursing homes to improve both as an organization and as a care team. Similarly, we assume that patient-specific feedback may, in addition to changes in care performance at the organizational and team levels, result in behavioral changes of an individual professional caregiver. For example, if a physician received feedback from a family that the explanation of medication issues was unclear, he may improve how he informs family members about medication. Further, discussing this in the care team possibly has a spin-off to the practice of colleagues, which may result in the standard offering of an information leaflet on selected medication.

![Conceptual model for effectiveness of two feedback strategies.](1472-684X-12-29-1){#F1}

The EOLD-instruments
--------------------

Earlier research reviewed eleven measurement instruments developed to assess the quality of end-of-life care and quality of dying of nursing home residents with or without dementia. The End-of-Life in Dementia-Satisfaction With Care scale (EOLD-SWC) and the End-of-Life in Dementia-Comfort Assessment in Dying scale (EOLD-CAD) were identified as the most appropriate instruments with regard to, for example, validity, reliability, and ease of use, to assess quality of end-of-life care and quality of dying in dementia, respectively \[[@B26]-[@B28]\]. The EOLD-SWC is a 10-item scale that was developed for after-death assessment of satisfaction with care by family members of residents with dementia. Examples of items are 'I felt fully involved in all decision making' or 'The health care team was sensitive to my needs and feelings.' The EOLD-CAD is a 14-item scale developed to assess the condition of the care recipient during the dying process. The scale comprises the subscales physical distress, dying symptoms, emotional symptoms, and well-being \[[@B24]\]. For both scales, higher scores reflect higher levels of satisfaction and comfort, respectively.
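For concreteness, a minimal scoring helper is sketched below. The item ranges are assumptions (a 4-point response scale for the EOLD-SWC and a 3-point scale for the EOLD-CAD are commonly reported in the literature); real use requires the instruments themselves and their own scoring and missing-data rules, which are not detailed here.

    def eold_total(item_scores, n_items, lo, hi):
        """Sum item scores into a scale total; higher totals mean more
        satisfaction (EOLD-SWC) or more comfort (EOLD-CAD).

        Returns None for incomplete forms, since the instruments' official
        missing-data rules are not described in the text above.
        """
        if len(item_scores) != n_items:
            return None
        if any(not (lo <= s <= hi) for s in item_scores):
            raise ValueError("item score out of range")
        return sum(item_scores)

    # Assumed ranges: 10 EOLD-SWC items scored 1-4, 14 EOLD-CAD items scored 1-3.
    swc_total = eold_total([3, 4, 4, 3, 4, 2, 3, 4, 4, 3], n_items=10, lo=1, hi=4)
    cad_total = eold_total([3, 2, 3, 3, 2, 3, 3, 3, 2, 3, 3, 2, 3, 3], n_items=14, lo=1, hi=3)
    print(swc_total, cad_total)  # 34 38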
Data collection and procedures
------------------------------

The participating nursing homes send the questionnaire with the EOLD-instruments to the family caregiver of a nursing home resident who died with dementia. We also ask families to provide socio-demographic characteristics of both the respondent (age, gender, marital status, relationship to the nursing home resident) and of the decedent (age, gender, marital status and date of death). During 20 months, the deaths on the nursing homes' psychogeriatric wards are recorded. Six to eight weeks after the death of their loved one, the nursing home sends the questionnaire to the family caregivers. Along with the questionnaire, the family caregivers receive a letter that explains the involvement of the nursing home in the FOLlow-up project, and the return of a completed questionnaire is considered informed consent to participate. Further, the exact dates on which the questionnaires were sent out and received back are registered, as well as the number of residents with dementia who died and whose family caregiver could not provide written feedback, and the reasons for ineligibility. It is up to the nursing home to decide which staff member is most eligible to be responsible for the registration and sending of the questionnaires, but usually these tasks are performed by a member of the nursing home's administrative support team.

Strategies for implementation
-----------------------------

The two strategies to implement the EOLD-instruments, 1) the generic feedback strategy and 2) the patient-specific feedback strategy, both link to specific suggestions on how to improve care. The improvement suggestions were developed based on the latest national and international literature and care guidelines in the field of end-of-life and palliative care and, when available, specific to dementia \[[@B6],[@B18],[@B33]-[@B37]\]. They also included practical suggestions to inspire improvements even in the absence of evidence. Subsequently, the improvement suggestions were reviewed by professionals in the field on their practical applicability to improve care quality. Table [1](#T1){ref-type="table"} provides an example of an item of the EOLD-SWC scale with the related suggestions for care improvement.

###### Example of an EOLD-SWC item with improvement suggestions

**End-of-Life in Dementia -- Satisfaction With Care (EOLD-SWC) scale, item 7**

  **Improvement suggestions (version 2.0):**   **Involved disciplines:**
  -------------------------------------------- -----------------------------------
  a\) Make clear to the family caregivers what the possible options are for nursing assistance for their relative with dementia. To support information provision, a booklet with information regarding nursing assistance in the last stage of life of residents with dementia may be handed out.   Physicians and nurses/nurse aides
  b\) In communication with family caregivers, you may wish to be realistic about the prognosis of their relative with dementia. Provide contact details of the staff members with whom family caregivers may talk with regard to the prognosis of their relative and its risks.¹   Physicians and nurses/nurse aides
  c\) Evaluate frequently (at least once in six months) in multi-disciplinary team meetings whether all possible nursing assistance is provided to the residents with dementia.   Physicians and nurses/nurse aides

¹To feel home. \[Guidance for family caregivers and nursing home staff to work together to provide care with dignity\], 2009.

The generic feedback strategy links cumulative EOLD-scores to specific targets to improve care quality. For this, a user-friendly import program has been developed for nursing homes to enter their EOLD item scores and generate total EOLD scores once the scores of at least ten residents have been entered. The total EOLD-scores are compared with a norm based on mean EOLD item and total scores collected nationwide in nursing homes using family caregivers' evaluations of quality of care and quality of dying. Scores that are significantly higher or lower than the national mean item and total scores are signaled. The program links to improvement suggestions tailored to the specific areas where the nursing home scored significantly lower, to trigger actions for care quality improvement. In the patient-specific strategy, individual patient EOLD-item scores are discussed in multi-disciplinary team meetings. To support the team discussions, the nursing homes using the patient-specific strategy will receive a printed version of all the improvement suggestions. The nursing homes of the intervention groups report the improvement actions initiated after receiving feedback to improve care quality.
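The flagging step of the import program can be illustrated as follows. The protocol does not state which statistical test the program applies; a one-sample t-test of a home's item scores against the national mean is one plausible choice, and the norm values below are placeholders rather than the real national means.

    from statistics import mean
    from scipy import stats

    # Placeholder national norms per EOLD-SWC item (NOT the real values).
    national_norm = {"item_1": 3.2, "item_2": 3.4, "item_7": 3.1}

    # One home's item scores, entered once at least ten residents are available.
    home_scores = {
        "item_1": [3, 4, 3, 3, 4, 3, 2, 3, 4, 3],
        "item_2": [2, 3, 2, 3, 2, 3, 2, 2, 3, 2],
        "item_7": [3, 3, 4, 3, 3, 4, 3, 3, 3, 4],
    }

    for item, scores in home_scores.items():
        t, p = stats.ttest_1samp(scores, national_norm[item])
        if p < 0.05 and mean(scores) < national_norm[item]:
            # Scoring significantly below the norm would trigger the linked
            # improvement suggestions for that item.
            print(f"{item}: mean {mean(scores):.2f} below norm "
                  f"{national_norm[item]} (p = {p:.3f})")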
Evaluation of the FOLlow-up project
-----------------------------------

The effect of active implementation of the EOLD-instruments on quality of care is tested with a quantitative effect evaluation. Further, to assess the impact of the implementation of the instruments in the nursing homes, a process evaluation is performed. The development of the instrument for evaluation was informed by pilot work exploring the receptiveness of nursing homes to employing the EOLD-instruments. A pilot survey study among 40 Dutch nursing homes assessed their willingness to use these instruments in their daily psychogeriatric practice, as well as barriers and facilitators for effective use of the EOLD-instruments for care quality improvement. Of the surveyed nursing homes, 63% would be willing to use the instruments. Their main motivation was the wish to understand the quality of care they provided and the possibility to improve it. The barriers named by the nursing homes were the expected additional workload and time investment. Involvement of the nursing home staff, from the nursing homes' management to the care staff, as well as grassroots support from the field and incorporation in the care quality framework, were named as important facilitators for effectiveness of the instruments for quality improvement. From this pilot we learned that some support and guidance may be needed for successful implementation. Therefore, we aim at testing the effects of an intervention that is sustainable with limited external support.

Effect evaluation
-----------------

Starting the first of May 2012, the nursing homes of all three groups administer the EOLD-instruments for the complete period of data collection. After 10 months, the nursing homes of the two intervention groups actively deploy the feedback, with the help of the improvement suggestions, for care quality improvement according to the audit and feedback strategy they were randomly assigned to. The nursing homes report to the research team the improvement actions that were initiated following the feedback. After having received the feedback, the nursing homes of the two intervention groups continue to administer the EOLD-instruments for another 10 months, along with using the improvement suggestions. The nursing homes of the control group administer the EOLD-instruments during the full data collection period of 20 months while providing their usual care. After the data collection, those nursing homes will receive their EOLD-scores along with suggestions to improve care quality, as well as support to implement improvement actions among the care teams, similar to the nursing homes participating in the intervention groups. The participating nursing homes are responsible for the collection of data, with limited support from the research team. The support of the research team comprises instruction meetings with nursing home staff involved in the data collection, written instruction material and regular contact by telephone or email.

Statistical analysis
--------------------

To compare the data longitudinally within and between the intervention groups and the control group, the participating nursing homes hand over the questionnaires they receive back from the families to the research team, which enters the data in SPSS. Subsequently, we explore any changes over time in all three research groups, and in the intervention groups over the two periods of data collection separately. In all research groups, the EOLD-scores collected during the pre- and post-test data collection are compared per home and over all homes with paired tests. If needed due to changes over time related to, for example, a general trend or an increased focus on end-of-life care related to study participation, time-dependent analyses are performed to control for the changes over time.
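As an illustration of the per-home paired comparison, consider the sketch below; the home-level means are invented, and a full analysis would add the time-dependent adjustments mentioned above.

    from scipy import stats

    # Invented pre- and post-implementation mean EOLD-SWC totals for the six
    # homes of one intervention group (illustration only, not study data).
    pre_means = [31.0, 29.5, 32.1, 30.4, 28.9, 31.6]
    post_means = [33.2, 30.1, 34.0, 32.8, 29.5, 33.1]

    # Paired test at the home level: each home serves as its own control.
    t, p = stats.ttest_rel(post_means, pre_means)
    mean_change = sum(b - a for a, b in zip(pre_means, post_means)) / len(pre_means)
    print(f"mean change {mean_change:.1f} points (paired t = {t:.2f}, p = {p:.3f})")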
Process evaluation
------------------

All participating nursing homes, regardless of their research group, are invited for a mixed-method process evaluation after 10 months and after 20 months from the start of the data collection. The Linnan and Steckler \[[@B38]\] framework is used to guide the process evaluation. To assess the dose delivered and dose received, a written survey is administered to collect information regarding the number of questionnaires that were sent out and the number that the nursing home received back. Further, nursing home representatives are asked to estimate additional time and financial investments. Second, to assess fidelity (i.e., whether the intervention was implemented as intended) and the facilitators and barriers of audit and feedback in the nursing homes using the EOLD-instruments, a qualitative interview with the nursing home staff involved in data collection (such as the elderly care physician, management, administrative support) is performed. These interviews are transcribed verbatim, coded by more than one member of the research team, and themes are defined. The information collected in the interviews is compared with the nursing homes' own registrations of the dose delivered and dose received, and with logs of their time and material investments. Last, the reports of the nursing homes with respect to the improvement actions initiated will be analyzed. Each reported improvement action will be categorized according to whether it aims at a behavioral change of an individual professional caregiver, of a team of professional caregivers, or a change at the organizational level (Figure [1](#F1){ref-type="fig"}).

### Ethical considerations

The study protocol was approved by the Medical Ethics Committee of the VU University Medical Center. The research group receives coded family evaluations from the participating nursing homes, with the key remaining in the nursing home.

Discussion
==========

The FOLlow-up project is, to the best of our knowledge, the first study to implement and compare audit and feedback strategies in the nursing home setting specifically to improve end-of-life care in dementia. We assume the implementation of audit and feedback in the nursing home to be a complex process involving multiple processes of care in the nursing home (e.g., care quality coordination, administrative support, management structures and multi-disciplinary care giving). The assessment of the effects of audit and feedback on care quality using an RCT, combined with the evaluation of organizational and social elements possibly influencing audit and feedback, will contribute to its theoretical understanding and yield practical lessons for future implementation in nursing homes. Further, our study will advance our understanding of how to monitor care outcomes in the realm of end-of-life care in dementia. Indeed, the EOLD-instruments showed a positive trend in EOLD-scores over time \[[@B9]\], and differences in EOLD-scores were found between countries \[[@B39],[@B40]\]. Our data will increase the understanding of the differences between nursing homes in quality of care and quality of dying using EOLD-scores, as well as the possibilities of the nursing home care staff to influence them. This knowledge may provide an evidence base for the development of quality indicators needed to systematically improve end-of-life care in dementia.
Nevertheless, the design of the study involved a few important choices with regard to the development of the audit and feedback strategies, the research setting and the data collection. First, regarding the design of the audit and feedback strategies: to develop the care standard needed for the feedback program used in the generic feedback strategy, a norm was created based on data collected in previous Dutch research \[[@B9]\]. In those data, mean satisfaction with care (EOLD-SWC) scores did not vary significantly across geographic areas, although slightly lower mean scores for quality of dying (EOLD-CAD) were found for densely versus less densely populated areas. Nevertheless, we wished to employ a single national standard of quality, so a single mean score for each instrument serves as the norm for any region in the Netherlands. Because of trends over time, it is important to continue monitoring EOLD scores, and we may consider an update of the norm using the pre-test data collected in FOLlow-up if we find important differences from the existing norm. Second, we have no data on which to base an estimate of the response rate with the patient-specific feedback strategy. Based on previous Dutch research using coded data without names \[[@B9],[@B27],[@B32],[@B41]\], we assumed in our power calculation a response rate of 55-60% and a few cases being ineligible. In the FOLlow-up project, the participating nursing homes are fully responsible for the data collection process. Nursing homes communicate directly with the family caregivers to obtain their care evaluations. This avoids asking family caregivers for consent to participate prior to sending the EOLD-instruments, and they may feel that their privacy is better guaranteed compared to data collection that involves contact with the university. We expect that, because of this protocol for data collection, family caregivers will be more forthcoming with their feedback compared to our previous research, potentially increasing the response rate. However, in the patient-specific feedback strategy, family caregivers will be explicitly asked permission to allow their feedback to be discussed non-anonymously in a multidisciplinary team meeting. This may lead to a different response rate between the two audit and feedback strategies, with family caregivers being more hesitant to participate in the patient-specific strategy than in the generic strategy. Third, with respect to the choice of research setting, previous research performed on psychogeriatric units (for dementia) of residential care homes found the level of comfort assessed with the EOLD-CAD to be lower than in nursing homes \[[@B9]\]. Therefore, implementation of the EOLD-instruments in residential care homes potentially involves more significant care improvements, if potential barriers such as less physician oversight or leadership can be addressed. Nevertheless, due to the small size of psychogeriatric units in residential care homes in the Netherlands (typically around 20 beds) compared to nursing homes, and the absence of an in-home elderly care physician, we test the effectiveness of the EOLD-instruments only in nursing homes. Despite the benefits of giving nursing homes responsibility for the data collection, it may also negatively influence the outcomes of the project: the research team cannot fully control the protocols or ensure that the audit and feedback strategies are implemented as intended.
However, only by giving nursing homes responsibility over the data collection is it possible to evaluate the practical implications and effect of audit and feedback on quality of care and quality of dying in dementia. Further, the nursing homes receive limited but continuous support from the researchers during the data collection. If audit and feedback proves effective in improving the quality of care in dementia, our findings may be implemented on a larger scale, along with specific recommendations for effective implementation of audit and feedback in nursing homes.

Competing interests
===================

The authors declare that they have no competing interests.

Authors' contributions
======================

All authors have made substantial contributions to the conception and design of the study. JAB, MvS-P, HCWdV and JTvdS have drafted the manuscript. All authors have revised it critically for important intellectual content and have given final approval of the version to be published.

Pre-publication history
=======================

The pre-publication history for this paper can be accessed here: <http://www.biomedcentral.com/1472-684X/12/29/prepub>

Acknowledgment
==============

We thank Dr. Dinnus H.M. Frijters for his contribution to the development of the feedback program.

Funding
=======

This study is supported by a grant from ZonMw, The Netherlands Organisation for Health Research and Development (Palliative Care in the Terminal Phase program, a supplement for implementation to grant number 1150.0003), and Fonds NutsOhra, national insurance company (grant number 0904--020), and by the VU University Medical Center, EMGO Institute for Health and Care Research, Department of General Practice & Elderly Care Medicine, Amsterdam. The Netherlands National Trial Register (NTR), trial number: NTR3942. This registry shares registered trials with WHO's International Clinical Trials Registry Platform Search Portal: <http://apps.who.int/trialsearch/>.

Previous publications
=====================

Abstract submitted to the annual congress of Alzheimer Europe in October 2012. The abstract is available on the internet site of Alzheimer Europe.
Mid
[ 0.59437751004016, 37, 25.25 ]
The demographic is a fixture in marketing research, but psychographics contribute to our demographic understanding by examining a wide range of factors that focus on the quantitative and psychological perspectives of why consumers behave the way they do. This paper examines the concept of psychographics, its origins, and the debate over reliability and validity. Two of the more accepted tools, VALS and RISC, are examined in detail, and some of the current uses of psychographics are also reviewed.

The Evolution from Demographics to Psychographics

The demographic profile is a fixture in marketing research; profiles are collected as a matter of routine in the belief that age, income, education and other measurable factors can indicate product or brand preference, media preference or preference about programming choices (Wells, 1975). Demographic information has severe limitations, however. Demographics are not homogeneous blocks and can lead to oversimplification, demographic stereotypes may be incorrect, and demographics do not really provide guidance about marketing messages, the consumer's problems and needs, or what their lifestyles or values are (Langer, 1985). The reason a person buys a particular product or brand, or has explicit media preferences, goes beyond how old the person is or how much money the person makes (Bainbridge, 1999). The Holy Grail of marketing is based on discovering that succinct difference between what consumers do and why they do it (Booth, 1999). However, it wasn't until the social upheavals of the 1960s, which shattered the mass-market approach, that methods of measuring the values and lifestyles of consumers were developed (Heath, 1995).

Demby (1994) claims to be the first person to make up the name psychographics in 1965, although he admits that the term was used as early as World War I to describe a method of classifying people by physical appearance rather than demographics. It later evolved in the 1920s as a term used to classify people by attitudes. Heath (1995) notes that the term appeared in Grey Advertising's publication Grey Matter in 1965, and that Haley was the first to publish the term. Demby's own description uses the term to combine psychological, sociological and anthropological factors such as self-concept and lifestyle to segment markets by purchase decisions or media use; demographics are used as a check to see if psychographic segmentation improves on other segmentation methods (1994).

The Definition of Psychographics

Heath (1995) notes that if ten marketers were asked to define psychographics, ten different answers would be received. Eckman, Kotsiopulos and Bickle (1997), using the construct developed by Tigert in the 1970s (Demby, 1994), assert that psychographics measures lifestyles that are evaluated through activities, interests and opinions, and that psychographics are more effective than demographics. Silverberg, Backman and Backman (1996) state that psychographics is a way of describing customers and charting new trends. Booth (1999) describes psychographics as the why of consumer behavior, attempting to ascertain the motivation behind consumer purchasing decisions. Wyner (1992) calls psychographic measures attempts to isolate personality types across product category boundaries.
Wells (1975, p. 197), in his award-winning article for the Journal of Marketing Research, notes that psychographics are something beyond demographics, but that the dimensions studied in the field encompass "a wide range of content, including activities, interests, opinions, needs, values, attitudes and personality traits." However, his interpretation allows this wide range, when he defines psychographics operationally as "quantitative research intended to place consumers on psychological—as distinguished from demographic—dimensions" (Wells, 1975, p. 197). Hasson (1995) adds to this, positing that psychographics may be the ultimate phase of the motivation research developed in the 1950s, using quantification, multi-dimensional analysis and graphic representation.

The one thing most researchers seem to agree upon is that geodemographic tools, such as PRIZM or Donnelly Marketing's ClusterPlus, which describe averaged demographic information about product usage by geography, do not fall under a psychographic definition (Heath, 1995). Although they fit the lifestyle portion of the psychographic definition and have been used to examine lifestyle imagery and reference groups (Englis & Solomon, 1995), they do not fulfill the psychological dimension of the operational definition; however, they do offer one benefit when used in conjunction with psychographic information: they also identify where the consumer lives (Heath, 1995).

How Psychographic Measures Are Conducted

The original psychographic studies were primitive by nature, using Q clustering programs that were computer driven, although computing power had its own limitations (Demby, 1994). The clustering program used is actually an inverse factor analysis, also called Q-factor analysis (Heath, 1995). Basically, it looks for correlations between pairs of respondents, and then clusters respondents whose answers correlate highly (Heath, 1995). Typical psychographic studies today show consumers a large list of statements, and the consumers are asked to indicate on a five- to seven-point Likert scale how well each statement describes their attitudes or lifestyles; factor analysis is then used to identify underlying factor loading patterns, upon which factor scores are computed, and either principal component analysis or varimax rotation of retained components is used to approximate a structure in which each variable correlates highly with only one factor (Wind, Rao & Green, 1991).
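The generic pipeline just described (Likert statements, factor extraction, varimax rotation, respondent-level factor scores) can be sketched with standard tools. The sketch below is an illustration of the technique on simulated answers, not a reconstruction of any proprietary instrument, and with purely random data the rotated loadings will of course be uninformative.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Simulated 5-point Likert responses: 300 respondents x 12 statements.
    # (Real activity/interest/opinion batteries are far longer.)
    responses = rng.integers(1, 6, size=(300, 12)).astype(float)

    # Standardize the items, extract three factors and varimax-rotate the
    # loadings so each statement loads highly on (ideally) a single factor.
    z = StandardScaler().fit_transform(responses)
    fa = FactorAnalysis(n_components=3, rotation="varimax")
    scores = fa.fit_transform(z)   # respondent-level factor scores
    loadings = fa.components_.T    # statement-by-factor loading matrix

    # A crude segmentation: assign each respondent to the factor on which
    # they score highest; clustering on the scores is the usual next step.
    segments = scores.argmax(axis=1)
    print(loadings.round(2))
    print(np.bincount(segments))

Q-factor analysis, mentioned above, simply applies the same machinery to the transposed matrix, correlating respondents rather than statements.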
The Debate Over Psychographic Reliability and Validity

While seemingly straightforward, there has been much debate about the reliability and validity of psychographic research in terms of individual items and scales, reliability of dependent variables, relationships, and structure. Wells (1975) notes that the scales used most by psychographic researchers, published in literature reviews, indicate reliabilities from .70 to .90. Due to the instability of consumer choice, perhaps the most that can be expected from psychographic research is satisfactory reliability; however, if important decisions are made with psychographic information, cross-tabulations, regression or cross-validation against hold-out samples are recommended (Wells, 1975). Equally problematic for psychographic measurement is validity, the degree to which the psychographic tool is measuring what it is supposed to be measuring (Isaac & Michael, 1995). While some measures of validity, such as construct validity, are handled in the same manner as reliability, using hold-out samples (Wells, 1975), the predictive validity of psychographic tools is much more difficult to ascertain. Often these tools are measured against other psychological constructs to compare performance.

SRI introduced VALS as a psychographic tool in 1978; the instrument segmented consumers into nine groups based on their inner/outer orientation, and it was the only commercially available psychographic tool to gain a large measure of acceptance (Riche, 1989). Kahle, Beatty and Homer (1986) conducted a study that indicated Rokeach's List of Values (LOV) was a better predictor of consumer behavior than the original Values And Lifestyles (VALS) assessment tool. However, in a follow-up replication study, Novack and MacEvoy (1990) found serious methodology flaws in Kahle, Beatty and Homer's study. Because VALS had demographics built in but LOV did not, Kahle et al. (1986) included demographics in the regression model for LOV, but not for VALS. In their replication, Novack and MacEvoy (1990) ran the same experiment, with extensions that included adjusting each instrument for demographics, ignoring demographics, and using demographics solely against LOV. Their conclusion was that VALS may be preferred over LOV as a segmentation tool, and that LOV was significantly less predictive than even VALS alone. They also noted that these findings could not be generalized to VALS' eventual successor, VALS2.

Novack and MacEvoy's approach makes sense, since the intent of psychographic measurements is to add a dimension to demographic measures (Langer, 1985). Bainbridge (1999) agrees, noting that psychographics are an extension of demographics. Heath (1995) confirms that the purpose of psychographics is to measure demographic characteristics along with attitudes, opinions and interests. However, this was not the sole study finding validity problems with psychographics in general, and the original VALS typology in particular. Lastovicka, Murry and Joachimsthaler (1990) measured the VALS typology and the Drinking-Driving (DD) typology quantitatively, using statistical modeling, and qualitatively, using judgmental coding of data collected from open-ended and projective tasks. They used a multitrait-multimethod approach and an analysis of covariance structures (LISREL) approach to examine the convergent and discriminant validity of each tool. They found less convergent and discriminant validity for VALS than for DD. Unfortunately, as they note, a major limitation of their study was that it used a small sample of 100 18- to 24-year-old men, and a different sample of the general U.S. population could produce differing results; additionally, there were logical subgroupings of VALS types other than those used in their research. They also noted that SRI, the owners of the VALS typology, were introducing VALS2, which might make their research findings moot.

Wells (1975) points out three problems relating to studies such as Lastovicka et al.'s (1990): using psychographics to find relationships that should not have been expected, so that they fail to appear; findings that are too abstract to be useful; and measurements so close to the behavior studied that the relationship is essentially redundant. However, popular and accepted psychographic measures do have these inherent dangers too, which led to the downfall of the original VALS.
Because VALS did not discriminate enough, the measurement, to some, was redundant. It contained one of the problems clustering can be prone to: lack of discrimination among segments. "People would say, 'If 40 percent are Belongers, why should we bother with the rest'" (Riche, 1989)? Not only did marketers complain that VALS was not actionable, but the originator of VALS, Arnold Mitchell, made a research design error by trying to prove an assumption. Designing the study in an attempt to prove Maslow's theory of motivation, Mitchell placed a preconceived truth into the design (the assumption was that people buy according to where they fall in a hierarchy of needs), rather than attempting to discover what the real truth behind consumer motivation was (Heath, 1995). Not all researchers were that disenchanted with the original VALS, although there was agreement that the Belongers group needed to be split (Winters, 1992). However, some of those who liked the original VALS also shared Mitchell's Maslow-based approach to its theoretical underpinnings and complained that VALS2 had no theoretical approach (Winters, 1992).

VALS2 and RISC Ameriscan

SRI totally redesigned the VALS typology to fix this error in 1989, and called the new product VALS2. Appendix A contains a visualization of the VALS2 framework. VALS2 was positioned at the time as moving away from values and lifestyles because they were too fragmented and did not adequately predict consumer behavior, which was shifting; instead, VALS2 was positioned as being designed to reveal unchanging psychological stances (Riche, 1989). However, as recently as 1996, SRI acknowledged that the attitudes of consumers can change, if not their VALS type (Heath, 1996). In fact, current VALS literature positions VALS as a combination of self-orientation and resources, noting that resources are the psychological, physical, demographic and material means upon which people can draw; these resources increase from adolescence through middle age, but decrease with extreme old age or mental or physical deterioration (http://future.sri.com/VALS/vals.segs.shtml). Using a national sample of 2,500, SRI developed a 43-question assessment tool that measures resources, including demographic information and internal resources such as confidence, energy and intelligence; in addition, as noted above, the tool recognizes that resources tend to accumulate through middle age and then decline (Riche, 1989).

VALS2 identifies eight attitude and lifestyle segments, arranged primarily by three different orientations to buying. At the top of the rectangle, however, are actualizers. Actualizers make up just 8% of the overall U.S. population (Bearden, Ingram & LaForge, 2001). Actualizers may be described as that segment of the population who have high resources with a focus on principle and action, who are active and take charge in terms of expression of taste, independence and character. Their demographic characteristics include a median age of 43 and a median income of $58,000 (Mowen & Minor, 1998). In addition, they are successful and sophisticated and can indulge in self-orientations; 95 percent have some college (Evans & Berman, 1997). Other VALS2 categories include individuals who are oriented toward principles (fulfilleds and believers). A primary differentiator between the two groups is the resources available to them, which affects their approach to lifestyles and values (Mowen & Minor, 1998). Fulfilleds make up approximately 11% of the population, while believers represent 16%.
Those who are oriented toward status (achievers and strivers) focus on status and are also segmented by resources, including a strong orientation toward money and achievement. Achievers and strivers each represent 13% of the general population. Action-oriented individuals include the categories of experiencers and makers. Experiencers tend to be younger and focus on action as a means of excitement, while makers focus on practical action. Again, a differentiator between the two is the abundance or lack of resources. Experiencers represent 12% of the population, while makers account for 13%. Finally, VALS2 identifies the strugglers. Poor, with little education, the strugglers have few resources and their focus is on living and surviving for the moment. They represent 14% of the general U.S. population (Mowen & Minor, 1998).

While SRI attempted to position VALS2 as representing a more entrenched psychological approach, with less emphasis on values and lifestyles, RISC (the International Research on Social Change) was incorporated in Switzerland in 1978 to monitor social change and trends in European countries, the United States, and Japan (Hasson, 1995). Like the reasoning behind VALS2 and other psychographic tools, RISC decided to monitor social change due to the realization that demographics provide decreasingly discriminating markets or segmentation opportunities. Based a little more widely than psychological typology, RISC adopts the statistical and conceptual tools of psychographics, but tends to use a three-dimensional approach to diagnose sociocultural trends, market dimensions and demographics (Hasson, 1995). Depending on the brand and choice of study, demographics could explain only 8-10% of brand choice for a particular brand; psychographics, using typologies, 25-35%; sociocultural trends, however, could explain 35-45% (Hasson, 1995). Appendix B depicts the RISC sociocultural trends model. These sociocultural trends include balanced and autonomous (A), eager and dedicated (B1), daring hedonist (B2), belongings and values (C1), transitional (C2), petit bourgeois (C3), self-centered impulsive pleasurist (C4), rational traditionalist (D1), anomy and disconnection (D2), and withdrawn and distressed (D3). In many ways, the descriptions mirror some of the lifestyles and values of VALS, and the pyramid similarly seems aligned with resources. However, unlike VALS2's description of an entrenched psychological approach, RISC's methodology assumes that people's and countries' self-concept moves around the trends in a more dynamic manner, according to environmental influences, including the economy and social mores. Marshall Marketing uses this tool in the United States to help retailers and broadcasters monitor and predict social trends for brands (www.mm-c.com/RISC/risc.htm).

Psychographic Variations

Wells (cited in Heath, 1995) noted that while psychographic studies now come in an infinite number of variations, there are five general types of psychographic study: 1) A lifestyle profile, which includes questions on product use, media use, and demographic information, as well as psychographic and lifestyle information; researchers then look for the information that discriminates between groups of users and nonusers of products. 2) Product-specific psychographic profiles, which identify the target group of consumers first and then use psychographic, product-relevant dimensions to segment the users. 3) Personality traits as descriptors, which analyzes dependent variables (e.g.
specific attitudes, opinions or interests) and then identifies personality traits (independent variables) that are highly correlated with the dependent variable; these traits are then used to segment markets. 4) General lifestyle segmentation, which is used to define a typology. While it collects much of the same information as a lifestyle profile, it does not assume what the common traits are, but does attempt to identify significantly different groups or significantly homogeneous lifestyle segments. 5) Product-specific segmentation, which alters general psychographic or lifestyle questions and adapts them to product-specific statements; these are then analyzed, using factor analysis, to support or negate a hypothesis regarding the product.

Among these various uses, many psychographic studies have been and are being conducted to determine the psychographic values or lifestyles that aid the researcher in understanding the why behind what consumers do, in addition to basic demographic information.

Some Psychographic Applications

While not purporting to offer an exhaustive list of studies, it is interesting to note some of the research to which psychographic measurements have been applied, and their direct application to marketing. Ailawadi, Neslin & Gedenk (2001) used psychographics and demographics to identify value-conscious consumers and their perceptions of store brands versus national brands. They found that store brand use correlates with economic benefits and costs, and identified four specific market segments: deal-focused customers, store-brand-focused customers, the use-alls (those who will go either way), and the use-nones (those who use neither store brands nor deals). Their study used a structural model and discovered that demographics do not affect consumer behavior directly, but are funneled through psychographics. This study demonstrated how manufacturers and retailers could avoid marketing their brands to the same segment in a consumer tug of war.

Eckman, Kotsiopulos & Bickle (1997) used psychographics to examine the store patronage behavior of Hispanic versus non-Hispanic consumers and the role of store attributes. Previous psychographic studies on Hispanics had focused on sports, television viewing and religion. It is difficult to describe Hispanics as a homogeneous group because many national subcultures make up the Hispanic culture (Eckman, Kotsiopulos & Bickle, 1997). However, they found that Hispanic consumers are less likely to participate in cultural activities and seek advice, but more likely to experiment and proeducate. Attributes that were important to Hispanic consumers included services, language, resource management, pricing, comfort and selection. While non-Hispanics purchased in family-owned stores and catalogues, Hispanics purchased in second-hand stores more often.

McCarty and Shrum (1993) examined the role of personal values and demographics in predicting television-viewing behavior. Using Rokeach's LOV and a structural equation analysis, their study found that values do relate to television viewing, but the relationship and amount of influence is complicated, or sometimes reduced to non-significance, when demographics are factored in. They concluded that because the interrelationships between values, behavior and demographics are so complex, segmentation schemes should employ both demographics and values in their consideration.
They also found differences not only in values, but also in the demographic information they measured (gender, age, income and education) and its effects on viewing. Lin (1999) furthered research in this area by examining the relations between perceived television use and online access motives, using a uses-and-gratifications perspective. However, she found a weak correlation between user motives for television exposure and potential online access. The study also noted that the online world, at least for now, tends to be supplementary to television; however, when full convergence is achieved, there may be a need for advertisers to adjust their approach to the online world. Lin also criticizes SRI's VALS2 for its lack of attention to measuring online users' predisposition to technology, something that iVALS attempts to correct.

A number of studies have been conducted on psychographics and travel-related behavior. Silverberg, Backman and Silverberg (1996) investigated the psychographics of nature-based travelers in the United States and their relationship with attitudes about the environment, travel behavior and demographics. Using factor analysis, they found that travelers whose primary nature-based activity was viewing nature differed from those whose trip was for educational purposes. They also found differences between campers and non-campers, and between social travelers and all other groups. Six variables were found to be significant predictors of campers versus non-campers: education, age, likelihood of taking a nature-based trip, conservationist attitude, consumptive attitude, and involvement in other nature-based activities. Non-campers were more highly educated, had a consumptive attitude and were more likely to take a nature-based trip. Campers, on the other hand, tended toward conservationism and a greater involvement in other nature-related activities.

Without calling it psychographics per se, Stephens (1991) conducted an interesting study linking cognitive age with consumer behavior. She found that cognitive age, especially when used in conjunction with demographic age, provided important clues regarding attitudes toward purchasing and consuming, and could aid targeting decisions, creative executions and media selection. Ohanian (1990) used psychographics to construct and validate a scale to measure celebrity endorsers' perceived trustworthiness, expertise and attractiveness. She found that source credibility, defined by these three factors, directly impacted intent to purchase, and recommended that researchers could use this tool to investigate the credibility of political candidates, that advertisers use the scale as part of effectiveness testing, and that celebrity endorsers be calibrated against varying demographic and psychographic groups. Dychtwald and Zitter (1988) recommended that hospitals use psychographics as part of their basic strategic marketing plan, noting that this tool may be especially useful in targeting the elderly population. They identify three separate segments within that group: the vitally active, who are still involved in the world and continue to grow; the adapters, who face significant real health problems but have either overcome or accepted those problems; and the overwhelmed, who are anxious about the future and unable to manage their problems.
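Segment typologies like Dychtwald and Zitter's (and the "general lifestyle segmentation" type described earlier) are, in practice, often derived by clustering respondents on standardized psychographic scores. The following is a minimal, hedged sketch of that procedure; all data and dimensions are synthetic stand-ins invented for illustration, not material from any of the cited studies.

    # Hedged illustration of typology-building: cluster respondents on
    # standardized psychographic scores and inspect the resulting segments.
    # The AIO-style ratings below are synthetic, not from any cited study.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Synthetic activity/interest/opinion ratings on 1-7 scales.
    responses = rng.integers(1, 8, size=(500, 3)).astype(float)

    # Standardize so no single scale dominates the distance metric.
    scores = StandardScaler().fit_transform(responses)
    segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

    # Profile each segment by its mean raw ratings.
    for seg in range(3):
        profile = responses[segments == seg].mean(axis=0)
        print(f"segment {seg}: n={np.sum(segments == seg)}, "
              f"mean activity/interest/opinion = {np.round(profile, 2)}")

With real survey data, the analyst would then name and describe each cluster (much as VALS names actualizers, strivers, and so on) based on its mean profile.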
Hornick (1990) used psychographics to predict smoking intensity and found that it was more meaningful and valid than consumer time preference (which relates to the timing of an outcome and perceived payoff/time tradeoffs) and demographic characteristics. He discovered that adding psychographic variables to demographic variables improved the explained variance by more than 73%. Wells (1975) noted that, even by that date, psychographics had contributed to an understanding of opinion leadership, retail shopping, private brand buying, consumer activism, store loyalty, and differences between Canada and the United States, as well as differences between English- and French-speaking Canadians.

The Future of Psychographics

With the continued interest in the psychological factors that drive consumers to purchase, it is doubtful that the interest in psychographics will lessen in the future. The running feud between advocates of quantitative research and advocates of qualitative research has abated with the more stringent adoption of statistical methods and the better use of computer-based research and applications in qualitative design (Heath, 1996). In fact, if anything, the trend in research has shifted toward the qualitative approach (Heath, 1996). Tools such as VALS2 are being adapted to provide varying types of consumer information. iVALS was constructed to focus on the attitudes, preferences, and behaviors of Internet users, addressing some of the concerns Lin (1999) expressed. Not only did early results reinforce the notion of a dual-tiered have/have-not society, they also found that half of the web users were Actualizers, that three out of four were men, and that virtually all had gone to college (Heath, 1996). Not only are broadcasters and cable channels tailoring their content according to their viewers' psychographic profiles (Heath, 1995), but marketers are also attempting to determine what their brands look like in consumers' eyes (Heath, 1996).

Perhaps the biggest caveat, though, comes from Wells' (1975) seminal article: the tools may not be valid or reliable if they search for relationships that do not exist, if they are too abstract, or if the behavior studied is essentially redundant to other variables. Hasson (1995), in discussing socio-cultural models, notes that many argue that models such as these are reductive and too systematic. He questions whether the problem is the systematic nature of the model or whether researchers fall too much in love with their model and reduce everything down to their system. He further notes that there are no universal tools; in fact, if research were in 100% agreement, it might be indicative of a totalitarian society. And when clients successfully use the information, that may not be a proof, but it is a reward (Hasson, 1995). The future use of tools such as psychographics will be determined by the validity and reliability of the instruments, the usefulness of the applications, and their ability to predict consumer behavior; this may not serve as proof of their universal accuracy, but it will be rewarding to marketers who use the tools well.
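As a concrete illustration of the incremental explained-variance comparisons cited above (Hasson's 8-45% figures and Hornick's 73% improvement), here is a minimal, hedged sketch of how such a comparison is typically run: fit a regression on demographics alone, then on demographics plus psychographic scores, and compare the two R-squared values. All variables and data below are synthetic and invented for illustration, not data from any of the cited studies.

    # Hedged sketch: incremental variance explained by psychographics over
    # demographics alone, using synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 1000

    # Synthetic demographic predictors (think: age, income, education).
    demo = rng.normal(size=(n, 3))
    # Synthetic psychographic predictors (think: AIO-style factor scores).
    psycho = rng.normal(size=(n, 3))

    # Simulated behavior, driven here more by psychographics than demographics.
    behavior = (0.3 * demo[:, 0] + 1.0 * psycho[:, 0]
                + 0.8 * psycho[:, 1] + rng.normal(size=n))

    r2_demo = LinearRegression().fit(demo, behavior).score(demo, behavior)
    both = np.hstack([demo, psycho])
    r2_both = LinearRegression().fit(both, behavior).score(both, behavior)

    print(f"R^2, demographics only:             {r2_demo:.2f}")
    print(f"R^2, demographics + psychographics: {r2_both:.2f}")

The gap between the two R-squared values is the kind of evidence the studies above report when they argue that psychographics add explanatory power beyond demographics.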
High
[ 0.7222982216142271, 33, 12.6875 ]
The aim of this protocol is to obtain pulmonary and nutritional information on infants with Cystic Fibrosis (CF) identified through the newborn screen. This protocol continues to thrive, with approximately 18 new patients added each year. This past year we began studies of Pseudomonas antibodies and of CA 19-9, a potential biomarker of mucus accumulation. The current studies have continued to identify early abnormalities in infants with CF, and they also provide information on surrogate markers of outcome.
High
[ 0.6994535519125681, 32, 13.75 ]
AppCoins News Update, or ANU for short, is a regular bi-weekly update by the AppCoins team. As usual, we are going to cover dev updates, market reports, team members and upcoming events. This week's focus is on the latest updates regarding the upcoming Ritchie Release, and an overview of the business developments that have taken place over the last 7 months for the success of the AppCoins project.

Quicklinks:
Dev Update
APPC Markets Report
Featured Team Member
Protocol Business Review

We've been actively working on the release of the 2nd of September, named the Ritchie Release, which will come with a lot of changes that remove adoption friction and will enable users to start using APPC to pay for in-app items as easily as they do today. We'll go so far as to say that by the end of the year, payments using Blockchain tech will be faster than the credit card payments users make today.

As we explained in the last ANU, we are also working on making In-App Billing (IAB) integration with APPC easier for app developers. Devs who integrated the App Store Foundation (ASF) SDK previously had to manage their in-app items (SKUs) entirely by themselves in the SDK/app, possibly with their own backend; they will now be able to integrate a billing system very similar to the one from Google Play. App developers will be able to have purchases and SKUs managed by our infrastructure and rely on Blockchain tech for verifiability and data reliability. The AppCoins Wallet will perform the role that the Google Play app performs in the Google Billing System, as it will be the artifact making the calls to our API. We plan to release the SDK and the AppCoins Wallet with this functionality during the next 2 weeks, which means we'll be doing a major-minor release before the Ritchie release. We are fortunate to be already working with several developers, and we want to have them test the system in order to have it completely ready for the Ritchie release.

As we also mentioned in the last ANU, we're working to give users the ability to pay for in-app items with their credit card as they do now, while keeping the advantages of Blockchain tech for verifying their purchases. This will ship in the Ritchie release. This is a major development for the AppCoins project because it shatters one of the greatest sources of friction to the adoption of the protocol, and of cryptocurrencies in general for that matter.

In addition, we're working hard on the Unity plugin to make its integration as smooth as possible. We've been solving a few issues and making the plugin more robust and reliable. At the same time, we're making sure it stays up to date with the native SDK developments. Lastly, we are working on microsites that will cover the App Store Foundation conference in early November, as well as a page about the SDK that details its added value for developers.
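To make the managed billing flow described above more concrete, here is a purely illustrative sketch of what a managed in-app purchase could look like from the app's side. Every class, method, name and address below is invented for illustration; this is not the actual ASF SDK or AppCoins Wallet API.

    # Hypothetical sketch only: all names are invented and do not belong to
    # the real ASF SDK or AppCoins Wallet API.
    class HypotheticalAppCoinsWallet:
        """Stand-in for the wallet app that brokers payment (in APPC or,
        per the plan above, by credit card) and submits the transaction."""

        def request_purchase(self, sku, developer_address):
            # In the real flow the wallet UI would take over here; we just
            # fabricate a receipt-like structure with a transaction id.
            return {"sku": sku, "to": developer_address, "tx_id": "0xabc123"}

    def verify_on_chain(tx_id):
        # Placeholder verification; a real implementation would query a
        # node or the billing API for the transaction's status.
        return tx_id.startswith("0x")

    def buy_item(sku):
        wallet = HypotheticalAppCoinsWallet()
        receipt = wallet.request_purchase(sku, developer_address="0xDEV")
        # Because the purchase is recorded on-chain, the developer's side
        # can verify the transaction before granting the in-app item.
        return verify_on_chain(receipt["tx_id"])

    if __name__ == "__main__":
        print("gas_refill granted:", buy_item("gas_refill"))

The design point mirrors Google Play billing: the app never handles payment details itself; the wallet performs the API calls, and the blockchain record makes each purchase independently verifiable.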
Mid
[ 0.655502392344497, 34.25, 18 ]
Friday, July 27, 2012

This post was written by Emily Shieh, of Shieh Design Studio. Thanks, Emily!!

There are many ways to do a gradient layering style in cold process soap. Some soapers are more into exact measurements and step-by-step instructions. Some soapers, like me, are the spontaneous ones; we soap by feel and everything is approximated, well, except the recipe calculation of course, that's what a soap calculator is for. Once you know the trick to gradient (or ombre) soap, there are endless possibilities. But today I'm only going to write about the basic one-color gradient; I'll write about the more advanced gradients (multiple colors and mixed layering) later, maybe, if you are interested.

There are two types of gradient you can achieve in cold process soap: one using non-bleeding colorants and the other using bleeding colorants. Non-bleeding colorants are typically ultramarines, micas, oxides or FD&C lake dyes. Bleeding colorants are FD&C dyes and lab colors. You should choose your colorants depending on what visual effect you want to achieve. Non-bleeding colorants will give you more of a landscape or rock-formation layering. Bleeding colorants will give you smoother definition between layers, like they are blending together over time. There's no better choice, just different choices. I'm going to show you both in this tutorial.

Step 1: Choose a recipe that is not slow tracing; I found it easier to layer when the soap batter is not watery. When soap is slow to trace, you run into situations like the upper layer penetrating into the lower layer. We want medium to heavy trace batter. If your recipe is slow to trace, consider a water discount or choose a fragrance that speeds up trace. We don't want soap on a stick either!

Step 2: I only need one more bucket other than the one I use to mix lye and oil/butter in. I don't like cleaning, so the fewer tools I can use, the better. For a one-color gradient, I only need 2 containers and 2 spatulas. Mix in your fragrance choice for the whole batch. Now eyeball how many layers you want your soap to come out with. I usually do 7 or 8 for a one-color gradient; the most I have gone is 11 layers. Pour 1/7 of the batter into the free container, then pour in 1/2 of what you just poured again. Again, I do approximation, and don't worry, you won't mess it up anyhow. Think about it this way: let's say you are doing 7 layers in total, including the top white layer. If your first layer is 1/7, you will need 1.5 times that to mix in your darkest color. If you are confused, don't be, you will understand why later.

Step 3: Add the heaviest amount of colorant you want your bottom-most layer to be into the portion of batter you just poured out. In my case, I chose activated charcoal to show you the non-bleeding gradient soap. Now pour 1/2 of the colored soap batter into the mold. You might want to smack your mold on your tabletop to 'burp' your soap to avoid trapping air in the soap. I didn't do it because I was lazy. Now you should have only 1/2 of the colored batter left in container #2. Eyeball what you have left in container #2, then pour about the same amount from container #1 (the uncolored one) into container #2 and mix well. You should now see that the color in container #2 is much lighter than layer #1. Pour 1/2 of container #2 into the mold for layer #2.

Step 4: Now repeat that process of adding uncolored batter into the colored batter to 'thin' down the color, and pour 1/2 into the mold to layer up (there's a quick sketch of the layer math below, if you like numbers).
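Just to make the approximation explicit (this is my sketch of the numbers, not something from Emily's original post): each time you top up the leftover colored half with an equal amount of plain batter, the pigment strength halves. A tiny Python snippet shows what that does across, say, 7 layers:

    # Relative pigment strength per layer for a 7-layer one-color gradient,
    # assuming each new layer mixes the leftover colored half with an equal
    # amount of uncolored batter (halving the concentration every time).
    layers = 7
    strength = 1.0  # the darkest, fully dosed bottom layer
    for layer in range(1, layers):
        print(f"layer {layer}: relative color strength {strength:.3f}")
        strength /= 2  # topped up with an equal amount of plain batter
    print(f"layer {layers}: uncolored top")

So by layer 6 the mix is down to about 3% of the original pigment strength, which is why the gradient fades so convincingly even though nothing is measured precisely.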
Tip: if your batter is too watery, by all means use a spoon to scoop, or pour onto the back of your spatula held down low, to avoid penetrating the layers you have already finished.

Step 5: Do this until you see only about 1/7 of the total batter left, pour that layer down, and proceed to do the fancy peaked top you want; the batter should be at a pretty heavy trace by now. Or, if you prefer, make a smooth top by running your spatula's side edge across your mold cavity from one end to the other to level it out.

This is the other batch I did at the same time, but using lab color. You can tell the layers are not as defined as the activated charcoal above. Here are the cut photos:

In time, this blue soap should have the layer edges blend smoothly together, creating a less defined seam. Here's a photo of the Mango Lava I made 3 weeks ago using lab color. You can see the bottom layers don't have defined edges anymore. You can see the top 2 layers are still holding a defined layer line because I added titanium dioxide to achieve a whiter soap.

After you master the trick of my one-bucket gradient soap, you can start thinking outside the box and apply it to more advanced projects like these:

Please visit Emily at her shop! Thank you, Emily, for a fantastic tutorial!!

Wednesday, July 25, 2012

Emily, of Shieh Design Studio, has created a gradient soap tutorial for us and I am thrilled. She makes beautiful soap and her clear soap molds make it easy for us to peek at her lovely layers. I am waiting for one photo and then it will be posted.

Friday, July 6, 2012

I had decided not to do soap reviews anymore. Since then, Julia, of Cocobong Soap, has ended her soap review duties, so I thought I should get back into it. I'm glad I did, because I am going back to what I love to do in my spare time: see, touch and use other handcrafted soaps. There is a deep-seated pleasure I get from holding a bar made with care and thought by another person in this world. I am going back to the beginning, when I was eager to begin my soap blog and got an overwhelming response with soaps from around the world... I loved giving back to the handmade community with reviews, which, at the time, didn't exist on the internet.

Tracy Wells, of Santa Barbara Soaps, sent me bamboo charcoal when I was clean out of it for my Detox soap. WOW... she sent it for no other reason than she had some and I didn't. What a generous gesture -- I was so impressed by the offering, I can't even tell you. Two weeks later I received a soapy package from her, including Salted Lavender Salt Soap.

Now, as many of you know, I am not a huge lavender fan. I mostly like lavender blends… yes, it has grown on me, just like patchouli has (in a blend, mind you). But Tracy's soap was so pleasant smelling. Tracy's soap popped into the shower with me when I received the package from California. The smell was mellow, not medicinal, and the color a calming light lilac with a swirled top. The soap was shaped like a cake instead of a straight-up bar, and the edges were smoothed when it arrived. Because salt soap is typically hard and rigid, I found the smoothed edges to be much more gentle as the bar slid over my wet skin. A stroke of genius, I say.
Bubbles, big and lovely, came to fruition quickly in the shower and then gave way to the tight, creamy bubbles that soothed my skin. Washing with Salted Lavender was a delight, and my skin felt so nice afterwards. Sometimes with salt bars there is a slight tightening of the skin, which I did get with her bar, but I didn't feel dry necessarily, just more firm, I guess, is the way to describe it. I've tried a few others of Tracy's and they are quite nice. Check out how pretty her Frisky Kitten Soap is. So cute and feminine!

Photo of Frisky on my dining room table.....

At the moment, and perhaps even forever, Tracy is only doing wholesale, so this post may be a bit of a tease, but I leave you with photos, and maybe in the future she will sell to the public. Her wholesale clients are currently in the Santa Barbara area, so if you live there you can find them. Go to Tracy's FB page for info or to browse some of the photos she has posted....

https://www.facebook.com/SantaBarbaraSoaps

One of Tracy's Salt Soaps.... so you get the idea. Photo: courtesy of Tracy.
Mid
[ 0.592105263157894, 33.75, 23.25 ]
ESTA is an international association for violin, viola, cello and double bass players and teachers who are concerned with improving the quality of music education in general and string playing in particular. The British Branch has been supporting string players since 1973 and is delighted to be working with Musicteachers.co.uk to identify teachers with a passion for learning.

What's in it for teachers?

Membership of ESTA shows a commitment to being a learning teacher/performer - reflecting on old ideas, discovering new ones and developing a deepening understanding of string teaching and playing. A quarterly professional development magazine, arco, is sent to all members, and the members' area of the website allows members to access resources such as the forum, articles and research, and the repertoire database 24/7. Additionally, membership of ESTA provides crucial liability insurance and legal cover. Find out more.

Membership of ESTA gives confidence that a teacher appreciates the need to keep in touch, and helps avoid the isolation that can be a common experience for many instrumental teachers. The network of ESTA members provides support through house meetings and other events, while the conference and the annual professional development summer school allow for more in-depth discovery and reflection.

What's in it for parents and students?

Parents can rest assured that their child's teacher has signed up to be a learning teacher, while the students of ESTA members benefit from the magazine and website of JESTA (Junior ESTA). The magazine is issued twice a year, and ESTA members can request one copy for each of their students. JESTA Play-days, courses, workshops and masterclasses are offered to the students of ESTA members at a reduced rate, and students are eligible for bursaries.
Mid
[ 0.655581947743467, 34.5, 18.125 ]
Statins and peripheral arterial disease: effects on claudication, disease progression, and prevention of cardiovascular events.

Peripheral arterial disease (PAD) of the lower limbs is the third most important site of atherosclerotic disease, alongside coronary heart disease (CHD) and cerebrovascular disease (CVD). Best medical treatment is beneficial even in patients who eventually need invasive treatment, as the safety, immediate success, and durability of intervention are greatly improved in patients who adhere to best medical treatment. In recent years, a number of studies have suggested that the ACE inhibitor ramipril and various statins, together with antiplatelet drugs, reduce cardiovascular morbidity and mortality in PAD. Patients with PAD carry a very high cardiovascular risk burden for fatal and nonfatal cerebrovascular and cardiovascular events; therefore, they need to be treated not only for the local problems deriving from arteriopathy (intermittent claudication, rest pain and/or ulcers) but, above all, to prevent vascular events. Statins not only lower the risk of vascular events, but they also improve the symptoms associated with PAD. Statins exert beneficial pleiotropic effects on hemostasis, the vasculature and inflammatory markers; there is also evidence that statins improve renal function, which is notable given that the plasma creatinine level is considered an emerging vascular risk factor.
High
[ 0.679558011049723, 30.75, 14.5 ]
Design officials reject plans for Terraces of Lafayette complex

LAFAYETTE -- Design officials have rejected plans for a much-debated 315-unit apartment complex proposed for a Lafayette hillside -- and they're advising planning leaders to do the same. Members of the city's design review commission unanimously agreed Monday to forward a recommendation to the planning commission to deny an application for the Terraces of Lafayette apartment complex. Commissioners cite concerns about the project's design and overall inconsistency with the general plan, among other issues. They also suggested developers "restart" the project. Design commissioners are scheduled to approve a resolution formalizing the recommendation Nov. 25 in advance of a December planning commission meeting. The decision applies to the application developer O'Brien Land Company submitted to the city more than two years ago to build the moderate-income apartment complex on the prominent 22-acre property at the corner of Deer Hill Road and Pleasant Hill Road. The application includes a number of permit requests, such as one that would allow for residences on land zoned for administrative and professional offices. At the first design review hearing Sept. 30, commissioners indicated they would not approve the project as proposed, and recommended the developer return with an alternative. On Oct. 15, the developer submitted a plan to reduce the number of proposed units to 208, and scale back parking spaces from 569 to 375. The number of planned buildings would remain at 14. Other changes include placing taller three-story buildings in the site's interior to reduce their visibility, and balancing on-site grading to lessen environmental impacts. Despite these modifications, project manager David Baker acknowledged the new plans were not as fleshed out as they could be. "A month is a very short amount of time to design a project this complex," Baker said. Acting as a liaison, planning commissioner Tom Chastain said he couldn't remember having a project arrive at a commission as a "non-starter," and asked the developers whether they had pre-application meetings or a design review study session with city staff. The city offers such sessions for a fee so developers and others can get preliminary feedback on issues that may arise. Project architect Norm Dyer responded that developers had initial discussions with city staff but were told further discussions were not part of the city process. In addition to the design review commission's recommendation, the planning commission will consider input from the city's parks, trails and recreation, and circulation commissions. Should planning officials decide to deny the application for the 315 units, developers could resubmit an application for the 208 units or another configuration. They could also appeal the decision to the City Council.
Low
[ 0.509977827050997, 28.75, 27.625 ]