Changes in the Navy tech tree for Germany and USSR
The Curry Resident Senior Meals Association is in need of help. The association receives some funds from the state but relies heavily on donations. Because of the Covid-19 outbreak there have only been a small number of donations, and they need your help! Any funds received go to paying the suppliers and the staff who make and deliver the meals. Because of the current situation no one is allowed to eat in the dining area. They are still making meals every day, delivering to home-bound seniors, and prepping to-go meals for people to pick up. Many of the local seniors depend on this service to have a healthy meal every day. Anything will be a big help to them and to the community.
Hydrogen is supplied to customers connected to a hydrogen pipeline system. Typically, the hydrogen is manufactured by steam methane reforming, in which a hydrocarbon and steam are reacted at high temperature to produce a synthesis gas containing hydrogen and carbon monoxide. Hydrogen is separated from the synthesis gas to produce a hydrogen product stream that is introduced into the pipeline system for distribution to the customers connected to it. Alternatively, hydrogen produced from the partial oxidation of a hydrocarbon can be recovered from a hydrogen-rich stream.

Typically, hydrogen is supplied to customers under agreements that require availability and on-stream times for the steam methane reformer or hydrogen recovery plant. When a steam methane reformer is taken off-line for unplanned or extended maintenance, the result could be a violation of such agreements. Additionally, there are instances in which customer demand can exceed the hydrogen production capacity of existing plants. Having a storage facility to supply back-up hydrogen to the pipeline is therefore desirable in connection with hydrogen pipeline operations. Considering that hydrogen production plants on average have production capacities of roughly 50 million standard cubic feet per day or greater, a hydrogen storage facility that would allow a plant to be taken off-line would, to be effective, need a storage capacity on the order of 1 billion standard cubic feet or greater.

This large storage capacity can be met by using salt caverns to store the hydrogen underground. Low-purity grades of hydrogen (i.e., below 95% purity) as well as other gases have been stored in salt caverns. Salt caverns are large underground voids that are formed by adding fresh water to the underground salt, creating brine; this process is often referred to as solution mining. Such caverns are common in the gulf states of the United States, where demand for hydrogen is particularly high. Hydrogen storage of this kind has taken place where there are no purity requirements, or less stringent (<96% pure) requirements, placed upon the hydrogen product. In such cases, the stored hydrogen is simply removed from the salt cavern without further processing.

High-purity (e.g., 99.99%) hydrogen storage within salt caverns presents several challenges. For example, storing large quantities (e.g., greater than 100 million standard cubic feet) of pure (e.g., 99.99%) gaseous hydrogen in underground salt caverns with a minimum salt purity of 75% halite (NaCl) or greater, without measurable losses of the stored hydrogen, is difficult given the properties of hydrogen. Hydrogen is the smallest and lightest element in the periodic table, with an atomic radius of 25 pm ± 5 pm. Further, hydrogen is flammable, and therefore a very dangerous chemical if not handled properly. Salt cavern walls have permeabilities over a wide range (e.g., 0–23×10^-6 Darcy) that, if not controlled properly, could easily allow gaseous hydrogen to permeate through the salt walls and escape to the surface of the formation. If the stored hydrogen were to escape and permeate through the salt formation to the surface, a dangerous situation could arise, with the potential for fatalities or immense structural damage. Consequently, high-purity hydrogen is typically considered one of the most difficult elements to contain within underground salt formations.
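A rough scale check makes the storage figure above concrete (a minimal sketch; the quantities come from the capacities quoted in this section, and the variable names are illustrative):

```python
# Back-of-envelope check of the storage scale discussed above.
plant_capacity_scfd = 50e6  # typical plant output: 50 million standard cubic feet/day
cavern_storage_scf = 1e9    # effective back-up storage: ~1 billion standard cubic feet

# Number of days a full cavern could replace an off-line plant:
backup_days = cavern_storage_scf / plant_capacity_scfd
print(f"{backup_days:.0f} days of back-up supply")  # -> 20 days
```

In other words, a cavern of the stated size covers roughly 20 days of full production from a typical plant, which is what makes extended maintenance outages survivable.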
As will be discussed, among other advantages of the present invention, an improved method and system for storing hydrogen in a salt cavern is disclosed.
DE 4 444 115 A1 discloses a single-occupant light vehicle comprising a three-wheeled carriage, wherein the three-wheeled carriage comprises two non-driven front wheels which are coupled to a steering system, and a non-jointed rear wheel which is coupled to a drive. The rear wheel is driven via a pedal crank together with a chain drive. A battery-powered electric drive can additionally be provided. The two front wheels have an independent wheel suspension with a double wishbone. All three wheels feature inclination for cornering. The vehicle can comprise full panel covering. A driver's area is arranged between the axles of the front wheels and the axle of the rear wheel, wherein a recumbent position is envisaged for the driver. An arrangement for additional luggage is not described. DE 10 2006 042 119 A1 discloses a load scooter comprising an essentially comparable three-wheeled carriage. This three-wheeled carriage also comprises two non-driven front wheels which are coupled to a steering system, and a non-jointed rear wheel which is coupled to a drive. It is additionally known from www.constin.de that the drive of the rear wheel can be embodied as an electric drive. A transport platform is situated above the axle of the front wheels. A driver's area is arranged behind it, as viewed in the direction of travel, wherein an upright position is envisaged for the driver. DE 10 2007 062 017 A1 discloses a multi-functional ground conveyor comprising a three-wheeled carriage. Various variants of an electric drive are described. With regard to the steering, however, the single wheel is always described as a jointed wheel, and the two laterally spaced wheels are always described as non-jointed wheels. A driver's area is provided between the axle of the single wheel and the axles of the two laterally spaced wheels, wherein an upright position is envisaged for the driver. A trailer device is arranged at the rear end of the vehicle, and an additional work apparatus, such as for example a load fork, can be fastened to the front end of the vehicle.
[Antitumor immunity in patients with melanoma of the uveal tract]. Over the years 1977-1987, 142 patients with malignant melanoma of the uvea were admitted to the Department of Ophthalmology, Teaching Hospital, Comenius University in Bratislava. Of the 119 patients treated by enucleation of the bulb, 74.8% were alive after a mean follow-up period of 5.9 years, whereas 25.2% of these patients died of consequences of metastases of the melanoma of the eye. The highest death rate was recorded in the first and second years following enucleation (57%) and then in the fourth year (17%). After the eighth year following enucleation of the bulb, not a single case of death from consequences of metastases of melanoma was recorded. Eight patients with localization of the tumor in the anterior part of the uveal tract (iris, ciliary body) were treated by microsurgical removal of the tumor from the bulb. Two patients developed relapse of the tumor and the eyeballs had to be enucleated. After a mean follow-up period of 6.6 years all these patients are alive and so far no signs of metastases have been observed. Moreover, visual acuity has remained good in the patients whose eyes were not enucleated. Electrocoagulation was applied in six patients. Within a year the eyes had to be enucleated in three of them due to tumor progression. In two patients the growth of the tumor seems to have been stopped, yet regression of the tumor has not been observed. Complete healing was recorded in only one patient. After a mean follow-up period of 4.5 years all these patients are alive without signs of metastases. A series of eight patients refused any kind of treatment of the melanoma. Over a follow-up period of 8.5 years, six of them (75%) died of consequences of metastases within an average of 3.2 years from the establishment of the diagnosis. Although the majority of these cases had small and middle-sized melanomas, at present only two patients (25%) are still alive. In 127 patients with histologically proved malignant melanoma of the uvea, the LAI test for the determination of uveal tumor antigen was also performed. The test was positive in 78.7% of the patients and negative in 23.3%. In a control group of patients with nontumorous eye diseases the LAI test was positive in only 2.9% and negative in 91.1%. Four months after enucleation of the bulb the LAI test was repeated and positivity was found to persist in 57.8% of the patients. (ABSTRACT TRUNCATED AT 400 WORDS)
--- abstract: 'In this paper, we study the existence problem for min-max minimal tori. We use classical conformal invariant geometric variational methods. We prove a theorem about the existence of a min-max minimal torus in Theorem \[convergence theorem\]. First we prove a strong uniformization result (Proposition \[uniformization for torus\]) using the method of [@AB]. Then we use this proposition to choose good parametrizations for our min-max sequences. We prove a compactification result (Lemma \[compactification\]) similar to that of Colding and Minicozzi [@CM], and then give bubbling convergence results similar to those of Ding, Li and Liu [@DLL]. In fact, we get an approximation result similar to the classical deformation lemma (Theorem \[main theorem\]).' author: - Xin Zhou title: '**On the existence of min-max minimal torus**' --- Introduction ============ The existence problem for minimal surfaces has long been an interesting topic. The existence of a minimizing minimal disk, i.e. the classical Plateau problem, has been known since 1931 (see Chapter 4 of [@CM1]), and many results have followed since then. In general, a minimal surface is a harmonic conformal branched immersion from a Riemann surface to a compact Riemannian manifold. Most results only consider the existence of **area minimizing** minimal surfaces in a given homotopy class. In particular, the existence of area-minimizing surfaces has been proved for all genera in a suitable sense (cf. [@SU1], [@SU2], [@CT] etc.).\ Besides minimizing minimal surfaces, we naturally ask whether there exist **min-max** minimal surfaces. Here **min-max** means that the area of the minimal surface is the min-max critical value of the area functional in a homotopy class. In general, suppose $A$ is a functional on a Banach manifold $\mathbf{M}$, and let $\Omega=\big\{v:[0,1]\rightarrow \mathbf{M},\ v\in C^{0}([0, 1], \mathbf{M})\big\}$ be the path space in $\mathbf{M}$, with $\sigma\in\Omega$. Then $\mathcal{W}_{A}=\underset{\rho\in[\sigma]}{inf}\quad\underset{t\in[0,1]}{max}A\big(\rho(t)\big)$ is the min-max critical value in the homotopy class of $\sigma$. The min-max case is more complicated than the minimizing case: from the point of view of variational methods, the approximating sequences are one-parameter families of mappings, which makes compactification difficult. J. Jost gave such an approach in his book [@J1]. Recently Colding and Minicozzi [@CM] also gave such an approach in the case of the sphere using geometric variational methods. They all used the bubble convergence of almost harmonic mappings from closed surfaces given by Sacks and Uhlenbeck [@SU1]. Colding and Minicozzi also found a good approximation sequence which plays an important role in their proof of finite time extinction of the Ricci flow.\ We will extend Colding and Minicozzi's approach to the case of the torus, i.e. to the existence of a min-max minimal torus. In fact, we give a stronger approximation for a special minimizing sequence.
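To fix ideas, here is a toy instance of this min-max scheme (an illustration added for intuition, not taken from the references): let $A(x)=(x^{2}-1)^{2}$ on $\mathbf{M}=\mathbb{R}$, and let $\sigma$ be a path from the minimum $-1$ to the minimum $1$. Every path in $[\sigma]$ must cross $0$, while the straight path satisfies $A\leq 1$, so $$\mathcal{W}_{A}=\underset{\rho\in[\sigma]}{inf}\quad\underset{t\in[0,1]}{max}A\big(\rho(t)\big)=A(0)=1,$$ and the min-max value is attained at the critical point $x=0$, an unstable critical point (a mountain pass) rather than a minimum. The min-max minimal surfaces sought below play the role of this mountain-pass point for the area functional.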
*Using the notation of Section 2.1*, the main result is: \[main theorem\] For any homotopically nontrivial path $\beta\in\Omega$, if $\mathcal{W}>0$, there exists a sequence $(\rho_{n}, \tau_{n})\in[\beta]$, with $\underset{t\in[0,1]}{max}E\big(\rho_{n}(t), \tau_{n}(t)\big)\rightarrow\mathcal{W}$, and $\forall\epsilon>0$, there exist $N$ and $\delta>0$ such that if $n>N$, then for any $t\in(0, 1)$ satisfying: $$\label{energy condition} E\big(\rho_{n}(t), \tau_{n}(t)\big)>\mathcal{W}-\delta,$$ there are possibly a conformal harmonic torus $u_{0}:T^{2}_{\tau_{0}}\rightarrow N$ and finitely many harmonic spheres $u_{i}:S^{2}\rightarrow N$, such that: $$d_{V}\big(\rho_{n}(t), \underset{i}{\cup}u_{i}\big)\leq\epsilon.$$ Here $d_{V}$ denotes the varifold distance as in Appendix A of [@CM]. This is a corollary of Theorem \[convergence theorem\] and Appendix A of [@CM]. It is a stronger approximation result than Theorem 1.14 of [@CM]: we use the energy condition \[energy condition\] for the special sequence $\rho_{n}$, while Theorem 1.14 of [@CM] uses an area condition.\ In the case of the torus, we have to include the variation of conformal structures, as discussed in [@SU2] and [@SY]. The analysis of singularities in the bubble convergence is more complicated than in the case of the sphere. We will give existence results similar to those of Ding, Li and Liu [@DLL]. In the following, we first fix our notation, and then give the outline of this paper at the end of Section \[sketch of the variational approach\].\ **Acknowledgement.** This is part of my master degree thesis at Peking University. I would like to sincerely thank my advisor Professor Gang Tian for his longtime help and encouragement. I also would like to thank Professor Weiyue Ding for several valuable talks about this problem. I would like to thank Professor Richard Schoen, Professor Tobias Colding and Professor William P. Minicozzi for their interest in this work. I would like to thank Yalong Shi for carefully reading the paper and for his suggestions. Finally I would like to thank Professor Bin Xu for his patience in listening to my preliminary ideas for this paper during his visit to the Beijing International Center for Mathematical Research. Sketch of the variational methods for min-max minimal torus =========================================================== In the paper [@CM], Colding and Minicozzi used variational methods to prove the existence of min-max minimal spheres. Let us first sketch their idea. Let $(N, h)$ be the ambient space. $\Omega=\bigg\{\gamma(t)\in C^{0} \Big([0,1],C^{0}\cap W^{1,2}(\mathbf{S^{2}}, N)\Big)\bigg\}$ is the path space. Here for all $\gamma(t)\in\Omega$, $\gamma(0)$, $\gamma(1)$ are constant mappings. In the following we call such one-parameter families of mappings $\gamma(t)\in\Omega$ **paths**. For $\beta\in\Omega$, let $[\beta]$ be the homotopy class of $\beta$ in $\Omega$. The min-max critical value is $\mathcal{W}=\underset{\rho\in [\beta]}{inf}\quad\underset{t\in [0,1]}{max}Area\big(\rho(t)\big)$. They wanted to understand the behavior of the critical points corresponding to $\mathcal{W}$. They first chose an arbitrary minimizing sequence $\tilde{\gamma}_{n}(t)\in [\beta]$, such that $\lim_{n\rightarrow\infty}\underset{t\in[0,1]}{max}Area\big(\tilde{\gamma}_{n}(t)\big)=\mathcal{W}$. Then they reparametrized these paths almost conformally to get $\gamma_{n}(t)\in [\beta]$, i.e. with $E(\gamma_{n}(t))-Area(\gamma_{n}(t))\rightarrow 0$.
Finally they perturbed $\gamma_{n}(t)$ to $\rho_{n}(t)$ by local harmonic replacement so that the new paths $\rho_{n}(t)$ have certain compactness. The existence of min-max minimal spheres follows from this construction and Sacks and Uhlenbeck's bubbling compactness [@SU1].\ We want to extend the min-max variational method of Colding and Minicozzi to the case of the torus $T^{2}$. The difference between the sphere and the torus is that the torus has more than one conformal structure, while the conformal structure of the sphere is unique. Generally speaking, the pull-back metrics of the mappings in an area minimizing sequence of paths will correspond to different conformal structures. It is natural to include the variation of the conformal structures in the min-max construction. In fact, we need to consider the Teichmüller space of the torus in order to maintain the homotopy class of the paths, as discussed in [@SY]. It is difficult to do both conformal reparametrization and compactification, and we must also consider whether the corresponding conformal structures converge. Fortunately, the Teichmüller space of $T^{2}$ is easy to handle, and the singularity arising from the non-compactness of the space of conformal structures has been treated in [@DLL] by Ding, Li and Liu. Teichmüller space of the torus and notation ----------------------------------------------------- We know that any flat torus $T^{2}$ can be viewed as the quotient of $\mathbb{C}$ by a lattice generated by a basis $\{\omega_{1}, \omega_{2}\}$. After a conformal linear transformation, we can normalize $\omega_{1}=1$ and replace $\omega_{2}$ by $\tau=\frac{\omega_{2}}{\omega_{1}}$, where $\tau$ lies in the upper half plane $\mathbb{H}$. In fact *the Teichmüller space of the torus*, $\mathcal{T}_{1}$, is just the upper half plane $\mathbb{H}$. We call each element $\tau\in\mathcal{T}_{1}$ a **mark**, and denote $\tau$ by a marked torus $(T^{2}, \tau)$ as in Definition 2.7.2 of [@J2], meaning the torus obtained by gluing the edges of the lattice parallelogram $\{1, \tau\}$, equipped with the plane metric $dzd\overline{z}$. Writing $\tau=\tau_{1}+\sqrt{-1}\tau_{2}$, we have another normalization such that the area of the corresponding torus satisfies $Area(\{\omega_{1}, \omega_{2}\})=1$, namely $\omega_{1}=\frac{1}{\sqrt{\tau_{2}}}$, $\omega_{2}=\frac{\tau_{1}}{\sqrt{\tau_{2}}}+\sqrt{-1}\sqrt{\tau_{2}}$. Let $T^{2}_{0}$ be the marked torus $(T^{2}, \sqrt{-1})$; then there is *a natural diffeomorphism $i_{\tau}$* from $(T^{2}, \tau)$ to $(T^{2}, \sqrt{-1})$, namely the quotient of the linear map of $\mathbb{C}$ fixing $1$ and sending $\tau$ to $\sqrt{-1}$. So we can also denote $\tau\in\mathcal{T}_{1}$ by $(T^{2}_{\tau}, i_{\tau})$ as on page 78 of [@J2]. We will show that every metric on $T^{2}_{0}$ is conformal to a marked torus $(T^{2}, \tau)$, with the conformal homeomorphism in the homotopy class of $i_{\tau}^{-1}$.\ Let $\tilde{\Omega}=\bigg\{ \big(\gamma(t), \tau(t)\big); \gamma(t)\in C^{0}\Big([0,1], C^{0}\cap W^{1,2}\big( (T^{2}, \tau(t)) , N \big)\Big), \tau(t)\in C^{0}\big([0,1], \mathcal{T}_{1}\big) \bigg\}$, and $\Omega=\bigg\{ \gamma(t)\in C^{0}\Big([0,1], C^{0}\cap W^{1,2}(T^{2}_{0}, N) \Big) \bigg\}$. We assume that $\gamma(0), \gamma(1)$ are constant mappings or map the torus to circles in $N$. We require $\tau(0)=\tau(1)=\sqrt{-1}$ if the mappings at the endpoints are constant, with no restriction otherwise.
We use varying domains $(T^{2}, \tau(t))$ in the definition of $\tilde{\Omega}$, and there are two ways to understand this: we can pull back all the $\gamma(t)$ to $T^{2}_{0}$ by $i_{\tau_{t}}^{-1}$, so that continuity is defined w.r.t. the fixed domain $T^{2}_{0}$; or we can consider $\gamma(t)$ as defined on a large ball of $\mathbb{C}$ containing all the parallelograms generated by $\{1, \tau(t)\}$, with continuity defined w.r.t. the plane ball. Since $\tau(t)$ is continuous, the two definitions are equivalent. Here $\tilde{\Omega}$ and $\Omega$ are our variational spaces.\ For the area functional, we only need to consider the variational problem in the space $\Omega$, since changing the domain metric does not change the area. But for the energy functional, different conformal structures may lead to different energies, so we have to consider the variational problem in the space $\tilde{\Omega}$. Fix a homotopically nontrivial path $\beta\in\Omega$, $\big(\beta(t), \tau_{0}(t) \big)\in\tilde{\Omega}$.[^1] Let $[\beta]$ be the homotopy class of $\beta$ in $\Omega$. Since paths $\gamma(t)\in\tilde{\Omega}$ may have different domains $T^{2}_{\tau(t)}$, *the homotopy equivalence $\alpha\sim\beta$ of $\alpha(t):T^{2}_{\tau(t)}\rightarrow N$ and $\beta(t):T^{2}_{\tau^{\prime}(t)}\rightarrow N$* is defined as follows. We identify $T^{2}_{\tau(t)}$ and $T^{2}_{\tau^{\prime}(t)}$ with $T^{2}_{0}$ by $i_{\tau(t)}$ and $i_{\tau^{\prime}(t)}$; then we can view $\alpha(t)$ and $\beta(t)$ as mappings defined on the same domain $T^{2}_{0}$ and hence define their homotopy equivalence. Let $\mathcal{W}=\underset{\rho\in[\beta]}{inf}\quad\underset{t\in[0,1]}{max}Area\big(\rho(t)\big)$. Considering the energy, we similarly define $\mathcal{W}_{E}=\underset{(\rho, \tau)\in[(\beta, \tau_{0})]}{inf}\quad\underset{t\in[0,1]}{max}E\big(\rho(t), \tau(t)\big)$ [^2]. In fact, we will show that $\mathcal{W}=\mathcal{W}_{E}$ in Remark \[equivalence of minmax area and energy\]. We are interested in the case $\mathcal{W}>0$, so we assume $\mathcal{W}>0$ in the following. Sketch of the variational approach {#sketch of the variational approach} ---------------------------------- **Question**: Can one find a minimal torus, or a minimal torus together with several minimal spheres, with total area equal to $\mathcal{W}$? Here we follow the method of Colding and Minicozzi. We want to reduce the variational problem for the area functional to that for the energy functional, i.e. to change a variational problem in $\Omega$ to one in $\tilde{\Omega}$. **Firstly**, choose a sequence $\tilde{\gamma}_{n}(t)\in[\beta]$ such that $\lim_{n\rightarrow\infty}\underset{t\in[0,1]}{max}Area\big(\tilde{\gamma}_{n}(t)\big)=\mathcal{W}$. By a smoothing argument, we can assume $\tilde{\gamma}_{n}(t)$ varies in the $C^{2}$ class w.r.t. $t$, i.e. $\tilde{\gamma}_{n}(t)\in C^{0}\Big([0,1],C^{2}(T^{2}_{0}, N)\Big)$. Pull back the ambient metric: $\tilde{g}_{n}(t)=\tilde{\gamma}_{n}(t)^{*}h$. We want to show that the $\tilde{g}_{n}(t)$, which may be degenerate, determine a family of marks $\tau_{n}(t)\in\mathcal{T}_{1}$ such that there exist almost conformal parametrizations $h_{n}(t): T^{2}_{\tau_{n}(t)}\rightarrow T^{2}_{\tilde{g}_{n}(t)}$ isotopic to $i_{\tau_{n}(t)}$. Hence the reparametrizations $\big(\gamma_{n}(t), \tau_{n}(t)\big)=\Big(\tilde{\gamma}_{n}\big(h_{n}(t), t\big), \tau_{n}(t)\Big)\in\big[\big(\tilde{\gamma}_{n}(t), \tau_{0}\big)\big]$ have energy close to area, i.e.
$E\big(\gamma_{n}(t), \tau_{n}(t)\big)-Area\big(\gamma_{n}(t)\big)\rightarrow 0$. **Next** we want to perturb $\gamma_{n}(t)$ to $\rho_{n}(t)$ to get bubble compactness. Clearly, we cannot globally change the mappings on each path to harmonic or almost harmonic ones as in the Plateau problem. Local harmonic replacement is a good choice here, and this is just what Colding and Minicozzi did. **Finally** we study what we get when the corresponding marks $\{\tau_{n}\}\subset\mathcal{T}_{1}$ converge or degenerate. If the marks $\tau_{n}$ do not degenerate, we get a good solution to the variational problem. In fact, we will show that $\big(\rho_{n}(t), \tau_{n}(t)\big)$ are almost conformal when their energies are close to the min-max value $\mathcal{W}_{E}$.\ We give the details of this approach in the following sections. Conformal parametrization ========================= We will carry out an almost conformal reparametrization of the minimizing sequence of paths $\tilde{\gamma}_{n}(t)$, and we can assume that the $\tilde{\gamma}_{n}(t)$ have some regularity. \[mollifying\] (Lemma D.1 of [@CM]) Suppose the $\tilde{\gamma}_{n}(t)$ are chosen as a minimizing sequence of paths as above; we can perturb them to get a new minimizing sequence in the same homotopy class $[\beta]$. Still denoting it by $\tilde{\gamma}_{n}(t)$, we have $\tilde{\gamma}_{n}(t)\in C^{0} \Big([0,1],C^{2}(T^{2}_{0}, N)\Big)$. Uniformization for the torus ------------------------ We need the following uniformization result. For a marked torus $T^{2}_{\tau}$, we have a standard covering map $\pi_{\tau}:\mathbb{C}\rightarrow T^{2}_{\tau}$, namely the quotient map by the lattice generated by $\{1, \tau\}$. We denote $\pi_{0}=\pi_{\sqrt{-1}}$. \[uniformization for torus\] Let $g$ be a $C^{1}$ metric on $T^{2}_{0}$. We can view $g$ as a doubly periodic metric on the complex plane $\mathbb{C}$. Then there is a unique mark $\tau\in\mathcal{T}_{1}$ and a unique orientation-preserving $C^{1,\frac{1}{2}}$ conformal diffeomorphism $h:T^{2}_{\tau}\rightarrow T^{2}_{g}$ such that $h$ is isotopic to $i_{\tau}$, with the normalization that, when the map is pulled back to $\mathbb{C}$ by $\pi_{\tau}$ and $\pi_{0}$, it maps $0$ to $0$, $1$ to $1$ and $\tau$ to $\sqrt{-1}$. Furthermore, if $g(t)$ is a family of $C^{1}$ metrics on $T^{2}_{0}$ which varies continuously in the $C^{1}$ class, i.e. $g(t)\in C^{0}\big([0,1],\ C^{1}\ metrics\big)$, and $g(t)\geq\epsilon g_{0}$ for some uniform $\epsilon>0$, and if $\tau(t), h(t)$ are the corresponding marks and normalized conformal diffeomorphisms, then $\tau(t)$ varies continuously in $\mathcal{T}_{1}$ and $h(t)$ varies continuously in $C^{0}\cap W^{1,2}(T^{2}_{\tau(t)}, T^{2}_{0})$. Here the spaces $C^{0}\cap W^{1,2}(T^{2}_{\tau(t)}, T^{2}_{0})$ have different domains $T^{2}_{\tau(t)}$, and continuity is defined as in Section 2. The existence of a lattice $\{1, \tau\}$ and a conformal homeomorphism $h:T^{2}_{\tau}\rightarrow T^{2}_{g}$ follows from Theorem 3.3.2 of [@J1] by variational methods.\ We first establish the existence of a conformal homeomorphism satisfying the above normalization. Let $f: T^{2}_{g}\rightarrow T^{2}_{\tau}$ be the inverse of the conformal homeomorphism $h$ given by the variational methods. Pulling $T^{2}_{g}$ back to $\mathbb{C}$ by $\pi_{0}$, we can view $g$ as a doubly periodic metric $(g_{ij})$. By Lemma \[domain metric\], we can write $g=\lambda|dz+\mu d\overline{z}|^{2}$, with $|\mu|\leq k<1$.
Let $\tilde{f}$ be the lifting of $f$ to the covering spaces, $\tilde{f}:\mathbb{C}\rightarrow\mathbb{C}$, via $\pi_{0}$ and $\pi_{\tau}$. After possibly composing with a conformal diffeomorphism of $T^{2}$, we can assume $\tilde{f}(1)=1$. By the uniqueness of $\mu$-conformal homeomorphisms fixing $(0,1,\infty)$ as described in Section 6.1, we know that $\tilde{f}$ is just the map $w^{\mu}$ given by Ahlfors and Bers in [@AB]. Since $\tilde{f}$ is orientation preserving, $\tilde{f}(\sqrt{-1})\in\mathbb{H}$. Denote $\tau^{\prime}=\tilde{f}(\sqrt{-1})$; since $f_{\#}$ is an isomorphism between $\pi_{1}(T^{2}_{0})$ and $\pi_{1}(T^{2}_{\tau})$, we know $\{1, \tau^{\prime}\}$ is another basis of the lattice generated by $\{1, \tau\}$. After pushing $\tilde{f}$ down by $\pi_{0}$ and $\pi_{\tau^{\prime}}$, we get $f^{\prime}$. In fact $f^{\prime}$ differs from $f$ by the automorphism $\pi_{\tau^{\prime}}\circ\pi_{\tau}^{-1}$ of $T^{2}_{0}$. $f^{\prime}$ maps $T^{2}_{g}$ conformally and homeomorphically to $T^{2}_{\tau^{\prime}}$. Since $\tilde{f}$ maps $1$ to $1$ and $\sqrt{-1}$ to $\tau^{\prime}$, we know that $f^{\prime}$ is homotopic to $i_{\tau}^{-1}$ by Lemma 2.7.1 of [@J2]. So $f^{\prime}$ and $\tau^{\prime}$ are our unique conformal homeomorphism and mark, and we will denote them by $f$ and $\tau$. Let $h=f^{-1}:T^{2}_{\tau}\rightarrow T^{2}_{g}$ be our unique conformal homeomorphism; then $h$ is isotopic to $i_{\tau}$.\ The uniqueness under the above normalization, and the continuous dependence of the conformal homeomorphisms and the marks on the metric, follow from Appendix \[apeendix1\]. For a family of metrics $g(t)$, we write $g(t)=\lambda(z)|dz+\mu(t)d\overline{z}|^{2}$, with $|\mu(t)|\leq k(\epsilon)<1$. Here the $\mu(t)$ are doubly periodic functions on $\mathbb{C}$ with periods generated by $\{1, \tau_{0}\}$, and $\mu(t)=\mu_{t}$ changes continuously in the $C^{1}$ class w.r.t. $t$ by Lemma \[domain metric\] and the following Remark \[remark of domain metric\]. Let $f(t)$ be the inverse of $h(t)$, with $\tilde{f}(t)$ and $\tilde{h}(t)$ the lifts via $\pi_{0}$ and $\pi_{\tau(t)}$. Hence $\tilde{f}(t)=w^{\mu_{t}}$ are just the maps of Ahlfors and Bers described in Appendix \[apeendix1\].\ We now show that $\tau(t)$ varies continuously w.r.t. $t$. We know that $\tau(t)=w^{\mu_{t}}(\sqrt{-1})$, and $w^{\mu_{t}}(\sqrt{-1})\rightarrow w^{\mu_{t_{0}}}(\sqrt{-1})$ as $t\rightarrow t_{0}$. This is because we have convergence under the sphere distance by Lemma \[cont1\], i.e. $d_{S^{2}}\big(w^{\mu_{t}}(\sqrt{-1}), w^{\mu_{t_{0}}}(\sqrt{-1})\big)\rightarrow 0$. We know from the variational methods that $w^{\mu_{t_{0}}}(\sqrt{-1})=\tau_{t_{0}}$ is away from $\infty$, so all the $\tau_{t}=w^{\mu_{t}}(\sqrt{-1})$ stay away from $\infty$. Since the sphere distance is equivalent to the plane distance on bounded subsets of $\mathbb{C}$, we get $|w^{\mu_{t}}(\sqrt{-1})-w^{\mu_{t_{0}}}(\sqrt{-1})|\rightarrow 0$, i.e. $\tau(t)\rightarrow\tau(t_{0})$ in $\mathcal{T}_{1}$.\ Next we give the continuous dependence of $h_{t}=f_{t}^{-1}$ on $t$. The liftings are $\mu(t)$-conformal maps $\tilde{h}_{t}:\mathbb{C}_{dwd\overline{w}}\rightarrow\mathbb{C}_{|dz+\mu(t)d\overline{z}|^{2}}$. Here we only need to consider the $\tilde{h}_{t}$ as mappings defined on a large ball $B_{R}$ containing all the parallelograms of $\{1, \tau_{t}\}$; this is possible because the $\tau_{t}$ vary continuously, so they lie in a large ball $B_{R}$ for all $t\in[0,1]$. Here the $\tilde{h}(t)$ are the conformal homeomorphism solutions of Lemma \[cont2\].
We know the convergence under the sphere distance, i.e. equation \[sphere convergence for the inverse of u-conformal map\]. The images $\tilde{h}(t)(B_{R})$ are confined to a neighborhood of $[0,1]\times[0,1]$, since the $\tilde{h}(t)$ are uniformly Hölder continuous and map the parallelograms of $\{1, \tau_{t}\}$ homeomorphically to $T^{2}_{0}$. So $\|\tilde{h}_{t}-\tilde{h}_{t_{0}}\|_{L^{\infty}(B_{R})}\rightarrow 0$, as $t\rightarrow t_{0}$, and hence: $$\|h_{t}-h_{t_{0}}\|_{C^{0}(T^{2}_{\tau_{t}}, T^{2}_{0})}\rightarrow 0.$$ From the second convergence in Lemma \[cont2\], we know $\|(\tilde{h}_{t}-\tilde{h}_{t_{0}})_{w}\|_{L^{p}(B_{R})}\rightarrow 0$, as $t\rightarrow t_{0}$, so $\|(h_{t}-h_{t_{0}})_{w}\|_{L^{p}(T^{2}_{\tau_{t}}, T^{2}_{0})}\rightarrow 0$, and hence: $$\|h_{t}-h_{t_{0}}\|_{W^{1,2}(T^{2}_{\tau_{t}}, T^{2}_{0})}\rightarrow 0.$$ Construction of the conformal reparametrization ----------------------------------------------- As above, we consider $\tilde{g}_{n}(t)=\tilde{\gamma}_{n}(t)^{*}h$, which vary continuously in the $C^{1}$ class. Since there may be degenerations, we let $g_{n}(t)=\tilde{g}_{n}(t)+\delta_{n} g_{0}$, where $g_{0}$ is the standard metric of $T^{2}_{0}$ and $\delta_{n}$ is arbitrarily small. The corresponding marks in $\mathcal{T}_{1}$ and conformal diffeomorphisms are the $\tau_{n}(t)$ and $h_{n}(t)$ given by Proposition \[uniformization for torus\]. We have the following result. \[conformal parametrization\] Using the above notation, we have reparametrizations $\big(\gamma_{n}(t), \tau_{n}(t)\big)\in\tilde{\Omega}$ of $\tilde{\gamma}_{n}(t)$, i.e. $\gamma_{n}(t)=\tilde{\gamma}_{n}\big(h_{n}(t), t\big)$, such that $\gamma_{n}(t)\in\big[\tilde{\gamma}_{n}\big]$, and $$\label{equation of conformal parametrization} E\big(\gamma_{n}(t), \tau_{n}(t)\big)-Area\big(\gamma_{n}(t)\big)\rightarrow 0,$$ as $\delta_{n}\rightarrow 0$. We know that the $h_{n}(t):T^{2}_{\tau_{n}(t)}\rightarrow T^{2}_{g_{n}(t)}$ are conformal diffeomorphisms. Letting $\gamma_{n}(t)=\tilde{\gamma}_{n}\big(h_{n}(t), t\big):T^{2}_{\tau_{n}(t)}\rightarrow N$ be the composition of our test path with the almost conformal parametrization, we know $\gamma_{n}(t)\in\Omega$. The continuity of $t\rightarrow\gamma_{n}(t)$ from $[0,1]$ to $C^{0}\cap W^{1,2}(T^{2}_{\tau_{n}(t)}, N)$ follows from the continuity of $t\rightarrow\tilde{\gamma}_{n}(t)$ in $C^{2}$ by Lemma \[mollifying\], and of $t\rightarrow h_{n}(t)$ in $C^{0}\cap W^{1,2}$ by Proposition \[uniformization for torus\]. We now show that $\gamma_{n}(t)\in[\tilde{\gamma}_{n}]$. Following our discussion of the homotopy equivalence of mappings defined on different domains in Section 2, we view $\gamma_{n}(t)$ as mappings defined on $T^{2}_{0}$ by composing with $i_{\tau_{n}(t)}^{-1}:T^{2}_{0}\rightarrow T^{2}_{\tau_{n}(t)}$ and compare them to $\tilde{\gamma}_{n}(t)$. Since $h_{n}$ is homotopic to $i_{\tau_{n}(t)}$ by Proposition \[uniformization for torus\], $h_{n}(t)\circ i_{\tau_{n}(t)}^{-1}$ is homotopic to the identity map of $T^{2}_{0}$.
Since $\gamma_{n}$ is the composition of $\tilde{\gamma}_{n}$ with $h_{n}(t)$, $\gamma_{n}\circ i_{\tau_{n}}^{-1}$ is homotopic to $\tilde{\gamma}_{n}$, hence $\gamma_{n}\sim\tilde{\gamma}_{n}$.\ We can get estimates as in Appendix D of [@CM]: $$\begin{split} E\big(\gamma_{n}(t), \tau_{n}(t)\big) &=E\big(h_{n}(t):T^{2}_{\tau_{n}(t)}\rightarrow T^{2}_{\tilde{g}_{n}(t)}\big)\leq E\big(h_{n}(t):T^{2}_{\tau_{n}(t)}\rightarrow T^{2}_{g_{n}(t)}\big)\\ &=Area\big(h_{n}(t):T^{2}_{\tau_{n}(t)}\rightarrow T^{2}_{g_{n}(t)}\big)\\ &=Area\big(T^{2}_{g_{n}(t)}\big)=\int_{T^{2}_{0}}[det\big(g_{n}(t)\big)]^{\frac{1}{2}}dvol_{0}\\ &=\int_{T^{2}_{0}}[det\big(\tilde{g}_{n}(t)\big)+\delta_{n}Tr_{g_{0}}\tilde{g}_{n}(t)+C(\tilde{g}_{n}(t))\delta_{n}^{2}]^{\frac{1}{2}}dvol_{0}\\ &\leq Area(T^{2}_{\tilde{g}_{n}(t)})+C(\tilde{g}_{n}(t))\sqrt{\delta_{n}}\\ &=Area\big(\gamma_{n}(t):T^{2}_{0}\rightarrow N\big)+C(\tilde{\gamma}_{n})\sqrt{\delta_{n}}. \end{split}$$ The first and last equalities follow from the definitions of the energy and area integrals, and the first inequality is due to the fact that $\tilde{g}_{n}(t)\leq g_{n}(t)$. Hence we have equation \[equation of conformal parametrization\] as $\delta_{n}\rightarrow 0$. \[equivalence of minmax area and energy\] We point out that the above Lemma implies that $\mathcal{W}=\mathcal{W}_{E}$. Since we always have $Area(u)\leq E(u, \tau)$, we get $\mathcal{W}\leq\mathcal{W}_{E}$. We are done if we show $\mathcal{W}_{E}\leq\mathcal{W}$. By definition $\mathcal{W}_{E}\leq\underset{t\in[0,1]}{max}E\big(\gamma_{n}(t), \tau_{n}(t)\big)$. Since $\mathcal{W}=\underset{n\rightarrow\infty}{\lim}\underset{t\in[0,1]}{max}Area\big(\gamma_{n}(t)\big)$, we have $\mathcal{W}_{E}\leq\underset{n\rightarrow\infty}{\lim}\underset{t\in[0,1]}{max}Area\big(\gamma_{n}(t)\big)=\mathcal{W}$. We have now reduced the problem in $\Omega$ to one in $\tilde{\Omega}$, as discussed above, and it is now easy to deal with the energy $E$ by analytical methods. Compactification for mappings ============================= We can now view the $\gamma_{n}(t)$ as doubly periodic mappings on $\mathbb{C}$, with periods generated by the lattices $\{1, \tau_{n}(t)\}$. So all the mappings have the same domain but different periods, and the periods vary continuously. We can carry out a perturbation procedure similar to what Colding and Minicozzi did for the sphere in [@CM]. \[compactification\] Let $[\beta]$ and $\mathcal{W}_{E}$ be as in Section 2.
For any $\big(\gamma(t), \tau(t)\big)\in[\beta]\subset\tilde{\Omega}$ with $\underset{t\in[0,1]}{max}E\big(\gamma(t), \tau(t)\big)- \mathcal{W}_{E}\ll 1$, if no slice $\gamma(t)$ is a nonconstant harmonic map, we can perturb $\gamma(t)$ to $\rho(t)$, such that $\rho(t)\in[\gamma ]$ and $E\big(\rho(t), \tau(t)\big)\leq E\big(\gamma(t), \tau(t)\big)$, and such that for any $t$ with $E\big(\gamma(t),\tau(t)\big)\geq\frac{1}{2}\mathcal{W}_{E}$, $\rho(t)$ satisfies: (\*) For any finite collection of disjoint balls $\underset{i}{\cup}B_{i}$ on $T^{2}_{\tau_{t}}$, which can also be viewed as disjoint balls in the parallelogram generated by $\{1, \tau(t)\}\subset\mathbb{C}$, such that $E\big(\rho(t), \underset{i}{\cup}B_{i}\big)\leq\epsilon_{0}$, if we let $v$ be the energy minimizing harmonic map with the same boundary value as $\rho(t)$ on $\frac{1}{8}\underset{i}{\cup}B_{i}$, then we have: $$\label{compactification formula} \int_{\frac{1}{8}\underset{i}{\cup}B_{i}}|\nabla\rho(t)-\nabla v|^{2}\leq\Psi\Big(E\big(\gamma(t), \tau(t)\big)-E\big(\rho(t), \tau(t)\big)\Big).$$ Here $\epsilon_{0}$ is some small constant, and $\Psi$ is a positive continuous function with $\Psi(0)=0$. All the results about harmonic maps on disks in the paper [@CM] of Colding and Minicozzi remain valid here. The other two most important ingredients are the continuity of local maps and the comparison of energies of local harmonic replacements. For the first, since all the balls $\underset{i}{\cup}B_{i}$ can be viewed as balls on $\mathbb{C}$, and the $\gamma(t)$ are continuous as mappings on $\mathbb{C}$, the continuity of $\gamma(t)$ restricted to local balls still holds. The comparison results concern a fixed mapping $\gamma(t)$, and when $t$ is fixed all the comparison results can be viewed as taking place on the plane, so they too remain valid here. We will give the proof by combining the results of the following sections, following the proof of Theorem 2.1 of [@CM]. To do this compactification, we use repeated local harmonic replacement, which means that we replace the map $u$ on a ball $B$ by the energy-minimizing map $H(u)$ with the same boundary value as $u$. Harmonic replacement on disks ----------------------------- In this section, we list some results about harmonic replacement on disks with small energy, as given in Section 3 of [@CM]. First we recall that for harmonic maps of small energy, the energy gap controls the difference in $W^{1,2}$-norm. Here $B_{1}\subset\mathbb{R}^{2}$ is the unit disk, and $N$ is the ambient manifold. \[energy gap\] (Theorem 3.1 of [@CM]) There exists a small constant $\epsilon_{1}$ (depending on $N$) such that for all maps $u, v\in W^{1,2}(B_{1}, N)$, if $v$ is weakly harmonic with the same boundary value as $u$, and $v$ has energy less than $\epsilon_{1}$, then we have: $$\int_{B_{1}}|\nabla u|^{2}-\int_{B_{1}}|\nabla v|^{2}\geq\frac{1}{2}\int_{B_{1}}|\nabla u-\nabla v|^{2}.$$ Thus for a small-energy harmonic map we can use the energy gap to control the difference in $W^{1,2}$ norm; accordingly, we focus on the energy gaps when we do harmonic replacement. The theorem also implies the uniqueness of small-energy weakly harmonic maps among maps with the same boundary values (Corollary 3.3 of [@CM]). Using this theorem and the boundary regularity of harmonic maps (cf. [@Q]), we have the following continuity property of harmonic replacements. \[continuity of harmonic replacement\] (Corollary 3.4 of [@CM]) Let $\epsilon_{1}$ be as in the previous theorem.
Suppose $u\in C^{0}(\overline{B}_{1})\cap W^{1,2}(B_{1})$ with energy $E(u)\leq\epsilon_{1}$; then there exists a unique energy minimizing harmonic map $v\in C^{0}(\overline{B}_{1})\cap W^{1,2}(B_{1})$ with the same boundary value as $u$. Set $\mathcal{M}=\{u\in C^{0}(\overline{B}_{1})\cap W^{1,2}(B_{1}), E(u)\leq\epsilon_{1}\}$. There exists $C$ (depending on $N$) such that for all $u_{1}, u_{2}\in\mathcal{M}$, if $w_{1}, w_{2}$ are the corresponding energy minimizing maps and $E=E(u_{1})+E(u_{2})$, then: $$|E(w_{1})-E(w_{2})|\leq C\|u_{1}-u_{2}\|_{C^{0}(\overline{B}_{1})}E+C\|\nabla u_{1}-\nabla u_{2}\|_{L^{2}(B_{1})}E^{\frac{1}{2}}.$$ If we denote $v$ by $H(u)$, the mapping $H:\mathcal{M}\rightarrow \mathcal{M}$ is continuous w.r.t. the norm on $C^{0}(\overline{B}_{1})\cap W^{1,2}(B_{1})$, where the norm is the sum of the $C^{0}(\overline{B}_{1})$-norm and the $W^{1,2}(B_{1})$-norm. We will need the following extension of the above result: \[continuity of harmonic replacement2\] Suppose $u_{i}, u$ are defined on a ball $B_{1+\epsilon}$ with energy less than $\epsilon_{1}$, and suppose $u_{i}\rightarrow u$ in $C^{0}(\overline{B}_{1+\epsilon})\cap W^{1,2}(B_{1+\epsilon})$. Choose a sequence $r_{i}\rightarrow 1$, and let $w_{i}, w$ be the mappings which coincide with $u_{i}, u$ outside $r_{i}B_{1}$ and $B_{1}$ and are energy minimizing inside $r_{i}B_{1}$ and $B_{1}$, respectively. Then $w_{i}\rightarrow w$ in $C^{0}(\overline{B}_{1+\epsilon})\cap W^{1,2}(B_{1+\epsilon})$. First we show the following claim: **Claim:** Let $\tilde{w}_{i}$ be the energy minimizing map with the same boundary value as $u$ on $r_{i}B_{1}$; then $\tilde{w}_{i}\rightarrow w$ in $C^{0}(\overline{B}_{1+\epsilon})\cap W^{1,2}(B_{1+\epsilon})$. Since $E(u, B_{1+\epsilon})\leq\epsilon_{1}<\epsilon_{SU}$, with $\epsilon_{SU}$ the constant given in [@SU1], we know that the $\tilde{w}_{i}$ have uniform inner $C^{2,\alpha}$ bounds on $B_{1}$, so $\forall r<1$, $\tilde{w}_{i}\rightarrow w^{\prime}$ in $C^{2,\alpha}(B_{r})$, where $w^{\prime}$ is a harmonic map on $B_{1}$. By a scaling argument, we can show that there is no energy concentration near the boundary of $B_{1}$. So $\tilde{w}_{i}\rightarrow w^{\prime}$ in $W^{1,2}(B_{1+\epsilon})$. We also know from [@Q], as indicated in the proof of Corollary 3.4 of [@CM], that the $\tilde{w}_{i}$ are equi-continuous near $\partial(r_{i}B_{1})$, and hence equi-continuous near $\partial B_{1}$ since $r_{i}\rightarrow 1$. So $\tilde{w}_{i}\rightarrow w^{\prime}$ in $C^{0}(\overline{B}_{1+\epsilon})$. By the uniqueness of small energy harmonic maps, Corollary 3.3 of [@CM], we know $w^{\prime}=w$. So the claim holds.\ Let $v_{i}=\Pi(\tilde{w}_{i}+u_{i}-u)$, which has the same boundary value as $u_{i}$ and $w_{i}$ on $\partial(r_{i}B_{1})$. Here $\Pi:N_{\delta}\rightarrow N$ is the nearest point projection defined on a tubular neighborhood $N_{\delta}$. When $\delta$ is small enough, we have $|d\Pi|\leq2$. So $\|v_{i}-\tilde{w}_{i}\|_{W^{1,2}(B_{1+\epsilon})}\rightarrow 0$, hence $\|v_{i}-w\|_{W^{1,2}(B_{1+\epsilon})}\rightarrow 0$ by our Claim. By Corollary \[continuity of harmonic replacement\], $|E(w_{i})-E(\tilde{w}_{i})|\rightarrow 0$, hence $|E(w_{i})-E(v_{i})|\rightarrow 0$. By Theorem \[energy gap\], $\|w_{i}-v_{i}\|_{W^{1,2}(r_{i}B_{1})}\rightarrow 0$.
So: $$\int_{B_{1+\epsilon}}|\nabla w_{i}-\nabla w|^{2}=\int_{r_{i}B_{1}}|\nabla w_{i}-\nabla w|^{2}+\int_{B_{1+\epsilon}\backslash r_{i}B_{1}}|\nabla u_{i}-\nabla w|^{2}\rightarrow 0.$$ The second integral above tends to $0$ because $u_{i}\rightarrow u$ and $w=u$ outside $B_{1}$. Hence $w_{i}\rightarrow w$ in $W^{1,2}(B_{1+\epsilon})$.\ For the $C^{0}(\overline{B}_{1+\epsilon})$ convergence, an argument similar to the proof of the claim shows that the $w_{i}$ are equi-continuous near $\partial B_{1}$, by the equi-continuity of the $u_{i}$. Recall that [@SU1] gives uniform inner $C^{2,\alpha}$ bounds for the $w_{i}$ on $B_{1}$. Hence every subsequence of $w_{i}$ has a further subsequence converging to $w$ in $C^{0}(\overline{B}_{1+\epsilon})$, so $w_{i}\rightarrow w$ in $C^{0}(\overline{B}_{1+\epsilon})$. \[remark of continuity of harmonic replacement2\] Since we always work with paths of mappings, and we do harmonic replacement on balls with continuously varying radii, this result tells us that harmonic replacement gives another continuous path if we do it continuously along the initial path. We can continuously shrink the radii of the disks on which we do harmonic replacement to $0$, so the new path given by harmonic replacement can be continuously deformed to the original one, i.e. they lie in the same homotopy class. A comparison result for repeated harmonic replacement ----------------------------------------------------- In this section, we extend the comparison result for local harmonic replacements given in Lemma 3.11 of [@CM] to the case of the torus. We will use $\mathcal{B}$ to denote a finite collection of disjoint balls on the complex plane $\mathbb{C}$. For $\mu\in[0,1]$, we denote by $\mu\mathcal{B}$ the collection of balls with the same centers as $\mathcal{B}$ but radii $\mu$ times those of $\mathcal{B}$. If $u$ is a $C^{0}\cap W^{1,2}$ mapping on the complex plane with small energy on a collection $\mathcal{B}$, let $H(u, \mathcal{B})$ be the mapping which coincides with $u$ outside $\mathcal{B}$ and is energy minimizing inside $\mathcal{B}$. If $\mathcal{B}_{1}, \mathcal{B}_{2}$ are two such collections, we denote by $H(u, \mathcal{B}_{1}, \mathcal{B}_{2})$ the map $H\big(H(u, \mathcal{B}_{1}), \mathcal{B}_{2}\big)$. We will give the relationship between the energy gaps of $u$, $H(u, \mathcal{B}_{1})$ and $H(u, \mathcal{B}_{1}, \mathcal{B}_{2})$. \[comparison\] Fix a torus $T^{2}_{\tau}$ with mark $\tau\in\mathcal{T}_{1}$, and $u\in C^{0}\cap W^{1,2}(T^{2}_{\tau}, N)$. Let $\mathcal{B}_{1}$, $\mathcal{B}_{2}$ be two finite collections of disjoint balls on $T^{2}_{\tau}$, which can also be viewed as collections of disjoint balls on $\mathbb{C}$.
If $E(u, \mathcal{B}_{i})\leq\frac{1}{3}\epsilon_{1}$ for $i=1,2$, with $\epsilon_{1}$ as in Theorem \[energy gap\], then there exists a constant $k$ depending on $N$, such that: $$\label{comparison inequality1} E(u)-E[H(u, \mathcal{B}_{1}, \mathcal{B}_{2})]\geq k\bigg(E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2})]\bigg)^{2},$$ and for any $\mu\in[\frac{1}{8}, \frac{1}{2}]$, $$\label{comparison inequality2} \frac{1}{k}\big(E(u)-E[H(u, \mathcal{B}_{1})]\big)^{\frac{1}{2}}+E(u)-E[H(u, 2\mu\mathcal{B}_{2})]\geq E[H(u, \mathcal{B}_{1})]-E[H(u, \mathcal{B}_{1}, \mu\mathcal{B}_{2})].$$ We know from the energy minimizing property of small energy harmonic maps that the following estimate holds: $$E(u)-E[H(u, \mathcal{B}_{1}, \mathcal{B}_{2})]\geq E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{1})].$$ So the above three inequalities describe the energy improvement between any two successive harmonic replacements. We will give the proof by constructing comparison mappings, using the following Lemma in our construction. Let $B_{R}$ be the ball of radius $R$ and center $0$ in $\mathbb{C}$, and $N$ the ambient manifold. \[construction from boundary\] (Lemma 3.14 of [@CM]) There exist a $\delta$ and a large constant $C$ depending on $N$, such that for any $f,g\in C^{0}\cap W^{1,2}(\partial B_{R}, N)$, if $f,g$ are equal at some point of $\partial B_{R}$, and: $$R\int_{\partial B_{R}}|f^{\prime}-g^{\prime}|^{2}\leq\delta^{2},$$ we can find some $\rho\in(0,\frac{1}{2}R]$ and a mapping $w\in C^{0}\cap W^{1,2}(B_{R}\backslash B_{R-\rho}, N)$ with $w|_{\partial B_{R}}=f$, $w|_{\partial B_{R-\rho}}=g$, which satisfies the estimate: $$\int_{B_{R}\backslash B_{R-\rho}}|\nabla w|^{2}\leq C\big(R\int_{\partial B_{R}}|f^{\prime}|^{2}+|g^{\prime}|^{2}\big)^{\frac{1}{2}}\big(R\int_{\partial B_{R}}|f^{\prime}-g^{\prime}|^{2}\big)^{\frac{1}{2}}.$$ The hypothesis and conclusion of this Lemma are scaling invariant, so we can apply it to balls of any radius $R$. (of Lemma \[comparison\]) We know that both $u$ and $H(u, \mathcal{B}_{1})$ have energy less than $\frac{2}{3}\epsilon_{1}$ on $\mathcal{B}_{1}\cup\mathcal{B}_{2}$, so Theorem \[energy gap\] shows that energy gaps control $W^{1,2}$-norm gaps in this case. Denote the balls in $\mathcal{B}_{1}$ by $B^{1}_{\alpha}$, and the balls in $\mathcal{B}_{2}$ by $B^{2}_{j}$.\ **Step 1** (inequality \[comparison inequality1\]): If the second round of harmonic replacements is done on balls disjoint from the balls of the first step, the comparison is easy. So we divide the second collection of balls into two disjoint subcollections $\mathcal{B}_{2}=\mathcal{B}_{2+}\cup\mathcal{B}_{2-}$, where $\mathcal{B}_{2+}=\{B^{2}_{j}: \frac{1}{2}B^{2}_{j}\subset B^{1}_{\alpha}\ \text{for some}\ B^{1}_{\alpha}\in\mathcal{B}_{1},\ \text{or}\ \frac{1}{2}B^{2}_{j}\cap\mathcal{B}_{1}=\emptyset\}$. We know that: $$E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2})]=E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2+})]+E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2-})].$$ We deal with $\mathcal{B}_{2+}$ and $\mathcal{B}_{2-}$ separately.\ For $\mathcal{B}_{2+}$, we have: $$\begin{split} E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2+})]&=\sum_{\{\frac{1}{2}B^{2}_{j}\cap\mathcal{B}_{1}=\emptyset\}}\big(E(u)-E[H(u, \frac{1}{2}B^{2}_{j})]\big)\\ &+\sum_{\{\frac{1}{2}B^{2}_{j}\subset B^{1}_{\alpha}\}}\big(E(u)-E[H(u, \frac{1}{2}B^{2}_{j})]\big).
\end{split}$$ For balls with $\frac{1}{2}B^{2}_{j}\cap\mathcal{B}_{1}=\emptyset$, we get from the minimizing property of small energy harmonic maps that: $$\begin{split} E(u)-E[H(u, \frac{1}{2}B^{2}_{j})] &=E[H(u, \mathcal{B}_{1})]-E[H(u, \mathcal{B}_{1}, \frac{1}{2}B^{2}_{j})]\\ &\leq E[H(u, \mathcal{B}_{1})]-E[H(u, \mathcal{B}_{1}, B^{2}_{j})]. \end{split}$$ So, we have: $$\begin{split} \sum_{\{\frac{1}{2}B^{2}_{j}\cap\mathcal{B}_{1}=\emptyset\}}\big(E(u) &-E[H(u, \frac{1}{2}B^{2}_{j})]\big)\\ &\leq\sum_{\{\frac{1}{2}B^{2}_{j}\cap\mathcal{B}_{1}=\emptyset\}}E[H(u, \mathcal{B}_{1})]-E[H(u, \mathcal{B}_{1}, B^{2}_{j})]\\ &\leq E[H(u, \mathcal{B}_{1})]-E[H(u, \mathcal{B}_{1}, \cup_{\frac{1}{2}B^{2}_{j}\cap\mathcal{B}_{1}=\emptyset}B^{2}_{j})]\\ &\leq E(u)-E[H(u, \mathcal{B}_{1}, \mathcal{B}_{2+})]. \end{split}$$ For balls with $\frac{1}{2}B^{2}_{j}\subset B^{1}_{\alpha}$, we have $H(u, \mathcal{B}_{1}, \frac{1}{2}B^{2}_{j})=H(u, \mathcal{B}_{1})$, so $$\begin{split} \int_{B^{2}_{j}}|\nabla H(u, \mathcal{B}_{1}, B^{2}_{j})|^{2} &\leq \int_{B^{2}_{j}}|\nabla [H(u, \mathcal{B}_{1}, \frac{1}{2}B^{2}_{j})]|^{2}=\int_{B^{2}_{j}}|\nabla H(u, \mathcal{B}_{1})|^{2}\\ &\leq \int_{B^{2}_{j}}|\nabla H(u, \frac{1}{2}B^{2}_{j})|^{2}. \end{split}$$ Hence: $$\int_{B^{2}_{j}}|\nabla u|^{2}-\int_{B^{2}_{j}}|\nabla H(u, \frac{1}{2}B^{2}_{j})|^{2}\leq \int_{B^{2}_{j}}|\nabla u|^{2}-\int_{B^{2}_{j}}|\nabla H(u, \mathcal{B}_{1}, B^{2}_{j})|^{2}.$$ Summarizing the results of this case, we have: $$\begin{split} \int_{\underset{B^{2}_{j}\subset B^{1}_{\alpha}}{\cup}B^{2}_{j}}|\nabla u|^{2} &-|\nabla H(u, \frac{1}{2}B^{2}_{j})|^{2} \leq \int_{\underset{B^{2}_{j}\subset B^{1}_{\alpha}}{\cup}B^{2}_{j}}|\nabla u|^{2}-|\nabla H(u, \mathcal{B}_{1}, B^{2}_{j})|^{2}\\ &\leq \int|\nabla u|^{2}-|\nabla u_{1}|^{2}+\int_{\underset{B^{2}_{j}\subset B^{1}_{\alpha}}{\cup}B^{2}_{j}}|\nabla u_{1}|^{2}-|\nabla H(u, \mathcal{B}_{1}, B^{2}_{j})|^{2} \end{split}$$ For the first term, by Theorem \[energy gap\] (here and below $u_{1}=H(u, \mathcal{B}_{1})$), we have $\int|\nabla u|^{2}-|\nabla u_{1}|^{2}\leq\int|\nabla u-\nabla u_{1}|^{2}\leq 4\Big(E(u)-E(u_{1})\Big)$. For the second term, we have $E(u_{1})-E[H(u, \mathcal{B}_{1}, \underset{B^{2}_{j}\subset B^{1}_{\alpha}}{\cup}B^{2}_{j})]\leq E(u)-E[H(u, \mathcal{B}_{1}, \underset{B^{2}_{j}\subset B^{1}_{\alpha}}{\cup}B^{2}_{j})]$. Combining these, $$E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2+})]\leq C\big(E(u)-E[H(u, \mathcal{B}_{1}, \mathcal{B}_{2+})]\big).$$\ For the collection $\mathcal{B}_{2-}$, we consider the balls separately. Fix a ball $B^{2}_{j}$ such that $B^{2}_{j}\cap B^{1}_{\alpha}\neq\emptyset$ for some $B^{1}_{\alpha}\in\mathcal{B}_{1}$. Denote $B^{2}_{j}$ by $B_{R}$, and recall $u_{1}=H(u, \mathcal{B}_{1})$. We will compare $E[H(u, \frac{1}{2}B_{R})]$ with $E[H(u_{1}, B_{R})]$.
Using simple measure theory or the Courant–Lebesgue Lemma (Lemma 3.1.1 of [@J1]), we can find a subset of $[\frac{3}{4}R, R]$ with measure $\frac{1}{36}R$, such that for any $r$ in this subset, we have: $$\int_{\partial B_{r}}|\nabla u_{1}-\nabla u|^{2} \leq\frac{9}{R}\int^{R}_{\frac{3}{4}R}\int_{\partial B_{s}}|\nabla u_{1}-\nabla u|^{2}\leq\frac{9}{r}\int_{B_{R}}|\nabla u_{1}-\nabla u|^{2},$$ $$\int_{\partial B_{r}}|\nabla u_{1}|^{2}+|\nabla u|^{2} \leq\frac{9}{R}\int^{R}_{\frac{3}{4}R}\int_{\partial B_{s}}|\nabla u_{1}|^{2}+|\nabla u|^{2}\leq\frac{9}{r}\int_{B_{R}}|\nabla u_{1}|^{2}+|\nabla u|^{2}.$$ By choosing $\epsilon_{1}$ small enough, we can get $r\int_{\partial B_{r}}|\nabla u_{1}|^{2}+|\nabla u|^{2}\leq\delta^{2}$ and $r\int_{\partial B_{r}}|\nabla u_{1}-\nabla u|^{2}\leq\delta^{2}$, with $\delta$ as in Lemma \[construction from boundary\] above. Since $\frac{1}{2}B_{R}\cap B^{1}_{\alpha}\neq\emptyset$ but $\frac{1}{2}B_{R}\nsubseteq B^{1}_{\alpha}$, $u$ and $u_{1}$ must be equal at some point on $\partial B_{r}$. So by Lemma \[construction from boundary\], we can find a $\rho\in(0,\frac{1}{2}r]$ and a mapping $w\in C^{0}\cap W^{1,2}(B_{r}\backslash B_{r-\rho})$ with $w|_{\partial B_{r}}=u_{1}$, $w|_{\partial B_{r-\rho}}=u$, and: $$\label{construction of w} \begin{split} \int_{B_{r}\backslash B_{r-\rho}}|\nabla w|^{2} &\leq C\big(r\int_{\partial B_{r}}|\nabla u_{1}-\nabla u|^{2}\big)^{\frac{1}{2}}\big(r\int_{\partial B_{r}}|\nabla u_{1}|^{2}+|\nabla u|^{2}\big)^{\frac{1}{2}}\\ &\leq C\big(\int_{B_{R}}|\nabla u_{1}-\nabla u|^{2}\big)^{\frac{1}{2}}\big(\int_{B_{R}}|\nabla u_{1}|^{2}+|\nabla u|^{2}\big)^{\frac{1}{2}}. \end{split}$$ Define a comparison map $v$ on $B_{R}$ by: $$v = \left\{ \begin{array}{ll} u_{1} & \textrm{on $B_{R}\backslash B_{r}$}\\ w & \textrm{on $B_{r}\backslash B_{r-\rho}$}\\ H(u, B_{r})(\frac{r}{r-\rho} x) & \textrm{on $B_{r-\rho}$} \end{array} \right..$$ We know $E[H(u_{1}, B_{R})]\leq E(v)$, since $H(u_{1}, B_{R})$ is energy minimizing among all maps with the same boundary value on $B_{R}$. So we have: $$\begin{split} \int_{B_{R}}|\nabla H(u_{1}, B_{R})|^{2} &\leq\int_{B_{R}}|\nabla v|^{2}\\ &=\int_{B_{R}\backslash B_{r}}|\nabla u_{1}|^{2}+\int_{B_{r}\backslash B_{r-\rho}}|\nabla w|^{2}+\int_{B_{r-\rho}}|\nabla H(u, B_{r})(\frac{r}{r-\rho}\ \cdot)|^{2}\\ &=\int_{B_{R}\backslash B_{r}}|\nabla u_{1}|^{2}+\int_{B_{r}\backslash B_{r-\rho}}|\nabla w|^{2}+\int_{B_{r}}|\nabla H(u, B_{r})|^{2}. \end{split}$$ The second equality is due to the conformal invariance of the Dirichlet integral. Hence $$\begin{split} &\int_{\frac{1}{2}B_{R}}|\nabla u|^{2}-\int_{\frac{1}{2}B_{R}}|\nabla H(u, \frac{1}{2}B_{R})|^{2} \leq\int_{B_{r}}|\nabla u|^{2}-\int_{B_{r}}|\nabla H(u, B_{r})|^{2}\\ &\leq\int_{B_{r}}|\nabla u|^{2}-\int_{B_{R}}|\nabla H(u_{1}, B_{R})|^{2}+\int_{B_{r}\backslash B_{r-\rho}}|\nabla w|^{2}+\int_{B_{R}\backslash B_{r}}|\nabla u_{1}|^{2}\\ &\leq\int_{B_{R}}|\nabla u_{1}|^{2}-\int_{B_{R}}|\nabla H(u_{1}, B_{R})|^{2}+\int_{B_{r}\backslash B_{r-\rho}}|\nabla w|^{2}\\ &+\int_{B_{r}}|\nabla u|^{2}-\int_{B_{r}}|\nabla u_{1}|^{2}. \end{split}$$ By an argument similar to the above, we know $\int|\nabla u|^{2}-|\nabla u_{1}|^{2}\leq 4\Big(E(u)-E(u_{1})\Big)$.
Putting estimate \[construction of w\] into the above inequality and summing over $B^{2}_{j}\in\mathcal{B}_{2-}$: $$\begin{split} E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2-})]&\leq E(u_{1})-E[H(u_{1}, \mathcal{B}_{2-})]\\ &+C\big(E(u)-E(u_{1})\big)^{\frac{1}{2}}+E(u)-E(u_{1})\\ &=E(u)-E[H(u_{1}, \mathcal{B}_{2-})]+C\big(E(u)-E(u_{1})\big)^{\frac{1}{2}}\\ &\leq E(u)-E[H(u, \mathcal{B}_{1}, \mathcal{B}_{2})]+C\big(E(u)-E[H(u, \mathcal{B}_{1}, \mathcal{B}_{2})]\big)^{\frac{1}{2}}. \end{split}$$ Using the fact that all the maps have energy less than $\frac{1}{2}\epsilon_{1}$, we have: $$E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2-})]\leq C^{\prime}\big(E(u)-E[H(u, \mathcal{B}_{1}, \mathcal{B}_{2})]\big)^{\frac{1}{2}}.$$ Combining the results on $\mathcal{B}_{2+}$ and $\mathcal{B}_{2-}$, we have: $$E(u)-E[H(u, \frac{1}{2}\mathcal{B}_{2})]\leq C\big(E(u)-E[H(u, \mathcal{B}_{1}, \mathcal{B}_{2})]\big)^{\frac{1}{2}},$$ i.e. the first inequality \[comparison inequality1\].\ **Step 2** (inequality \[comparison inequality2\]): In this step, we also divide $\mathcal{B}_{2}$ into two classes, with $\mathcal{B}_{2+}=\{B^{2}_{j}:\mu B^{2}_{j}\subset B^{1}_{\alpha}\ \text{for some}\ B^{1}_{\alpha}\in\mathcal{B}_{1},\ \text{or}\ \mu B^{2}_{j}\cap\mathcal{B}_{1}=\emptyset\}$. For $\mu B^{2}_{j}\subset B^{1}_{\alpha}$, we have $H(u, \mathcal{B}_{1})=H(u, \mathcal{B}_{1}, \mu B^{2}_{j})$, so we need not consider such balls. For $\mu B^{2}_{j}\cap\mathcal{B}_{1}=\emptyset$, we have: $$E[H(u, \mathcal{B}_{1})]-E[H(u, \mathcal{B}_{1}, \mu B^{2}_{j})]=E(u)-E[H(u, \mu B^{2}_{j})]\leq E(u)-E[H(u, 2\mu B^{2}_{j})].$$ Summing over all the balls in $\mathcal{B}_{2+}$, we have: $$E[H(u, \mathcal{B}_{1})]-E[H(u, \mathcal{B}_{1}, \mu\mathcal{B}_{2+})]\leq E(u)-E[H(u, 2\mu\mathcal{B}_{2+})].$$\ For the class $\mathcal{B}_{2-}$, we use a method similar to the above. The differences are that $B_{R}=2\mu B^{2}_{j}$, and that in the definition of $v$ the roles of $u$ and $u_{1}$ are exchanged: $$v = \left\{ \begin{array}{ll} u & \textrm{on $B_{R}\backslash B_{r}$}\\ w & \textrm{on $B_{r}\backslash B_{r-\rho}$}\\ H(u_{1}, B_{r})(\frac{r}{r-\rho}\ x) & \textrm{on $B_{r-\rho}$} \end{array} \right..$$ So we have: $$\int_{B_{R}}|\nabla H(u, B_{R})|^{2}\leq\int_{B_{R}\backslash B_{r}}|\nabla u|^{2}+\int_{B_{r}\backslash B_{r-\rho}}|\nabla w|^{2}+\int_{B_{r}}|\nabla H(u_{1}, B_{r})|^{2}.$$ And $$\begin{split} &\int_{\frac{1}{2}B_{R}}|\nabla u_{1}|^{2}-\int_{\frac{1}{2}B_{R}}|\nabla H(u_{1}, \frac{1}{2}B_{R})|^{2}\leq\int_{B_{r}}|\nabla u_{1}|^{2}-\int_{B_{r}}|\nabla H(u_{1}, \frac{1}{2}B_{R})|^{2}\\ &\leq\int_{B_{r}}|\nabla u_{1}|^{2}-\int_{B_{R}}|\nabla H(u, B_{R})|^{2}+\int_{B_{R}\backslash B_{r}}|\nabla u|^{2}+\int_{B_{r}\backslash B_{r-\rho}}|\nabla w|^{2}\\ &\leq\int_{B_{R}}|\nabla u|^{2}-\int_{B_{R}}|\nabla H(u, B_{R})|^{2}+\int_{B_{r}\backslash B_{r-\rho}}|\nabla w|^{2}+\int_{B_{r}}|\nabla u_{1}|^{2}-|\nabla u|^{2}. \end{split}$$ Here we use the inequality $\int|\nabla u|^{2}-|\nabla u_{1}|^{2}\leq 4\Big(E(u)-E(u_{1})\Big)$ again. Using estimate \[construction of w\] again, observing that $u$, $u_{1}$ have local energy less than $\frac{1}{3}\epsilon_{1}$, and summing over $B^{2}_{j}\in\mathcal{B}_{2-}$: $$E(u_{1})-E[H(u_{1}, \mu\mathcal{B}_{2-})]\leq E(u)-E[H(u, 2\mu\mathcal{B}_{2-})]+C\big(E(u)-E(u_{1})\big)^{\frac{1}{2}}.$$ Combining the results on $\mathcal{B}_{2+}$ and $\mathcal{B}_{2-}$, we get inequality \[comparison inequality2\].
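For intuition about the replacement operation $H(u, \mathcal{B})$ used throughout this section, it may help to record the linear model case $N=\mathbb{R}$ (an added illustration under this simplifying assumption, with the convention $E=\frac{1}{2}\int|\nabla\cdot|^{2}$): if $u|_{\partial B_{1}}(\theta)=\sum_{n\in\mathbb{Z}}a_{n}e^{in\theta}$, then the harmonic replacement on $B_{1}$ is the Poisson extension $$H(u, B_{1})(re^{i\theta})=\sum_{n\in\mathbb{Z}}a_{n}r^{|n|}e^{in\theta},\qquad E\big(H(u, B_{1}), B_{1}\big)=\pi\sum_{n\in\mathbb{Z}}|n|\,|a_{n}|^{2},$$ which has the least energy among all extensions of the boundary data. The energy gap $E(u, B_{1})-E\big(H(u, B_{1}), B_{1}\big)$ then measures exactly how far $u$ is from its harmonic extension, which is the quantity that the comparison inequalities above propagate through repeated replacements.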
Construction of the perturbation -------------------------------- To construct a perturbation satisfying condition $(*)$ in Lemma \[compactification\], it suffices to control energy gaps instead of the $W^{1,2}$-norm. Since we only focus on balls with small energy, there must be a maximal possible energy decrease for a fixed map on such balls. If we first do harmonic replacement on such balls, we can then control the energy decrease for harmonic replacement on other small-energy balls by the comparison Lemma \[comparison\]. For a path $\big(\sigma(t), \tau(t)\big)\in\tilde{\Omega}$ and $\epsilon\in(0, \epsilon_{1}]$, define: $e_{\epsilon, \sigma(t)}=\sup_{\mathcal{B}}\{E\big(\sigma(t), \tau(t)\big)-E[H(\sigma(t), \frac{1}{2}\mathcal{B}), \tau(t)]\}$. Here $\mathcal{B}$ ranges over all finite collections of disjoint balls on $T^{2}_{\tau(t)}$ satisfying $E\big(\sigma(t), \mathcal{B}\big)\leq\epsilon$. We know $e_{\epsilon, \sigma(t)}>0$ if $\big(\sigma(t), \tau(t)\big)$ is not harmonic. $e_{\epsilon, \sigma}$ enjoys the following continuity property: \[continuity of maximal energy decrease\] With notation as above, for all $t\in(0,1)$, if $\sigma(t)$ is not harmonic, we can find a neighborhood $I^{t}\subset(0,1)$ of $t$, depending on $t$, $\epsilon$ and the path $\sigma$, such that $$e_{\frac{1}{2}\epsilon, \sigma(s)}\leq 2 e_{\epsilon, \sigma(t)},$$ for $s\in 2 I^{t}$. $\sigma(t)\in C^{0}\cap W^{1,2}(T^{2}_{\tau(t)})$ can be viewed as defined on a uniform domain $B_{R}\subset\mathbb{C}$ with $\{1, \tau(t)\}\subset B_{R}$ for all $t\in[0,1]$, i.e. $\sigma\in C^{0}\big([0,1], C^{0}\cap W^{1,2}(B_{R}, N)\big)$. Since $e_{\epsilon, \sigma(t)}>0$, we can find a neighborhood $\tilde{I}$ of $t$ such that for all $s\in\tilde{I}$ and for any $\mathcal{B}\subset B_{R}$, we have $$\frac{1}{2}\int_{\mathcal{B}}|\nabla\sigma(s)-\nabla\sigma(t)|^{2}\leq \min\{\frac{1}{4}e_{\epsilon, \sigma(t)}, \frac{1}{2}\epsilon\}.$$ For fixed $s\in\tilde{I}$, we can find a finite collection of balls $\mathcal{B}\subset B_{R}$ such that $E\big(\sigma(s), \mathcal{B}\big)\leq\frac{1}{2}\epsilon$ and $E\big(\sigma(s)\big)-E[H(\sigma(s), \frac{1}{2}\mathcal{B})]\geq\frac{3}{4}e_{\frac{1}{2}\epsilon, \sigma(s)}$, by the definition of $e_{\frac{1}{2}\epsilon, \sigma(s)}$. Hence $E\big(\sigma(t), \mathcal{B}\big)\leq E\big(\sigma(s), \mathcal{B}\big)+\frac{1}{2}\epsilon\leq\epsilon$, so we have $E\big(\sigma(t)\big)-E[H(\sigma(t), \frac{1}{2}\mathcal{B})]\leq e_{\epsilon, \sigma(t)}$. Thus: $$\begin{split} E\big(\sigma(s)\big)-E[H(\sigma(s), \frac{1}{2}\mathcal{B})] &\leq |E\big(\sigma(t)\big)-E\big(\sigma(s)\big)|+E\big(\sigma(t)\big)-E[H(\sigma(t), \frac{1}{2}\mathcal{B})]\\ &+|E[H(\sigma(t), \frac{1}{2}\mathcal{B})]-E[H(\sigma(s), \frac{1}{2}\mathcal{B})]|. \end{split}$$ By Corollary \[continuity of harmonic replacement\], after possibly shrinking the neighborhood $\tilde{I}$ to a smaller one $I$, we will have $|E\big(\sigma(t)\big)-E\big(\sigma(s)\big)|\leq\frac{1}{4}e_{\epsilon, \sigma(t)}$ and $|E[H(\sigma(t), \frac{1}{2}\mathcal{B})]-E[H(\sigma(s), \frac{1}{2}\mathcal{B})]|\leq\frac{1}{4}e_{\epsilon, \sigma(t)}$. So we know $E\big(\sigma(s)\big)-E[H(\sigma(s), \frac{1}{2}\mathcal{B})]\leq\frac{3}{2}e_{\epsilon, \sigma(t)}$, and hence $e_{\frac{1}{2}\epsilon, \sigma(s)}\leq 2e_{\epsilon, \sigma(t)}$. Now we will find a good family of coverings of the time parameter on which to do harmonic replacement for fixed $\gamma(t)$. In fact, these coverings will overlap at most twice at any fixed time $t$.
Let $\big(\gamma(t), \tau(t)\big)$ be as in Lemma \[compactification\], and $B_{R}\supset\{1, \tau(t)\}$ as above. There exist $m$ collections of disjoint balls $\mathcal{B}_{1},\cdots, \mathcal{B}_{m}\subset B_{R}$, which are disjoint balls on $T^{2}_{\tau(t)}$ after taking the quotient by $\{1, \tau(t)\}$, and continuous functions $r_{j}:[0,1]\rightarrow[0,1]$, $j=1,\cdots,m$, satisfying: $1^{\circ}$. At most two $r_{j}$ are positive for a fixed $t$, and $E\big(\gamma(t), r_{j}(t)\mathcal{B}_{j}\big)\leq\frac{1}{3}\epsilon_{1}$; $2^{\circ}$. If $t\in[0,1]$ is such that $E\big(\gamma(t), \tau(t)\big)\geq\frac{1}{2}\mathcal{W}$, there exists a $j$ such that $E\big(\gamma(t)\big)-E[H(\gamma(t), \frac{1}{2}r_{j}\mathcal{B}_{j})]\geq\frac{1}{8}e_{\frac{1}{8}\epsilon_{1}, \gamma(t)}$. By continuity, $I=\{t\in[0,1]:E\big(\gamma(t), \tau(t)\big)\geq\frac{1}{2}\mathcal{W}\}$ is a compact subset of $(0,1)$, since the boundary maps $\gamma(0), \gamma(1)$ have energy almost $0$ by our almost conformal parametrization. Since $\gamma(t)$ has no nonconstant harmonic slices, for all $t\in I$ we can find a finite collection of disjoint balls $\mathcal{B}_{t}$ such that $E\big(\gamma(t), \mathcal{B}_{t}\big)\leq\frac{1}{4}\epsilon_{1}$, and: $$E\big(\gamma(t)\big)-E[H(\gamma(t), \frac{1}{2}\mathcal{B}_{t})]\geq\frac{1}{2}e_{\frac{1}{4}\epsilon_{1}, \gamma(t)}>0.$$ By Lemma \[continuity of maximal energy decrease\] and the continuity of $\gamma$, we can find a neighborhood $I^{t}\ni t$ such that $e_{\frac{1}{8}\epsilon_{1}, \gamma(s)}\leq 2 e_{\frac{1}{4}\epsilon_{1}, \gamma(t)}$ and $E\big(\gamma(s), \mathcal{B}_{t}\big)\leq\frac{1}{3}\epsilon_{1}$ for $s\in 2I^{t}$. By the continuity of harmonic replacement, Corollary \[continuity of harmonic replacement\], after possibly shrinking $I^{t}$, we get for $s\in 2I^{t}$: $$|\{E\big(\gamma(t)\big)-E[H(\gamma(t), \frac{1}{2}\mathcal{B}_{t})]\}-\{E\big(\gamma(s)\big)-E[H(\gamma(s), \frac{1}{2}\mathcal{B}_{t})]\}|\leq\frac{1}{4}e_{\frac{1}{4}\epsilon_{1}, \gamma(t)}.$$ So we have $E\big(\gamma(s)\big)-E[H(\gamma(s), \frac{1}{2}\mathcal{B}_{t})]\geq\frac{1}{4}e_{\frac{1}{4}\epsilon_{1}, \gamma(t)}\geq\frac{1}{8}e_{\frac{1}{8}\epsilon_{1}, \gamma(s)}$, for $s\in 2I^{t}$. By the compactness of $I$, we can find a finite covering $\{I^{t_{i}}\}$ of $I$, and we can shrink the $I^{t_{i}}$ so that each $I^{t_{i}}$ intersects at most two of the other $I^{t_{k}}$, and those two intervals do not intersect each other. Choose $\mathcal{B}_{j}=\mathcal{B}_{t_{j}}$, and choose $r_{j}$ equal to $1$ on $I^{t_{j}}$ and $0$ outside $2I^{t_{j}}$. We also require that $r_{j}(t)=0$ if $t$ lies in another interval $I^{t_{l}}$ which does not intersect $I^{t_{j}}$. It is easy to see that these $\mathcal{B}_{j}$ and $r_{j}$ satisfy the Lemma. (of Lemma \[compactification\]) Choose the coverings $\mathcal{B}_{j}$ and functions $r_{j}$ as in the above Lemma. Let $\gamma^{0}(t)=\gamma(t)$ and $\gamma^{k}(t)=H\big(\gamma^{k-1}(t), r_{k}(t)\mathcal{B}_{k}\big)$ for $k=1, \cdots, m$, and let $\rho(t)=\gamma^{m}(t)$. We will show that $\rho\in[\gamma]$. By Corollary \[continuity of harmonic replacement2\], we know $t\rightarrow\gamma^{k}(t)$ is continuous from $[0,1]$ to $C^{0}\cap W^{1,2}$, so $\rho\in\Omega$. Since we can continuously shrink the $r_{j}$ to $0$, Corollary \[continuity of harmonic replacement2\] and Remark \[remark of continuity of harmonic replacement2\] show that we can continuously deform $\rho$ to $\gamma$ in $\Omega$. So $\rho\in[\gamma]$.
Clearly we have $E\big(\rho(t)\big)\leq E\big(\gamma(t)\big)$.\ Now we show property $(*)$. Property $1^{\circ}$ of the above Lemma shows that there are at most two steps of harmonic replacement from $\gamma$ to $\rho$, and for fixed $t$ with $E\big(\gamma(t)\big)\geq\frac{1}{2}\mathcal{W}$ we denote the possible intermediate nontrivial harmonic replacement by $\gamma^{k}(t)$. For any finite collection of disjoint balls $\mathcal{B}=\underset{i}{\cup}B_{i}$ with $E\big(\rho(t), \mathcal{B})\leq\frac{1}{12}\epsilon_{1}$, we may assume that $\gamma(t), \gamma^{k}(t)$ have energy at most $\frac{1}{8}\epsilon_{1}$ on $\mathcal{B}$, for otherwise we will have a lower bound on $E\big(\gamma(t)\big)-E\big(\rho(t)\big)$, and hence inequality \[compactification formula\] holds. By property $2^{\circ}$ of the above Lemma, the energy decrease from $\gamma(t)$ to $\gamma^{k}(t)$ or from $\gamma^{k}(t)$ to $\rho(t)$ is at least $\frac{1}{8}e_{\frac{1}{8}\epsilon_{1}, \gamma(t)}$. At worst, Lemma \[comparison\] gives the estimate: $$E\big(\gamma(t)\big)-E\big(\rho(t)\big)\geq k\big(\frac{1}{8}e_{\frac{1}{8}\epsilon_{1}, \gamma(t)}\big)^{2}.$$ Now using inequality \[comparison inequality2\] of Lemma \[comparison\] with $\mu=\frac{1}{8},\frac{1}{4}$ twice, in the case that two $r_{j}(t)>0$, we have: $$\begin{split} E\big(\rho(t)\big)-E[H(\rho(t), \frac{1}{8}\mathcal{B})] &\leq E\big(\gamma^{k}(t)\big)-E[H(\gamma^{k}(t), \frac{1}{4}\mathcal{B})]+\frac{1}{k}[E\big(\gamma^{k}(t)\big)-E\big(\rho(t)\big)]^{\frac{1}{2}}\\ &\leq E\big(\gamma(t)\big)-E[H(\gamma(t), \frac{1}{2}\mathcal{B})]+\frac{1}{k}[E\big(\gamma(t)\big)-E\big(\gamma^{k}(t)\big)]^{\frac{1}{2}}\\ &+\frac{1}{k}[E\big(\gamma(t)\big)-E\big(\rho(t)\big)]^{\frac{1}{2}}\\ &\leq e_{\frac{1}{8}\epsilon_{1}, \gamma(t)}+C [E\big(\gamma(t)\big)-E\big(\rho(t)\big)]^{\frac{1}{2}}\\ &\leq C[E\big(\gamma(t)\big)-E\big(\rho(t)\big)]^{\frac{1}{2}}. \end{split}$$ It is easy to get similar estimates in the case where only one $r_{j}(t)>0$. If we choose $\epsilon_{0}=\frac{1}{12}\epsilon_{1}$ and $\Psi$ to be a square-root function, then together with Theorem \[energy gap\] we get property $(*)$. \[nonconstant slice of minimizing sequence\] Before going on, we have to place some restrictions on the area minimizing sequence $\tilde{\gamma}_{n}(t)$. In fact, we can assume that the $\tilde{\gamma}_{n}(t)$ have no non-constant harmonic slices, i.e. $\big(\tilde{\gamma}_{n}(t), T^{2}_{0}\big)$ is not harmonic unless it is a constant map. We can arrange this by a reparametrization on $T^{2}_{0}$, as on page 10 of [@CM]. In fact, after a small perturbation we can assume $\tilde{\gamma}_{n}(t)$ is a constant map on a small region of $T^{2}_{0}$. Since $\gamma_{n}(t)$ differs from $\tilde{\gamma}_{n}(t)$ by a diffeomorphism from $T^{2}_{\tau_{n}(t)}$ to $T^{2}_{0}$, $\gamma_{n}(t)$ is also a constant map on a small region of $T^{2}_{\tau_{n}(t)}$. Hence $\gamma_{n}(t)$ is not harmonic unless it is a constant map, by the unique continuation of harmonic maps (Corollary 2.6.1 of [@J1]). So we can apply Lemma \[compactification\] to $\big(\gamma_{n}(t), \tau_{n}(t)\big)$. Hence there always exists a min-max sequence $\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)$ such that $E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)\rightarrow \mathcal{W}$ satisfying property $(*)$ of Lemma \[compactification\], which will imply bubbling convergence of $\{\rho_{n}(t_{n}), \tau_{n}(t_{n})\}$. But we have to remember that we do not know the behavior of $\tau_{n}(t_{n})$, so we will discuss two cases in the next section.
Convergence results =================== In the paper [@DLL], Ding, Li and Liu discussed bubbling convergence results for almost harmonic maps from tori whose conformal structures converge or diverge. If the conformal structures converge, the sequence of almost harmonic maps will bubble converge to a minimal torus together with possibly several minimal spheres. Here the convergence of the conformal structures is what makes the existence of a nontrivial minimal torus possible. If the conformal structures diverge to infinity, the bubbling limits contain only several minimal spheres, with the body map from the torus degenerating. We will have similar results for our minimizing sequences $\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)$. In fact, our sequences are almost conformal. \[almost conformal for perturbed sequence\] If $E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)\rightarrow\mathcal{W}$, we have $E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)-Area\big(\rho_{n}(t_{n})\big)\rightarrow 0$. Although after the perturbation in Section 4, $\big(\rho_{n}(t), \tau_{n}(t)\big)$ may be far from conformal for some $t\in[0,1]$, this result tells us that it is still almost conformal for the mappings with energy close to $\mathcal{W}$. We know $\underset{t}{\max} E\big(\gamma_{n}(t), \tau_{n}(t)\big)\rightarrow\mathcal{W}$, and $E\big(\gamma_{n}(t), \tau_{n}(t)\big)\geq E\big(\rho_{n}(t), \tau_{n}(t)\big)$. So we have $E\big(\gamma_{n}(t_{n}), \tau_{n}(t_{n})\big)-E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)\rightarrow 0$. As we know from the construction of $\rho_{n}(t)$ from $\gamma_{n}(t)$, $\rho_{n}(t)$ is obtained from $\gamma_{n}(t)$ by at most two harmonic replacements on balls where $\gamma_{n}(t)$ has energy less than $\epsilon_{1}$. We denote the possible intermediate harmonic replacement by $\gamma^{k}_{n}(t)$, as in the proof of Lemma \[compactification\]. From Theorem \[energy gap\], we know that $\|\nabla\gamma_{n}(t_{n})-\nabla\gamma^{k}_{n}(t_{n})\|^{2}_{L^{2}}\leq 4 [E\big(\gamma_{n}(t_{n}), \tau_{n}(t_{n})\big)-E\big(\gamma^{k}_{n}(t_{n}), \tau_{n}(t_{n})\big)]\rightarrow 0$, and $\|\nabla\gamma^{k}_{n}(t_{n})-\nabla\rho_{n}(t_{n})\|^{2}_{L^{2}}\leq 4 [E\big(\gamma^{k}_{n}(t_{n}), \tau_{n}(t_{n})\big)-E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)]\rightarrow 0$. Since the energies of $\gamma_{n}(t)$ and $\rho_{n}(t)$ are all bounded, we know that $|Area\big(\gamma_{n}(t_{n})\big)-Area\big(\rho_{n}(t_{n})\big)|\leq |Area\big(\gamma_{n}(t_{n})\big)-Area\big(\gamma^{k}_{n}(t_{n})\big)|+|Area\big(\gamma^{k}_{n}(t_{n})\big)-Area\big(\rho_{n}(t_{n})\big)| \leq C\{\|\nabla\gamma_{n}(t_{n})-\nabla\gamma^{k}_{n}(t_{n})\|_{L^{2}}+\|\nabla\gamma^{k}_{n}(t_{n})-\nabla\rho_{n}(t_{n})\|_{L^{2}}\}\rightarrow 0$. As $E\big(\gamma_{n}(t_{n}), \tau_{n}(t_{n})\big)-Area\big(\gamma_{n}(t_{n})\big)\rightarrow 0$, we have $E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)-Area\big(\rho_{n}(t_{n})\big)\rightarrow 0$. To discuss bubble convergence for $\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)$, we first talk about *the convergence of the metrics* given by $\tau_{n}(t_{n})\in\mathcal{T}_{1}$. In fact, two metrics $\tau$ and $\tau^{\prime}$ are conformally equivalent if they lie in the same orbit of $PSL(2, \mathbb{Z})$. Denote by $\mathcal{M}_{1}=\{z\in\mathbb{C}: |z|\geq 1,\ \mathrm{Im}\, z>0,\ -\frac{1}{2}<\mathrm{Re}\, z\leq\frac{1}{2},\ \text{and if}\ |z|=1,\ \mathrm{Re}\, z\geq 0\}$ the fundamental region of $PSL(2, \mathbb{Z})$, which is also the moduli space of all conformal structures on $T^{2}$.
So every such metric in $\mathcal{T}_{1}$ is conformally equivalent to an element of $\mathcal{M}_{1}$ after a $PSL(2, \mathbb{Z})$-action. We say *a sequence $\{\tau_{n}\}$ converges to $\tau_{0}\in\mathcal{M}_{1}$* if, after being conformally translated to $\{\tau^{\prime}_{n}\}\subset\mathcal{M}_{1}$ by actions in $PSL(2, \mathbb{Z})$, we have $\tau^{\prime}_{n}\rightarrow\tau_{0}$. Since area and energy are both conformally invariant, we can always consider bubble convergence after conformally changing the domain metrics to the moduli space $\mathcal{M}_{1}$.\ There is *a criterion for convergence of conformal structures on Riemann surfaces* of genus $g$ given by Mumford, i.e. Lemma 3.3.2 in [@J1], or Section 4 in [@DLL]: if the lengths of the shortest closed geodesics on a family of genus $g$ surfaces have a positive lower bound, then the conformal structures on these surfaces converge after possibly taking a subsequence. In the case of the torus $T^{2}$, this criterion is relatively simple. Denote by $\tau=\tau_{1}+\sqrt{-1}\tau_{2}$ the conformal structure on a marked torus, and use the second normalization as discussed above[^3]. So $T^{2}_{\tau}=\{\frac{1}{\sqrt{\tau_{2}}}, \frac{\tau_{1}}{\sqrt{\tau_{2}}}+\sqrt{-1}\sqrt{\tau_{2}}\}$. Degeneration of the conformal structure $\tau$ means $\tau_{2}\rightarrow\infty$. The length of the shortest closed geodesic on $T^{2}_{\tau}$ has the same order as $\frac{1}{\sqrt{\tau_{2}}}$, so the criterion is immediate in this case.\ \[convergence theorem\] Using the above notations, let $\big(\rho_{n}(t), \tau_{n}(t)\big)$ be what we obtained in the last section by perturbation of $\big(\gamma_{n}(t), \tau_{n}(t)\big)$, as in Lemma \[compactification\] and Remark \[nonconstant slice of minimizing sequence\]. Then all subsequences $\rho_{n}(t_{n})$ with $E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)\rightarrow\mathcal{W}_{E}$ satisfy: (\*) For any finite collection of disjoint balls $\underset{i}{\cup}B_{i}$ on $T^{2}_{\tau_{n}(t_{n})}$ such that $E\big(\rho_{n}(t_{n}), \underset{i}{\cup}B_{i}\big)\leq\epsilon_{0}$, let $v$ be the harmonic replacement of $\rho_{n}(t_{n})$ on $\frac{1}{8}\underset{i}{\cup}B_{i}$. We have: $$\int_{\frac{1}{8}\underset{i}{\cup}B_{i}}|\nabla\rho_{n}(t_{n})-\nabla v|^{2}\rightarrow 0.$$ Here $\epsilon_{0}$ is the small constant given in Lemma \[compactification\]. We have the following two possible cases for $\Big\{\rho_{n}(t_{n}), \tau_{n}(t_{n})\Big\}$: (1). If $\tau_{n}(t_{n})\rightarrow\tau_{\infty}$ in the above sense, then there exist a conformal harmonic map $u:\big(T^{2}, \tau_{\infty}\big)\rightarrow N$ and harmonic spheres $\{u_{i}\}$, such that $\rho_{n}(t_{n})$ bubble converge to $\big(u, u_{1}, \ldots, u_{l}\big)$, with: $$\label{energy identity 1} \underset{n\rightarrow\infty}{\lim}E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)=E(u, \tau_{\infty})+\underset{i}{\sum}E(u_{i}).$$ (2). If $\tau_{n}(t_{n})$ diverge, then there exist only finitely many harmonic spheres $\{u_{i}\}$, such that $\rho_{n}(t_{n})$ bubble converge to $\big(u_{1}, \ldots, u_{l}\big)$, with the body map degenerating, and $$\label{energy identity 2} \underset{n\rightarrow\infty}{\lim}E\big(\rho_{n}(t_{n}), \tau_{n}(t_{n})\big)=\underset{i}{\sum}E(u_{i}).$$ We point out here that property $(*)$ is invariant under the rescalings in the bubbling process. Property $(*)$ also holds when we conformally change the metrics $\tau_{n}(t_{n})$ to $\mathcal{M}_{1}$. These two invariance properties allow us to use property $(*)$ throughout the proof.
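The degeneration criterion for the torus quoted above can be verified directly; the following computation uses only the normalization $Area(\omega_{1}, \omega_{2})=1$ and the fact that $|\tau_{1}|\leq\frac{1}{2}$, $|\tau|\geq 1$ on $\mathcal{M}_{1}$. The two generators of the lattice $T^{2}_{\tau}$ have lengths $$\Big|\frac{1}{\sqrt{\tau_{2}}}\Big|=\frac{1}{\sqrt{\tau_{2}}},\qquad \Big|\frac{\tau_{1}}{\sqrt{\tau_{2}}}+\sqrt{-1}\sqrt{\tau_{2}}\Big|=\frac{|\tau|}{\sqrt{\tau_{2}}}\geq\frac{1}{\sqrt{\tau_{2}}},$$ and for integers $(m, n)\neq(0,0)$, $$\Big|\frac{m+n\tau}{\sqrt{\tau_{2}}}\Big|^{2}=\frac{m^{2}+2mn\tau_{1}+n^{2}|\tau|^{2}}{\tau_{2}}\geq\frac{m^{2}-|mn|+n^{2}}{\tau_{2}}\geq\frac{1}{\tau_{2}}.$$ So the shortest closed geodesic on $T^{2}_{\tau}$ has length exactly $\frac{1}{\sqrt{\tau_{2}}}$, which tends to $0$ precisely when $\tau_{2}\rightarrow\infty$.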
For case (1), we can use the bubbling convergence given by Sacks and Uhlenbeck in [@SU1]. Since the area and energy of this sequence converge to the same value $\mathcal{W}=\mathcal{W}_{E}$, the energy identity \[energy identity 1\] holds. The bubbling limits are the solutions of this variational problem. For case (2), the lengths of the shortest closed geodesics converge to $0$. So by the argument given by Ding, Li and Liu in Section 4 of [@DLL], we can cut the torus open into a long cylinder. After some conformal scaling, we can assume the radii of the cylinders equal $1$. The sequence of almost harmonic mappings on long cylinders then converges to a set of harmonic spheres, by an argument given in an unpublished note [@D] of Ding. An argument similar to case (1) ensures the energy identity \[energy identity 2\]. We will need the following Proposition when we prove identities \[energy identity 1\] and \[energy identity 2\]. We denote by $\mathcal{C}_{r_{1}, r_{2}}$ the part of the cylinder $S^{1}\times\mathbb{R}$ with the $\mathbb{R}$-coordinate between $r_{1}$ and $r_{2}$. Clearly $\mathcal{C}_{r_{1}, r_{2}}$ is conformally equivalent to the annulus $B_{e^{-r_{2}}}\backslash B_{e^{-r_{1}}}$. Here we have to recall the concept of almost harmonic maps defined in [@CM]. Let $N$ be the ambient manifold. For $\nu >0$, we call $u\in W^{1,2}(\mathcal{C}_{r_{1}, r_{2}}, N)$ a **$\nu$-almost harmonic map** (Definition B.27 in [@CM]) if for any finite collection of disjoint balls $\mathcal{B}$ in the conformally equivalent annulus $B_{e^{-r_{2}}}\backslash B_{e^{-r_{1}}}$ of $\mathcal{C}_{r_{1}, r_{2}}$, there is an energy minimizing map $v:\cup_{\mathcal{B}}\frac{1}{8}B\rightarrow N$ with the same boundary value as $u$ such that: $$\int_{\frac{1}{8}\mathcal{B}}|\nabla u-\nabla v|^{2}\leq\nu\int_{\mathcal{C}_{r_{1}, r_{2}}}|\nabla u|^{2}.$$ \[almost harmonic maps on necks\] (Proposition B.29 of [@CM]) For all $\delta>0$, there exist small constants $\nu>0$, $\epsilon_{2}>0$ and a large constant $l\geq 1$ (depending on $\delta$ and $N$), such that for any integer $m$, if $u$ is a $\nu$-almost harmonic map as defined above on $\mathcal{C}_{-(m+3)l, 3l}$ with $E(u)\leq\epsilon_{2}$, then: $$\int_{\mathcal{C}_{-ml, 0}}|u_{\theta}|^{2}\leq 7\delta\int_{\mathcal{C}_{-(m+3)l, 3l}}|\nabla u|^{2}.$$ Here we use $(\theta, t)$ as coordinates on $S^{1}\times\mathbb{R}$, and $u_{\theta}$ denotes the derivative with respect to $\theta$. (of Theorem \[convergence theorem\]) **Case (1):** We denote $\rho_{n}=\rho_{n}(t_{n})$, and let $\tau_{n}\in\mathcal{M}_{1}$ be the corresponding conformal structure of $\tau_{n}(t_{n})$. We divide the bubbling convergence into several steps, and we will then focus on the neck parts.\ Since $\tau_{n}\rightarrow\tau_{\infty}$, we can identify a point $x\in T^{2}_{\tau_{\infty}}$ with a point on $T^{2}_{\tau_{n}}$ by viewing it as lying in the fundamental regions of the lattices $\{1, \tau_{\infty}\}$ and $\{1, \tau_{n}\}$ of the corresponding conformal structures. So for any $x\in T^{2}_{\tau_{\infty}}$ and a fixed small constant $\epsilon_{1}<\epsilon_{0}$, we can consider a sequence of energy concentration radii $r_{n}(x)$ defined as follows: $$r_{n}(x)=\sup\{r>0, E\big(\rho_{n}, B(x, r)\big)\leq\epsilon_{1}\}.$$ Such $r_{n}(x)$ exist and are positive. Now we say $x$ is an energy concentration point if $r_{n}(x)\rightarrow 0$ as $n\rightarrow\infty$.
If $x$ is an energy concentration point, we have that: $$\underset{r>0}{\inf}\{\lim_{n\rightarrow\infty}E(\rho_{n}, B(x, r))\}\geq\epsilon_{1}.$$ Since our sequence $\rho_{n}$ has uniformly bounded energy $2\mathcal{W}$, the number of energy concentration points is bounded by $2\mathcal{W}/\epsilon_{1}$. Denote these points by $\{x_{1}, \cdots, x_{m}\}$. If $x\in T^{2}_{\tau_{\infty}}\backslash\{x_{1}, \cdots, x_{m}\}$, we can find an $r(x)>0$ such that $E\big(\rho_{n}, B(x, r(x))\big)\leq\epsilon_{1}$ for all $n$, and by condition $(*)$, there exist $v_{n}$, the energy minimizing harmonic maps defined on $\frac{1}{8}B(x, r(x))$ with the same boundary value as $\rho_{n}$, such that $\|\rho_{n}-v_{n}\|_{W^{1,2}\big(\frac{1}{8}B(x, r(x))\big)}\rightarrow 0$. Since $E\big(v_{n}, \frac{1}{8}B(x, r(x))\big)\leq\epsilon_{1}<\epsilon_{SU}$, we know from [@SU1] that the $v_{n}$ have uniform interior $C^{2, \alpha}$-estimates on $\frac{1}{8}B(x, r(x))$, and hence converge to a harmonic map $u$ on $\frac{1}{9}B(x, r(x))$ in $C^{2, \alpha}$ after taking a subsequence. Hence $\rho_{n}\rightarrow u$ in $W^{1,2}\big(\frac{1}{9}B(x, r(x))\big)$. So for any compact subset $K\subset T^{2}_{\tau_{\infty}}\backslash\{x_{1}, \cdots, x_{m}\}$, we can cover $K$ by finitely many balls $\frac{1}{9}B(x, r(x))$, and hence $\rho_{n}\rightarrow u$ in $W^{1,2}(K)$ after taking a subsequence. Here $u$ is a harmonic map defined on $K$. After exhausting $T^{2}_{\tau_{\infty}}\backslash\{x_{1}, \cdots, x_{m}\}$ by a sequence of compact sets $K_{i}$ and a diagonal argument, we know $u$ is a harmonic map on $T^{2}_{\tau_{\infty}}\backslash\{x_{1}, \cdots, x_{m}\}$, and by the removable singularity theorem (Theorem 3.6 in [@SU1]), $u$ extends to a harmonic map on $T^{2}_{\tau_{\infty}}$.\ We now see what happens near the energy concentration points. Fix an energy concentration point $x_{i}$, and denote $r_{n, i}=r_{n}(x_{i})$. Find a small $r>0$ such that $E(u, B(x_{i}, r))\leq\frac{1}{3}\epsilon_{1}$. We rescale $\rho_{n}$ on $B(x_{i}, r_{n, i})$. Define $u_{n, i}=\rho_{n}(x_{i}+r_{n, i}(x-x_{i}))$. So $B(x_{i}, r_{n, i})$ is now rescaled to $B_{1}$, and $B(x_{i}, r)$ to $B(0, r/r_{n, i})$. The $u_{n, i}$ can be viewed as defined on balls $B(0, r/r_{n, i})$ with radii converging to infinity. Since the domains converge to the whole complex plane $\mathbb{C}$, which is conformally equivalent to the sphere $S^{2}$ without the south pole, we can think of $u_{n, i}$ as defined on any compact subset of $S^{2}$ away from the south pole for $n$ large enough. Since property $(*)$ is conformally invariant, we can apply the first step to $u_{n, i}$. We can find finitely many energy concentration points $\{x_{i, 1}, \cdots, x_{i, m_{i}}\}\subset S^{2}\backslash\{\text{south pole}\}$ such that $u_{n, i}$ converge to a harmonic map $u_{i}$ defined on $S^{2}\backslash\{\text{south pole}\}$, in the sense of the above step, and hence $u_{i}$ is a harmonic sphere defined on $S^{2}$ by the removable singularity theorem. From our definition, we know that $E(u_{n, i}, B_{1})=\epsilon_{1}<\epsilon_{SU}$. So $x_{i, j}\in S^{2}\backslash B_{1}$[^4]. A key point is that the total energy of $u_{n, i}$ on $S^{2}\backslash(\{\text{south pole}\}\cup B_{1})$ is decreased by a definite amount $\epsilon_{1}$ compared to the original $u_{n, i}$, as $u_{n, i}|_{B_{1}}$ carries that energy. We call this rescaling and convergence procedure bubbling convergence.\ We can repeat the bubbling convergence of this step for $u_{n, i}$ on balls centered at the $x_{i, j}$.
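The bookkeeping behind this rescaling uses only the conformal invariance of the Dirichlet energy in two dimensions. Explicitly, with $u_{n, i}(x)=\rho_{n}(x_{i}+r_{n, i}(x-x_{i}))$ and the change of variables $y=x_{i}+r_{n, i}(x-x_{i})$, so that $dy=r_{n, i}^{2}dx$, $$\int_{B(x_{i}, R)}|\nabla u_{n, i}|^{2}dx=\int_{B(x_{i}, R)}r_{n, i}^{2}|(\nabla\rho_{n})(x_{i}+r_{n, i}(x-x_{i}))|^{2}dx=\int_{B(x_{i}, r_{n, i}R)}|\nabla\rho_{n}|^{2}dy,$$ i.e. $E\big(u_{n, i}, B(x_{i}, R)\big)=E\big(\rho_{n}, B(x_{i}, r_{n, i}R)\big)$; up to the harmless translation of the center, this is the identity $E\big(u_{n, i}, B(R)\big)=E\big(\rho_{n}, B(x_{i}, r_{n, i}R)\big)$ used in the energy computation below.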
We point out here that there are only finitely many such steps, after which the bubbling convergence stops. Each time, we start from a sequence of maps $u_{n}$ defined on a small ball $B_{r}$, and we rescale them to exhaust the whole complex plane. Each time, $u_{n}|_{B_{1}}$ takes away a definite amount of energy after rescaling. So after several steps, the total energy of $u_{n}$ will be less than $\epsilon_{1}<\epsilon_{SU}$, and there will be no energy concentration points. The bubbling convergence stops.\ We will now discuss energy identity \[energy identity 1\]. We can decompose $T^{2}_{\tau_{n}}$ into the bubble part $\underset{i}{\cup}B(x_{i}, r)$ and the body part $T^{2}_{\tau_{n}}\backslash\underset{i}{\cup}B(x_{i}, r)$. So the total energy has the decomposition $E\big(\rho_{n}, T^{2}_{\tau_{n}}\big)=E\big(\rho_{n}, T^{2}_{\tau_{n}}\backslash\underset{i}{\cup}B(x_{i}, r)\big)+\underset{i}{\sum}E\big(\rho_{n}, B(x_{i}, r)\big)$. Now we can calculate the energy of the first limit map $u_{0}$ as follows: $$E(u_{0})=\lim_{r\rightarrow 0}E\big(u_{0}, T^{2}_{\tau_{\infty}}\backslash\underset{i}{\cup}B(x_{i}, r)\big) =\lim_{r\rightarrow 0}\lim_{n\rightarrow\infty}E\big(\rho_{n}, T^{2}_{\tau_{n}}\backslash\underset{i}{\cup}B(x_{i}, r)\big).$$ So we only need to show that $\lim_{r\rightarrow 0}\lim_{n\rightarrow\infty}\underset{i}{\sum}E\big(\rho_{n}, B(x_{i}, r)\big)=\underset{i}{\sum}E(u_{i})$. Here the $u_{i}$ are the bubble maps. As in the second step, we know that the $u_{i}$ are limits of $u_{n, i}$ on any compact set of $\mathbb{C}$, so we can calculate the energy of the first bubble map $u_{i}$ as follows: $$E(u_{i})=\lim_{R\rightarrow\infty}E\big(u_{i}, B(R)\big)=\lim_{R\rightarrow\infty}\lim_{n\rightarrow\infty}E\big(u_{n, i}, B(R)\big).$$ By the conformal invariance of energy, $E\big(u_{n, i}, B(R)\big)=E\big(\rho_{n}, B(x_{i}, r_{n, i}R)\big)$. So we only need to show that: $$\lim_{r\rightarrow 0, R\rightarrow\infty}\lim_{n\rightarrow\infty}E\big(\rho_{n}, B(x_{i}, r)\backslash B(x_{i}, r_{n, i}R)\big)=0.$$ We denote the annulus $A(x_{i}, r, r_{n, i}R)=B(x_{i}, r)\backslash B(x_{i}, r_{n, i}R)$. Since $A(x_{i}, r, r_{n, i}R)$ is conformally equivalent to a long cylinder $\mathcal{C}_{r_{1}, r_{2}}$, with $r_{1}=-\ln(r_{n, i}R)$, $r_{2}=-\ln(r)$, we call such annuli, or such cylinders, **necks**. So what is left is to show that there is no energy concentration on necks.\ We use Proposition \[almost harmonic maps on necks\] to show that necks support no energy in our case. We will use step 1 as an example; the others follow in the same way. Suppose, to the contrary, that there is a positive lower bound for $E\big(\rho_{n}, \mathcal{C}_{r_{1}, r_{2}}\big)$. Since $\rho_{n}$ converge to $u_{0}$ on any small annulus centered at $x_{i}$, and $u_{n, i}$ converge to $u_{i}$ on any large annulus centered at $0$, for fixed $L>0$ we know that there can be no energy concentration on $A(x_{i}, r, re^{-L})$ and $A(x_{i}, r_{n, i}Re^{L}, r_{n, i}R)$ as $r\rightarrow 0$ and $R\rightarrow\infty$. Changing to the cylinder, we know there will be no energy concentration on regions of fixed length near the boundary of $\mathcal{C}_{r_{1}, r_{2}}$. Now fix $\delta=\frac{1}{140}$, and let $\nu$, $\epsilon_{2}$ and $l$ be as in Proposition \[almost harmonic maps on necks\]. We can find a sub-cylinder $\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}}$, with the distance between the boundaries converging to $\infty$, i.e. $d(\partial\mathcal{C}_{r_{1}, r_{2}}, \partial\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}})\rightarrow\infty$, such that $E(\rho_{n}, \mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}})=\frac{1}{2}\epsilon_{2}$. We want to show that $\rho_{n}$ is $\nu$-almost harmonic on $\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}}$ for $n$ large. In fact, for any finite collection of disjoint balls $\mathcal{B}$ on the annulus, $E(\rho_{n}, \mathcal{B})\leq E(\rho_{n}, \mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}})\leq\epsilon_{2}$. We can assume $\epsilon_{2}\leq\epsilon_{1}$, so $\rho_{n}$ satisfies property $(*)$, i.e. $\int_{\frac{1}{8}\mathcal{B}}|\nabla\rho_{n}-\nabla v|^{2}\rightarrow 0$, with $v$ the energy minimizing map. Since $E(\rho_{n}, \mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}})$ has a uniform positive lower bound, $\int_{\frac{1}{8}\mathcal{B}}|\nabla\rho_{n}-\nabla v|^{2}\leq\nu\int_{\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}}}|\nabla\rho_{n}|^{2}$ holds for $n$ large enough. We can assume we first do the above on a cylinder a little bit larger than $\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}}$; then by Proposition \[almost harmonic maps on necks\], we have: $$\int_{\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}}}|(\rho_{n})_{\theta}|^{2}\leq\frac{1}{10}\int_{\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}}}|\nabla\rho_{n}|^{2}.$$ Hence we have a lower bound on the gap between energy and area: $$\begin{split} E(\rho_{n}, \mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}})-Area(\rho_{n}, \mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}})&=\frac{1}{2}\int_{\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}}}|(\rho_{n})_{t}|^{2}+|(\rho_{n})_{\theta}|^{2}-2|(\rho_{n})_{t}\times(\rho_{n})_{\theta}|\\ &\geq\frac{1}{8}\int_{\mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}}}|(\rho_{n})_{t}|^{2}-|(\rho_{n})_{\theta}|^{2}. \end{split}$$ (The last inequality follows from Cauchy–Schwarz and the angular energy bound above: $\int|(\rho_{n})_{\theta}|^{2}\leq\frac{1}{9}\int|(\rho_{n})_{t}|^{2}$ gives $\int|(\rho_{n})_{t}\times(\rho_{n})_{\theta}|\leq\frac{1}{3}\int|(\rho_{n})_{t}|^{2}$, so the left-hand side is at least $\frac{1}{6}\int|(\rho_{n})_{t}|^{2}\geq\frac{1}{8}\int|(\rho_{n})_{t}|^{2}-|(\rho_{n})_{\theta}|^{2}$.) So $E(\rho_{n}, \mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}})-Area(\rho_{n}, \mathcal{C}_{r^{\prime}_{1}, r^{\prime}_{2}})$ has a positive lower bound by the above estimates. This contradicts $E(\rho_{n}(t_{n}), \tau_{n}(t_{n}))-Area(\rho_{n}(t_{n}))\rightarrow 0$, given in Lemma \[almost conformal for perturbed sequence\].\ **Case (2).** We use $(t, \theta)$ as parameters on $T^{2}_{\tau_{n}}$. In fact, we assume $\arg(\tau_{n})=\theta_{n}$, and let $z^{\prime}=t+\sqrt{-1}\theta=e^{-\sqrt{-1}(\frac{1}{2}\pi-\theta_{n})}z$ be another conformal coordinate system on $T^{2}_{\tau_{n}}$. We conformally expand the torus so that the circle in the $\theta$-parameter has length $1$; the length in the $t$-parameter is denoted by $2l_{n}$. Then we divide the torus $T^{2}_{\tau_{n}}$ into sections of length $1$ in the parameter $t$, i.e. $T^{2}_{\tau_{n}}=\underset{i}{\cup}S^{1}\times[t_{i}, t_{i+1}]$.\ We **claim** that there exists a large $L>0$ such that, for $n$ large, there exists $t_{n, 0}$ with $E(\rho_{n}, S^{1}\times[t_{n, 0}-L, t_{n, 0}+L])>\epsilon_{2}$. If the claim fails, then for every $L>0$ we can find a subsequence of $n\rightarrow\infty$ such that for all $t_{n, i}$, $E(\rho_{n}, S^{1}\times[t_{n, i}-L, t_{n, i}+L])\leq\epsilon_{2}$. After possibly extending some $[t_{n, i}-L, t_{n, i}+L]$, we have $E(\rho_{n}, S^{1}\times[t_{n, i}-L, t_{n, i}+L])=\epsilon_{2}$. So $\rho_{n}|_{[t_{n, i}-L, t_{n, i}+L]}$ satisfies the condition of Proposition \[almost harmonic maps on necks\], and we hence get a contradiction to Lemma \[almost conformal for perturbed sequence\], as argued in the last step of Case (1).\ Now consider $\rho_{n}:S^{1}\times[t_{n, 0}-l_{n}, t_{n, 0}+l_{n}]\rightarrow N$. There may be bubbles near $t_{n, 0}$.
An argument as in Case (1) shows that $\rho_{n}$ converge to a harmonic map $u_{1}$ defined on $S^{1}\times\mathbb{R}$, away from some energy concentration points. $u_{1}$ is nontrivial since $E(\rho_{n}, S^{1}\times[t_{n, 0}-L, t_{n, 0}+L])>\epsilon_{2}$. As $S^{1}\times\mathbb{R}$ is conformally equivalent to $S^{2}$ minus the north and south poles, we can extend $u_{1}$ to a harmonic map on $S^{2}$. We can rescale $\rho_{n}$ near the energy concentration points, and the rescaled maps will converge, as discussed in Case (1), to several bubble maps $\{u_{1, 1}, \cdots, u_{1, l_{1}}\}$. The energy identity for these bubbles follows as in the last step of Case (1), on each long cylinder. Now we calculate the total energy: $$\begin{split} \lim_{l\rightarrow\infty}\lim_{n\rightarrow\infty}E(\rho_{n}, S^{1}\times[t_{n, 0}-l, t_{n, 0}+l])&=\lim_{l\rightarrow\infty}E(u_{1}, S^{1}\times[-l, l])+\underset{i}{\sum}E(u_{1, i})\\ &=E(u_{1}, S^{2})+\underset{i}{\sum}E(u_{1, i}). \end{split}$$ So if $\lim_{l\rightarrow\infty}\lim_{n\rightarrow\infty}E(\rho_{n}, S^{1}\times([-l_{n}, -l]\cup[l, l_{n}]))=0$ (in the coordinate centered at $t_{n, 0}$), there will be no other bubbles except for $\{u_{1}, u_{1,1}, \cdots, u_{1,l_{1}}\}$, and $\lim_{n\rightarrow\infty}E(\rho_{n})=E(u_{1})+\underset{i}{\sum}E(u_{1, i})$, i.e. energy identity \[energy identity 2\] holds. If $\lim_{l\rightarrow\infty}\lim_{n\rightarrow\infty}E(\rho_{n}, S^{1}\times([-l_{n}, -l]\cup[l, l_{n}]))>0$, we can consider the maps on the other part of the rescaled torus, i.e. we can find another base point, denoted by $t_{n, 1}$, such that $|t_{n, 1}-t_{n, 0}|\rightarrow\infty$ and $E(\rho_{n}, S^{1}\times[t_{n, 1}-L, t_{n, 1}+L])>\epsilon_{2}$. Consider $\rho_{n}:S^{1}\times[t_{n, 1}-l_{n}, t_{n, 1}+l_{n}]\rightarrow N$. We can repeat the above step and get another set of harmonic spheres $\{u_{2}, u_{2,1}, \cdots, u_{2, l_{2}}\}$. Since each bubble is a harmonic sphere and must carry a definite amount of energy by [@SU1], there are only finitely many such steps. We get all the harmonic spheres $u_{i}$ and energy identity \[energy identity 2\] by summing over all the steps. **What is left?** The aim of this method is to find a min-max minimal torus, but only when the conformal structures do not degenerate can we get a nontrivial minimal torus. So we want to know under what conditions there exists a subsequence $\Big\{\rho_{n}(t_{n}), \tau_{n}(t_{n})\Big\}$ satisfying case $(1)$ in the above theorem. Appendix 1–a uniformization result {#appendix1} ================================== In this section, we discuss a general uniformization theorem on the complex plane. We will focus on the continuous dependence of the conformal diffeomorphisms on the variation of general metrics. Let $g$ be a Riemannian metric on the complex plane $\mathbb{C}$. \[domain metric\] In the complex coordinates $\{z, \overline{z}\}$, we can write $g=\lambda(z)|dz+\mu(z)d\overline{z}|^{2}$. Here $\lambda(z)>0$, and $\mu(z)$ is a complex-valued function on the complex plane with $|\mu|<1$. If $g\geq\epsilon dzd\overline{z}$, there exists a $k=k(\epsilon)<1$ such that $|\mu|\leq k$. \[remark of domain metric\] The proof is just a simple calculation. Hence we can always conformally identify a non-degenerate metric on the plane with $|dz+\mu(z)d\overline{z}|^{2}$. In fact, $\mu$ is given by an algebraic expression in the components $g_{ij}(z)$, so if a family $g(t)$ varies continuously in the $C^{1}$ class, the corresponding $\mu(t)$ also varies continuously in the $C^{1}$ class.
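As a quick sanity check of this normal form, consider a constant coefficient $\mu(z)\equiv\mu_{0}$ with $|\mu_{0}|<1$; in this special case the uniformizing map can be written down explicitly. The affine map $$w(z)=\frac{z+\mu_{0}\overline{z}}{1+\mu_{0}}$$ fixes $(0, 1, \infty)$, and since $w_{z}=\frac{1}{1+\mu_{0}}$ and $w_{\overline{z}}=\frac{\mu_{0}}{1+\mu_{0}}$, it satisfies $w_{\overline{z}}=\mu_{0}w_{z}$. Moreover $dw=\frac{dz+\mu_{0}d\overline{z}}{1+\mu_{0}}$, so $|dw|^{2}=\frac{1}{|1+\mu_{0}|^{2}}|dz+\mu_{0}d\overline{z}|^{2}$, i.e. $w$ is a conformal diffeomorphism from $(\mathbb{C}, |dz+\mu_{0}d\overline{z}|^{2})$ to the flat plane; this is the simplest instance of the maps $w^{\mu}$ of Ahlfors and Bers discussed in the next subsection.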
Results in [@AB] ---------------- Let us discuss what Ahlfors and Bers did in [@AB]. They gave the **existence** and **uniqueness** of a conformal diffeomorphism $w^{\mu}:\mathbb{C}_{|dz+\mu d\overline{z}|^{2}}\rightarrow\mathbb{C}_{dwd\overline{w}}$ fixing three points $(0, 1, \infty)$, for any $L^{\infty}$ function $\mu$ with $|\mu|\leq k<1$. Such maps must satisfy the following equation: $$\label{cr1} w^{\mu}_{\overline{z}}=\mu(z) w^{\mu}_{z}.$$ Define the function space $B_{p}(\mathbb{C})=C^{1-\frac{2}{p}}\cap W^{1,p}_{loc}(\mathbb{C})$, where $p>2$ depends on the bound $k$ of $\mu$. They showed that the $w^{\mu}$ are uniformly bounded in $B_{p}(\mathbb{C})$ for a uniform bound $k$, and that $w^{\mu}$ varies continuously in $B_{p}(\mathbb{C})$ as $\mu$ varies continuously in $L^{\infty}(\mathbb{C})$. Suppose $\mu, \nu \in L^{\infty}(\mathbb{C})$, with $|\mu|,|\nu|\leq k$ and $k<1$. Let $w^{\mu}, w^{\nu}$ be the corresponding conformal homeomorphisms; then: \[cont1\] (Lemma 16, Theorem 7, Lemma 17, Theorem 8 of [@AB]) $$d_{S^{2}}\big(w^{\mu}(z_{1}), w^{\mu}(z_{2})\big)\leq c d_{S^{2}}(z_{1}, z_{2})^{\alpha},$$ $$\|w^{\mu}_{z}\|_{L^{p}(B_{R})}\leq c(R),$$ $$d_{S^{2}}\big(w^{\mu}(z), w^{\nu}(z)\big)\leq C \|\mu-\nu\|_{\infty},$$ $$\|(w^{\mu}-w^{\nu})_{z}\|_{L^{p}(B_{R})}\leq C(R)\|\mu-\nu\|_{\infty}.$$ Here $d_{S^{2}}$ is the spherical distance, which is equivalent to the planar distance of $\mathbb{C}$ on compact sets, and $\alpha=1-\frac{2}{p}$. All constants are uniformly bounded, depending on $k<1$. This Lemma comes from estimates for equation \[cr1\]. Here we use the spherical distance because we are only concerned with local properties. Similar results --------------- If we write our metrics conformally on $\mathbb{C}$, what concerns us in our case are the conformal homeomorphisms $h^{\mu}:\mathbb{C}_{dwd\overline{w}}\rightarrow\mathbb{C}_{|dz+\mu d\overline{z}|^{2}}$ fixing three points $(0,1, \infty)$, which are just the inverse mappings of those of Ahlfors and Bers. We are also concerned with the continuous dependence of $h^{\mu}$ in $C^{0}\cap W^{1,2}_{loc}(\mathbb{C}, \mathbb{C})$ on the variation of $\mu$ in $C^{1}(\mathbb{C})$. In fact: $$h^{\mu}(w)=(w^{\mu})^{-1}(w),$$ and our mappings satisfy: $$\label{cr2} h^{\mu}_{\overline{w}}=-\mu(h^{\mu}(w))\overline{h^{\mu}_{w}}.$$ If $\mu_{n}$ is a sequence of metric coefficients as above such that $\|\mu_{n}-\mu\|_{C^{1}}\rightarrow 0$, and the $h^{\mu_{n}}$ are as above, we want results similar to the above: \[cont2\] $$\label{sphere convergence for the inverse of u-conformal map} d_{S^{2}}\big(h^{\mu_{n}}, h^{\mu}\big)\rightarrow 0,$$ $$\label{local Lp convergence of u-conformal map} \|(h^{\mu_{n}}-h^{\mu})_{w}\|_{L^{p}(B_{R})}\rightarrow 0.$$ Here, because equation \[cr2\] is quasi-linear, we may not get the linear control of Lemma \[cont1\]. We will give a self-contained proof of this result by arguments similar to those of Ahlfors and Bers, and we will use their notation. In fact we will prove the following two claims: $$d_{S^{2}}\big(h^{\mu_{n}}, h^{\mu}\big)\rightarrow 0.$$ Let $w^{\mu}$ be the conformal diffeomorphism described above, so we have the uniform Hölder estimate $d_{S^{2}}\big(w^{\mu}(z_{1}), w^{\mu}(z_{2})\big)\leq c d_{S^{2}}(z_{1}, z_{2})^{\alpha}$. Here the constant $c$ is uniform for fixed $k<1$, when all $\|\mu\|_{L^{\infty}}\leq k$. Let $h^{\mu}=(w^{\mu})^{-1}$; we have: $$h^{\mu}_{\overline{w}}=\nu(w)h^{\mu}_{w},$$ here $\nu(w)=\big(-\mu\frac{w^{\mu}_{z}}{\overline{w^{\mu}_{z}}}\big)\circ h^{\mu}$.
Since $\|\nu\|_{L^{\infty}}=\|\mu\|_{L^{\infty}}$, we have the similar Hölder estimate $d_{S^{2}}\big(h^{\mu}(w_{1}), h^{\mu}(w_{2})\big)\leq c^{\prime} d_{S^{2}}(w_{1}, w_{2})^{\alpha}$.\ We argue by contradiction. Suppose $(w^{\mu_{n}})^{-1}$ do not converge to $(w^{\mu})^{-1}$ in $L^{\infty}(S^{2}, S^{2})$; then there exist an $\epsilon>0$ and a sequence $x_{n}\in S^{2}$ such that $d_{S^{2}}\big((w^{\mu_{n}})^{-1}(x_{n}), (w^{\mu})^{-1}(x_{n})\big)>\epsilon$. By the compactness of $S^{2}$, we can assume $x_{n}\rightarrow x_{0}$, and $(w^{\mu_{n}})^{-1}(x_{n})\rightarrow z_{1}$, $(w^{\mu})^{-1}(x_{n})\rightarrow z_{0}$. Clearly $d_{S^{2}}(z_{0}, z_{1})\geq\epsilon$. But $w^{\mu}(z_{0})=w^{\mu}(z_{1})=x_{0}$, which is a contradiction since $w^{\mu}$ is a homeomorphism. This is because of the following.\ Denoting $z_{n}=(w^{\mu_{n}})^{-1}(x_{n})$ and $z_{n}^{\prime}=(w^{\mu})^{-1}(x_{n})$, we have the following: $$d_{S^{2}}\big(w^{\mu_{n}}(z_{n}), w^{\mu}(z_{1})\big)\leq d_{S^{2}}\big(w^{\mu_{n}}(z_{n}), w^{\mu_{n}}(z_{1})\big)+d_{S^{2}}\big(w^{\mu_{n}}(z_{1}), w^{\mu}(z_{1})\big)\rightarrow 0.$$ The convergence of the first term holds because the $w^{\mu_{n}}$ have uniform Hölder norms. So $w^{\mu}(z_{1})=x_{0}$. And $$w^{\mu}(z_{0})=\lim_{n\rightarrow\infty} w^{\mu}(z^{\prime}_{n})=\lim_{n\rightarrow\infty} x_{n}=x_{0}.$$ So we have $w^{\mu}(z_{0})=w^{\mu}(z_{1})$. The conformal diffeomorphism solution $h:\mathbb{C}\rightarrow\mathbb{C}$ fixing $(0,1,\infty)$ of the equation: $$h_{\overline{w}}=\alpha(w)\overline{h_{w}},$$ has the estimate: $$\|(h^{\alpha}-h^{\beta})_{w}\|_{L^{p}(B_{R})}\leq C(R)\|\alpha-\beta\|^{2\alpha}_{L^{\infty}}.$$ Here the constants depend only on the bound $k$, where $|\alpha|\leq k<1$, $\alpha=1-\frac{2}{p}$ as in Lemma \[cont1\], and $p$ depends only on $k$. We show this in five steps, and we may use $w$ to denote $h$.\ We consider the following non-homogeneous equation: $$w_{\overline{z}}=\mu\overline{w_{z}}+\sigma.$$ We want to find solutions satisfying $w(0)=0$ and $w_{z}\in L^{p}(\mathbb{C})$, and we denote such a solution by $w^{\mu, \sigma}$. We first consider the following preliminary equation: $$q=T(\mu\overline{q}+\sigma).$$ Here $T$, $P$ denote the operators defined in Section 1.2 of [@AB]. By the fixed point theorem, we know there is a unique solution $q\in L^{p}(\mathbb{C})$ when $p$ is appropriate. Let $w=P(\mu\overline{q}+\sigma)$. We have $w(0)=0$, $w_{z}=T(\mu\overline{q}+\sigma)=q$, and $w_{\overline{z}}=\mu\overline{q}+\sigma$ by the properties of the operators $T$ and $P$ given in Lemma 3 of [@AB]. So $w_{\overline{z}}=\mu\overline{w_{z}}+\sigma$, and $w$ satisfies our restrictions; so $w$ is our solution. Such a $w$ is unique, as can be seen by estimating the corresponding homogeneous equation, similarly to Lemma 1 of [@AB]. From the properties of the operators $T$ and $P$ given in Lemma 3 of [@AB], we have the following estimates for $w^{\mu, \sigma}$: $$\|w^{\mu, \sigma}_{z}\|_{L^{p}}=\|q\|_{L^{p}}\leq c(p)\|\sigma\|_{L^{p}},$$ $$|w^{\mu, \sigma}(z_{1})-w^{\mu, \sigma}(z_{2})|\leq c|z_{1}-z_{2}|^{\alpha}.$$ Here $\alpha=1-\frac{2}{p}$. In fact, by the properties of $P$, we have: $$|w^{\mu, \sigma}(z_{1})-w^{\mu, \sigma}(z_{2})|\leq c\|\mu\overline{q}+\sigma\|_{L^{p}}|z_{1}-z_{2}|^{1-\frac{2}{p}}\leq c^{\prime}\|\sigma\|_{p}|z_{1}-z_{2}|^{1-\frac{2}{p}}.$$ This means our solutions also have uniform Hölder norms.\ $w^{\mu, \sigma}$ varies continuously in $L^{\infty}$ and $L^{p}$ as $\mu$, $\sigma$ vary continuously. Let $w=w^{\mu, \sigma}$, $w^{\prime}=w^{\nu, \rho}$.
We have: $$(w-w^{\prime})_{\overline{z}}=\mu\overline{(w-w^{\prime})_{z}}+\lambda,$$ here $\lambda=(\mu-\nu)\overline{w^{\prime}_{z}}+(\sigma-\rho)$. By the above results, we have the estimate: $$\|(w-w^{\prime})_{z}\|_{L^{p}}\leq c\|\lambda\|_{L^{p}}\leq c\big(\|\mu-\nu\|_{L^{\infty}}+\|\sigma-\rho\|_{L^{p}}\big).$$ Similarly, we also have Hölder estimates for $w-w^{\prime}$.\ Suppose $\mu$ is compactly supported. We want a homeomorphism $w^{\mu}:\mathbb{C}\rightarrow\mathbb{C}$ satisfying $w^{\mu}_{\overline{z}}=\mu\overline{w^{\mu}_{z}}$, with the normalization $w^{\mu}(0)=0$ and $w^{\mu}_{z}-1\in L^{p}(\mathbb{C})$. In fact, let $w^{\mu}=z+w^{\mu,\mu}(z)$, with $w^{\mu,\mu}(z)$ as in the above step; we have: $$w^{\mu}_{\overline{z}}=(w^{\mu,\mu})_{\overline{z}}=\mu\overline{(w^{\mu,\mu}_{z})}+\mu=\mu(\overline{w^{\mu,\mu}_{z}+1})=\mu\overline{w^{\mu}_{z}}.$$ Clearly, $w^{\mu}(0)=0$, and $w^{\mu}_{z}-1=w^{\mu,\mu}_{z}\in L^{p}(\mathbb{C})$. By an argument similar to Section 3.3 of [@AB], we know $w^{\mu}$ is a homeomorphism. So $w^{\mu}$ is our solution. In this case, to get a solution fixing $(0,1,\infty)$, we only need to divide $w^{\mu}$ by $w^{\mu}(1)$. We also have, similarly to Lemma 15 in [@AB], that $c(R)^{-1}\leq|w^{\mu}(1)|\leq c(R)$ when $\mu$ has compact support in $B_{R}$. We will also denote $w^{\mu}/w^{\mu}(1)$ by $w^{\mu}$ in the following.\ Let $\alpha$, $\beta$ be two coefficients with $|\alpha|,|\beta|\leq k<1$; we give a decomposition formula: $$w^{\alpha}=w^{\beta}\circ w^{\gamma},$$ here $\gamma=\frac{\alpha-\beta}{1-\alpha\overline{\beta}}\frac{\beta}{\overline{\beta}}\big(\frac{(w^{\beta})^{-1}_{\overline{z}}}{(\overline{(w^{\beta})^{-1}})_{z}}\big)\circ w^{\alpha}$. Hence $\|\gamma\|_{L^{\infty}}\leq C\|\alpha-\beta\|_{L^{\infty}}$. The proof is just a simple calculation. Using the spherical distance, we have the following estimates: $$d_{S^{2}}\big(w^{\alpha}(z), w^{\beta}(z)\big)=d_{S^{2}}\big(w^{\beta}\circ w^{\gamma}(z), w^{\beta}(z)\big)\leq c d_{S^{2}}\big(w^{\gamma}(z), z\big)^{\alpha}.$$ Decomposing $\gamma=\gamma_{1}+\gamma_{2}$, with $\gamma_{1}$ and $\gamma_{2}$ supported near $0$ and $\infty$ respectively, we have: $$d_{S^{2}}\big(w^{\gamma}(z), z\big)\leq d_{S^{2}}\big(w^{\gamma}(z), w^{\gamma_{1}}(z)\big)+d_{S^{2}}\big(w^{\gamma_{1}}(z), z\big).$$ In the case where $\gamma$ has compact support, for $|z|\leq R$ we have: $$d_{S^{2}}\big(w^{\gamma}(z), z\big)\leq c(R)\|w^{\gamma, \gamma}\|_{L^{\infty}(B_{R})}=c(R)\|w^{\gamma, \gamma}-w^{\gamma, \gamma}(0)\|_{L^{\infty}(B_{R})}\leq C(R)\|\gamma\|_{L^{\infty}}.$$ By arguments as in Section 5.1 of [@AB], for $|z|\geq R$ we have $d_{S^{2}}\big(w^{\gamma}(z), z\big)\leq c(R)\|\gamma\|_{L^{\infty}}$. Combining all of the above, we have: $$d_{S^{2}}\big(w^{\alpha}(z), w^{\beta}(z)\big)\leq C(R)\|\alpha-\beta\|^{2\alpha}_{L^{\infty}}.$$ Here the spherical distance is equivalent to the ordinary planar distance when restricted to a compact set in $\mathbb{C}$.\ Choose a cutoff function $\eta$ supported in $B_{2R}$, with $\eta\equiv1$ on $B_{R}$ and $\eta\leq 1$. Then we have: $$\big(\eta(w^{\alpha}-w^{\beta})\big)_{\overline{z}}=\alpha\overline{\big(\eta(w^{\alpha}-w^{\beta})\big)_{z}}+\lambda,$$ here $\lambda=\eta(\alpha-\beta)\overline{w^{\beta}_{z}}+\eta_{\overline{z}}\big((w^{\alpha}-w^{\beta})-\alpha\overline{(w^{\alpha}-w^{\beta})}\big)$, and $\|\lambda\|_{L^{p}}\leq C(R)\|\alpha-\beta\|_{L^{\infty}(B_{2R})}+C^{\prime}(R)\|w^{\alpha}-w^{\beta}\|_{L^{\infty}(B_{2R})}$.
So the results in steps 1 and 4 give: $$\begin{split} \|(w^{\alpha}-w^{\beta})_{z}\|_{L^{p}(B_{R})}&\leq\|\big(\eta(w^{\alpha}-w^{\beta})\big)_{z}\|_{L^{p}}\\ &\leq C(R)\|\lambda\|_{L^{p}}\\ &\leq C(R)\|\alpha-\beta\|_{L^{\infty}(B_{2R})}+C^{\prime}(R)\|w^{\alpha}-w^{\beta}\|_{L^{\infty}(B_{2R})}.\\ &\leq C(R)\|\alpha-\beta\|^{2\alpha}_{L^{\infty}}. \end{split}$$ Here we abuse notation; if we change $w^{\alpha}$ to $h^{\alpha}$ and $z$ to $w$, we get the result. (of Lemma \[cont2\]) The first convergence \[sphere convergence for the inverse of u-conformal map\] follows from the first claim. For the second convergence \[local Lp convergence of u-conformal map\], since $h^{\mu_{n}}\rightarrow h^{\mu}$ in $L^{\infty}(S^{2}, S^{2})$, $h^{\mu_{n}}(B_{2R})$ must be contained in a fixed finite ball $B_{R^{\prime}}$. As $\mu_{n}\rightarrow\mu$ in $C^{1}(\mathbb{C})$, we know $\mu_{n}(h^{\mu_{n}}(w))\rightarrow\mu(h^{\mu}(w))$ in $L^{\infty}$ on any bounded ball $B_{2R}$ for fixed $R<\infty$. We know from the proof of the second claim that: $$\begin{split} \|(h^{\mu_{n}}-h^{\mu})_{w}\|_{L^{p}(B_{R})}&\leq C(R)\|\mu_{n}(h^{\mu_{n}}(w))-\mu(h^{\mu}(w))\|_{L^{\infty}(B_{2R})}\\ &+C^{\prime}(R)\|h^{\mu_{n}}-h^{\mu}\|_{L^{\infty}(B_{2R})}. \end{split}$$ Since the spherical distance is equivalent to the planar distance on compact sets, the first convergence result shows that $\|h^{\mu_{n}}-h^{\mu}\|_{L^{\infty}(B_{2R})}\rightarrow 0$. So the second convergence result \[local Lp convergence of u-conformal map\] holds. [99]{} L. Ahlfors and L. Bers, Riemann's mapping theorem for variable metrics. Ann. Math. (2) 72, 385-404 (1960). T. Colding and W. Minicozzi, Width and finite extinction time of Ricci flow. math.DG/0707.0108. T. Colding and W. Minicozzi, Minimal surfaces. Courant Lecture Notes in Mathematics, 4. New York University, Courant Institute of Mathematical Sciences, New York, 1999. T. Colding and W. Minicozzi, Width and mean curvature flow. math.DG/0705.3827. J. Chen and G. Tian, Compactification of moduli space of harmonic mappings. Comment. Math. Helv. 74, 201-237 (1999). W. Ding, Lectures on the heat flow of harmonic maps. Preprint. W. Ding, J. Li and Q. Liu, Evolution of minimal torus in Riemannian manifolds. Invent. Math. 165, 225-242 (2006). J. Jost, Two dimensional geometric variational problems. J. Wiley and Sons, Chichester, N.Y. (1991). J. Jost, Compact Riemann surfaces. Springer (2002). J. Qing, Boundary regularity of weakly harmonic maps from surfaces. J. Funct. Anal. 114, 458-466 (1993). J. Sacks and K. Uhlenbeck, The existence of minimal immersions of 2-spheres. Ann. Math. (2) 113, 1-24 (1981). J. Sacks and K. Uhlenbeck, Minimal immersions of closed Riemann surfaces. Trans. Am. Math. Soc. 271, 639-652 (1982). R. Schoen and S. T. Yau, Existence of incompressible minimal surfaces and the topology of three dimensional manifolds with non-negative scalar curvature. Ann. Math. (2) 110, 127-142 (1979). Department of Mathematics, Stanford University, building 380, Stanford, California 94305. E-mail: [email protected] [^1]: Here $\tau_{0}(t)\equiv\sqrt{-1}$. [^2]: The Teichmüller space $\mathcal{T}_{1}$ is simply connected, so we do not need to consider the homotopy class of conformal structures, i.e. $[(\rho, \tau)]$ is the same as $[\rho]$. [^3]: $Area(\omega_{1}, \omega_{2})=1$. [^4]: Here $B_{1}$ is a unit ball centered at the north pole.
U.S. Supreme Court nominee Brett Kavanaugh testifies before a Senate Judiciary Committee confirmation hearing on Capitol Hill in Washington, U.S., September 27, 2018. REUTERS/Jim Bourg WASHINGTON (Reuters) - The U.S. Senate Judiciary Committee on Friday set a vote for 1:30 p.m. (1730 GMT) on President Donald Trump's Supreme Court nominee Brett Kavanaugh, one day after a contentious hearing on sexual assault allegations against him. The committee voted 11-8 to schedule the vote on the nomination, with all Republicans in support, eight of the panel's ten Democrats voting in opposition and two abstaining to demonstrate their objection. Some Democratic members of the committee walked out in protest during the committee meeting.
JavaScript sucks (volume 1): You can extend types by assigning stuff to their prototype, but that can easily break code since these new things can show up when iterating over the properties of an instance. There is no way to hide these new properties. Only magical built-in prototypes can define hidden properties. Arrays are braindead because you can't trust the built-in ways to compare them. Strangely enough it seems that == is broken in most implementations, but the other comparison operators seem to do the right thing. However, that is observed behavior, and I don't trust it to do the same thing everywhere. The only built-in mapping is the object, which means all keys must be strings. Everything is an object, except for the three or forty things that aren't. Ugh. Referencing names of things that don't exist is not an error, but returns undefined. Referencing a property of undefined is an error. It's a great way to get an exception very, very far away from the code that was broken. Some implementations don't like extraneous commas, so you can't sanely write multi-line literal objects or arrays. When you have them, you'll get strange errors and it will take you a long time to find and fix them. There's no way to introspect the stack beyond the function that's currently running, and the function that called it. Your test failed, somewhere. Debugging JavaScript sucks. Exceptions raised by "C code" often carry no information about the last JavaScript line that was running (if you were especially fortunate, it would simply crash the browser, in IE's case anyway). Finding where an error happened basically requires a binary search of the new code that you introduced with debugging statements until you find it. Unit tests are good, the JavaScript unit test frameworks available aren't. The fact that exceptions are so hard to track down makes unit tests basically necessary in order to remain sane, so why the hell isn't there something as good as py.test? There's a port of Perl's TestSimple, which is usable, but there's plenty of things not to like about it right now (which is to be expected, at 0.03). There's no operator overloading. There's no sprintf (to format numbers, mostly). Generally, there are a lot of objects that behave like arrays, but aren't, so you either have to convert them to arrays or not expect to be able to use array methods everywhere. Fortunately, there isn't a quick and easy way to convert an array-like object to a real array. Awesome. No varargs syntax, but you can call functions with variable arguments and painfully extract additional arguments from the arguments local variable. So, it's kinda like Perl subs, except since arguments isn't a real array, you can't even shift off the arguments or otherwise use it as a regular array. foo.bar() and var bar = foo.bar; bar(); do entirely different things. You can write a function that does return a "bound function", but then you have this: var bar = bind(foo.bar, foo);. Dumb!
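Two of these complaints are easy to demonstrate in a handful of lines. The sketch below assumes nothing beyond plain, era-appropriate JavaScript; the bind function is a hand-rolled helper of the kind described above, not a built-in:

// Complaint: extending a prototype pollutes every for-in loop.
Object.prototype.shout = function () { return "HI"; };
var point = { x: 1, y: 2 };
for (var key in point) {
    // Logs "x" and "y" -- and also "shout", with no way to hide it.
    console.log(key);
}

// Complaint: foo.bar() and var bar = foo.bar; bar(); differ,
// because `this` is bound at the call site, not at lookup time.
var counter = {
    count: 0,
    increment: function () { this.count += 1; }
};
counter.increment();      // works: `this` is counter, count becomes 1
var inc = counter.increment;
inc();                    // `this` is the global object; counter.count stays 1

// The hand-rolled "bound function" workaround mentioned above.
function bind(fn, self) {
    return function () { return fn.apply(self, arguments); };
}
var boundInc = bind(counter.increment, counter);
boundInc();               // works again: counter.count becomes 2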
Upgrading to IIS 7 should be rather transparent; unfortunately, that is not the case when it comes to URL rewriting as we knew it from IIS 6. In IIS 6 all we had to do was to add a wildcard mapping, making sure that all requests went through the ASP.NET ISAPI process. After this was done, one could create a global.asax file that would either pass requests directly through or rewrite the URL based on an internal algorithm. UPDATE: Please see my updated post on how to do proper URL rewriting using IIS 7. I didn't really expect this to be a problem when I requested http://localhost for the first time after setting up my site on IIS 7 (all default settings). Unfortunately this was what I was presented with. Anyone having worked with wildcard mappings in IIS 6 will recognize this: it is the result you'll get after adding a wildcard mapping without having created your URL rewriting functionality. After adding a wildcard map, IIS will not automatically find a default file (by design). This, however, is not the cause of the problem here. I already have my URL rewriting functionality written in my global.asax BeginRequest method and I'd really like to just reuse my current code. Although the new IIS has a whole bunch of new features - one of them being a new "more correct" way of doing URL rewriting - I really just want to get my website up and running again so I can continue my work. What I present below is a quick'n'dirty hack that will get my old URL rewriting code to work again. It may not be the IIS 7 way of doing it, and it may not work in your case; it depends on the type of URL mapping you're doing in your project. In short, YMMV. My scenario For this website, improve.dk, all URLs except static files are requested as though they were folders. That means you will not see any pages ending in anything but a /. Any static files are requested as usual. That means I can be sure that a regular expression like *.* will catch all static files, while * will catch all pages - as well as the static files! How I got URL rewriting to work like IIS 6 Start by opening the IIS Manager and selecting your website. Now enter the "Handler Mappings" section: Notice the "StaticFile" handler. Currently it's set to match and catch both File and Directory requests. If you look back at the first image, you'll notice that the error message details that the handler causing the 404 error is the StaticFile handler. As I know that all my static files will have a file extension (also, I don't care for directory browsing), I'll simply change my StaticFile handler so it only matches *.* - and only files. Your StaticFile handler should now look like this: Now, if you go back and make a request to http://localhost you'll still get the 404 error, but this time the request is not handled by the StaticFile handler; actually it doesn't get handled by any handler at all: What needs to be done now is to map any and all requests to the aspnet_isapi.dll ISAPI file - just like we would usually do in IIS 6. Add a new Script Map to the list of Handler Mappings and set it up like this: Click OK and click Yes at the confirmation dialog: Now if you make a request to either http://localhost or any other file you'll get the following error: Looking through the Event log reveals the cause of the error: The aspnet_isapi.dll file cannot be used as a handler for websites running in the new IIS 7 Integrated mode, thus we will need to make our website run in Classic .NET mode.
Right click your website node in the IIS Manager and select Advanced Settings. Select the "Classic .NET AppPool" and close the dialog boxes: Now you should be able to make a request to http://localhost and see it work. Your URL rewriting should work like a charm as well: But obviously something's wrong. Making a request to any static file will reveal the problem: "Failed to Execute URL" - what a great descriptive error. Fortunately you won't have to spend hours ripping out hair... as I have already done that - at least I'll save a trip or two to the barber. The problem is that the static files are being processed by the aspnet_isapi.dll file instead of the request simply being sent along to the StaticFile handler. If you click the "View Ordered List…" link in the IIS Manager from the Handler Mappings view, you'll see the order in which the handlers are executed for each request: When you add a new Script Map it'll automatically get placed at the very top of the list, taking precedence over any other handlers, including the StaticFile one. What we have to do is to move our Wildcard handler to the very bottom, below the StaticFile handler. By letting the StaticFile handler have precedence over our Wildcard handler, we ensure that any static files (matching *.*) get processed correctly, while any other URLs get passed along to our own Wildcard handler that'll do the URL rewriting and make business work as usual: After doing so, both static files and your custom URLs should execute as they would under IIS 6: Disclaimer Please notice that this is a hack. This is not the way URL rewriting is supposed to be done under IIS 7. But instead of spending hours upon hours investigating how to do it the right way, this is a quick fix to make things work like they did before. Also please note that this solution is intended to work for my specific situation. Your needs for URL rewriting may not necessarily match mine, so you may have to modify certain settings to suit your specific needs. Mark S. Rasmussen I'm the CTO at iPaper where I cuddle with databases, mold code and maintain the overall technical & team responsibility. I'm an avid speaker at user groups & conferences. I love life, motorcycles, photography and all things technical. Say hi on Twitter, write me an email or look me up on LinkedIn.
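As a closing reference: the GUI steps above end up as handler entries in the site's web.config. The following is an illustrative sketch of what that section can look like - the module lists, framework path and other attribute values here are assumptions that will differ per machine, so verify them against the config IIS Manager actually writes for you:

<system.webServer>
  <handlers>
    <!-- StaticFile narrowed to files with an extension, File requests only -->
    <add name="StaticFile" path="*.*" verb="*"
         modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule"
         resourceType="File" requireAccess="Read" />
    <!-- Wildcard script map routing everything else through aspnet_isapi.dll;
         listed after StaticFile so static files keep taking precedence -->
    <add name="Wildcard" path="*" verb="*" modules="IsapiModule"
         scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll"
         resourceType="Unspecified" requireAccess="Script" />
  </handlers>
</system.webServer>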
Mid
[ 0.5618448637316561, 33.5, 26.125 ]
/** * Copyright (c) 2016-present, Facebook, Inc. * All rights reserved. * * This source code is licensed under the BSD-style license found in the * LICENSE file in the root directory of this source tree. An additional grant * of patent rights can be found in the PATENTS file in the same directory. */ #import "FBAllocationTrackerManager.h" #import <objc/runtime.h> #import "FBAllocationTrackerDefines.h" #import "FBAllocationTrackerImpl.h" #import "FBAllocationTrackerSummary.h" BOOL FBIsFBATEnabledInThisBuild(void) { #if _INTERNAL_FBAT_ENABLED return YES; #endif return NO; } #if _INTERNAL_FBAT_ENABLED @implementation FBAllocationTrackerManager { dispatch_queue_t _queue; NSUInteger _generationsClients; } - (instancetype)init { if (self = [super init]) { _queue = dispatch_queue_create("com.facebook.fbat.manager", DISPATCH_QUEUE_SERIAL); } return self; } + (instancetype)sharedManager { static FBAllocationTrackerManager *sharedManager = nil; static dispatch_once_t onceToken; dispatch_once(&onceToken, ^{ sharedManager = [FBAllocationTrackerManager new]; }); return sharedManager; } - (BOOL)isAllocationTrackerEnabled { return FB::AllocationTracker::isTracking(); } - (void)stopTrackingAllocations { FB::AllocationTracker::endTracking(); } - (void)startTrackingAllocations { FB::AllocationTracker::beginTracking(); } - (void)enableGenerations { dispatch_async(_queue, ^{ if (_generationsClients == 0) { FB::AllocationTracker::enableGenerations(); FB::AllocationTracker::markGeneration(); } _generationsClients += 1; }); } - (void)disableGenerations { dispatch_async(_queue, ^{ _generationsClients -= 1; if (_generationsClients <= 0) { FB::AllocationTracker::disableGenerations(); } }); } - (void)markGeneration { FB::AllocationTracker::markGeneration(); } - (NSArray<FBAllocationTrackerSummary *> *)currentAllocationSummary { FB::AllocationTracker::AllocationSummary summary = FB::AllocationTracker::allocationTrackerSummary(); NSMutableArray<FBAllocationTrackerSummary *> *array = [NSMutableArray new]; for (const auto &item: summary) { FB::AllocationTracker::SingleClassSummary singleSummary = item.second; Class aCls = item.first; NSString *className = NSStringFromClass(aCls); FBAllocationTrackerSummary *summaryObject = [[FBAllocationTrackerSummary alloc] initWithAllocations:singleSummary.allocations deallocations:singleSummary.deallocations aliveObjects:singleSummary.allocations - singleSummary.deallocations className:className instanceSize:singleSummary.instanceSize]; [array addObject:summaryObject]; } return array; } - (NSArray<FBAllocationTrackerSummary *> *)_getSingleGenerationSummary:(const FB::AllocationTracker::GenerationSummary &)summary { NSMutableArray *array = [NSMutableArray new]; for (const auto &kv: summary) { FBAllocationTrackerSummary *summaryObject = [[FBAllocationTrackerSummary alloc] initWithAllocations:0 deallocations:0 aliveObjects:kv.second className:NSStringFromClass(kv.first) instanceSize:class_getInstanceSize(kv.first)]; [array addObject:summaryObject]; } return array; } - (NSArray<NSArray<FBAllocationTrackerSummary *> *> *)currentSummaryForGenerations { FB::AllocationTracker::FullGenerationSummary summary = FB::AllocationTracker::generationSummary(); if (summary.size() == 0) { return nil; } NSMutableArray *array = [NSMutableArray new]; for (const auto &generation: summary) { [array addObject:[self _getSingleGenerationSummary:generation]]; } return array; } - (NSArray *)instancesForClass:(__unsafe_unretained Class)aCls inGeneration:(NSInteger)generation { std::vector<__weak id> objects = 
FB::AllocationTracker::instancesOfClassForGeneration(aCls, generation); if (objects.size() == 0) { return nil; } NSMutableArray *objectArray = [NSMutableArray new]; for (id obj: objects) { if (obj) { [objectArray addObject:obj]; } } return objectArray; } - (NSArray *)instancesOfClasses:(NSArray *)classes { return FB::AllocationTracker::instancesOfClasses(classes); } - (NSSet *)trackedClasses { std::vector<__unsafe_unretained Class> classes = FB::AllocationTracker::trackedClasses(); return [NSSet setWithObjects:classes.data() count:classes.size()]; } @end #else @implementation FBAllocationTrackerManager + (instancetype)sharedManager { return nil; } - (BOOL)isAllocationTrackerEnabled { return NO; } - (void)startTrackingAllocations {} - (void)stopTrackingAllocations {} - (void)enableGenerations {} - (void)disableGenerations {} - (void)markGeneration {} - (NSArray *)currentAllocationSummary { return nil; } - (NSArray *)currentSummaryForGenerations { return nil; } - (NSArray *)instancesForClass:(__unsafe_unretained Class)aCls inGeneration:(NSInteger)generation { return nil; } - (NSArray *)instancesOfClasses:(NSArray *)classes { return nil; } - (NSSet *)trackedClasses { return nil; } @end #endif
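For reference, a brief usage sketch for the manager above. This assumes a build compiled with _INTERNAL_FBAT_ENABLED (otherwise sharedManager returns nil and every call is a harmless no-op); the summary property names are inferred from the designated initializer above and may differ from the actual headers:

#import "FBAllocationTrackerManager.h"
#import "FBAllocationTrackerSummary.h"

static void FBATDumpAliveObjects(void)
{
    FBAllocationTrackerManager *manager = [FBAllocationTrackerManager sharedManager];

    // Start counting alloc/dealloc pairs as early as possible in the
    // process lifetime, e.g. in main() before UIApplicationMain.
    [manager startTrackingAllocations];
    [manager enableGenerations];

    // ... exercise the app, then inspect what is still alive ...
    for (FBAllocationTrackerSummary *summary in [manager currentAllocationSummary]) {
        // Property names (className, aliveObjects) are inferred from the
        // initWithAllocations:... initializer and are assumptions here.
        NSLog(@"%@: %lu alive", summary.className, (unsigned long)summary.aliveObjects);
    }
}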
Low
[ 0.48043478260869504, 27.625, 29.875 ]
Abell 383 Abell 383 is a galaxy cluster in the Abell catalogue. See also Abell catalogue List of Abell clusters References 383 Category:Abell richness class 2 Category:Active galaxies Category:Galaxy clusters Category:Eridanus (constellation)
Low
[ 0.527964205816554, 29.5, 26.375 ]
Behavioral and motor improvement after deep brain stimulation of the globus pallidus externus in a case of Tourette's syndrome. The objective of our paper is to show the partial loss of therapeutic effect after battery exhaustion in a previously successfully treated patient with refractory Tourette's syndrome (TS). We present a 47-year-old patient diagnosed with TS based on the TS Study Group Criteria and the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition. Surgery was considered based on refractoriness to conservative management. Presurgical evaluation included magnetic resonance imaging (MRI), positron emission tomography scan, and neuropsychologic, neurologic, and psychiatric tests utilizing the Yale-Brown Obsessive Compulsive Scale, the Yale Global Tic Severity Scale, the Hamilton Depression Rating Scale, the Hamilton Anxiety Rating Scale, the Global Assessment of Functioning Scale, and the Mini-Mental State Examination. Target coordinates were obtained from inversion recovery MRI. Quadripolar deep brain stimulation (DBS) electrodes were implanted bilaterally in the globus pallidus externus (GPe) and connected to the pulse generator in the same procedure. To determine the clinical response to DBS, the scores of the scales obtained preoperatively were compared with those obtained postoperatively. No surgical complications were detected, and according to the clinical scales, the patient experienced a marked improvement in his symptoms, although he never showed obsessive-compulsive disorder components of any type. The battery was exhausted after two years, with a subsequent significant partial loss of therapeutic effect. GPe seems to be a highly promising target of DBS for the treatment of medically refractory TS. After battery exhaustion, the patient experienced a marked partial decrease in the therapeutic effect, which confirms the beneficial action of this method.
Mid
[ 0.655, 32.75, 17.25 ]
<CsoundSynthesizer> <CsOptions> ; Select audio/midi flags here according to platform -odac ;;;realtime audio out ;-iadc ;;;uncomment -iadc if realtime audio input is needed too ; For Non-realtime ouput leave only the line below: ; -o hrtfstat-2.wav -W ;;; for file output any platform </CsOptions> <CsInstruments> sr = 44100 ksmps = 32 nchnls = 2 0dbfs = 1 instr 1 iAz = p4 iElev = p5 itim = ftlptim(1) ; transeg a dur ty b dur ty c dur ty d kamp transeg 0, p3*.1, 0, .9, p3*.3, -3, .5, p3*.3, -2, 0 ain loscil3 kamp, 50, 1 aleft,aright hrtfstat ain, iAz, iElev, "hrtf-44100-left.dat","hrtf-44100-right.dat" outs aleft, aright endin </CsInstruments> <CsScore> f 1 0 0 1 "Church.wav" 0 0 0 ;Csound computes tablesize ; Azim Elev i1 0 7 90 0 ;to the right i1 3 7 -90 -40 ;to the left and below i1 6 7 180 90 ;behind and up e </CsScore> </CsoundSynthesizer>
Low
[ 0.53384912959381, 34.5, 30.125 ]
Governor Jerry Brown Irreparably Harms The Children Of California By Signing Draconian Vaccine Law Signs Vaccine Legislation in Spite of Serious Health Consequences for California’s Children Either by Impeachment or Recall: BROWN MUST GO by Concerned Residents of California Governor Jerry Brown signed a mandatory childhood vaccine bill this week which ought to elicit the outrage of every Californian. This draconian piece of legislation is as fascist and outrageous as it gets. Clearly, Brown is in the pockets of those Big Pharma corporations who stand to gain the most from pushing such dangerous vaccines on the largest population of children of any state in the nation. As governor of the once great state of California, Brown has quick and easy access to the highest integrity scientific research which has demonstrated that vaccines are downright dangerous, as well as potentially lethal.[1] Childhood vaccination programs are especially fraught with medical risks and health consequences, some of which can manifest as life-threatening illnesses and diseases the younger the child is. As the young children grow into teenagers, many now develop disorders that are characterized by difficulties with social interaction and communication, such as Autism and Asperger’s syndrome. There are many other serious health conditions and medical ailments associated with childhood vaccines[2] which Brown has been apprised of, yet he proceeded with signing this odious legislation. Why does Jerry Brown deliberately jeopardize the health of California’s children? Truly, that is the 64 thousand dollar question! Knowing that so many parents throughout the state are vehemently opposed to having their children vaccinated, why did Brown approve this highly dubious measure? Mind you, the governor has literally put his reputation on the line by enthusiastically endorsing such a repugnant bill. For both hidden and duplicitous reasons, he has now brought the wrath of parents and the outrage of other concerned citizens upon himself. What are those reasons? First, it is extremely important to understand that Governor Jerry Brown is not stupid. He knows full well that many children will die from the ever-intensifying super-vaccination regime.[3] He knows that many children, teenagers and young adults will suffer terribly from many of the illnesses and diseases which are the direct result of these vaccines. The governor has also been apprised of the fact that many of his younger Baby Boomer acquaintances have experienced lifelong medical conditions and health challenges due to their childhood vaccinations. Hence, the Governor made his fateful decision based on all of this well-known information; after all, most of it is posted all over the internet. Much accurate data regarding the serious health repercussions has been disseminated over many years, and both Brown and the California legislature have seen it all. His staff in Sacramento has received volumes of authoritative scientific research papers on the dangers of vaccines. Therefore, the governor cannot plead ignorance of his crime against the people of California. Vaccine Injury Compensation Trust Fund at $3.5 Billion As of March 31, 2014, the special fund established to compensate those injured by vaccines stood at $3,475,302,680.15. It is a fund paid into by the U.S. taxpayer by way of a $0.75 excise tax charged for each vaccine.
Even though the Big Pharma corporations produce the defective and dangerous vaccines, it is those who actually receive and pay for the vaccine who are forced to contribute to the Vaccine Injury Compensation Trust Fund. In other words those who are medically injured are held financially responsible for damage caused by the defective vaccines. Now that California has a mandatory vaccine law in place, the parents are forced to pay for the numerous vaccinations required, even though they adamantly reject them. They must likewise pay the tax to cover the costs that are anticipated from the serious injuries to their children’s health. In other words the federal government fully expects a certain number of claims to be filed as an actuarial percentage of those vaccinated throughout every state. To add insult to [vaccine] injury, the well compensated government lawyers are then incentivized to aggressively litigate every legitimate claim that is filed by parents and other aggrieved parties. The government then uses the vaccine consumer’s money to fight their meritorious claim in court. If one intends to file a lawsuit proving the causal link between the vaccine and autism, the case will be given top priority by the government so as to ensure that the plaintiff will not be successful in collecting either compensatory or punitive damages. They simply will not allow a precedent to be set or else the floodgates will be opened. The U.S. government and pharmaceutical corporations do not want to pay out billions of dollars in claims to the countless children who have developed Autism and Asperger’s syndrome. Nor do they want to fairly compensate those teenagers and young adults who are still taking medication for the ADD or ADHD that were caused by childhood vaccination programs. Certainly, Governor Brown knows that the Italian courts have decided that vaccines cause Autism. Even though the Vaccine Injury Compensation Trust Fund is very much a federal program, Brown’s administration is well aware of the message that such a trust fund sends to the public. There are now so many scientifically documented cases that vaccines have caused serious physical injury and psychological harm that Brown himself ought to be held directly responsible for approving the exceptionally dangerous and harmful law SB 277. What’s the real point here? BROWN KNOWS THAT SOME VACCINES ARE EXTREMELY DANGEROUS FOR CHILDREN. The governor has been provided irrefutable scientific evidence that a significant number of children will sustain physical injury and/or psychological harm from the ever-intensifying super-vaccination regime that is in place today throughout the state of California. Then why did he sign the legislation? Because he was told to. If he didn’t sign it, his political career would have been terminated by a real or fabricated scandal, a politically engineered recall vote, or a call for impeachment. Such is the importance of the bellwether state of California regarding the advancement of the Super-Vaccination agenda. What Brown failed to understand is that he has two masters, not one. Where the globalists have obviously funded all of his recent elections just like Obama’s, he functions in a much smaller political milieu than the POTUS. So, although he has made himself beholden to his corporate sponsors due to his receipt of excessive campaign contributions, he must also answer to the people who elected him. 
It appears that the governor is quite fearful of his corporate overlords, but is not afraid of his real master — the voters of California. In reality he answers to the people, and only the people, of California. Clearly, he has forgotten this inconvenient fact of life and must be reminded post haste. Until he learns to fear the righteous indignation of the CA electorate, he will consistently do the bidding of his corporate masters. They will always command his attention since they have consistently financed his political campaigns. It is much easier to hold a rogue governor like Jerry Brown accountable for the criminal actions which he and his administration are guilty of than, say, an Obama or any POTUS. Now that he has signed a law ‘legalizing’ the systematic harm of California’s children, Brown has set himself up to be held personally accountable for all damages which result from his direct approval of the patently illicit vaccine legislation SB 277. This point CANNOT be over-emphasized. Governor Brown, and every other elected state representative and appointed bureaucrat who is responsible for this heinous initiative which will sicken and inflict disease on the children of California, need to be held accountable for their criminal conduct. Once each and every one of them is held personally responsible for this ongoing crime against the people, many more initiatives can be undertaken to take back the power from what has essentially become the Government-Corporate Vaccine Complex. First and foremost, however, Governor Jerry Brown MUST be made an example of. This critical point is both non-negotiable and uncompromising. As the chief executive of the state, whose signature ‘legally’ established the illicit mandatory vaccine law — SB 277 — he is directly responsible. Therefore, he must suffer the consequences of committing what is actually a state-sponsored crime against the people. In this way, all of the elected co-conspirators who colluded on behalf of the pharmaceutical companies will likewise be put on notice for their similar conduct. Exactly what is the message being sent to these so-called state representatives? Don’t Mess With The Health Of Our Children … Not Now, Not Ever! What is the real agenda behind SB 277, the Mandatory Childhood Vaccine Bill? Whenever the ruling cabal (aka the Shadow Government) wants a piece of legislation approved, there is simply no stopping it. As to exactly why this particular vaccine bill was so necessary to their agenda, that will be taken up in the next article. In the meantime the following exposé provides an excellent analysis of the relevant historical background, current context, and the prevailing MO of the Government-Corporate Vaccine Complex. What is exceedingly important to understand is that a line has been drawn in the California sand. On one side stand the people of California, who will no longer be dictated to in such a tyrannical manner by both their governor and the Big Pharma lobbyists who wrote the draconian vaccine law. On the other side stand the corporate globalists who have always gotten their way … until now! There is one critical element of this rapidly developing and exceedingly volatile conflict. That element concerns personal sovereignty. The anti-vax movement rests firmly on the foundation of the sacredness of the personal sovereignty of each family member, as well as every citizen of the USA. That sovereignty will not be infringed upon.
Nor do the anti-vax folks ever infringe upon the personal sovereignty of those who wish to vaccinate themselves all day long. “Have at it!” they might exclaim. The pro-vaccine crowd has been manipulated by the Big Pharma corporations who work in tandem with the U.S. government. By means of a highly focused and relentless fear-based media campaign, they have been successful at scaring the uneducated into an unending and ever-growing vaccine regime. This is what many now refer to as a Super-Vaccination agenda. California is now the central battleground for the necessary defense of personal sovereignty. It is actually one of many battlefields throughout the nation which constitute an ongoing war between the U.S. citizenry and the U.S. Federal Government. It is the U.S. Gov’t that routinely implements a covert corporate agenda that is often inimical to life on Earth. The final point here is that Governor Brown has chosen the side of the Government-Corporate Vaccine Complex over the side of the people he swore to serve and protect. Hence, his official conduct is nothing less than treasonous, and he should be treated as a traitor by those children and families whom he has so profoundly betrayed.
Low
[ 0.479357798165137, 26.125, 28.375 ]
Q: MYSQL php: query multiple rows and return value if "WHERE IN" id not found I am developing a website on WordPress that manages events. The issue is that I need to get some event titles from their corresponding IDs in a single query. I am trying to get multiple rows out of a single MySQL query, which works fine using "WHERE IN ({$IDs})": $events_implode = implode(',', $events); $events_titles = $wpdb->get_results("SELECT `title` FROM `wp_events` WHERE `id` IN ({$events_implode}) ORDER BY FIELD(id,{$events_implode})", ARRAY_A); However, I need it to return a value/string if one of the $IDs was not found in the query instead of returning nothing. Example: If $events_implode is like 318,185,180,377, and the events with IDs 180 & 185 do not exist (having been deleted, for instance), I get back the event titles for 318 & 377 only: Array ( [0] => Array ( [title] => Title 1 ) [1] => Array ( [title] => Title 4 ) ) while I need it to return something like: Array ( [0] => Array ( [title] => Title 1 ) [3] => Array ( [title] => Title 4 ) ) I tried IFNULL: $events_titles = $wpdb->get_results("SELECT IFNULL( (SELECT `title` FROM `wp_events` WHERE `id` IN ({$events_implode}) ORDER BY FIELD(id,{$events_implode}) ),'not found')", ARRAY_A); but since the query returns more than one row, I get the error message: "mysqli_query(): (21000/1242): Subquery returns more than 1 row" Is there a solution to that? A: You could achieve this by also querying the id and not just the title: SELECT id, title FROM ... etc Then your result array will look like this (for the example data): array( 0 => array("id" => 318, "title" => "title for 318"), 1 => array("id" => 377, "title" => "title for 377") ) But this can be converted to what you need, with the help of the $events array you already have: $hash = array(); foreach ($events_titles as $row) { $hash[array_search($row['id'], $events)] = array('title' => $row['title']); } $events_titles = $hash; Now the array looks like this (for the example data): array( 0 => array("title" => "title for 318"), 3 => array("title" => "title for 377") ) NB: That conversion can also be done without an explicit loop (this relies on the ORDER BY FIELD ordering of the result): $ids = array_intersect($events, array_column($events_titles, 'id')); $events_titles = array_combine( array_keys($ids), array_map(function ($title) { return array('title' => $title); }, array_column($events_titles, 'title')) ); Alternative The above is probably the best solution for your case, but you could also build your query dynamically to achieve this: $subsql = implode( ' UNION ALL ', array_map(function ($i, $event) { return "SELECT $event AS id, $i AS pos"; }, array_keys($events), $events) ); $sql = "SELECT wp_events.title FROM ($subsql) AS eid LEFT JOIN wp_events ON wp_events.id = eid.id ORDER BY eid.pos"; $events_titles = $wpdb->get_results($sql, ARRAY_A); (The pos column preserves the original order of $events; ordering by eid.id would sort the rows numerically instead.) The value for $subsql will be like this (for the example data): SELECT 318 AS id, 0 AS pos UNION ALL SELECT 185 AS id, 1 AS pos UNION ALL SELECT 180 AS id, 2 AS pos UNION ALL SELECT 377 AS id, 3 AS pos The value of $events_titles will be an array that has one entry for each event, whether or not it matched. So for the example you gave, you would get this result: array( 0 => array("title" => "title for 318"), 1 => array("title" => null), // because no match with 185 2 => array("title" => null), // because no match with 180 3 => array("title" => "title for 377") ) So you can now just test for null. If additionally you would like to remove the null entries without changing the indexes (0 ...
3) in the array, then do this: $events_titles = array_filter($events_titles, function ($row) { return $row['title'] !== null; }); (Test $row['title'] rather than the row itself - a non-empty array such as array("title" => null) is always truthy, so filtering on the row would remove nothing.) This will give you (in the example): array( 0 => array("title" => "title for 318"), 3 => array("title" => "title for 377") )
Low
[ 0.5361305361305361, 28.75, 24.875 ]
Scorn For the Horn by Brian Hall on July 20, 2010 Back in South Africa, fans of the beautiful game are sworn to the horn; everywhere else in the world the feelings are those of scorn. Several major sporting events have already made an extra effort to ban the obnoxious-sounding instrument. The long list of vuvuzela haters includes Major League Baseball, Mixed Martial Arts, and next summer’s rugby World Cup in New Zealand. But the Premier League hasn’t banned the beehive-sounding instrument just yet; instead, the league has left it up to each venue that hosts Premier League matches to decide whether to ban the horns. And one club has already stepped up, taken the initiative, and banned the horn for good from its home matches. The vuvuzelas have been banned at White Hart Lane, and this statement appeared on Tottenham Hotspur’s website yesterday: Following discussions with the Police and representatives from the local licensing authorities, the club will not be permitting vuvuzelas or similar instruments into White Hart Lane on match days. We are concerned that the presence of the instruments within the stadium pose unnecessary risks to public safety and could impact on the ability of all supporters to hear any emergency safety announcements. We are very proud of the fantastic atmosphere that our supporters produce organically at White Hart Lane and we are all very much looking forward to this continuing into the forthcoming season. The issue of public safety is an interesting one. I’m sure the truth is that the horns were loud and obnoxious and no one at White Hart Lane wanted to be annoyed by the trumpets. So instead, it appears the horns have been formally outlawed due to the interference they create with announcements over the loudspeaker system. And the horns will probably remain a South African favorite, considering they were used for the world’s biggest event and not a weekly affair like the EPL schedule. Plus, the horn is part of South African culture and should remain an important part of the host nation’s sporting culture as well.
Mid
[ 0.59275053304904, 34.75, 23.875 ]
Phantom limb pain often follows the amputation or deafferentation of a limb and has a large impact on a patient\'s life.^[@R1]^ Although effective treatment is limited and lacks sufficient evidence,^[@R2][@R3][@R5]^ mirror therapy and related techniques have been hypothesized to reduce pain by strengthening the cortical representation of the phantom hand.^[@R2][@R3][@R9]^ However, some recent studies have suggested that the pain reduction is associated with a weakening of the representation.^[@R10],[@R11]^ In our previous study, we developed a brain--computer interface (BCI) to control a robotic hand^[@R12],[@R13]^ using magnetoencephalography (MEG) signals and tested the hypothesis that training to use the BCI robotic hand, which was controlled based on the cortical representation of the phantom hand by moving the phantom hand, strengthened the cortical representation of the phantom hand to reduce the pain.^[@R14]^ However, contrary to that hypothesis, the pain was significantly increased immediately after the BCI training compared to after a sham training in a crossover trial, although the BCI training succeeded in strengthening the cortical representation of the phantom hand. Therefore, to decrease the pain by attenuating the cortical representation of the phantom hand, the same patients were trained to control the BCI based on the cortical representation of the intact hand by moving the phantom hand. In this case, the pain was significantly decreased immediately after the training compared to the pain changes of the previous sham trainings, which were not randomized with the current training. These results suggested that training to use the BCI based on the cortical representation of the intact hand movements, coupled with the phantom hand, temporarily relieved pain.^[@R11]^ However, the efficacy of the BCI training has not been elucidated by a randomized controlled trial. In this study, we tested the hypothesis that the BCI training based on the cortical representation of the intact hand movements reduces phantom limb pain sustainably. Methods {#s1} ======= Classification of evidence {#s1-1} -------------------------- This interventional study provides Class III evidence that training with BCI reduces phantom limb pain significantly more than sham training. We tested the hypothesis that BCI training reduces phantom limb pain sustainably by a single-blinded randomized sham-controlled crossover trial ([figure 1](#F1){ref-type="fig"}). The primary outcome was reduced pain at day 4, defined as visual analogue scale (VAS) normalized by the VAS scores at day 1. The exclusion/inclusion criteria are defined in the following section. The dropout rate was less than 20%. Study design and participants {#s1-2} ----------------------------- We conducted a randomized single-blinded crossover design trial at Osaka University in Japan. Eligible patients had chronic phantom limb pain of the upper limb, and we selected patients who met all of the following inclusion criteria: (1) phantom hand sensation, (2) chronic pain in the phantom hand, (3) no hand or no actual sensation in the residual hand, (4) no hand or completely plegic hand, and (5) normal comprehension and intellectual capacity according to the Japanese Adult Reading Test (JART-25).
Here, "no hand" refers to amputees and "no actual sensation in the residual hand" refers to complete deafferentation. The affected hands of all patients with brachial plexus root avulsion had no sensation and were plegic due to complete avulsion of their roots (as definitively confirmed through MRI or CT with myelogram). We excluded patients with incompletely plegic hands so that the motor function of these patients was the same as the amputees. Participants who were not able to be recorded by MEG were excluded to allow all patients to perform the training using MEG. Standard protocol approvals, registrations, and patient consents {#s1-3} ---------------------------------------------------------------- The study adhered to the Declaration of Helsinki and was performed in accordance with protocols approved by the Ethics Committee of the Osaka University Clinical Trial Center (No. 13381-6, UMIN000013608, study protocol available from Dryad, text 1, [doi.org/10.5061/dryad.15dv41nt9](https://doi.org/10.5061/dryad.15dv41nt9)). All patients were informed of this study\'s purpose and possible consequences (i.e., that it sought to induce changes in cortical activity related to pain using the BCI and that the experiments included 2 trainings with different decoders), and written informed consent was obtained. Patients were informed that their pain might increase or decrease after training, although the change would be temporary. Randomization and masking {#s1-4} ------------------------- The participants were enrolled in the study by 2 medical doctors who had no involvement with the rest of the trial. The experimenter in charge of the BCI system assigned the patients to the trial groups by a block method with a block size of 2. Although this experimenter was aware of the treatment allocation, the participants and the other experimenters who assessed pain were not aware of the allocations. All of the experiment\'s procedures and settings were exactly the same among trials, except for the selection of the decoder, which was set by the experimenter. Notably, although the assessment of pain was performed blindly, this study was single-blinded because the experimenter in charge of the BCI system was aware of the treatment allocation. Controlling the BCI system blindly was difficult. Therefore, the randomization was also performed by the experimenter in charge of the BCI system. To assess the masking, we asked all participants whether the hand image was controllable. Notably, we did not ask the patients their group allocation directly because we could not do the interview after the last follow-up for all patients. Procedures {#s1-5} ---------- All patients completed 2 sessions with a washout period of more than 3 weeks; each session consisted of experiments on 3 consecutive days using a different training---real or random. At the beginning of the 3-day experiments, each patient\'s pain was evaluated with a 100-mm VAS. The patients marked a point on a 100-mm horizontal line to indicate their pain intensity; the line represented a range with the worst pain at the right side and no pain at the left side. The patients then performed a movement task for their phantom hand while the MEG signals were recorded. 
The patients were visually instructed with the Japanese word for "grasp" or "open" to move their phantom hand so that it was either grasping or opening at the presented times, without moving other body parts ([figure 2A](#F2){ref-type="fig"}).^[@R15]^ Next, the patient did the same movement task with their intact hand, while the MEG signals from 84 selected sensors were recorded ([figure 2B](#F2){ref-type="fig"}). From the MEG signals during the intact hand movements, the motor cortical currents were estimated at 126 selected cortical points contralateral to the intact hand ([figure 2C](#F2){ref-type="fig"}). These were converted into *z* scores using mean and SD estimated from the 50 seconds of resting state data before the movement task. *z* Scores were averaged in a 400-ms time window from −2,000 to 1,000 ms at 100-ms intervals according to the cue. We constructed a "real decoder" to estimate the likelihood of the intact hand\'s "opening" movement using the *z* scores averaged in a 400-ms time window by sparse logistic regression (SLR).^[@R16]^ During this decoder construction, which took about 5 minutes, the patient stayed at rest in the MEG scanner with eyes closed. ![Movement task and training to move the virtual hand image controlled by brain--computer interface\ (A) The movement task began with a 3-second visual presentation of a black cross. A Japanese word was shown for 1 second to instruct the participants which movement to perform. After two 1-second timing cues, the execution cue with the cross sign was presented for 0.5 seconds with a sound, and patients performed the indicated movement. Cues with sounds were repeated 4 times for each instruction. Each movement type was assigned in random order 10 times. (B) The locations of the 84 selected sensors are shown as red points. (C) The 126 selected vertices are shown in red on the motor cortex for each hemisphere. (D) Schematic explanation of the training to control the virtual hand image. The hand image was presented to patients during the training and moved according to the output of the decoder every 200 ms. MEG = magnetoencephalography.](NEUROLOGY2019028597FF2){#F2} After the decoder construction, the patients performed 10 minutes of real or random training 3 times on day 1. During the real training, a virtual image of the patient\'s phantom hand was presented to the patient and moved according to the likelihood of the hand opening as evaluated by the real decoder using the 400-ms averaged *z* scores every 200 ms, which was calculated from the MEG signals obtained online ([figure 2D](#F2){ref-type="fig"} and [video 1](#SM1){ref-type="supplementary-material"}). Notably, we only used the cortical currents estimated from the MEG signals from 84 selected sensors both for the construction of the real decoder and for the real training. During the random training, the same image was controlled by a value that randomly increased or decreased every 200 ms. For both trainings, patients were instructed to control the same images by moving their phantom hand. The difference between the 2 trainings was whether or not the images were controlled based on cortical activities. Trainings were done 6 times on day 2, and 3 times on days 1 and 3. After the training on day 3, the patient performed the movement task for the phantom hand and the intact hand in the same manner as on day 1 to evaluate whether and how the cortical representation of the phantom hand had changed. 
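As a rough illustration of the decoding step described above, the pipeline can be sketched as follows. This is a minimal sketch, not the authors' code: the paper used SLR in MATLAB, for which scikit-learn's L1-penalized logistic regression stands in here, and the arrays are random stand-ins with the shapes given in the text:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_points = 20, 126  # movement trials, selected cortical points

# Stand-ins for the 400-ms-averaged cortical currents and grasp(0)/open(1) labels.
currents = rng.standard_normal((n_trials, n_points))
labels = rng.integers(0, 2, n_trials)

# z-score each cortical point against resting-state statistics (50 s of rest).
rest = rng.standard_normal((50, n_points))
z = (currents - rest.mean(axis=0)) / rest.std(axis=0)

# L1-penalized logistic regression as a rough stand-in for the sparse
# logistic regression (SLR) used in the paper.
decoder = LogisticRegression(penalty="l1", solver="liblinear").fit(z, labels)

# Online use: the likelihood of "opening" for a new averaged window,
# recomputed every 200 ms to drive the virtual hand image.
new_window = rng.standard_normal((1, n_points))
new_z = (new_window - rest.mean(axis=0)) / rest.std(axis=0)
p_open = decoder.predict_proba(new_z)[0, 1]
print(f"likelihood of opening: {p_open:.2f}")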
The amount of training was maximized by keeping the whole MEG recording of each day within 2 hours to avoid patient overload. Finally, immediately after each day\'s training, we evaluated patient pain by VAS and asked patients to describe whether they could control the hand image during the training. The patients reported their pain with a VAS at a similar time to the training every day from day 4 to day 20. Although the pain was evaluated by short-form McGill Pain Questionnaire 2 (SF-MPQ2) at the beginning of the training for each patient, we used VAS to evaluate pain repeatedly during and after the training. 10.1212/009858_Video_1 ###### This video shows an example of the brain--computer interface training during which a patient controlled his hand image by moving his phantom hand. The phantom hand image was shown with the probability of "open hand," which was estimated from the cortical motor currents online. The phantom hand image was controlled according to the probability. The black line in the left panel shows the estimated probability. The red line shows the time average of the black line for 5 consecutive points.Download Supplementary Video 1 via [http://dx.doi.org/10.1212/009858_Video_1](10.1212/009858_Video_1) Outcomes {#s1-6} -------- The primary outcome was reduction of pain at day 4, defined as the VAS normalized by the VAS scores at day 1 (baseline). The VAS scores were normalized to compare the pain reduction rate among the real and random trainings. As the secondary outcome, the normalized VAS was compared among trainings at the remaining follow-up. Safety and adverse events were evaluated for each trial. If patients failed to report pain, the lost VAS score was treated as missing data. MEG recordings {#s1-7} -------------- Participants were placed in the supine position, with their heads centered in the gantry. Patients were instructed not to move the head during the measurement to avoid motion artifacts. A projection screen presented visual stimuli (Presentation, Neurobehavioral Systems, Albany, CA) from a liquid crystal projector (LVP-HC6800, Mitsubishi Electric, Tokyo, Japan). MEG signals were measured using a 160-channel whole-head MEG system equipped with coaxial-type gradiometers (MEGvision NEO, Yokogawa Electric Corporation, Kanazawa, Japan) housed in a magnetically shielded room. MEG signals were sampled at 1,000 Hz with an online high-pass filter at 0.1 Hz, a band-rejection filter at 60 Hz, and a low-pass filter at 200 Hz, and acquired online by FPGA DAQ boards (PXI-7854R, National Instruments, Austin, TX) after passing through an optical isolation circuit. Five head marker coils were attached to each patient\'s face before the MEG recording to determine the position and orientation of sensors relative to the head. The positions of the coils were measured to evaluate differences in head position before and after each MEG recording, with a maximum acceptable difference of 5 mm. Cortical current estimation by variational Bayesian multimodal encephalography {#s1-8} ------------------------------------------------------------------------------ A polygon model of the cortical surface was constructed based on structural MRI (T1-weighted; Signa HDxt Excite 3.0T, GE Healthcare UK Ltd., Buckinghamshire, UK) using Freesurfer software (Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown). 
To align MEG data with individual MRI data, we scanned the 3D facial surface, positions of the marker coils, and 50 points on the scalp of each participant (FastSCAN Cobra, Polhemus, Colchester, VT). 3D facial surface data were superimposed on the anatomical facial surface extracted from the MRI data. Marker coil positions were also measured using the MEG system before each recording and were used to align the MEG system with the MRI through the FastSCAN coordinate system. Cortical currents were estimated from MEG data using variational Bayesian multimodal encephalography (ATR Neural Information Analysis Laboratories, Kyoto, Japan).^[@R17]^ The program estimated 4,004 single-current dipoles equidistantly distributed and perpendicular to the cortical surface. An inverse filter was calculated to estimate the cortical current of each dipole using MEG signals from all trials from 0 to 1 second during the movement task, with the baseline of current variance estimated from the signals from −1.5 to −0.5 seconds. We used a uniform prior. The hyperparameters m0 and γ0 were set to 100 and 10, respectively. The lead field was computed using the boundary element method with coregistered sensor positions and the polygon model using the data obtained immediately before each 10-minute training. We only used the cortical currents of the sensorimotor cortex ipsilateral to the phantom hand from the whole cortical currents estimated from the MEG signals of 84 selected sensors. Preparation of the decoder for BCI {#s1-9} ---------------------------------- To construct a decoder, the experimenter determined the timing from the cue that had the most information for the intact hand movements. A support vector machine with a radial basis function kernel was used to classify the movement types of the intact hand by 10-fold cross-validation for the time-averaged *z* score at each timing from −2,000 to 1,000 ms with several hyperparameters that were selected by the experimenter.^[@R13]^ Then, the experimenter selected one timing of data, which had high classification accuracy as evaluated by the support vector machine. The *z* scores of the selected timing of data were used for decoder construction to infer the intact hand movements of grasping and opening by SLR.^[@R16]^ We used SLR to estimate the likelihood of one movement represented as a logistic function. During the training, the constructed decoder estimated the likelihood of the "opening" movement using the *z* scores estimated from the MEG signals online every 200 ms. MATLAB R2013a (MathWorks, Natick, MA) was used for the calculations. Training to move a virtual phantom hand controlled by BCI {#s1-10} --------------------------------------------------------- We took 8--10 stepped pictures of the patient\'s intact hand from grasping to opening. Each picture was flipped right to left to make a virtual image of the patient\'s phantom hand; this image was presented to the patient during the training and moved according to the likelihood of opening the hand ([figure 2D](#F2){ref-type="fig"} and [video 1](#SM1){ref-type="supplementary-material"}). The likelihood was evaluated as a number from 0 to 1. The numbers were then divided by 8--10 steps and assigned to the flipped pictures. During the random training, the same image was controlled by randomly generated values, starting from 0, with −0.1 or 0.1 (selected randomly using the MATLAB "randi" function) added every 200 ms. If the absolute value exceeded 1, the value was not changed.
This process generated a gradual increase and decrease of the value from −1 to 1. Statistical analysis {#s1-11} -------------------- Sample size calculation was based on the pain reduction rate measured using VAS scores in our previous study,^[@R11]^ in which VAS scores were reduced to 0.9 (SD 0.22) of the pretraining value (VAS after training/VAS before training) on average after training. Pain reduction in 3 consecutive 10-minute trainings was therefore expected to be 0.729 (0.9^3^). Assuming a difference in the score of 0.27 between the training with the real and the random decoder, approximately 12 patients were necessary to achieve 80% power (simplified calculation as 2-sided *t* test for paired samples; α = 5%). Statistical analyses were performed using MATLAB R2013a. Data sets were initially assessed for normality using the Kolmogorov-Smirnov test. VAS scores were evaluated before and after each day\'s trainings (pretraining and posttraining) for the 2 conditions (real training and random training) on each of the 3 days (day 1, day 2, and day 3) by 3-way analysis of variance (ANOVA). Statistical significance was considered at *p* \< 0.05. For each real and random training, VAS scores were compared before and after training among the 3 days by 2-way ANOVA with Bonferroni correction. VAS score differences were evaluated by paired Student *t* test with Bonferroni correction. The effect size of the pain reduction was evaluated by Cohen *d*. Pre- and post-training VAS scores for each training day were averaged and reported as the means with SDs, and daily pretraining scores were normalized by the day 1 pretraining score. The normalized VAS scores from days 1--10 and the normalized VAS averaged from days 11--20 were compared between the real and random trainings among days by 2-way ANOVA with *p* \< 0.05 (real vs random and days). Normalized VAS scores between real and random trainings were compared by paired Student *t* test for each day with *p* \< 0.05. We used multiple imputations with sequential regression for the missing VAS scores during the follow-up to create and analyze 20 separately imputed datasets. Calculations were performed using IBM SPSS. The estimates and the SDs were combined using Rubin\'s rules. Data availability {#s1-12} ----------------- The data that support the findings of this study are available on request from the corresponding author and from Dryad, tables e-3 through e-6 ([doi.org/10.5061/dryad.15dv41nt9](https://doi.org/10.5061/dryad.15dv41nt9)). Results {#s2} ======= Patients with phantom limb pain {#s2-1} ------------------------------- Between August 1, 2015, and June 30, 2018, 14 patients with chronic phantom limb pain were recruited at Osaka University Hospital in Japan ([figure 1](#F1){ref-type="fig"}).^[@R18]^ Two patients were excluded because they could move their affected hands slightly and were found to have incomplete avulsion of their roots during detailed examination. The follow-up with the participants ended on July 3, 2018, at which point the number of participants was 12. Twelve patients with phantom limbs due to brachial plexus root avulsion or amputation of the forearm participated in this study ([table 1](#T1){ref-type="table"} and table e-1, [doi.org/10.5061/dryad.15dv41nt9](https://doi.org/10.5061/dryad.15dv41nt9)). Pain was characterized by the Japanese version of the SF-MPQ2^[@R19]^ before the experiment. ###### Clinical profiles of patients ![](NEUROLOGY2019028597TT1) The patients were randomly allocated into 2 groups ([figure 1](#F1){ref-type="fig"}).
On day 1, the total scores of SF-MPQ2 before the training were not significantly different between the 2 groups (mean \[SD\]; real first, 35.3 \[25.0\]; random first, 32.8 \[31.5\]; *p* = 0.88). However, the VAS scores were significantly different between the 2 groups (real first, 60.8 \[25.1\], /100 mm; random first, 22.8 \[6.9\]; *p* = 0.005), because we did not allocate the participants based on the pain scales. During the follow-up (day 5--10), a total of 11 out of 144 scores (7.6%) were missing due to the patient\'s carelessness or misunderstanding (tables e-3 through e-6, [doi.org/10.5061/dryad.15dv41nt9](https://doi.org/10.5061/dryad.15dv41nt9)). BCI training reduced pain during the 3-day trainings {#s2-2} ---------------------------------------------------- Among the 3 consecutive days of training, the training significantly changed VAS scores, although the differences of VAS scores were not significant between real and random trainings (pre- vs post-training, *p* = 0.0004 \< 0.05; real vs random, *p* = 0.71; days, *p* = 0.42, interaction *p* \> 0.05). As a post hoc test, VAS scores were significantly reduced during real training (pre- vs post-training, *p* = 0.0030 \< 0.025; days, *p* = 0.36; interaction *p* = 0.13; [figure 3A](#F3){ref-type="fig"}), but not during random training (pre- vs post-training, *p* = 0.047 \> 0.025; days, *p* = 0.90; interaction *p* = 0.96, [figure 3B](#F3){ref-type="fig"}). Notably, pain was reduced with a large effect size on day 1 and day 2 of the real trainings ([table 2](#T2){ref-type="table"} and [figure 3A](#F3){ref-type="fig"}). However, for random training days, VAS scores were not significantly decreased, with smaller effect sizes ([table 2](#T2){ref-type="table"} and [figure 3B](#F3){ref-type="fig"}). Moreover, pretraining pain was reduced during the 3 days of real training, and these VAS scores significantly decreased from day 1 to day 3 and from day 2 to day 3 but not from day 1 to day 2 ([table 2](#T2){ref-type="table"} and [figure 3A](#F3){ref-type="fig"}). Pretraining VAS scores for the random training were not significantly changed across the 3 days ([table 2](#T2){ref-type="table"} and [figure 3B](#F3){ref-type="fig"}). Real training significantly reduced pain immediately after training as well as at the beginning of training on day 3, which demonstrated a cumulative pain reduction effect, although the VAS scores were not significantly different between real and random trainings during the 3 days of trainings and the VAS changes were not specific to the real training. ![Real training decreased pain\ The visual analogue scale (VAS) scores are shown as box and whisker plots between pretraining (filled) and posttraining (shaded) on the 3 days of real training (red; A) and random training (blue; B). The cross represents the mean. The dot shows the VAS scores. \**p* \< 0.05, \*\**p* \< 0.01, paired Student *t* test, Bonferroni corrected.](NEUROLOGY2019028597FF3){#F3} ###### Cohen *d* of the training ![](NEUROLOGY2019028597TT2) The only adverse event was increased pain during the training compared to that at the beginning for 2 patients (16.7%) in the real training and 7 patients (58.3%) in the random training. The 2 patients who experienced pain during real training had brachial plexus root avulsion and also experienced pain during random training. The VAS scores of the day 1 pretraining were not significantly different between the first and second arms of the crossover trial (first, 41.8 \[15.0\]; second, 40.1 \[9.3\]; *p* = 0.68).
However, the VAS scores of the day 1 pretraining were significantly different between the real and random training due to the differences of the baseline VAS scores (real, 45.3 \[24.3\]; random, 36.6 \[18.5\]; *p* = 0.019). According to patient interviews after each training, no patient realized that the random training was unconnected to their movements (table e-2, [doi.org/10.5061/dryad.15dv41nt9](https://doi.org/10.5061/dryad.15dv41nt9)). Real training reduced pain for 5 days after the 3-day trainings {#s2-3} --------------------------------------------------------------- As a primary outcome of this study, the analgesic effects of the training were evaluated after the 3-day trainings. From day 1 to day 4, VAS scores were significantly reduced after the real training (45.3 \[24.2\] at day 1 to 30.9 \[20.6\] at day 4; *p* = 0.009 \< 0.025) but not after the random training (36.6 \[18.5\] to 36.7 \[25.0\]; *p* = 0.98). The VAS scores of the real training were significantly smaller than those of random training at day 4 (*p* = 0.048 \< 0.05). Moreover, the VAS scores normalized by day 1 pretraining scores were significantly smaller for real training than random training at day 4 (real, 0.68 \[0.30\]; random, 0.94 \[0.32\]; Cohen *d* = 1.31, *p* = 0.00038 \< 0.05). The 3-day real training significantly reduced pain for patients with phantom limb pain. The normalized VAS scores were significantly different for real and random training in the follow-up (real vs random, *p* \< 0.0001; among days, *p* = 0.001; interaction, *p* = 0.70). Real training normalized scores were significantly smaller than random training scores even 5 days after the 3-day training except for those at days 6 and 7 ([figure 4](#F4){ref-type="fig"}). This result suggests that the 3-day real training induced significant pain reduction that was sustained for 5 days after training. ![Normalized visual analogue scale (VAS) scores after the training\ Normalized VAS scores are shown as box and whisker plots for each day of real training (red) and random training (blue). The cross represents the mean. The box represents the upper and lower quartiles. The whisker represents the maximum and minimum. The dot shows the VAS scores. \**p* \< 0.05, paired Student *t* test.](NEUROLOGY2019028597FF4){#F4} It should be noted that the pain reduction rate between day 1 and day 4 (\[day 4 − day 1\]/day 1) was not significantly correlated with the patient\'s demographics such as age (correlation coefficient 0.12), disease duration (0.05), and the VAS at day 1 (−0.013). Discussion {#s3} ========== This study demonstrates that training to move a virtual phantom hand controlled by BCI significantly reduces phantom limb pain, with the reduction sustained for 5 days after a 3-day training. VAS scores were significantly reduced by 32% at day 4 after real training but not after random training. Pain was also significantly reduced by 36% at day 8 after real training. This finding suggests that training to use a BCI once a week would be a novel treatment to control phantom limb pain. The pain reductions in this study were comparable to those of other visual feedback treatments. 
Mirror therapy reportedly reduces phantom limb pain by 37.6% according to a VAS after 4 weeks of training.^[@R20]^ Similarly, visual feedback training using augmented reality decreases phantom limb pain by 32% according to a numeric rating scale.^[@R7]^ Notably, these previous studies used longer trainings than our current study and demonstrated that longer training periods decreased the pain more. In our 3-day trainings, the real training significantly reduced pain at day 1 with 3 trainings per day and at day 2 with 6 trainings per day, but not at day 3 with 3 trainings per day. In the short term, 2 days with 3 to 6 BCI trainings might be enough to reduce pain by an amount comparable to other visual feedback treatments. Applying 2 days of this BCI training every week might improve pain reduction in the long term. Compared with other feedback trainings, our training controlling the BCI virtual hand has some similarities and differences with advantages and disadvantages. Theoretically, the training to move the virtual hand controlled by the BCI with a real decoder is similar to mirror therapy, during which patients move their phantom hand while in fact moving the intact hand, which activates the cortical representation of the intact hand while moving the phantom hand. In our training, patients learned to induce cortical activity for the intact hand while moving the phantom hand. However, because it is difficult to activate the intact hand representation without thinking about intact hand movements, it is difficult to achieve this goal with mirror therapy. The training to control the BCI affords patients the opportunity to activate this representation without moving the intact hand, possibly making it easier to achieve the ultimate goal. Moreover, by monitoring cortical activities, the BCI training should be better at inducing targeted cortical activities compared to the other trainings that do not record the cortical activities. Therefore, if the cortical activity of the phantom hand is the therapeutic target, the BCI training will be more effective at modulating the cortical activity to reduce the pain. According to their prior experience, mirror therapy had been effective in only 3 of the 7 patients in this study who had tried it. But the BCI training reduced the pain for 5 out of those 7 patients and for 9 out of all 12 patients. This suggests that BCI training may be effective for more patients than mirror therapy.^[@R4]^ It should be noted, however, that BCI training costs much more than mirror therapy. We propose that methods based on the same mechanism should be developed to make implementation of BCI training in the clinic more cost-effective. In our previous study, we demonstrated that BCI training with the phantom hand decoder temporarily increased pain in association with the improved accuracy to classify the MEG signals of grasping and opening the phantom hand.^[@R11]^ On the other hand, the BCI training with the intact hand decoder decreased pain with a reduction in classification accuracy.^[@R11]^ Here, we evaluated the classification accuracy as the measure of the cortical representation.^[@R21]^ Similarly, in the current study, the classification accuracy of phantom hand movements was significantly decreased after the real training (data available from Dryad, text 2, [doi.org/10.5061/dryad.15dv41nt9](https://doi.org/10.5061/dryad.15dv41nt9)) in association with the pain reduction. Patients with a larger reduction of pain tended to show a greater reduction in classification accuracy after the training.
This suggested that, to reduce the pain, the cortical representation should be changed in a way that reduces the classification accuracy. It should be noted that this study has some limitations. Although the number of participants was comparable to previous studies,^[@R7][@R8][@R9]^ it was nonetheless a small number, which limited the study. Also, we recruited patients with phantom limb pain due to both amputation and brachial plexus root avulsion. Although the pain might be different between the 2 patient groups, the effect of the training was similar among the patients and showed consistent changes in pain and cortical representation. Our results demonstrated that training to control the BCI reduces pain similarly among these patients. In addition, the length of the training (3 days) is too short to evaluate its long-term clinical effect. This highlights the need for further studies to evaluate the clinical effects of the BCI training over a longer period of time. Moreover, in the current study, we evaluated the baseline pain only once before each 3-day training; however, even in the same patient, pain varies over time and can change spontaneously. It will be better to evaluate the baseline pain for a longer period in future studies. Our 3-day training demonstrated a significant analgesic effect that lasted 1 week compared to the random trainings. Although the study had several limitations, the observed pain reduction evaluated in the randomized crossover trial strongly suggests that the BCI training is effective for reducing phantom limb pain. Moreover, the effect size suggests that the BCI training will be a promising method to reduce phantom limb pain compared to other treatments. Our results strongly support the application of the training to control the BCI for longer periods for the clinical relief of phantom limb pain. Class of Evidence: [NPub.org/coe](http://NPub.org/coe) Study funding ============= This research was conducted under the Strategic Research Program for Brain Sciences (SRPBS) (JP19dm0307008) from the Japan Agency for Medical Research and Development (AMED). This research was also supported in part by the Japan Science and Technology Agency (JST) Precursory Research for Embryonic Science and Technology (JPMJPR1506), Core Research for Evolutional Science and Technology (JPMJCR18A5), and Exploratory Research for Advanced Technology (JPMJER1801); Grants-in-Aid for Scientific Research from KAKENHI (JP17H06032, JP15H05710, JP18H05522, and JP18H04085); Brain/MINDS (19dm0207070h0001) and Brain/MINDS Beyond (19dm0307103h0001) from AMED; TERUMO Foundation for Life Sciences and Arts; SONPO Foundation; Daiichi Sankyo Foundation of Life Science; Versus Arthritis (21537); and an IITP grant funded by MSIT (2019-0-01371). Disclosure ========== T. Yanagisawa had grants from AMED (JP19dm0307008), JST (JPMJPR1506, JPMJCR18A5, JPMJER1801), KAKENHI (JP17H06032, JP15H05710), TERUMO Foundation, Daiichi Sankyo Foundation, and SONPO Foundation during the conduct of the study. R. Fukuma reports no disclosures. B. Seymour had grants from Versus Arthritis (21537) and IITP funded by MSIT (2019-0-01371). M. Tanaka reports no disclosures. K. Hosomi had grants from Brain/MINDS (19dm0207070h0001) and Brain/MINDS Beyond (19dm0307103h0001) from AMED and KAKENHI (JP18H05522 and JP18H04085) during the conduct of the study. Dr. Hosomi belongs to a joint research department established with sponsorship by Teijin Pharma Limited. O. Yamashita reports no disclosures. H.
Kishima had grants from Brain/MINDS (19dm0207070h0001) and Brain/MINDS Beyond (19dm0307103h0001) from AMED and KAKENHI (JP18H05522 and JP18H04085) during the conduct of the study. Y. Kamitani reports no disclosures. Y. Saitoh had grants from AMED (JP19dm0307008) during the conduct of the study and belongs to a joint research department established with sponsorship by Teijin Pharma Limited. Go to [Neurology.org/N](https://n.neurology.org/lookup/doi/10.1212/WNL.0000000000009858) for full disclosures.

ANOVA : analysis of variance

BCI : brain--computer interface

MEG : magnetoencephalography

SF-MPQ2 : short-form McGill Pain Questionnaire 2

SLR : sparse logistic regression

VAS : visual analogue scale

[^1]: Go to [Neurology.org/N](https://n.neurology.org/lookup/doi/10.1212/WNL.0000000000009858) for full disclosures. Funding information and disclosures deemed relevant by the authors, if any, are provided at the end of the article.

[^2]: The article processing charge was funded by the AMED.
Mid
[ 0.636363636363636, 31.5, 18 ]
The 2012 Ticket
William Kristol
April 18, 2011, Vol. 16, No. 30

Remember Barack Obama? He’s the president of the United States. As a candidate he promised hope and change. Now he defends the status quo. The fact that the status quo is clearly unsustainable doesn’t deter him. His budget’s endless deficits and rising debt take us down a perfectly obvious road to ruin. But Obama asks us to close our eyes, pretend not to see, and hope against hope that we don’t need to change. Thankfully, the Republican party in 2011, under the leadership of Wisconsin congressman Paul Ryan, has decided to be serious about governing. But to govern America requires the presidency. The late Jack Kemp helped inspire the last sea change in American politics from the halls of Congress in the late 1970s. But real change required the defeat of incumbent president Jimmy Carter and the election of Ronald Reagan in 1980. Thus the current interest, and anxiety, among Republicans and conservatives about 2012. The 2012 election is likely to be decisively important for the future of our country—but worrying about Election Day won’t make it arrive any sooner. Nor is it at all clear that narrowing the presidential field early, or coming to agreement on a nominee sooner rather than later, would help Republicans prevail. In 2008, Obama clinched his party’s nomination much later than John McCain did, yet Obama’s having to endure a long schedule of hard-fought primaries didn’t stop him from winning the general election handily. All we can do is let the candidates run who have decided to run, and urge them to be bold and forthright in laying out their plans for the country. And we can encourage other candidates to consider running, too, in the assumption that there may be individuals who’d be good presidents but haven’t chosen to run for the office—perhaps because they’re busy with the jobs they have already, or perhaps because they’re not as certain they should be president as those now putting themselves forward. This lack of certainty, incidentally, isn’t a sign of bad character. For example, Donald Trump has certainty. He says he’s running for president, and an NBC / Wall Street Journal poll of Republicans last week had him tied for second in the GOP field with 17 percent of the vote. Trump shouldn’t be, and won’t be, the GOP nominee. But this degree of support does suggest unhappiness with the established candidates and an openness to someone new. So does the reaction of our readers to a recent blog post on The Weekly Standard website. Here’s some context. On April 3, Paul Ryan and Florida senator Marco Rubio appeared as guests on Fox News Sunday. Ryan explained and defended his budget. Rubio called for more decisive action in Libya. Later that day, I wrote a short item half-jokingly suggesting (once again) a Ryan-Rubio ticket in 2012. This is just a small sample of the emails we received:

• I love the two, couldn’t be any better or smarter pair for 2012. They have my vote! I could finally sleep at night with those two running the country. Pray they team up and run in 2012.

• Paul Ryan and Marco Rubio are the future of not only the Republican party, but also America. Get rid of the old hacks, it’s time for these dynamic American leaders. If they won’t run, add Chris Christie to the mix. . . .

• I am a registered Democrat. . . . I agree with you on Ryan-Rubio, and would be willing to work for them in Joe Biden’s home state of DE. I don’t consider myself a Tea Party guy, although I agree with most of what they stand for. . . . 
Sign me up. • Tell Ryan we’ll let his kids roller-skate in the W.H. . . . ANYTHING! . . . Seriously, I understand they lack experience in some areas. However, when it comes to fiscal reality Paul Ryan IS the smartest man in the room. And he knows a whole lot about American political history . . . love the guy. All of this suggests a willingness to consider more and hitherto unexpected options for the GOP nominee. And the following email correctly implies that not Obama but a fatalism about politics and the country may be the greatest obstacle to Republican success in 2012: Mr. Kristol, While your idea of a Ryan-Rubio 2012 ticket is a worthy one, it just won’t work. Rubio makes much too much sense for such a new senator; and he would have way too much appeal to the fastest growing population segment of our country. Ryan is: Much too smart. Much too sensible. Much too straightforward. Much too knowledgeable and specific in his solutions. Much too knowledgeable in the actual workings of government. Much too genuine. And waaaaaaaaaaay too likable. How can this possibly work? Make it work. We at The Weekly Standard can’t make it work. But Republican primary voters can. And they can choose a nominee—whoever that is—worthy of the battle ahead.
Mid
[ 0.587786259541984, 28.875, 20.25 ]
ABSTRACT: National Alcohol Survey (NAS) Resources This National Alcohol Survey (NAS) Resources core, active all years, will generate, manage and provide needed NAS datasets to all Center research projects and to other independent investigators. The Center has conducted NAS surveys of the adult (age 18+) US population at approximately 5-year intervals since the 1960s, with considerable standardization of measures since 1979 (the 6th NAS or N6). In addition to continuing support for existing NAS datasets (including the most recent 2014-15 N13), this scientific core will conduct a new National Alcohol Survey (N14) in conjunction with a skilled and tested fieldwork organization. As in past surveys, we will include African American/Black and Hispanic/Latino oversamples and dual-frame (cell phone and landline), list-assisted random-digit dial (RDD) sampling. Innovations include conducting a study comparing RDD response rates and survey estimates with those for a non-probability sample (web panel) and for a web survey sample recruited using address-based probability sampling. We also will include a pilot test with experimental manipulation of the telephone recruitment script to increase participation and completion by racial/ethnic minority respondents. Using results from the Neighborhood Methods Project Sub-Study 3, we will introduce new questions on the neighborhood and community context of alcohol use and its consequences. For the planned household survey, instrument development and piloting begins in 2018 (Year 3) and fielding in 2019, with completion by early 2020. Throughout the proposed period, beginning Year 1, we will prepare NAS datasets (together with the Statistical and Data Services [SDS] Core) for use by investigators in the Center's research projects, affiliated independent grants, and other research projects. Dataset enhancements will include (a) cleaning and (b) weighting data, as well as additional routine methodological tasks to (c) adjust alcohol consumption measures for alcohol content of specific beverages consumed on- and off premise using NAS drink size and beverage type data; (d) provide certain data imputations (e.g., to address missing income data); (e) develop and confirm algorithms (e.g., for calculating mean intake, or volume); (f) create needed scales (e.g., for measuring Alcohol Use Disorder (AUD) and other constructs); and (g) conduct, with the SDS Core biostatisticians, needed psychometric analyses to provide basic reliability and validity information. For N13 and N14, we also will (h) add geo-referenced contextual data elements drawn from Census and other archival sources. When ready, we will (i) merge new NAS data with the earlier series extending from N6. Finally, per the data-sharing plan, we will (j) de-identify the NAS data and periodically place datasets and supporting documentation in the public domain, by agreement with the Alcohol Epidemiological Data System (AEDS), managed for NIAAA and grantees by CSR, Inc.
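Several of the routine dataset-preparation tasks listed above (notably item (e), developing algorithms for calculating mean intake, or volume) lend themselves to a short worked example. The sketch below computes a respondent's mean daily ethanol intake from beverage-specific quantity-frequency items, one common way such a volume measure is derived. It is a minimal illustration under stated assumptions: the field names, drink sizes, and ethanol conversion factors are hypothetical, not taken from the NAS codebook.

# Assumed ounces of ethanol per standard drink, by beverage type.
OZ_ETHANOL_PER_DRINK = {"beer": 0.54, "wine": 0.66, "spirits": 0.60}

def mean_daily_volume(responses):
    """Mean ounces of ethanol per day, summed across beverage types.

    `responses` maps beverage type to (drinking days per year,
    usual drinks per drinking day) -- a hypothetical record layout.
    """
    total_oz_per_year = 0.0
    for bev, (days_per_year, drinks_per_day) in responses.items():
        total_oz_per_year += (
            days_per_year * drinks_per_day * OZ_ETHANOL_PER_DRINK[bev])
    return total_oz_per_year / 365.0

example = {"beer": (104, 2.0), "wine": (52, 1.5), "spirits": (12, 3.0)}
print(round(mean_daily_volume(example), 3))  # ~0.508 oz ethanol per day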
High
[ 0.684210526315789, 30.875, 14.25 ]
An alpha2,6-sialyltransferase cloned from Photobacterium leiognathi strain JT-SHIZ-119 shows both sialyltransferase and neuraminidase activity. We cloned, expressed, and characterized a novel beta-galactoside alpha2,6-sialyltransferase from Photobacterium leiognathi strain JT-SHIZ-119. The protein showed 56-96% identity to the marine bacterial alpha2,6-sialyltransferases classified into glycosyltransferase family 80. The sialyltransferase activity of the N-terminal truncated form of the recombinant enzyme was 1477 U/L of Escherichia coli culture. The truncated recombinant enzyme was purified as a single band by sodium dodecyl sulfate polyacrylamide gel electrophoresis through 3 column chromatography steps. The enzyme had distinct activity compared with known marine bacterial alpha2,6-sialyltransferases. Although alpha2,6-sialyltransferases cloned from marine bacteria, such as Photobacterium damselae strain JT0160, P. leiognathi strain JT-SHIZ-145, and Photobacterium sp. strain JT-ISH-224, show only alpha2,6-sialyltransferase activity, the recombinant enzyme cloned from P. leiognathi strain JT-SHIZ-119 showed both alpha2,6-sialyltransferase and alpha2,6-linkage-specific neuraminidase activity. Our results provide important information toward a comprehensive understanding of the bacterial sialyltransferases belonging to the group 80 glycosyltransferase family in the CAZy database.
Mid
[ 0.653465346534653, 33, 17.5 ]
Cairns Pet Cremation Services

At Possum Ridge Pet Cremations we are here to provide families in the Cairns area with a dignified option of after care for their beloved pet. With our specially designed pet cremator, we are able to cremate each pet individually. Families are able to choose whether they want their pet's ashes returned in our standard ash container (included), or in one of our large range of memorial Pet Urns. (Extra costs apply.) If owners do not wish to have ashes returned, there is always the option to have their pet's ashes spread in our Pet Memorial Garden. We are available to come to your home, or nominated Veterinary Clinic, to safely transfer your beloved pet into our care. We can travel throughout the Cairns area, and extend from Port Douglas to Tully (fees may apply). Alternatively, you are welcome to bring your beloved pet on their last journey to us. We will be here to warmly receive you.

POSSUM RIDGE PET CREMATIONS "With a special, gentle and professional service, we make a difficult time a little easier."

Pet Cremation Service Options
Bird, Rat or Reptile: Ashes Returned $185.00 inc GST; Ashes Not Returned $150.00 inc GST
Small Animals (up to 10kg): Ashes Returned $275.00 inc GST; Ashes Not Returned $200.00 inc GST
Medium Animals (11kg - 25kg): Ashes Returned $350.00 inc GST; Ashes Not Returned $260.00 inc GST
Large Animals (26kg - 40kg): Ashes Returned $425.00 inc GST; Ashes Not Returned $300.00 inc GST
XLarge Animals (40kg+): Ashes Returned $470.00 inc GST; Ashes Not Returned $340.00 inc GST

Collection Service Options
Business Hours (8:30am-4:30pm Monday to Friday): Kuranda $100.00 inc GST; Babinda $110.00 inc GST; Innisfail, Tablelands, Mossman, Port Douglas $120.00 inc GST; Tully $140.00 inc GST
Weekday – After Hours (5:00am-8:30am / 4:30pm-10:00pm): Extra $120.00 inc GST
Weekend (5:00am-10:00pm): Extra $150.00 inc GST
Public Holiday (5:00am-10:00pm): Extra $220.00 inc GST

More About Our Pet Cremation Services
Do you have questions about our pet cremation services? Contact our team now to get answers.
Mid
[ 0.6012793176972281, 35.25, 23.375 ]
Q: How do I float/wrap text around a table

I've been trying to use the wrapfigure and wraptable from the wrapfig package, but to no avail. This works for text.

\begin{wrapfigure}{l}{40mm}
  \begin{center}
    Some random text
  \end{center}
\end{wrapfigure}
This should be to the left of the table.
\blindtext

But once I add a table (see code below), it fails.

\begin{wrapfigure}{l}{40mm}
  \begin{center}
    \begin{table}
      \begin{tabular}{l | c}
        A & Cell 1 \\ \hline
        B & Cell 2 \\ \hline
      \end{tabular}
    \end{table}
    \\
    Some random text
  \end{center}
\end{wrapfigure}
This should be to the left of the table.
\blindtext

What am I doing wrong? Thanks.

A: The table environment is for normal floating tables. You do not need to put a tabular inside a table environment. You should use the wraptable environment instead.

\begin{wraptable}{l}{40mm}
  \begin{tabular}{l | c}
    A & Cell 1 \\ \hline
    B & Cell 2 \\ \hline
  \end{tabular}
  Some random text
\end{wraptable}
This should be to the left of the table.
\blindtext
High
[ 0.697490092470277, 33, 14.3125 ]
Australian Vox Eminor will be attending the on-site BYOC qualifier for DreamHack Winter's main $250,000 SteelSeries Championship tournament. The Australians most recently topped the Oceania MSI Beat it! qualification and will also be attending the $22,000 finals in Beijing on November 22-23rd, right before DreamHack Winter. Now they've also announced their plans to attend DreamHack Winter, regardless of obtaining an invitation, as they have rented a house to stay in and will at least be trying their luck in the BYOC qualifier, should they not receive an invitation. Vox Eminor after winning CGPL Season 1 (photo: Vox Eminor's Facebook) The squad will be traveling directly from Beijing to Jönköping in order to save time, and it will also allow them three days between the events to bootcamp in Sweden to better prepare for DreamHack. Vox Eminor have dominated the domestic scene with six wins so far in 2013, with three additional first place finishes in Australia in late 2012, since CS:GO took over as the tournament game. Vox Eminor: Aaron "AZR" Ward Luke "Havoc" Paton Iain "Snyper" Turner Chad "Spunj" Burchill Azad "topguN" Orami Vox Eminor will have their shot against international competition as they will be attending both MSI Beat it! and DreamHack Winter's BYOC qualifier in late November, marking their first trips overseas in CS:GO.
Mid
[ 0.644549763033175, 34, 18.75 ]
A comparative evaluation of substance abuse treatment: V. Substance abuse treatment can enhance the effectiveness of self-help groups. Affiliation with Alcoholics Anonymous (AA) and other 12-Step self-help groups is becoming more common at the same time as professional substance abuse treatment services are becoming less available and of shorter duration. As a result of these two trends, patients' outcomes may be increasingly influenced by the degree to which professional treatment programs help patients take maximum advantage of self-help groups. The present study of 3018 treated veterans examined how the theoretical orientation of a substance abuse treatment program affects (1) the proportion of its patients that participate in self-help groups, and, (2) the degree of benefit patients derive from participation in self-help groups. Patients treated in 12-Step and eclectic treatment programs had higher rates of subsequent participation in 12-Step self-help groups than did patients treated in cognitive behavioral programs. Furthermore, the theoretical orientation of treatment moderated the outcome of self-help group participation: As the degree of programs' emphasis on 12-Step approaches increased, the positive relationships of 12-Step group participation to better substance use and psychological outcomes became stronger. Hence, it appears that 12-Step oriented treatment programs enhance the effectiveness of 12-Step self-help groups. Findings are discussed in terms of implications for clinical practice and for future evaluations of the combined effects of treatment and self-help groups.
Mid
[ 0.654205607476635, 35, 18.5 ]
Probing bulk electronic structure with hard X-ray angle-resolved photoemission. Traditional ultraviolet/soft X-ray angle-resolved photoemission spectroscopy (ARPES) may in some cases be too strongly influenced by surface effects to be a useful probe of bulk electronic structure. Going to hard X-ray photon energies and thus larger electron inelastic mean-free paths should provide a more accurate picture of bulk electronic structure. We present experimental data for hard X-ray ARPES (HARPES) at energies of 3.2 and 6.0 keV. The systems discussed are W, as a model transition-metal system to illustrate basic principles, and GaAs, as a technologically-relevant material to illustrate the potential broad applicability of this new technique. We have investigated the effects of photon wave vector on wave vector conservation, and assessed methods for the removal of phonon-associated smearing of features and photoelectron diffraction effects. The experimental results are compared to free-electron final-state model calculations and to more precise one-step photoemission theory including matrix element effects.
High
[ 0.663087248322147, 30.875, 15.6875 ]
Jacques Pepin: Fast Food My Way Vegetable Fete Jacques reminisces about his childhood and celebrates the bounty of the season. He cooks up a batch of ratatouille that can be served two ways -- perfectly delicious on its own or with pasta in Ratatouille with Penne. Seafood stars in Shrimp with Cabbage and Red Caviar, and delicate peaches end the show in Jacques' classically made Peach Melba.
Mid
[ 0.6339468302658481, 38.75, 22.375 ]
STILL STAALED:Marc Staal is still trying to recover from the post-concussion symptoms he has been suffering since the summer, but he's still going to be out for the foreseeable future for the Rangers. He won't accompany the team on their four-game Western Canada road trip. He has gone from being held out for caution in the preseason to still sitting out weeks later. (Newsday) PELUSO PICKS A FIGHT: Well, not really. But in a figurative sense, the former Senators, Blackhawks, Devils, Blues and Flames tough guy is standing up for fighting in hockey, saying a ban on it "would be stupid." He goes on to assert that depression after playing isn't from fighting, but instead it's poor self-esteem from years of being told all you can do is fight. (Slam Sports) MASCOT METER: Ever look at an NHL mascot and say to yourself, "Gee, that's really lame?" You aren't the only one. Here is a list thrown together of the eight lamest mascots in the NHL and it's topped by the Canadiens' red-headed furball known as Youppi! The exclamation point is in his name, not my sentence. (Yardbarker.com) THE BEST EVER: That's the claim of Dejan Kovacevic about Penguins goalie Marc-Andre Fleury. He thinks the Flower will go down when it's all said and done as the best goaltender the Pens have ever seen, better than Tom Barrasso and the original netminder Les Binkley. (Pittsburgh Tribune-Review) TROTZ PLOTS MORE: The Predators are perfect at 2-0 even with starting the season the road, but that doesn't mean Barry Trotz is happy with his team. Saying the team has goalie Pekka Rinne to thank for the four points, they are getting back to working even more on defense. Ol' Barry back at it. (Smashville 24/7) BLADES WEEK 3: Again, for anybody who might be wondering about the Battle of the Blades show in Canada, here is a recap from the third episode. Russ Courtnall and Kim Navarro were booted from the show. They were put together a short time before the show after the death of Wade Belak, who was going to be a contestant. (Puck Daddy) WHIP IT: Judging from the first week of the season, you are going to hear a lot of a song called The Whip by a band named Locksley this season. The Toronto Maple Leafs are among a few teams that will be using the song when goals are scored this season. Here's a look at the music video. Now it's stuck in your head for good.
Low
[ 0.51008064516129, 31.625, 30.375 ]
The present invention relates to a combustion method for NOx reduction, as well as an apparatus therefor, to be applied to water-tube boilers, reheaters of absorption refrigerators, or the like. Generally, the known principles for suppressing NOx generation are (1) suppressing the temperature of the flame (combustion gas), (2) reducing the residence time of high-temperature combustion gas, and (3) lowering the oxygen partial pressure. Various NOx reduction techniques applying these principles are available. Examples that have been proposed and developed into practical use include the two-stage combustion method, the thick and thin fuel combustion method, the exhaust gas recirculation combustion method, the water addition combustion method, the steam jet combustion method, the flame cooling combustion method with water-tube groups, and the like. With the progress of time, NOx generation sources even of relatively small capacity, such as water-tube boilers, have come under increasingly strict regulation of exhaust gas, so further reduction of NOx is demanded. The present applicant proposed a NOx reduction technique meeting these demands in Japanese Patent Laid-Open Publication HEI 11-132404 (Specification of U.S. Pat. No. 6,029,614). This prior art technique achieves NOx reduction by combining suppression of combustion gas temperature with water tubes and suppression of combustion gas temperature with exhaust gas recirculation. However, the technique was capable of NOx reduction only to about 25 ppm; it could not achieve NOx reduction to below 10 ppm. It is noted that NOx reduction with the value of NOx generation being not more than 10 ppm will hereinafter be referred to as super NOx reduction. In this prior art technique, it is conceivable to enhance the function of combustion-gas-temperature suppression with water tubes with the aim of achieving the super NOx reduction. This functional enhancement is to provide water tubes in contact with a burner or to increase the heat transfer surface of water tubes. However, excessive pursuit of this functional enhancement would cause an increase in pressure loss or unstable combustion such as oscillating combustion. Further, it is also conceivable to enhance the function of combustion-gas-temperature suppression with exhaust gas recirculation to achieve the super NOx reduction. This functional enhancement is to increase the exhaust-gas recirculation quantity. However, it would amplify the unstable characteristics of exhaust gas recirculation. That is, exhaust gas recirculation has the characteristic that exhaust-gas flow rate or temperature changes with changes in combustion quantity or changes in load. Increasing the exhaust-gas recirculation rate would amplify these unstable characteristics, so that stable NOx reduction could not be achieved. Furthermore, this functional enhancement of exhaust gas recirculation would suppress the combustion reaction, leading to an increase in emissions of CO and unburnt components as well as an increase in thermal loss. Also, increasing the exhaust gas recirculation rate would cause the blower load to increase. Excessive suppression of the combustion reaction would lead to an increase in emissions of CO and unburnt components, as well as an increase in thermal loss.
Low
[ 0.521472392638036, 31.875, 29.25 ]
French. Lizzie Crozier French Scrapbook, p. 06 d

The Stough Charges.

On motion of Mr. Maynard, a committee of the board not residents of Knoxville, was appointed, and the case of the professor, charged by Rev. H. W. Stough with teaching immorality to his classes, to examine the record and report their finding to the board, with their recommendations in the premises. The committee named was Bolton Smith, of Memphis; H. Clay Evans, of Chattanooga and Hon. W. P. Cooper, speaker of the Tennessee house of representatives, of Bedford county. After fully considering all the correspondence in the case, and all evidence the committee made the following report, which was unanimously adopted:—

"Your committee having read the statement of Reverend Dr. Stough, and the correspondence with him, as well as the statements of Mrs. McGranahan, Mrs. Knox, Mrs. Dodson, Mrs. Crafts and Mrs. French, furnished by him, and subsequent correspondence with these ladies from the president of the university, finds that Dr. Stough's statements are in no sense supported by the letters furnished by him, and we feel that he should have withdrawn his statements as soon as he saw these letters.

"Having examined the newspaper report of the lecture of Professor Schaeffer in The Knoxville Journal and Tribune of June 20, 1913, prepared by Mrs. L. Crozier-French, we feel warranted in relying upon the latter, written when the lecture was fresh in mind, rather than on the former statements prepared three years later. We do not in the least suggest that the ladies' statements are not the result of an absolute desire to be fair to Professor Schaeffer, but merely that the report of Mrs. French, being commendatory in tone, is incompatible with the belief that the statements now alleged were in fact in her mind when the article for the paper was written. When to this was added the denial of Professor Schaeffer, that he ever held or expressed such an opinion, and the denial of Professorf (sic.) J. C. Pridmore, Miss Ruby Franklin and Mrs. J. C. Pridmore, who were present at the lecture, that any such statements were made, we feel the accusation of these ladies to have been disproved as conclusively as it is possible to disprove statements made after the lapse of three years.

"Besides, the charges are so enormous that we could not find any man of clean life guilty of them unless the evidence were conclusive, and no one can claim this to be true in this case.

(Signed) "BOLTON SMITH, Chairman.
"H. CLAY EVANS
Low
[ 0.5201612903225801, 32.25, 29.75 ]
This image was degraded by the researchers to show what the old telescope would have seen. Credit: Big Bear Solar Observatory

NJIT's new 1.6-meter clear aperture solar telescope—the largest of its kind in the world—is now operational. The unveiling of this remarkable instrument—said to be the pathfinder for all future, large ground-based telescopes—could not have come at a more auspicious moment for science. This year marks the 400th anniversary of Galileo's telescope that he used to demonstrate that sunspots are indeed on the Sun. "With our new big, beautiful white machine, Galileo's work can leap ahead with a capability never before available," said NJIT distinguished professor of physics Philip R. Goode. Goode, the heart and soul of the project, has been director of Big Bear Solar Observatory (BBSO) since NJIT took over management of BBSO in 1997 from California Institute of Technology. Located high above sea level in Big Bear Lake, CA, the observatory is one of six major land-based facilities supported by federal funding.

This photo is the first light image from the new solar telescope at Big Bear Solar Observatory. Credit: Big Bear Solar Observatory

"We are already seeing photos offering a better understanding of the Sun," said Goode. "With this instrument we should be able to have a better understanding of dynamic storms and space weather—which can have dramatic effects on Earth." Earlier this month, researchers achieved what is called first scientific light. This means the telescope is finally operational. To achieve its full powers, at least three more years of work will be needed. Photos from the first observations are still being processed. Nevertheless, Goode and his researchers were able to extract a few unique images and one is shown here. It clearly illustrates the before-and-after capabilities of the old versus new telescope. "Our prized first image shows the Sun's ever-present, turbulent granular field with its largest granules being about the size of Alaska," Goode said. "The small, bright points in the dark lanes are the smallest-scale magnetic structures on the Sun. Look closely at the "after" photo (which you may want to enlarge) and you will see a string of pearls. Each pearl is a cross-section of an intense, single fiber of the Sun's magnetic field - the basic building block of the solar magnetism." Goode adds that the Sun is now in a state of prolonged magnetic inactivity, perhaps the longest such time in a century. "The new telescope is ideal for studying the Sun as it rises from this strange state of quietude," he added. The new instrument has three times the aperture of the old telescope. It represents a significant advance in high-resolution observations of the Sun, since it has the largest aperture of any solar telescope in existence, said Goode. Other pluses include a marvelous location--high in a Southern California mountain lake--and, since it is an off-axis instrument, no part of the sunlight is blocked by the telescope itself. The new telescope will be used in joint observation campaigns with NASA satellites to optimize the scientific output of all observations of the Sun. BBSO has always operated in such campaigns, but now can do so with greatly-enhanced capabilities. The National Science Foundation (NSF), the Air Force Office of Scientific Research (AFOSR), and NASA have provided more than $5 million in hardware components. The telescope is filled with new technologies. 
The 1.7-meter (about 5.6 feet) primary mirror was polished by the world-renowned Steward Observatory Mirror Laboratory at the University of Arizona (UA). The extremely precise measurements of the mirror's shape required the application, for the first time, of a computer-generated hologram. The development of this technology will be essential for figuring the next generation of even-larger nighttime telescopes. The final error in the primary mirror is only a few parts in a billion from its desired parabolic shape. "Buddy Martin at the UA Mirror Lab has described the mirror in ours as the pathfinder for large nighttime telescopes that are about to be built," said Goode.

This is the new solar telescope at Big Bear Solar Observatory. Credit: Big Bear Solar Observatory

Another key design issue for this large-aperture solar telescope was the creation of a thermal control system capable of maintaining the temperature of the mirrors near or below ambient air. To achieve this, the dome employs a wind-gate and exhaust system which controls the airflow from the wind. The structure maintains the same temperature inside and outside the dome, and clears concentrations of heat in and around the optical paths. In addition, BBSO engineers implemented a closed-cycle, chilled-air system as part of the telescope mount to limit so-called "mirror seeing." This sweeps away turbulent cells and directly cools the primary mirror. After a day of observations, the mirror must be cooled overnight to ensure that it is somewhat cooler than ambient in the morning. DFM Engineering, Longmont, CO, built and tested the optical support structure and active-support mirror cell for the enormous mirror. It is supported by 36 actuators that can bend out low-order aberrations, such as those due to gravity and/or thermal effects. The telescope is now in its commissioning phase, in which more sophisticated observations are made possible by the implementation of advanced hardware. These include adaptive optics to correct for atmospheric distortion and hardware to measure magnetic fields in visible and infrared light. "It is good at last to have our destiny in our own hands rather than those of our capable partners," said Goode. "Seeing first light was a great moment because the team in BBSO finally knew that its big white machine works as we had planned."

Source: New Jersey Institute of Technology
Mid
[ 0.614349775784753, 34.25, 21.5 ]
Monday, December 3, 2012

I am more and more convinced that we live in a world of make-believe, a world of make-believe created by the media we use. Six years ago I started a personal inquiry into why, every time I built an art program at a school, it was either cut or I was bumped out within two or three years. This journey took me through five years as a technology integration specialist, study of school policy, and intense research into the history of public schooling, particularly the history of trends in instructional methods and pedagogy. What I found was something quite troubling. In the past ten years Minnesota has lost over 50% of its fine arts FTEs in schools across the state. How can this be? Could there be other data and statistics out there that would point to the source of this problem? I will come back to this point later. Last summer it struck me. I have attended and presented at the ISTE Conference twice in the last three years and have followed it closely the years I could not attend. This is the world's largest education technology conference. Now, when I was growing up the computer was a major influence on my intellectual development. I spent hours learning how to program video games and write code. Through writing code I learned algebra, logic, mechanics, and physics in an intensely immersive, project-based, authentic way. For me the most important aspect of this machine as a learning tool was what it provided as a medium of expression, of creativity, and of engagement. It brought math to life. It also provided me a way to cope with my dysgraphia. But, of the 385 concurrent sessions offered at the 2012 ISTE Conference, only four were about engaging students as computer programmers. FOUR! That is just 1% of the sessions at the world's largest education technology conference. What is going on here? For fun, and to illustrate a point, I created this infographic about last summer's ISTE Conference: 1% of sessions at ISTE about programming and 50% of Minnesota's art teachers lost, could these statistics have a common root? I believe so. What I left out of this infographic is that when you search the ISTE conference site using the session keyword search engine, sessions with the word "create" appear far more often than any of the examples I listed here. However, upon closer inspection, these sessions are all examples of creation as a consumer of participatory media. When we create a web page using a WYSIWYG editor, when we build a Facebook page, when we upload photos to Flickr, when we make a podcast, when we click "like" on someone's status, when we click a link, we are feeding the beast. All of these activities produce data. In Marshall McLuhan's (1964) Understanding Media: The Extensions of Man, McLuhan tells us that the "Medium is the Message," meaning that it matters less what content is contained in a medium than the effect the medium has on us. In other words, the effect of the existence of television in our homes is greater than the effect of any content that may be delivered through it. A medium works us over, making us numb to its nature and leading us to shift our ways of thinking. McLuhan wrote this book at a time when television was beginning to take over as the dominant medium. Twenty years later Neil Postman wrote Amusing Ourselves to Death at a time when television had long since replaced newsprint and radio as the dominant medium and McLuhan's observations were more apparent. 
Postman observed that when a society shifts its dominant media from one to another, it also shifts what it values as sources of truth. Once it was that "feeling is believing," then "saying is believing," then "seeing is believing," then "reading is believing," then "deducing is believing," and now "counting is believing." Postman argues that it is the media-driven culture that has reduced our concept of believable data to that which can be counted, that which can be objectified and abstracted. I would argue that it is not the mass media of television or radio that did this (these are analog media) but rather the emergence of the computer and digital media. If you look at rhetorical arguments, campaign propaganda, and advertisements from the 1950s, you see a different appeal than you do today. Not a day goes by when we are not inundated with data and statistics as a source to base our beliefs on, change our minds, or influence our actions, but just 60 years ago the use of data and statistics was far less a part of our lives. Instead, the analog electric media of the time produced a belief system that relied on analog sources. A politician was more likely to use a "plain folk" argument or a clever play on words to sway a voter, and the advertisements for products focused more on how a product would make you feel. But today it is data that we look to for our source of truth. The role of data and data visualization has changed and evolved over time. We have become a data-obsessed culture, to the point where if we make decisions that are not data-driven or data-informed we are seen as foolish. In a staff meeting just a couple of months ago a colleague said, "How can you justify a decision like that without collecting data to back it up?" A statement like this is telling on two fronts. First, it negates qualitative data and only focuses on that which can be quantified. Second, it implies that the decision has already been made and that the data we collect ought to back up our decision. This is not data-driven decision making, it is decision-driven data collection. But more often than not, what is called data-driven decision making is really a rhetorical device used to justify decisions based on other factors. The building blocks of digital media are data: digits, 1s and 0s. Having moved to center stage in the past 10 years as the dominant medium, digital media has shifted publicly accepted sources of truth to that which can be quantified. If it can't be counted, it is hard to justify, and it is hard for the meta-world created by the data we produce, which overlays our real world, to see it. Along with this new source of truth comes a charge and desire to "feed the beast," to produce more and more data. This is because the more clearly the meta-data world represents aspects of our real world, the easier it is to manipulate both. Hence the presence of so many sessions at ISTE asking teachers and students to "create" but so few asking them to "program." And, because there is no data-collection method used to evaluate the effectiveness of Minnesota's arts programs, many of them get left out of the data-driven decision process. How can you make a decision to keep a program when the law of the land (NCLB) asks you to make decisions based upon data you have collected, data that is easily quantifiable? But make a standardized test to evaluate student achievement in the fine arts and you kill it. I say we are living in a make-believe world. The data=truth pandemic is a grand illusion. 
This world overlays our real world and we tap into it all the time. Today, with a smartphone and an augmented reality app, you can scan your neighborhood and access data about physical places, and soon you will be able to access data about people. QR codes give us access to the meta-data world associated with objects around us. There have even been calls to create a game layer over the real world. These tools all give us more information that is supposed to help us make decisions, but what do they leave out? Are there things in our real world that cannot be digitized or datafied? Are there things that will not make it into the meta-world? And when we rely on the meta-world for our source of truth, what happens to those things that don't make it there? The big problem is not that data cannot represent aspects of truth. It can. But it can never represent all aspects of truth. It is by nature an abstraction, and massive amounts of data rely on visualization techniques to make sense of them. A data visualization is a further abstraction, one step more removed from the truth. The more we work with abstraction, the more we can manipulate the interpretation of truth. Truth in data becomes truth crafted by interpretation of data, which becomes truth crafted by design. Nowhere is this more evident than in the emergence of infographics. If I gave you a statistic that said homeowners increased their spending on entertainment 11% between 1986 and 2010, you probably wouldn't pay it much mind. But if I accompanied that statistic with a picture of an overweight guy in a stained tank top slumping on a couch watching television and eating potato chips, you would interpret that data by infusing it with negative associations. However, if I instead accompanied that data with an image representing ballet, the theater, or the symphony, it takes on a new interpretation. The interpretation can be manipulated without changing the data itself. Marshall McLuhan was cautiously optimistic that if we understand the nature of media we can avoid many of the negative effects it brings. If we understand the nature of the beast we can keep it at bay. At the same time he observed that "first we craft our tools, then our tools craft us," leaving open the possibility that no matter how much we understand the nature of the media, it will likely have its effect anyway.

11 comments:

I agree with your argument, but I think that "creation as a consumer of participatory media" needs more explanation as an idea. If you could elaborate more in this area, then you would have better buy-in from a larger group on the basic idea in this article.

Unfortunately, this is really one of the challenges of the digital age. How can we move from scripted creation in a highly restrictive environment to more free-range creation? How can we move from constructing IKEA furniture to carpentry?

I understand what you're saying, and I surely agree that more folks should know how to code. That said, it's only when the tools become available in easier form that the true flourishing of creativity becomes possible, because then average folks - not just specialized experts - can utilize them without steep learning curves. HTML, blogs, and Facebook are a prime example of this. When we tried to advocate to people the power of having a site on the Web, few took us up on that proposition when it required learning HTML. More took us up on that when it became easier, in the form of blogs. And even more took us up on that when it became even easier, in the form of Facebook. 
We see similar parallels with other forms of digital creation. Despite your concern about use of pre-designed tools, we are seeing an ever-increasing flourishing of user-generated content, conversation, and connections, all of which were less possible when the tools were more difficult.

Fair enough. But just because you had the interest to start digging into coding doesn't mean most people do. In fact, we know they don't. And yet they can be quite creative with tools others design and offer. For example, you're not using blogging software you created from scratch. Nor do you likely use word processing software, presentationware, electronic spreadsheets, email programs, and other tools that you made yourself. And yet you use those tools that were made by others in creative and powerful ways. Doesn't your daily practice disprove your own point?

So because I use tools someone else made to write and publish my blog, it negates the startling statistic that less than 1% of the sessions at ISTE last year were about teaching students to program? Plus, my point is not that we should only use tools we make from scratch, but only that computational thinking is important enough to develop in students that it should not be on the endangered species list. And, I do often make my own tools.

I'm curious to hear from either of you (Scott or Carl) on how Jonassen's Mindtools would fit into this discussion. There are obviously advantages to being able to create something on your own; it's just a balance between what you're asking others to do, and what they're willing to do (i.e., learn how to code). There are also implications for design model thinking here. Can teachers (and students) be designers without knowing how the programs they're using internally work?

After some reflection on Scott's comment yesterday, I don't think I ever made the case that tools made by others limit creativity. They do place limits on expression, which is true of any media, digital or analog. What I was trying to point out was that with web 2.0 this limit of expression is different because it produces data that can be accessed by the tool maker, and most web 2.0 tools integrate data collection and aggregation as a core feature. In many cases this has profound positive advantages, but it still fuels the belief that data is the source of truth. I am only slightly familiar with Jonassen's Mindtools. My understanding is that basically he sees the use of technology in a learning environment as a cognitive extension. I think this theory is in perfect alignment with Marshall McLuhan's belief that all technology and all media are extensions of people. Tons of others support this viewpoint as well. Also, I agree with Scott in that you certainly can design without knowing how all the nuts and bolts work. In fact, you could argue that the web designer who writes all his work in HTML didn't invent the language or the browser that is used to render the code, nor did he invent the assembly language that the browser code gets translated into or the machine on which it renders.
Mid
[ 0.545893719806763, 28.25, 23.5 ]
#!/usr/bin/python3
#
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

r"""Loads Synthea Patient bundles in GCS and extracts relevant features and
labels from the resources for training.

The generated TensorFlow record dataset is stored in a specified location on
GCS.

Example usage:
python assemble_training_data.py --src_bucket=my-bucket \
    --src_folder=export \
    --dst_bucket=my-bucket \
    --dst_folder=tfrecords
"""

import gzip
import json
import math
import os
import random
import tempfile

from absl import app
from absl import flags
from shared import features
import tensorflow.compat.v1 as tf

from google.cloud import storage

FLAGS = flags.FLAGS

flags.DEFINE_string('src_bucket', None,
                    'GCS bucket where exported resources are stored.')
flags.DEFINE_string(
    'src_folder', None,
    'GCS folder in the src_bucket where exported resources are stored.')
flags.DEFINE_string('dst_bucket', None,
                    'GCS bucket to save the TensorFlow record file.')
flags.DEFINE_string(
    'dst_folder', None,
    'GCS folder in the dst_bucket to save the TensorFlow record file.')


def _load_examples_from_gcs():
  """Downloads the examples from a GCS bucket."""
  client = storage.Client()
  bucket = storage.Bucket(client, FLAGS.src_bucket)
  for blob in bucket.list_blobs(prefix=FLAGS.src_folder):
    if not blob.name.endswith('.gz'):
      continue
    print('Downloading patient record file', blob.name)
    with tempfile.NamedTemporaryFile() as compressed_f:
      blob.download_to_filename(compressed_f.name)
      print('Building TF records')
      with gzip.open(compressed_f.name, 'r') as f:
        for line in f:
          yield features.build_example(json.loads(line.decode('utf-8')))


def _load_examples_from_file():
  """Loads the examples from a gzipped-ndjson file."""
  for file in os.listdir(FLAGS.src_folder):
    if not file.endswith('.gz'):
      continue
    with gzip.open(os.path.join(FLAGS.src_folder, file), 'r') as f:
      for line in f:
        yield features.build_example(json.loads(line))


def _build_examples():
  """Loads the examples and converts the features into TF records."""
  examples = []
  if FLAGS.src_bucket:
    patients = _load_examples_from_gcs()
  else:
    patients = _load_examples_from_file()
  for patient in patients:
    if patient is None:
      continue
    feature = {
        'age':
            tf.train.Feature(
                int64_list=tf.train.Int64List(value=[patient['age']])),
        'weight':
            tf.train.Feature(
                int64_list=tf.train.Int64List(value=[patient['weight']])),
        'is_smoker':
            tf.train.Feature(
                int64_list=tf.train.Int64List(value=[patient['is_smoker']])),
        'has_cancer':
            tf.train.Feature(
                int64_list=tf.train.Int64List(value=[patient['has_cancer']])),
    }
    examples.append(
        tf.train.Example(features=tf.train.Features(feature=feature)))
  return examples


def _save_examples(examples):
  """Splits examples into training and evaluation groups and saves to GCS."""
  random.shuffle(examples)
  # First 80% as training data, rest for evaluation. 
  idx = int(math.ceil(len(examples) * .8))

  training_folder_path = (
      FLAGS.dst_folder + '/training.tfrecord'
      if FLAGS.dst_folder is not None else 'training.tfrecord')
  record_path = 'gs://{}/{}'.format(FLAGS.dst_bucket, training_folder_path)
  with tf.io.TFRecordWriter(record_path) as w:
    for example in examples[:idx]:
      w.write(example.SerializeToString())

  eval_folder_path = (
      FLAGS.dst_folder + '/eval.tfrecord'
      if FLAGS.dst_folder is not None else 'eval.tfrecord')
  record_path = 'gs://{}/{}'.format(FLAGS.dst_bucket, eval_folder_path)
  with tf.io.TFRecordWriter(record_path) as w:
    # Use examples[idx:] (not examples[idx + 1:]) so the example at index
    # idx is not silently dropped from both splits.
    for example in examples[idx:]:
      w.write(example.SerializeToString())


def main(_):
  _save_examples(_build_examples())


if __name__ == '__main__':
  app.run(main)
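# ---------------------------------------------------------------------------
# Illustrative addition, not part of the original script: one way to read the
# records written above back into a tf.data pipeline for training. The GCS
# path is hypothetical; the feature spec simply mirrors the four int64
# features serialized in _build_examples().
# ---------------------------------------------------------------------------

FEATURE_SPEC = {
    'age': tf.io.FixedLenFeature([], tf.int64),
    'weight': tf.io.FixedLenFeature([], tf.int64),
    'is_smoker': tf.io.FixedLenFeature([], tf.int64),
    'has_cancer': tf.io.FixedLenFeature([], tf.int64),
}


def load_training_dataset(
    record_path='gs://my-bucket/tfrecords/training.tfrecord'):
  """Parses the serialized tf.train.Examples written by _save_examples()."""
  dataset = tf.data.TFRecordDataset(record_path)
  return dataset.map(
      lambda serialized: tf.io.parse_single_example(serialized, FEATURE_SPEC))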
Mid
[ 0.5590909090909091, 30.75, 24.25 ]
1. Field of the Invention This invention relates to a baby carriage, particularly to a baby carriage having a push rod switchable between two states, one for the face-to-back push mode and the other for the face-to-face push mode, with front wheel casters attached to the lower ends of a pair of front legs and rear wheel casters attached to the lower ends of a pair of rear legs. 2. Description of the Prior Art There are a variety of baby carriages on the market, including one having front wheel casters which make the front wheels swivelable. This type of baby carriage can easily change its direction of travel and is superior in operability. Further, there is also a baby carriage on the market having a push rod switchable between two states, one for the face-to-back push mode and the other for the face-to-face push mode. According to this type, the baby carriage can be pushed not only from the side associated with the baby's back but also in face-to-face relation in which the pusher looks at the baby's face while talking to him. A baby carriage having said two functions, i.e., a baby carriage provided both with said front wheel casters and with said two-state switchable push rod, is also on the market. When said baby carriage having said two functions is moved with the push rod in the face-to-back push mode, the front casters act effectively, ensuring a smooth change of direction. However, when the baby carriage is moved with the push rod switched to the face-to-face push mode, the rear wheels, which are now positioned forward with respect to the direction of travel, are not swivelable, so that a smooth change of direction cannot be attained. In this case, if the front wheel casters, now positioned rearward with respect to the direction of travel, freely swivel, this actually degrades operability, contrary to expectation. Thus, it is common practice to lock the front casters to inhibit their swiveling when the push rod is switched to the face-to-face push mode.
Low
[ 0.5275423728813561, 31.125, 27.875 ]
Sole Desires To be honest, I try not to spend on shoes that much for the sole reason (pun not intended) that shoes are expensive. HAHA But I still ogle at the gorgeous pairs and add them to my wishlist. And lemme share a few that I have been eyeing. 🙂 BTW, these are all local. #supportlocal This futuristic ERA Wedge from Gold Dot – an awesome combination of platform and wedge. These Elise ballet flats from Blanc by Gold Dot. The detail is interchangeable -you can easily switch the bow to the triangle acrylic which easily changes the mood of your outfit!
Low
[ 0.297687861271676, 12.875, 30.375 ]
On the 10th anniversary of North Korea’s commitment to give up its nuclear weapons, China last month called for the resumption of nuclear negotiations with Pyongyang based on a 2005 multilateral agreement. Chinese Foreign Minister Wang Yi said in a Sept. 19 speech that although much has changed since 2005, if the agreement’s “common understandings can be gradually implemented, not only can we achieve the denuclearization of the Korean peninsula, but also open up new prospects for peace and development of Northeast Asia.” China chaired the talks that led to the 2005 agreement. The talks, which also included Japan, Russia, South Korea, and the United States, led to a joint statement on Sept. 19, 2005, that included a commitment by North Korea to dismantle its nuclear weapons. In return, the five other countries were to work to strengthen economic ties with Pyongyang and explore security cooperation in Northeast Asia. Wang’s speech in Beijing to a group of experts and officials from countries involved in the talks came about a week after North Korea’s announcement that a reactor it had used in the past to produce plutonium for nuclear weapons is fully operational again. North Korea is believed to have enough plutonium for six to eight nuclear weapons and may have produced highly enriched uranium for additional warheads. In carrying out the 2005 agreement, North Korea disabled the reactor it used to produce plutonium for nuclear weapons in 2007 and destroyed the reactor’s cooling tower in 2008. But before the 2005 agreement was fully implemented, North Korea withdrew from the process. Since the so-called six-party talks fell apart in 2009, North Korea has conducted two nuclear tests and restarted a heavy-water reactor that produces weapons-grade plutonium. (See ACT, October 2013.) U.S. Secretary of State John Kerry said on Sept. 16, shortly after Pyongyang’s announcement about the reactor, that if North Korea does not refrain from “irresponsible provocations that aggravate regional concerns,” it will face “severe consequences.” In his speech, Wang called on all the countries that were part of the six-party talks to “build up consensus” and create the necessary conditions for the resumption of the negotiations. Specifically, he said the parties should reaffirm the principles of the 2005 joint statement, jointly explore ways to “address security concerns of relevant parties” on the Korean peninsula, and refrain from attempts to disrupt the stability of Northeast Asia. Meanwhile, North Korea announced in mid-September that it may launch a satellite into orbit on Oct. 10 to celebrate the 70th anniversary of the founding of the Workers’ Party of Korea. Pyongyang’s state-run Korean Central News Agency reported that the director of North Korea’s National Aerospace Development Administration said that it was pushing toward the “final phase” in the development of a “new earth observation satellite” in honor of the anniversary. North Korea successfully launched a satellite on its Unha-3 launch vehicle for the first time in December 2012 after a failed attempt in April of that year. (See ACT, January/February 2013.) Most experts say if North Korea launched another satellite, it would likely use the Unha-3, as Pyongyang has not publicly displayed another model. Because of their applicability to ballistic missile development, North Korean satellite launches are prohibited under UN Security Council resolutions. 
The Sohae Satellite Launching Station, the site of the 2012 launches, received upgrades last year that would allow it to accommodate rockets even larger than the Unha-3. (See ACT, November 2014.) But satellite imagery from last month did not give any indication that the North Koreans were preparing for a launch, according to an imagery analysis by Jack Liu and Joseph Bermudez. In a Sept. 15 article for 38 North, an online publication of the U.S.–Korea Institute at Johns Hopkins University, Liu and Bermudez concluded that, in the five weeks between the time the images were taken and Oct. 10, there was "possibly sufficient time for the North to prepare for a launch if Pyongyang follows past practices and procedures." The two analysts said this would be possible only if North Korea already had begun to prepare the satellite launch vehicle at the launch pad. Concealment measures make it difficult to observe if this process has begun, Liu and Bermudez said. They wrote that if North Korea follows "past practice," increased activity at the site, including filling up the buildings that hold propellant for the launch, would be expected. Satellites would likely be able to detect such activity.
Mid
[ 0.610421836228287, 30.75, 19.625 ]
Gordonia (plant) Gordonia is a genus of flowering plants in the family Theaceae, related to Franklinia, Camellia and Stewartia. Of the roughly 40 species, all but two are native to southeast Asia, in southern China, Taiwan and Indochina. The remaining two species occur in the Americas: G. lasianthus (loblolly-bay) is native to southeast North America, from Virginia south to Florida and west to Louisiana; G. fruticosa is native to the tropical rainforests of Central and South America, from Costa Rica to Brazil. They are evergreen trees, growing to 10–20 m tall. The bark is thick and deeply fissured. The leaves are alternately arranged, simple, serrated, thick, leathery, glossy, and 6–18 cm long. The flowers are large and conspicuous, 4–15 cm diameter, with 5 (occasionally 6–8) white petals; flowering is in late winter or early spring. The fruit is a dry five-valved capsule, with 1–4 seeds in each section. The species are adapted to acidic soils, and do not grow well on chalk or other calcium-rich soils. They also have a high rainfall requirement and will not tolerate drought. Some botanists include Franklinia within Gordonia, even though recent phylogenetic studies show that Franklinia's closest living relative is the Asian genus Schima, not Gordonia; Franklinia also differs in being deciduous and flowering in late summer, not late winter. The draft Flora of China account of Theaceae splits Gordonia into two genera, with G. lasianthus retained in Gordonia and the Asian species transferred to Polyspora; this treatment is not yet widely accepted. Gordonia chrysandra may have anti-inflammatory medicinal properties. Species There are about 40 species, including: Gordonia anomala, Gordonia balansae, Gordonia ceylanica, Gordonia curtyana, Gordonia fruticosa, Gordonia hirta, Gordonia hirtella, Gordonia javanica, Gordonia lasianthus, Gordonia maingayi, Gordonia multinervis, Gordonia penangensis, Gordonia scortechinii, Gordonia shimidae, Gordonia sinensis, Gordonia singaporeana, Gordonia speciosa, Gordonia tagawae, Gordonia taipingensis, Gordonia villosa, Gordonia wallichii, Gordonia yunnanensis. Gordonia species from East Asia were transferred to Polyspora, including: Polyspora acuminata, Polyspora axillaris, Polyspora chrysandra, Polyspora hainanensis, Polyspora kwangsiensis, Polyspora longicarpa, Polyspora tiantangensis, Polyspora tonkinensis. Cultivation and uses Several species of Gordonia are grown as ornamental plants for their flowers, produced in winter when few other trees are in flower. They are, however, difficult to grow compared to the similar but generally smaller-growing camellias. References External links Flora of China: draft text of Theaceae Category:Ericales genera
Mid
[ 0.5929203539823, 33.5, 23 ]
/** * KnotMesh.java * * Copyright (c) 2013-2016, F(X)yz * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are met: * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * Neither the name of F(X)yz, any associated website, nor the * names of its contributors may be used to endorse or promote products * derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE * DISCLAIMED. IN NO EVENT SHALL F(X)yz BE LIABLE FOR ANY * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ package org.fxyz3d.shapes.primitives; import javafx.beans.property.DoubleProperty; import javafx.beans.property.IntegerProperty; import javafx.beans.property.SimpleDoubleProperty; import javafx.beans.property.SimpleIntegerProperty; import javafx.scene.DepthTest; import javafx.scene.shape.CullFace; import javafx.scene.shape.DrawMode; import javafx.scene.shape.TriangleMesh; import org.fxyz3d.geometry.Face3; import org.fxyz3d.geometry.Point3D; import org.fxyz3d.shapes.primitives.helper.KnotHelper; /** * Spring based on this model: http://en.wikipedia.org/wiki/Trefoil_knot * Wrapped around a torus: http://mathoverflow.net/a/91459 * Using Frenet-Serret trihedron: http://mathematica.stackexchange.com/a/18612 */ public class KnotMesh extends TexturedMesh { private static final double DEFAULT_MAJOR_RADIUS = 2.0D; private static final double DEFAULT_MINOR_RADIUS = 1.0D; private static final double DEFAULT_WIRE_RADIUS = 0.2D; private static final double DEFAULT_P = 2.0D; private static final double DEFAULT_Q = 3.0D; private static final double DEFAULT_LENGTH = 2.0D*Math.PI*DEFAULT_Q; private static final int DEFAULT_LENGTH_DIVISIONS = 200; private static final int DEFAULT_WIRE_DIVISIONS = 50; private static final int DEFAULT_LENGTH_CROP = 0; private static final int DEFAULT_WIRE_CROP = 0; private static final double DEFAULT_START_ANGLE = 0.0D; private static final double DEFAULT_X_OFFSET = 0.0D; private static final double DEFAULT_Y_OFFSET = 0.0D; private static final double DEFAULT_Z_OFFSET = 1.0D; private KnotHelper knot; public KnotMesh() { this(DEFAULT_MAJOR_RADIUS, DEFAULT_MINOR_RADIUS, DEFAULT_WIRE_RADIUS, DEFAULT_P, DEFAULT_Q, DEFAULT_LENGTH_DIVISIONS, DEFAULT_WIRE_DIVISIONS, DEFAULT_LENGTH_CROP, DEFAULT_WIRE_CROP); } public KnotMesh(double majorRadius, double minorRadius, double wireRadius, double p, double q) { this(majorRadius, minorRadius, wireRadius, p, q, DEFAULT_LENGTH_DIVISIONS, DEFAULT_WIRE_DIVISIONS, DEFAULT_LENGTH_CROP, DEFAULT_WIRE_CROP); } public KnotMesh(double majorRadius, double 
minorRadius, double wireRadius, double p, double q, int lengthDivisions, int wireDivisions, int lengthCrop, int wireCrop) { setMajorRadius(majorRadius); setMinorRadius(minorRadius); setWireRadius(wireRadius); setP(p); setQ(q); setLengthDivisions(lengthDivisions); setWireDivisions(wireDivisions); setLengthCrop(lengthCrop); setWireCrop(wireCrop); updateMesh(); setCullFace(CullFace.BACK); setDrawMode(DrawMode.FILL); setDepthTest(DepthTest.ENABLE); } @Override protected final void updateMesh(){ setMesh(null); mesh=createKnot((float) getMajorRadius(), (float) getMinorRadius(), (float) getWireRadius(), (float) getP(), (float) getQ(), (float) getLength(), getLengthDivisions(), getWireDivisions(), getLengthCrop(), getWireCrop(), (float) getTubeStartAngleOffset(), (float)getxOffset(),(float)getyOffset(), (float)getzOffset()); setMesh(mesh); } private final DoubleProperty majorRadius = new SimpleDoubleProperty(DEFAULT_MAJOR_RADIUS){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getMajorRadius() { return majorRadius.get(); } public final void setMajorRadius(double value) { majorRadius.set(value); } public DoubleProperty majorRadiusProperty() { return majorRadius; } private final DoubleProperty minorRadius = new SimpleDoubleProperty(DEFAULT_MINOR_RADIUS){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getMinorRadius() { return minorRadius.get(); } public final void setMinorRadius(double value) { minorRadius.set(value); } public DoubleProperty minorRadiusProperty() { return minorRadius; } private final DoubleProperty wireRadius = new SimpleDoubleProperty(DEFAULT_WIRE_RADIUS){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getWireRadius() { return wireRadius.get(); } public final void setWireRadius(double value) { wireRadius.set(value); } public DoubleProperty wireRadiusProperty() { return wireRadius; } private final DoubleProperty length = new SimpleDoubleProperty(DEFAULT_LENGTH){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getLength() { return length.get(); } public final void setLength(double value) { length.set(value); } public DoubleProperty lengthProperty() { return length; } private final DoubleProperty q = new SimpleDoubleProperty(DEFAULT_Q){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getQ() { return q.get(); } public final void setQ(double value) { setLength(2d*Math.PI*Math.abs(value)); q.set(value); } public DoubleProperty qProperty() { return q; } private final DoubleProperty p = new SimpleDoubleProperty(DEFAULT_P){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public double getP() { return p.get(); } public final void setP(double value) { p.set(value); } public DoubleProperty pProperty() { return p; } private final IntegerProperty lengthDivisions = new SimpleIntegerProperty(DEFAULT_LENGTH_DIVISIONS){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final int getLengthDivisions() { return lengthDivisions.get(); } public final void setLengthDivisions(int value) { lengthDivisions.set(value); } public IntegerProperty lengthDivisionsProperty() { return lengthDivisions; } private final IntegerProperty wireDivisions = new SimpleIntegerProperty(DEFAULT_WIRE_DIVISIONS){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final int 
getWireDivisions() { return wireDivisions.get(); } public final void setWireDivisions(int value) { wireDivisions.set(value); } public IntegerProperty wireDivisionsProperty() { return wireDivisions; } private final IntegerProperty lengthCrop = new SimpleIntegerProperty(DEFAULT_LENGTH_CROP){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final int getLengthCrop() { return lengthCrop.get(); } public final void setLengthCrop(int value) { lengthCrop.set(value); } public IntegerProperty lengthCropProperty() { return lengthCrop; } private final IntegerProperty wireCrop = new SimpleIntegerProperty(DEFAULT_WIRE_CROP){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final int getWireCrop() { return wireCrop.get(); } public final void setWireCrop(int value) { wireCrop.set(value); } public IntegerProperty wireCropProperty() { return wireCrop; } private final DoubleProperty tubeStartAngleOffset = new SimpleDoubleProperty(DEFAULT_START_ANGLE){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getTubeStartAngleOffset() { return tubeStartAngleOffset.get(); } public void setTubeStartAngleOffset(double value) { tubeStartAngleOffset.set(value); } public DoubleProperty tubeStartAngleOffsetProperty() { return tubeStartAngleOffset; } private final DoubleProperty xOffset = new SimpleDoubleProperty(DEFAULT_X_OFFSET){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getxOffset() { return xOffset.get(); } public void setxOffset(double value) { xOffset.set(value); } public DoubleProperty xOffsetProperty() { return xOffset; } private final DoubleProperty yOffset = new SimpleDoubleProperty(DEFAULT_Y_OFFSET){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getyOffset() { return yOffset.get(); } public void setyOffset(double value) { yOffset.set(value); } public DoubleProperty yOffsetProperty() { return yOffset; } private final DoubleProperty zOffset = new SimpleDoubleProperty(DEFAULT_Z_OFFSET){ @Override protected void invalidated() { if(mesh!=null){ updateMesh(); } } }; public final double getzOffset() { return zOffset.get(); } public void setzOffset(double value) { zOffset.set(value); } public DoubleProperty zOffsetProperty() { return zOffset; } private TriangleMesh createKnot(float majorRadius, float minorRadius, float wireRadius, float p, float q, float length, int subDivLength, int subDivWire, int cropLength, int cropWire, float startAngle, float xOffset, float yOffset, float zOffset) { listVertices.clear(); listTextures.clear(); listFaces.clear(); int numDivWire = subDivWire + 1-2*cropWire; double a=wireRadius; knot = new KnotHelper(majorRadius, minorRadius, p, q); areaMesh.setWidth(knot.getLength()); areaMesh.setHeight(polygonalSize(wireRadius)); knot.calculateTrihedron(subDivLength); for (int t = cropLength; t <= subDivLength-cropLength; t++) { // 0 - length for (int u = cropWire; u <= subDivWire-cropWire; u++) { // -Pi - +Pi if(cropWire>0 || (cropWire==0 && u<subDivWire)){ float du = (float) (((double)u)*2d*Math.PI / ((double)subDivWire)); double pol = polygonalSection(du); float cu=(float)(a*pol*Math.cos(du)), su=(float)(a*pol*Math.sin(du)); listVertices.add(knot.getS(t, cu, su)); } } } // Create texture coordinates createReverseTexCoords(subDivLength-2*cropLength,subDivWire-2*cropWire); // Create textures for (int t = cropLength; t < subDivLength-cropLength; t++) { // 0 - length for (int u 
= cropWire; u < subDivWire-cropWire; u++) { // -Pi - +Pi int p00 = (u-cropWire) + (t-cropLength)* numDivWire; int p01 = p00 + 1; int p10 = p00 + numDivWire; int p11 = p10 + 1; listTextures.add(new Face3(p00,p01,p11)); listTextures.add(new Face3(p11,p10,p00)); } } // Create faces for (int t = cropLength; t < subDivLength-cropLength; t++) { // 0 - length for (int u = cropWire; u < subDivWire-cropWire; u++) { // -Pi - +Pi int p00 = (u-cropWire) + (t-cropLength)* (cropWire==0?subDivWire:numDivWire); int p01 = p00 + 1; int p10 = p00 + (cropWire==0?subDivWire:numDivWire); int p11 = p10 + 1; if(cropWire==0 && u==subDivWire-1){ p01-=subDivWire; p11-=subDivWire; } listFaces.add(new Face3(p00,p01,p11)); listFaces.add(new Face3(p11,p10,p00)); } } return createMesh(); } public Point3D getPositionAt(double t){ return knot.getPositionAt(t); } public Point3D getTangentAt(double t){ return knot.getTangentAt(t); } public double getTau(double t){ return knot.getTau(t); } public double getKappa(double t){ return knot.getKappa(t); } }
Mid
[ 0.5702306079664571, 34, 25.625 ]
Thomas Wainwright Thomas Wainwright (1876 – 13 May 1949) was an English footballer who played as a half-back for Burslem Port Vale, Crewe Alexandra, Wellington Town, and Notts County between 1900 and 1906. Biography Thomas Wainwright was born in 1876 in the Johnstons Buildings on Beam Street in Nantwich. His father was James Wainwright (1851–1926), who married his mother, Mary Ann Simmons (1852–1937), on 3 March 1872 in Nantwich. He was one of nine children, having six brothers and two sisters. Around 1891 he moved from Beam Street to 24 Mill Street, Nantwich, and according to the 1891 census his occupation was a boot clicker. He started his footballing career with Crewe Carriage Works before appearing for Nantwich in The Combination in 1893–94, going on to serve his home town club to the turn of the century. The 1901 census records him as a general labourer, just before he went on to play football professionally. Wainwright joined Burslem Port Vale in November 1900 and made his debut at the Athletic Ground in a 3–2 win over Barnsley on 1 December. He enjoyed a spell in the first team, and ended the 1900–01 campaign with 14 Second Division appearances to his name. However, he then fell out of the first-team picture, and was released at the end of the 1901–02 season. He then moved on to Crewe Alexandra, Wellington and Notts County. He finished his playing career back at Nantwich before becoming the club's trainer in 1913. At the end of his professional playing career he married Emily Pitt in Nottingham. They then moved back to Nantwich, living at 22 Mill Street, Nantwich, where he became an engineering labourer, moving on to be a furnace-man at the local railway works. They had five children between 1907 and 1922: Eric, Doris, Leonard, James and Olive. Thomas Wainwright died in Nantwich on 13 May 1949 from prostate cancer. He is buried in Nantwich cemetery with his wife Emily and daughter Doris. References Category:1882 births Category:1949 deaths Category:People from Nantwich Category:Sportspeople from Cheshire Category:English footballers Category:Association football midfielders Category:Nantwich Town F.C. players Category:Port Vale F.C. players Category:Crewe Alexandra F.C. players Category:Notts County F.C. players Category:Telford United F.C. players Category:English Football League players Category:Deaths from prostate cancer Category:Deaths from cancer in England
Mid
[ 0.55530474040632, 30.75, 24.625 ]
Mexico heads Puebla Process World committee will focus on migration issues. Mexico was selected as the next "pro-témpore" president of the Regional Conference on Migration, known as the Puebla Process, which is charged with strengthening capacity and collaboration between officials from member nations. The member nations are Belize, Canada, Costa Rica, El Salvador, the United States, Guatemala, Honduras, Mexico, Nicaragua, Panama and the Dominican Republic, said the Interior Secretariat in a statement over the weekend. The federal agency said that members will participate in training groups beginning in January 2015, after the announcement of the "Immigration Accountability Executive Action" presented by U.S. President Barack Obama on Nov. 20. Representing the Mexican government, Interior Secretariat Population, Migration and Religious Affairs Undersecretary Mercedes del Carmen Guillén Vicente received the "pro-témpore" presidency of the Regional Conference on Migration. During the ceremony, celebrated in Managua, Nicaragua, Guillén Vicente said that there need to be organized migratory regulations to ensure conditions of absolute dignity for foreigners who travel through the country. Speaking before representatives of the conference, the undersecretary said that Mexicans who return to their homeland receive support through the Somos Mexicanos (We Are Mexicans) program. The program provides options for employment, the possibility to receive identification documents, health coverage, transportation to shelters and tickets to their destinations. Through the implementation of the Frontera Sur program, announced July 7 by President Enrique Peña Nieto, the country is moving forward to provide more tools for the effective, documented and legal movement of people inside Mexico. The government is also working to make the issuing of "Regional Visitor Cards" and "Border Worker Cards" more efficient, so that foreign nationals from neighboring countries can visit the states on Mexico's southern border in a documented, orderly and free way, said Guillén. The conference president designate also announced that she will meet with representatives from the Foreign Relations Secretariat and governments from member states in June 2015. A meeting of the Regional Migration Consulting Group will be hosted in November 2015, she said. The American migration conference, also known as the Puebla Process, was formed to compare and share perspectives based on member countries' experiences as points of origin, transit and destination for migration, said the statement from the secretariat. The president of the conference is decided by consensus among member countries during the annual Vice-Ministerial Meeting. During its term, the chosen government must facilitate coordination between governments to conduct regular meetings, ask member countries to designate delegates for the Drafting Committee at the start of each meeting, identify high-priority initiatives, cover the cost of meetings and take on a role in administering the Fund for the Return of Migrants in Highly Vulnerable Situations. Source: The News One comment I agree with the majority of the American citizens who disagree with President Obama's handling of border control, the illegal immigrant situation and the present presidential decree on any amnesty for illegals within the 50 American states. His sworn duty is to uphold the U.S. Constitution, and not make new laws.
There are a number of lawsuits against the President; the worst involves the 25th Constitutional amendment regarding his sanity and ability to function as president. One of the tests is: does he look upon himself as above the law, and thus not follow the law, and make laws that the executive branch is not empowered under the Constitution to make? Many have called for impeachment, but that is a sticky proposition, since Obama has never been properly vetted as eligible to be president, and if shown to be ineligible, if impeached, all the laws and presidential decrees that he has made over the past almost seven years … would be null and void. For example: he has fired many of the Nation's top military generals for not agreeing with his conduct as Commander in Chief, and has spent some $7 trillion in his function as President. Members of his own political party are avoiding him, and many are leaving the administration. I personally would like to see him expelled and sent back to Kenya, where the Kenyan government has acknowledged that he is a Kenyan citizen. It cannot get much worse. The U.S. immigration laws should be made as strict as Mexico's immigration laws in order for the U.S. to keep its English language and culture.
Mid
[ 0.547677261613691, 28, 23.125 ]
Consumer Agency to Step Up Scrutiny of Auto Lenders The U.S. Consumer Financial Protection Bureau is set to expand supervision of auto-finance companies even after making little headway in a campaign to get banks to change how they lend for new and used cars. The agency, which already supervises consumer finance products at banks with assets over $10 billion, plans to propose a rule paving the way for oversight of non-bank car lenders at a field hearing in Indianapolis Thursday, two people briefed on the matter said. Such an action would bring the auto-finance units of manufacturers such as Ford Motor Co. into the bureau’s purview, along with firms such as Consumer Portfolio Services and Credit Acceptance Corp. During its four-year existence, the consumer bureau has changed industry practices in mortgage lending, credit cards and other products through a mixture of supervision and regulation. In all, the agency said Monday, it has recovered about $4.7 billion in relief for more than 15 million consumers harmed by illegal practices. The bureau has been less successful in auto finance, an industry it targeted in March 2013. It has issued guidelines, demanded data from banks and filed an enforcement action to alter a system officials say lets auto dealers charge higher rates to minority buyers. Nearly 18 months later, the market isn’t a lot different. Ally Financial, one of the top four auto lenders, refused to overhaul its business after paying $98 million to settle claims of unfair lending by the agency and the Department of Justice. ‘Dramatic’ Impact While one firm, Chicago-based BMO Harris Bank, changed its practices, settlement talks between the consumer bureau and other banks have slowed or stalled. And this week, more than 400 auto dealers lobbied Congress for a bipartisan bill that would stop the agency from targeting their finance methods. “They may have overestimated their ability to get change in the market,” Alan Kaplinsky, head of the consumer financial services group at the law firm of Ballard Spahr LLP, said of consumer bureau officials. “All the big players are going to be reluctant to be the first ones to make a move because it has a dramatic competitive impact.” Sam Gilford, a spokesman for the bureau, said the agency has made progress, including by recovering millions of dollars in the Ally settlement. “Our explicit goal from the beginning has been to root out illegal auto-lending discrimination,” Gilford said. “There is still more work to do.” Indirect Attack The consumer bureau can’t simply order dealers to change how they finance cars. In the 2010 fight over the Dodd-Frank Act that created the consumer bureau, the dealers successfully lobbied Congress for an exclusion from the agency’s oversight. Instead, the bureau is attacking the matter indirectly through lenders. The consumer bureau’s 2013 guidelines take aim at a practice the agency calls “dealer markup” and auto dealers term “dealer participation” or “dealer-assisted finance.” Banks function as indirect lenders, allowing dealers to add to the interest rate the banks charge and pocket the difference. Consumer groups say the practice gives dealers too much leeway to move certain buyers into more-expensive loans. Dealers say the markup is a reasonable price for their services, which include bringing in customers and handling paperwork, and that buyers can negotiate the spread. Indianapolis Hearing U.S. officials say the system often results in higher payments by members of minority groups. 
The consumer bureau has said lenders should monitor spreads to prevent discrimination, or switch to a flat markup so all buyers get the same deal. At the Indianapolis hearing the bureau will outline its methodology for determining whether discrimination is occurring, according to the people briefed on its plans, who spoke on condition of anonymity because the discussions aren't public. Officials also plan to disclose how they have settled disagreements with some banks behind the scenes. Banks haven't been eager to abandon the markup system because they don't want to lose their piece of the booming auto financing business. Outstanding auto-loan volume reached $839 billion in the second quarter of this year, up 12 percent from a year earlier, according to Experian Plc, a research firm that analyzes auto finance. Ally Financial, the former General Motors Co. lending arm, settled with the government after the bureau and Justice Department compiled data showing that more than 235,000 minority borrowers paid higher rates due to a discriminatory pricing system. The deal, announced Dec. 20, included an $18 million fine and $80 million in borrower restitution. Not 'Caving' About six weeks later, Ally Chief Executive Officer Michael Carpenter said that the bank refused the agency's demands that it abolish dealer markup and that it wouldn't be the "Trojan horse" that introduced flat fees to the industry. "We are not going to go to flats," Carpenter told Automotive News on Feb. 3. "That is obviously not what the CFPB wanted to hear. They thought we were going to cave." Ally spokeswoman Gina Proia confirmed his comments by e-mail. Chris Kukla of the Center for Responsible Lending said that there is a "first-mover problem" in obtaining the changes to the overall system that consumer officials seek. He said lenders who don't revamp the system will have "an ongoing dialogue" with the bureau. Ally, under the terms of its settlement, will have to report to the agency each quarter on what it's doing to avoid discrimination. Regulatory Risk The banks' determination to stick with the system is "going to depend on the lenders' appetite for regulatory risk" when they're trying to avoid it, said Kukla, who is senior counsel for government affairs at the Durham, North Carolina-based group.
and JPMorgan Chase & Co., the top four lenders, together had 19 percent of the auto-loan market at the end of the second quarter of this year, according to Experian. Fifty-two percent of loans came from banks and credit unions, while 27 percent originated with financial companies controlled by car manufacturers. The fragmentation means dealers have many options for financing, and will likely send their business to banks that give them discretion, said Michael Buckingham, senior director in the auto finance practice at J.D. Power & Associates. 'Proactive Step' BMO Harris Bank, which dropped the old system, had less than 1 percent of the U.S. auto loan market at the end of 2013, making it the 28th-largest lender, according to Experian. That didn't stop Richard Cordray, the director of the consumer bureau, from publicly praising it for switching to flat fees. "It is encouraging to see BMO Harris taking this proactive step to protect consumers from discrimination," said Cordray. "When people go to buy a car, they should not have to worry whether they'll pay more for their auto loan because of their race, gender, or ethnic background." Talks between the agency and auto lenders to settle claims of discrimination have stalled at times because the bureau wanted a public commitment that banks will eliminate the discretion of dealers to change loan costs, according to four people briefed on the discussions. The lenders have instead aimed for confidential agreements with a promise to monitor how dealers mark up loans to minority borrowers, something most of them are doing anyway, these people said, requesting anonymity to discuss private negotiations. 'Different Approaches' "We are not mandating one specific path to compliance," the bureau's Gilford said. "We have been very clear that there are a number of different approaches lenders can take." The consumer bureau could sue a bank rather than reach a negotiated settlement, but the methodology for determining discrimination is controversial and has already been the subject of Supreme Court cases. That has emboldened the lenders to resist the bureau's pressure, said the people, since a lawsuit would likely take years, and could deliver a legal victory to the banks. "From a legal standpoint, they are managing that risk in doing business with dealers," said Rojc, the auto finance lawyer. "And until we see more enforcement action from CFPB, that approach will continue to hold."
Mid
[ 0.576576576576576, 32, 23.5 ]
I had all manner of offensive jokes to make, but I found this worth mentioning: check out the little bird, ankle-deep in water. Check out his grin. He knows we're in for a kangaroo pancake any second now. One Response to *splat* My theory is that the kangaroo which is currently airborne is none other than Marvin, reincarnated as a different species and a different sex, but otherwise with all of the same character traits he used to have. Understandably then, this mother kangaroo is happy, SO happy, that soon she will finally be rid of the monster.
Low
[ 0.45631067961165006, 23.5, 28 ]
Q: Understanding LSTM Training and Validation Graph and their metrics (LSTM Keras) I have trained an RNN/LSTM model. I would like to interpret my model results after plotting the graph for loss and accuracy (between the training and validation data sets). My objective is to classify the labels (either 0 or 1) if I provide only a partial input to the model, and I have performed training accordingly. Train_Validate_Test_Split Train 80% ; Validate 10% ; Test 10% X_train_shape : (243, 100, 5) Y_train_shape : (243,) X_validate_shape : (31, 100, 5) Y_validate_shape : (31,) X_test_shape : (28, 100, 5) Y_test_shape : (28,) Model Summary Model Graph Model Metrics Questions on interpreting the model results Q 1 : What can I understand/interpret from the loss and accuracy graph? How can I confirm whether the model trained properly for my data set or not? Q 2 : Do the oscillations in both loss and accuracy have some effect on model training? (Or is this normal behavior?) If not, how can I regularize my model to avoid the oscillations? Q 3 : What can I interpret or understand from my metrics table? My Y_test accuracy is higher than my train and validation accuracy; what can I interpret from this behavior? A: From visually inspecting the graph, we see that the validation loss and accuracy have improved with each epoch, with the training loss and accuracy higher than those of validation. This indicates that accuracy has improved with training. As suggested in another post, one potential solution is to calculate the exponential moving average of the validation loss to remove the oscillations and better determine the improvement in this metric. If you are finding that the test accuracy is higher than that of training, this might suggest underfitting. This could imply that more training of your model is required, or that it has been over-regularized.
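To make the moving-average suggestion concrete, here is a minimal Python sketch; the function name, the smoothing factor alpha, and the Keras History object referenced in the usage comment are illustrative assumptions, not part of the original post:

def ema(values, alpha=0.1):
    # Exponentially weighted moving average of a per-epoch metric series.
    # Smaller alpha smooths more aggressively; 0.1 is an arbitrary choice.
    smoothed = []
    avg = values[0]
    for v in values:
        avg = alpha * v + (1 - alpha) * avg
        smoothed.append(avg)
    return smoothed

# Usage sketch, assuming a Keras History object named `history`
# returned by model.fit(..., validation_data=...):
# smoothed_val_loss = ema(history.history['val_loss'])

Plotting the smoothed series alongside the raw per-epoch values makes it easier to judge whether the validation loss is still trending downward despite the epoch-to-epoch oscillations.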
High
[ 0.6820276497695851, 37, 17.25 ]
/* * Copyright 2012-2020 the original author or authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * https://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.springframework.boot.actuate.endpoint.jmx; import java.util.ArrayList; import java.util.List; import javax.management.InstanceNotFoundException; import javax.management.MBeanRegistrationException; import javax.management.MBeanServer; import javax.management.MalformedObjectNameException; import javax.management.ObjectName; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.extension.ExtendWith; import org.mockito.ArgumentCaptor; import org.mockito.Captor; import org.mockito.Mock; import org.mockito.Spy; import org.mockito.junit.jupiter.MockitoExtension; import org.springframework.jmx.JmxException; import org.springframework.jmx.export.MBeanExportException; import static org.assertj.core.api.Assertions.assertThat; import static org.assertj.core.api.Assertions.assertThatExceptionOfType; import static org.assertj.core.api.Assertions.assertThatIllegalArgumentException; import static org.assertj.core.api.Assertions.assertThatIllegalStateException; import static org.mockito.ArgumentMatchers.any; import static org.mockito.BDDMockito.given; import static org.mockito.BDDMockito.willThrow; import static org.mockito.Mockito.verify; /** * Tests for {@link JmxEndpointExporter}. 
* * @author Stephane Nicoll * @author Phillip Webb */ @ExtendWith(MockitoExtension.class) class JmxEndpointExporterTests { @Mock private MBeanServer mBeanServer; @Spy private EndpointObjectNameFactory objectNameFactory = new TestEndpointObjectNameFactory(); private JmxOperationResponseMapper responseMapper = new TestJmxOperationResponseMapper(); private List<ExposableJmxEndpoint> endpoints = new ArrayList<>(); @Captor private ArgumentCaptor<Object> objectCaptor; @Captor private ArgumentCaptor<ObjectName> objectNameCaptor; private JmxEndpointExporter exporter; @BeforeEach void setup() { this.exporter = new JmxEndpointExporter(this.mBeanServer, this.objectNameFactory, this.responseMapper, this.endpoints); } @Test void createWhenMBeanServerIsNullShouldThrowException() { assertThatIllegalArgumentException().isThrownBy( () -> new JmxEndpointExporter(null, this.objectNameFactory, this.responseMapper, this.endpoints)) .withMessageContaining("MBeanServer must not be null"); } @Test void createWhenObjectNameFactoryIsNullShouldThrowException() { assertThatIllegalArgumentException() .isThrownBy(() -> new JmxEndpointExporter(this.mBeanServer, null, this.responseMapper, this.endpoints)) .withMessageContaining("ObjectNameFactory must not be null"); } @Test void createWhenResponseMapperIsNullShouldThrowException() { assertThatIllegalArgumentException() .isThrownBy( () -> new JmxEndpointExporter(this.mBeanServer, this.objectNameFactory, null, this.endpoints)) .withMessageContaining("ResponseMapper must not be null"); } @Test void createWhenEndpointsIsNullShouldThrowException() { assertThatIllegalArgumentException().isThrownBy( () -> new JmxEndpointExporter(this.mBeanServer, this.objectNameFactory, this.responseMapper, null)) .withMessageContaining("Endpoints must not be null"); } @Test void afterPropertiesSetShouldRegisterMBeans() throws Exception { this.endpoints.add(new TestExposableJmxEndpoint(new TestJmxOperation())); this.exporter.afterPropertiesSet(); verify(this.mBeanServer).registerMBean(this.objectCaptor.capture(), this.objectNameCaptor.capture()); assertThat(this.objectCaptor.getValue()).isInstanceOf(EndpointMBean.class); assertThat(this.objectNameCaptor.getValue().getKeyProperty("name")).isEqualTo("test"); } @Test void registerShouldUseObjectNameFactory() throws Exception { this.endpoints.add(new TestExposableJmxEndpoint(new TestJmxOperation())); this.exporter.afterPropertiesSet(); verify(this.objectNameFactory).getObjectName(any(ExposableJmxEndpoint.class)); } @Test void registerWhenObjectNameIsMalformedShouldThrowException() throws Exception { given(this.objectNameFactory.getObjectName(any(ExposableJmxEndpoint.class))) .willThrow(MalformedObjectNameException.class); this.endpoints.add(new TestExposableJmxEndpoint(new TestJmxOperation())); assertThatIllegalStateException().isThrownBy(this.exporter::afterPropertiesSet) .withMessageContaining("Invalid ObjectName for endpoint 'test'"); } @Test void registerWhenRegistrationFailsShouldThrowException() throws Exception { given(this.mBeanServer.registerMBean(any(), any(ObjectName.class))) .willThrow(new MBeanRegistrationException(new RuntimeException())); this.endpoints.add(new TestExposableJmxEndpoint(new TestJmxOperation())); assertThatExceptionOfType(MBeanExportException.class).isThrownBy(this.exporter::afterPropertiesSet) .withMessageContaining("Failed to register MBean for endpoint 'test"); } @Test void destroyShouldUnregisterMBeans() throws Exception { this.endpoints.add(new TestExposableJmxEndpoint(new TestJmxOperation())); 
this.exporter.afterPropertiesSet(); this.exporter.destroy(); verify(this.mBeanServer).unregisterMBean(this.objectNameCaptor.capture()); assertThat(this.objectNameCaptor.getValue().getKeyProperty("name")).isEqualTo("test"); } @Test void unregisterWhenInstanceNotFoundShouldContinue() throws Exception { this.endpoints.add(new TestExposableJmxEndpoint(new TestJmxOperation())); this.exporter.afterPropertiesSet(); willThrow(InstanceNotFoundException.class).given(this.mBeanServer).unregisterMBean(any(ObjectName.class)); this.exporter.destroy(); } @Test void unregisterWhenUnregisterThrowsExceptionShouldThrowException() throws Exception { this.endpoints.add(new TestExposableJmxEndpoint(new TestJmxOperation())); this.exporter.afterPropertiesSet(); willThrow(new MBeanRegistrationException(new RuntimeException())).given(this.mBeanServer) .unregisterMBean(any(ObjectName.class)); assertThatExceptionOfType(JmxException.class).isThrownBy(() -> this.exporter.destroy()) .withMessageContaining("Failed to unregister MBean with ObjectName 'boot"); } /** * Test {@link EndpointObjectNameFactory}. */ static class TestEndpointObjectNameFactory implements EndpointObjectNameFactory { @Override public ObjectName getObjectName(ExposableJmxEndpoint endpoint) throws MalformedObjectNameException { return (endpoint != null) ? new ObjectName("boot:type=Endpoint,name=" + endpoint.getEndpointId()) : null; } } }
Mid
[ 0.6243386243386241, 29.5, 17.75 ]
Faecal metabolomic fingerprint after moderate consumption of red wine by healthy subjects. The faecal metabolome contains information on the metabolites found in the intestine, from which knowledge about the metabolic function of the gut microbiota can be obtained. Changes in the metabolomic profile of faeces reflect, among other things, changes in the composition and activity of the intestinal microorganisms. In an effort to improve our understanding of the biological effects that phenolic compounds (including red wine polyphenols) exert at the gut level, in this foodomic study we have undertaken a metabolome characterization of human faeces after moderate consumption of red wine by healthy subjects for 4 weeks. Namely, a nontargeted metabolomic approach based on the use of UHPLC-TOF MS was developed to achieve the maximum metabolite information on 82 human faecal samples. After data processing and statistical analysis, 37 metabolites were related to wine intake, of which 20 could be tentatively or completely identified, including the following: (A) wine compounds, (B) microbial-derived metabolites of wine polyphenols, and (C) endogenous metabolites and/or others derived from other nutrient pathways. After wine consumption, the faecal metabolome was enriched in flavan-3-ol metabolites. Also of relevance was the downregulation of xanthine and bilirubin-derived metabolites such as urobilinogen and stercobilin after moderate wine consumption. As far as we know, this is the first study of the faecal metabolome after wine intake.
Mid
[ 0.6537530266343821, 33.75, 17.875 ]
Yeast dough is considered one of the most basic but complicated of the dough family. Just think of the first cakes you made – I'm almost sure they weren't yeast cakes. But mine were! As a girl I really had no inkling about "kitchen stuff" until I noticed something interesting. Twice a year my mother would make a cookie and cake Kiddush for our shul to mark the yahrtzeits of my father's parents, who were killed in the Holocaust. Those cake platters were laden with all sorts of baked goodies – melt-in-your-mouth cakes alongside mouthwatering cookies, professionally made and gone almost as soon as they hit the table. But although Mom prepared quite a variety, I noticed something missing from her overflowing plates. There were none of the fluffy, yeasty kokosh and moon cakes that I especially loved. "Why don't you ever make yeast cakes, Mommy?" I wondered aloud. "Oh, that's not for me! My yeast dough never seems to rise to my expectations. I'll just stick to cookie dough." Then and there I decided: "Come next Kiddush, there'll be fantastic, fluffy kokosh cakes on my mother's cookie platters and I'll make them!" So, I practiced that summer on my friends in camp. I was a counselor in a small Canadian camp that bunked 120 campers in all. On the 17th of Tammuz and the 9th of Av, when there were no meals served, the cook had the day off and I got the kitchen key! I had free rein, preparing bowls of dough rising, waiting to be turned into sizzly cheesy pizzas, chocolaty sweet kokosh cakes and soft, supple onion pitas to "break the fast" on. Judging from how quickly everything disappeared, I knew I was pretty good at making yeast dough and, more importantly, the dough was good to me – it rose to every occasion and never disappointed me. True to my decision, that year my yeast cakes found a place of honor on mother's Kiddush platters. Until today, there's nothing more gratifying for me than kneading yeast dough. As it rises on my counter, I feel an elation rise within me too. The cakes release their yeasty aroma as they bake, expanding in the heat and in my heart. To date, I've baked tens of thousands of challahs, rolls and yeast cakes in hundreds of shapes, sizes and flavors at home, for simchas and in my workshops. Which is good news for those of you suffering from Yeast Dough Phobia. Because over time, I've collected quite a few tips for a successful experience in preparing yeast dough. Of course you could always just run out to your preferred bakery and buy their kokosh cake, but in my opinion, nothing can beat the heavenly scent of your yeast cake baking in your kitchen! All you "knead" to know is a few simple rules like correct temperature, mixing, rising and rolling. Besides, the more you make yeast dough, the better you get the feel for it and the better it will "behave" for you. Hands-on experience is the best teacher around! For starters, how does your dough grow? There are two methods for rising yeast dough: 1. the warm method, which is most commonly used, and 2. the cold method, which chefs sometimes prefer. Here's how the cold method works: After the dough is ready, put it into a large plastic bag and tie loosely on top. Refrigerate overnight or up to 3 days. After a few hours, the bag will resemble a blown-up balloon. This means the fermentation gases are doing their job (even though they were "left in the cold").
A few hours before you're ready to make your yeast cake or rolls, remove the bag of dough from the refrigerator and leave in the bag on your counter until the dough reaches room temperature. Proceed as the recipe instructs. Here's a question I received from Sarah about kokosh cake filling: Q: "Do you have a good chocolate filling for my kokosh cake? Somehow mine always sinks into the dough till it seems as if there's no filling at all. Do you have a recipe for the 'store bought' kind?" A: The chocolate filling I use for my kokosh cake is easy to make and, since it's made with dry ingredients, easy to "spread." It imitates the store-bought filling in that it's oozy without oozing out of the roll or into the dough. About the Author: Mindy Rafalowitz is a recipe developer and food columnist for over 15 years. She has published a best selling cookbook in Hebrew for Pesach and the gluten sensitive. Mindy is making progress on another specialty cookbook for English readers. For kitchen questions or to purchase a sample recipe booklet at an introductory price, contact Mindy at [email protected].
High
[ 0.6733668341708541, 33.5, 16.25 ]
Role of nitrosamides in the high risk for gastric cancer in China. Gastric cancer is the leading cause of death from cancer in China. Samples of fish sauce, a traditional seasoning, were collected in the high-risk area for gastric cancer in the Fuzhou area, Fujian Province. When fish sauce samples were nitrosated at pH 2.0, direct mutagenicity and high contents of N-nitrosamide were detected (30.9-78.0 microM); the N-nitrosamide content of three samples of fish sauce made in Guangdong and purchased from a market outside Fujian was low (2.1-6.0 microM). When the nitrosated fish sauce extract was given to newborn rats by gavage, dysplasia and adenocarcinoma were induced in the glandular stomach in the 4th and 16th experimental week, respectively. N-Nitrosamides were also found in fasting gastric juice from patients with chronic gastritis in the high-risk area of Putian. The mean concentration of total N-nitrosamides in the extracts correlated with the severity of gastritis. These findings indicate that N-nitrosamides may play an important role in causing gastric cancer in China.
High
[ 0.6811594202898551, 35.25, 16.5 ]
devmode: true overcloud_as_undercloud: true use_specific_hash: true docker_registry_host: "{{ job.build_container_images|default(false)|bool | ternary('127.0.0.1:5001', 'docker.io') }}" docker_registry_namespace: tripleopike delorean_hash_label: &promotion-testing-tag "{{ dlrn_hash|default(dlrn_hash_tag) }}" docker_image_tag: *promotion-testing-tag images: - name: overcloud-full url: "{{ overcloud_image_url }}" type: tar - name: ipa_images url: "{{ ipa_image_url }}" type: tar inject_images: - "ironic-python-agent.initramfs" - "ironic-python-agent.kernel" - "overcloud-full.qcow2" - "overcloud-full.initrd" - "overcloud-full.vmlinuz" release: pike dlrn_hash_tag: consistent overcloud_image_url: https://images.rdoproject.org/pike/rdo_trunk/current-tripleo/overcloud-full.tar ipa_image_url: https://images.rdoproject.org/pike/rdo_trunk/current-tripleo/ironic-python-agent.tar repo_cmd_before: | sudo rm -rf /etc/yum.repos.d/delorean*; sudo rm -rf /etc/yum.repos.d/*.rpmsave; sudo yum clean all; sudo yum-config-manager --disable "*" if [ -e /etc/ci/mirror_info.sh ]; then source /etc/ci/mirror_info.sh else # Otherwise, fallback to official mirrors provided by CentOS. export NODEPOOL_CENTOS_MIRROR={{ lookup('env','NODEPOOL_CENTOS_MIRROR')|default('http://mirror.centos.org/centos', true) }} export NODEPOOL_BUILDLOGS_CENTOS_PROXY=https://buildlogs.centos.org export NODEPOOL_RDO_PROXY=https://trunk.rdoproject.org fi rdo_dlrn=`curl --silent ${NODEPOOL_RDO_PROXY}/centos7-pike/{{ dlrn_hash_path|default(dlrn_hash_tag, true) }}/delorean.repo -S 2>>~/dlrn_repo_curl_errors.log | grep baseurl | cut -d= -f2` rdo_dlrn_deps=`curl --silent ${NODEPOOL_RDO_PROXY}/centos7-pike/delorean-deps.repo -S 2>>~/dlrn_repo_curl_errors.log | grep baseurl | cut -d= -f2` if [[ -z "$rdo_dlrn" || -z "$rdo_dlrn_deps" ]]; then echo "Failed to parse dlrn hash" exit 1 fi export RDO_DLRN_REPO=${rdo_dlrn/https:\/\/trunk.rdoproject.org/$NODEPOOL_RDO_PROXY} export RDO_DLRN_DEPS_REPO=${rdo_dlrn_deps/https:\/\/trunk.rdoproject.org/$NODEPOOL_RDO_PROXY} repos: - type: generic reponame: delorean filename: delorean.repo priority: 20 baseurl: $RDO_DLRN_REPO - type: generic reponame: delorean-deps filename: delorean-deps.repo baseurl: $RDO_DLRN_DEPS_REPO # CentOS related repos - type: generic reponame: quickstart-centos-base filename: quickstart-centos-base.repo baseurl: ${NODEPOOL_CENTOS_MIRROR}/7/os/x86_64/ - type: generic reponame: quickstart-centos-updates filename: quickstart-centos-updates.repo baseurl: ${NODEPOOL_CENTOS_MIRROR}/7/updates/x86_64/ - type: generic reponame: quickstart-centos-extras filename: quickstart-centos-extras.repo baseurl: ${NODEPOOL_CENTOS_MIRROR}/7/extras/x86_64/ - type: generic reponame: quickstart-centos-qemu filename: quickstart-centos-qemu.repo baseurl: ${NODEPOOL_CENTOS_MIRROR}/7/virt/x86_64/kvm-common/ - type: generic reponame: quickstart-centos-ceph-jewel filename: quickstart-centos-ceph-jewel.repo baseurl: ${NODEPOOL_CENTOS_MIRROR}/7/storage/x86_64/ceph-jewel/ - type: generic reponame: quickstart-centos-opstools filename: quickstart-centos-opstools.repo baseurl: ${NODEPOOL_CENTOS_MIRROR}/7/opstools/x86_64/ - type: generic reponame: quickstart-centos7-rt filename: quickstart-centos7-rt.repo baseurl: ${NODEPOOL_CENTOS_MIRROR}/7/rt/x86_64/ includepkgs: tuned-profiles-cpu-partitioning enabled: 0 repo_cmd_after: | sudo yum install -y yum-plugin-priorities; {% if not enable_opstools_repo|default(false)|bool %}sudo yum-config-manager --save --setopt quickstart-centos-opstools.enabled=0; {%endif %} sudo yum-config-manager 
--disable rdo-qemu-ev; sudo rpm -e epel-release || true; sudo yum remove -y rdo-release centos-release-ceph-* centos-release-openstack-* centos-release-qemu-ev || true; sudo rm -rf /etc/yum.repos.d/CentOS-OpenStack-*.repo /etc/yum.repos.d/CentOS-Ceph-*.repo /etc/yum.repos.d/CentOS-QEMU-EV.repo; sudo rm -rf /etc/yum.repos.d/*.rpmsave; sudo yum repolist; sudo yum clean metadata {% if repo_setup_run_update|default(true)|bool %} sudo yum update -y {% endif %} undercloud_rpm_dependencies: >- python-tripleoclient ceph-ansible
Mid
[ 0.605405405405405, 28, 18.25 ]
An abortion storm in cattle associated with neosporosis in Taiwan. An abortion storm associated with acute neosporosis, involving 18 cattle, was observed in a dairy farm in Taiwan. The age of the aborted fetuses ranged from 3 to 8 months. Of the 38 cattle on that farm examined during the abortion storm, 52.6% (20/38), 13.2% (5/38) and 10.5% (4/38) contained both IgG and IgM, only IgG, and only IgM antibodies to Neospora caninum, respectively. No antibody to N. caninum was detected prior to the abortion storm. A follow-up study conducted a year later showed that 23 out of 28 cattle had seroconverted. Since some cattle were positive for either only IgG or only IgM, we suggest that both IgG and IgM should be tested for diagnosing neosporosis. Neosporosis surveillance of naive cattle herds is recommended.
Mid
[ 0.6530612244897951, 30, 15.9375 ]
Talisman and Witchy Charms Jewelry Charms with a witchy vibe will put a spell on you. Flaunt your darker side by creating spellbinding jewelry with moon phase charms, all-seeing eye charms, inverted crescent moon charms, and bone charms. You can mix and match, and layer these talisman charms to create long earrings & witchy necklaces. You'll also find amulets of protection, owl charms, and cat charms in this curated collection of jewelry charms with an edgy, witchy vibe.
Mid
[ 0.603248259860788, 32.5, 21.375 ]
Hi, I'm Lee, the owner of Lee Hughes Building Ltd. Specialising in complete home renovations, extensions, conversions, bathrooms & kitchens to the highest specifications. Providing all building trades for any project, from initial site consultation. I site manage from... Carpentry & Joinery Job - Recent feedback review The work isn't completely finished but I'm really pleased so far. Lee took the trouble to help and advise on the flooring choices and the carpenter is doing a great job. Really clean and tidy worker, works hard all the time he is here, ... All aspects of work undertaken, from bespoke projects to simple jobs. Specialising in kitchen and bathroom installation, including carpentry work, doors, bespoke fitted furniture, skirting, flooring, tiling and unique projects. In the past I have worked on an eco barn to a... Carpentry & Joinery Job - Recent feedback review The job that James did for us was a challenge, partly because of the difficulties that another company caused in providing the material, but James was very patient, professional and accommodating. The finished job was excellent quality and... My name is Ben and I have been in the trade for 15 years. I have worked for a family building business in my village for many years (established for over 75 years) and have branched out on my own in recent times. I specialise in carpentry and have full qualifications... Carpentry & Joinery Job - Recent feedback review We are very pleased with Ben's work. Good communication and very competitive price. Highly recommended.... SDB Carpentry is focused on providing a high quality service and customer satisfaction in the Warwickshire area. Quality workmanship and reliable services, carried out by advanced tradesmen, whilst offering competitive rates and free estimates. Carpentry & Joinery Job - Recent feedback review An absolutely brilliant job. Very very pleased with the outcome (totally new floor in our conservatory). Very professional, on time every day. Would definitely recommend and use again. ... Carpenter and joiner, 22 years qualified, City and Guilds, Guild of Master Craftsmen; from kitchen fitting, doors, floors and all home improvements to plastering. I have also built over 1000 bespoke garden rooms over the last 6 years, and I've also worked on the Channel 4 home show... Carpentry & Joinery Job - Recent feedback review Trevor and team completed numerous internal jobs ranging from plastering to carpentry. The quality of workmanship is second to none and I have no hesitation in recommending his services. ... I am a fully qualified carpenter/joiner with over 30 years' experience. I specialise in all aspects of carpentry/joinery. I work to very high standards. I'm friendly, reliable and punctual. I don't use any subcontractors. Everything from the survey to completing the job is carried... Carpentry & Joinery Job - Recent feedback review Ivan assessed our requirements to install a balustrade and stair handrail. A price and date for the work were agreed. Ivan rang the day before to check everything was ready and duly arrived the following morning. During the work it became... Carpentry & Joinery Job - Client reference Carl fitted a kitchen and hung new internal doors for me. The work was all carried out to a very high standard and I was very happy with the way he carried out my work. He was very polite and gave me the professional advice I needed when I...
Clark Carpentry is a professional, hardworking and honest business. I provide a prompt, reliable and tidy service and always put the client first. Excellent standard, excellent rates. Carpentry & Joinery Job - Recent feedback review I can highly recommend Curtis, his work was exceptional and he was very polite and friendly. Being an old house, our walls were not straight, but Curtis fitted the wardrobes perfectly, without any gaps, and he designed the style to match the... We are a small company specialising in the design and manufacture of bespoke furniture; we make beautiful freestanding pieces, fitted furniture, whole room schemes and unique gifts. We can deliver a project on time and on budget. Exceptional craftsmanship at a sensible... Carpentry & Joinery Job - Recent feedback review James was fantastic throughout the project. Very professional throughout, he works to a very high standard and left the area very clean and tidy at all stages. His workmanship is fantastic and we were very impressed with the finished... Friendly, professional, experienced carpenter and joiner with over 15 years' experience - no job too small. Skill sets: window fitting, wet rooms, door hanging, kitchens, bathrooms, roofing, fencing and general maintenance. Carpentry & Joinery Job - Recent feedback review Grant was excellent, great workmanship on the doors, all completed including cleaning up in a day and a half. Grant measured up, sourced materials, quoted and completed the job within a week of contact. I would definitely recommend Grant and... Here at Campbell Carpentry we pride ourselves on quality and detail, from bespoke furniture to decking and everything in between. With over 25 years in carpentry and joinery, you know you'll have a great job done. Carpentry & Joinery Job - Recent feedback review Wonderful job. Very impressed. A fair price and completed in a timely fashion. ... PJC Services' goal is not only to help you design and build your dream bathroom, kitchen or a bespoke piece of furniture, but also to make the process easy and enjoyable for you. We can help you with all phases of your next project. Carpentry & Joinery Job - Recent feedback review Paul knows his stuff, is a very quick worker (finished ahead of estimated time) and has all the skills required. Fitted skirting throughout the house, kitchen cupboards (a custom job to cover up but give access to the stopcock), kitchen worktop and...
Mid
[ 0.581818181818181, 36, 25.875 ]
Effect of p-chlorophenylalanine on avoidance learning of two differentially housed mouse strains. The effect of p-chlorophenylalanine (PCPA) has been studied on the acquisition of avoidance learning and on brain concentrations of 5-hydroxytryptamine, 5-hydroxyindoleacetic acid and tryptophan in differentially housed male mice of the Albino Swiss and DBA strains. The results obtained do not support the hypothesis that learning ability varies inversely with the concentration of brain 5-hydroxytryptamine. Rather, PCPA appears to influence the learning ability of differentially housed mice of different strains in a way that depends more on the emotional baseline of the animals than on the modification of brain 5-hydroxytryptamine.
High
[ 0.670422535211267, 29.75, 14.625 ]
Ultrasound-guided excision combined with intraoperative assessment of gross macroscopic margins decreases the rate of reoperations for non-palpable invasive breast cancer. The standard technique for intraoperative tumour localization of clinically occult tumours is wire-guided localization (WGL). This technique, however, has several disadvantages. The aim of the present work is to report our single-centre experience with intraoperative ultrasound-guided (IOUS) excision, performed by surgeons, combined with intraoperative assessment of macroscopic pathologic and ultrasound margins in non-palpable invasive cancers indicated for conservative breast therapy. Two-hundred and twenty-five non-palpable invasive breast cancers were subjected to excision with IOUS. The lesion was located in the operating room with a high-frequency ultrasound probe (8-12 MHz), which was then used to guide surgical removal. The specimen margins were estimated by ultrasonography and macroscopic pathologic examination. The sensitivity of IOUS and its effectiveness in the characterization of the specimen margins were evaluated, assessing the need for reoperation. Pathologic tumour size was 12.0 ± 6.7 mm and 13 lesions (6.4%) were <5 mm. The sensitivity of IOUS localization was 99.6% (224/225 cases). Only one cancer of less than 5 mm was not localized. The average weight of the specimens was 26.1 g. A second operation was required to remove margins in 4% of cases (9/225). In 5 cases, residual in situ or invasive carcinoma was found. In two cases, conservative surgery was converted to mastectomy. IOUS excision combined with the intraoperative assessment of the macroscopic margins of non-palpable breast cancers is a safe, useful, and efficient technique. We obtained an excellent characterization of tumour margins with moderate removal of breast tissue; consequently, fewer reoperations were required and good cosmetic results were obtained. We believe that use of this technique in conservative breast cancer surgery should be recommended.
High
[ 0.711462450592885, 33.75, 13.6875 ]
Facebook opened its crowdsourcing translation tool via the Facebook Connect service, so that users can easily translate other sites and apps across the Web. As Facebook expanded across the world, the social network called on users to chime in and translate the site into 65 languages, instead of paying professionals for the lengthy job. Users submitted possible translations and the most accurate ones were chosen for the final version. Now, in a bid to expand the use of its Facebook Connect service, the social network on Tuesday introduced Translations for Facebook Connect, a service that gives site owners the option to let users translate their content in the same way Facebook did with its own site. Translations for Facebook Connect will be free for developers, and a few extra lines of code will enable the service on sites already using Connect. Volunteer translators across the globe will be able to translate participating sites into more than 65 languages.
Mid
[ 0.601604278074866, 28.125, 18.625 ]
Description In this screen you have the ability to create a new poll (when you click on the 'New' button in the Poll Manager), or to edit an existing poll (when you select a poll and click on the 'Edit' button in the Poll Manager). Screenshot Details and description You will see different fields where you can fill in or edit information about the poll. These are: Name: Fill in the name of the poll. This field is required. Alias: This name is a shortened title of the poll, used when using SEF. Lag: This is the time which must pass before a user may submit another vote in this poll. Published: Select Yes when you want the poll to be published, and No when you do not want the poll to be published. Options: These fields are used for the answer options. You may enter up to 12 options for each poll. Typical Usage You will use the Poll Manager New/Edit screen when you want to add a new poll, or edit an existing poll. Here you will find some of the reasons for the use of this screen. You want to create a new poll. You want to change the name or alias of a poll. You want to change the options to display. You want to change the lag time for a poll. Toolbar At the top right you will see the toolbar: The functions are: Preview: Opens a lightbox window showing the current poll. Save: Save the poll and return to the main screen of the Poll Manager. Apply: Save the poll, but stay in the Poll Manager New or Edit screen. Cancel: Go back to the main screen of the Poll Manager, without saving the poll you edited or created. Help: Open the Help Screen for the Poll Manager: New/Edit. That is the same screen as this Help Screen. Points to Watch None known at this time. Dependencies For the poll to show on the page, you need to add a poll module using the Module Manager:
Mid
[ 0.5831533477321811, 33.75, 24.125 ]
Q: FilterDefinition serialization does not work as expected in new MongoDB driver With MongoDB .NET driver versions earlier than 2 we built a Query<Person> object (part of its API) and were able to serialize it into a MongoDB query with the ToJson() method. With driver v2.5 we now have the new FilterDefinition<Person> to build similar queries, but serialization no longer works properly:
FilterDefinition<Person> filter = Builders<Person>.Filter.Eq(t => t.Name, "Alex");
filter.ToBsonDocument() // returns {{ "_t" : "SimpleFilterDefinition`2" }}
filter.ToJson() // returns the same {{ "_t" : "SimpleFilterDefinition`2" }}
filter.ToString() // returns MongoDB.Driver.SimpleFilterDefinition`2[TestApp.Person,System.String]
The same happens with other filtering operations and other entities. Any suggestions on how to make serialization work correctly?
A: Try the following:
// Look up the registered serializer for Person from the client's settings
var personSerializer = new MongoClient()
    .GetDatabase("test")
    .Settings
    .SerializerRegistry
    .GetSerializer<Person>();
var filter = Builders<Person>.Filter.Eq(x => x.FirstName, "Bob");
// Render turns the filter definition into the BsonDocument the server would see
var doc = filter.Render(personSerializer, BsonSerializer.SerializerRegistry);
Console.WriteLine(doc); // prints { "FirstName" : "Bob" } with default class mapping
Mid
[ 0.642276422764227, 29.625, 16.5 ]
Customer stories Theo Pouw Groep customer story Enabling sustainable innovation and helping the business stay competitive. Challenge: A competitive industry The Theo Pouw Group supplies primary and secondary building materials to the soil, road, water and concrete construction industry – a competitive industry, with many environmental regulations. The Dutch construction sector has high environmental standards – public tenders are often decided based on the Environmental Cost Indicator (MKI). This is why suppliers have to produce high-value products with low CO2 emissions. Environmental reports are not enough for this task, since materials, processes and suppliers might change. It is crucial to stay on top of environmental data in order to stay competitive. We distinguish ourselves towards our clients through high-value products with low CO2 emissions. Ecochain helps us to reduce costs and shows us how we can produce our materials with less energy. Alexander Pouw Commercial Director, Theo Pouw Groep BV Solution: Creating transparent flows Ecochain's Environmental Specialists helped the Theo Pouw Group to map their processes and generate material and energy flows. With our Environmental Intelligence Platform, they created a clear picture of their energy and emissions and found hotspots to efficiently improve their product portfolio. Ecochain helps us with our calculations and helps make our company more sustainable, by finding and using available solutions and technologies. Alexander Pouw Commercial Director, Theo Pouw Groep BV Results: Sustainability is a competitive advantage Ecochain helps the Theo Pouw Group to gain a competitive advantage in the construction industry. They incrementally improved the sustainability of their products, while reducing costs and increasing process efficiency.
Mid
[ 0.6552631578947361, 31.125, 16.375 ]
-by Cam and Nicole Wears Why We Think Maui Is A Perfect Family Friendly Destination As I sat in the comfy lounge chair, watching the palm trees sway in the breeze and Baby B rolling around on the grass, I couldn’t help but think that Maui has to be one of the best family friendly [...] For years we’ve seen the pictures of Maui’s clear blue water, soft sandy beaches, pristine forests and lush green mountains. It’s a travel destination that has always called to us, yet we had never made the trip. Last month however, we finally had the opportunity to experience the celebrated Hawaiian Island. We traveled with our [...] by Joshua Lurie My fall trip to Hawaii was to learn about the increasing connection between islanders and their homegrown food. For years, Hawaii imported 85% of its food from across the Pacific, but islanders are finally starting to harness the power of their fertile volcanic soil to produce food on-site, and that means taking [...] In Los Angeles, tell someone you’re from Hawaii, and you’ll hear lots of sighs followed by a look that’s best described as wishful lusting. Angelenos have a deep love affair with the Islands, more people from Los Angeles come to Hawaii than from any other city. This month the Hawaii Visitors and Convention Bureau is [...] I don’t care who you are. You can’t say you’ve actually visited Hawaii until you’ve been to a luau. We went to a seriously awesome one on our last night in Maui called Old Lahaina Luau. It was a good 45 minute drive from our hotel and I take responsibility for the slight detour by [...]
Mid
[ 0.563706563706563, 36.5, 28.25 ]
11 What is 14690 to the power of 1/3, to the nearest integer? 24 What is 110850 to the power of 1/3, to the nearest integer? 48 What is the seventh root of 1579 to the nearest integer? 3 What is the third root of 8533 to the nearest integer? 20 What is the square root of 16069 to the nearest integer? 127 What is 5729 to the power of 1/2, to the nearest integer? 76 What is the square root of 831 to the nearest integer? 29 What is 911 to the power of 1/10, to the nearest integer? 2 What is 31662 to the power of 1/6, to the nearest integer? 6 What is 416 to the power of 1/6, to the nearest integer? 3 What is the seventh root of 14432 to the nearest integer? 4 What is 1102 to the power of 1/5, to the nearest integer? 4 What is the cube root of 13397 to the nearest integer? 24 What is the cube root of 1030 to the nearest integer? 10 What is the sixth root of 2482 to the nearest integer? 4 What is the cube root of 318 to the nearest integer? 7 What is the square root of 2331 to the nearest integer? 48 What is 1041 to the power of 1/3, to the nearest integer? 10 What is the cube root of 71565 to the nearest integer? 42 What is the square root of 482 to the nearest integer? 22 What is 951 to the power of 1/2, to the nearest integer? 31 What is 19765 to the power of 1/8, to the nearest integer? 3 What is 592 to the power of 1/3, to the nearest integer? 8 What is the cube root of 1837 to the nearest integer? 12 What is 32769 to the power of 1/2, to the nearest integer? 181 What is 852 to the power of 1/10, to the nearest integer? 2 What is 3348 to the power of 1/8, to the nearest integer? 3 What is the tenth root of 7461 to the nearest integer? 2 What is 36507 to the power of 1/7, to the nearest integer? 4 What is 70180 to the power of 1/2, to the nearest integer? 265 What is 46184 to the power of 1/5, to the nearest integer? 9 What is the third root of 635 to the nearest integer? 9 What is 260 to the power of 1/7, to the nearest integer? 2 What is the ninth root of 23899 to the nearest integer? 3 What is the square root of 615 to the nearest integer? 25 What is the fourth root of 7043 to the nearest integer? 9 What is the square root of 4733 to the nearest integer? 69 What is 531 to the power of 1/2, to the nearest integer? 23 What is 1827 to the power of 1/2, to the nearest integer? 43 What is 1254 to the power of 1/7, to the nearest integer? 3 What is 5736 to the power of 1/6, to the nearest integer? 4 What is the cube root of 1526 to the nearest integer? 12 What is the fourth root of 4457 to the nearest integer? 8 What is the fifth root of 211 to the nearest integer? 3 What is the seventh root of 779 to the nearest integer? 3 What is the square root of 418 to the nearest integer? 20 What is the eighth root of 91 to the nearest integer? 2 What is 37270 to the power of 1/5, to the nearest integer? 8 What is the tenth root of 4000 to the nearest integer? 2 What is the fifth root of 19135 to the nearest integer? 7 What is the third root of 26736 to the nearest integer? 30 What is 901 to the power of 1/3, to the nearest integer? 10 What is the square root of 25513 to the nearest integer? 160 What is 1682 to the power of 1/5, to the nearest integer? 4 What is the tenth root of 950 to the nearest integer? 2 What is 28494 to the power of 1/3, to the nearest integer? 31 What is the tenth root of 22894 to the nearest integer? 3 What is the third root of 9557 to the nearest integer? 21 What is the square root of 2304 to the nearest integer? 48 What is 14663 to the power of 1/2, to the nearest integer? 
121 What is the third root of 13744 to the nearest integer? 24 What is the square root of 5342 to the nearest integer? 73 What is 10732 to the power of 1/6, to the nearest integer? 5 What is 54549 to the power of 1/9, to the nearest integer? 3 What is 4556 to the power of 1/2, to the nearest integer? 67 What is 110 to the power of 1/7, to the nearest integer? 2 What is the square root of 1896 to the nearest integer? 44 What is the sixth root of 107035 to the nearest integer? 7 What is 32739 to the power of 1/10, to the nearest integer? 3 What is the sixth root of 7392 to the nearest integer? 4 What is 1526 to the power of 1/7, to the nearest integer? 3 What is 818 to the power of 1/8, to the nearest integer? 2 What is 79819 to the power of 1/9, to the nearest integer? 4 What is 7934 to the power of 1/10, to the nearest integer? 2 What is 11617 to the power of 1/10, to the nearest integer? 3 What is 2325 to the power of 1/8, to the nearest integer? 3 What is 45151 to the power of 1/7, to the nearest integer? 5 What is 1955 to the power of 1/2, to the nearest integer? 44 What is the cube root of 39510 to the nearest integer? 34 What is 2264 to the power of 1/3, to the nearest integer? 13 What is the third root of 21528 to the nearest integer? 28 What is the third root of 831 to the nearest integer? 9 What is 4737 to the power of 1/3, to the nearest integer? 17 What is 1012 to the power of 1/6, to the nearest integer? 3 What is 3370 to the power of 1/5, to the nearest integer? 5 What is 85965 to the power of 1/9, to the nearest integer? 4 What is 451 to the power of 1/2, to the nearest integer? 21 What is 63799 to the power of 1/3, to the nearest integer? 40 What is 39684 to the power of 1/10, to the nearest integer? 3 What is the square root of 11635 to the nearest integer? 108 What is 682 to the power of 1/6, to the nearest integer? 3 What is the square root of 94527 to the nearest integer? 307 What is the cube root of 34583 to the nearest integer? 33 What is the third root of 51999 to the nearest integer? 37 What is the square root of 16932 to the nearest integer? 130 What is 1012 to the power of 1/2, to the nearest integer? 32 What is the square root of 27440 to the nearest integer? 166 What is the sixth root of 1815 to the nearest integer? 3 What is 3604 to the power of 1/2, to the nearest integer? 60 What is the cube root of 12623 to the nearest integer? 23 What is the square root of 8703 to the nearest integer? 93 What is the third root of 22238 to the nearest integer? 28 What is the cube root of 168640 to the nearest integer? 55 What is 34535 to the power of 1/2, to the nearest integer? 186 What is 91756 to the power of 1/8, to the nearest integer? 4 What is the cube root of 1260 to the nearest integer? 11 What is the seventh root of 1081 to the nearest integer? 3 What is the third root of 229 to the nearest integer? 6 What is 645 to the power of 1/7, to the nearest integer? 3 What is the square root of 62147 to the nearest integer? 249 What is the fourth root of 63566 to the nearest integer? 16 What is 418 to the power of 1/2, to the nearest integer? 20 What is the third root of 7427 to the nearest integer? 20 What is the square root of 92213 to the nearest integer? 304 What is the seventh root of 1365 to the nearest integer? 3 What is the cube root of 3530 to the nearest integer? 15 What is 20327 to the power of 1/2, to the nearest integer? 143 What is the ninth root of 6037 to the nearest integer? 3 What is 2122 to the power of 1/2, to the nearest integer? 
46 What is 233 to the power of 1/2, to the nearest integer? 15 What is 2171 to the power of 1/2, to the nearest integer? 47 What is the square root of 7262 to the nearest integer? 85 What is the third root of 1329 to the nearest integer? 11 What is 429 to the power of 1/2, to the nearest integer? 21 What is 11855 to the power of 1/2, to the nearest integer? 109 What is 89191 to the power of 1/3, to the nearest integer? 45 What is the cube root of 5465 to the nearest integer? 18 What is the eighth root of 3481 to the nearest integer? 3 What is the tenth root of 2627 to the nearest integer? 2 What is the square root of 10477 to the nearest integer? 102 What is the eighth root of 79664 to the nearest integer? 4 What is 2531 to the power of 1/4, to the nearest integer? 7 What is the cube root of 15404 to the nearest integer? 25 What is the square root of 11930 to the nearest integer? 109 What is the square root of 1405 to the nearest integer? 37 What is the fourth root of 37316 to the nearest integer? 14 What is the eighth root of 1951 to the nearest integer? 3 What is 153220 to the power of 1/10, to
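All of these drills reduce to the same computation: raise x to the power 1/n and round. A minimal Python sketch (mine, not part of the original exercise set; the test values are taken from the questions above):

# Round the nth root of x to the nearest integer.
# Floating-point arithmetic is accurate enough at these magnitudes,
# though exact integer methods would be safer for very large x.
def nearest_root(x: int, n: int) -> int:
    return round(x ** (1.0 / n))

print(nearest_root(14690, 3))   # 24
print(nearest_root(110850, 3))  # 48
print(nearest_root(16069, 2))   # 127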
Mid
[ 0.621176470588235, 33, 20.125 ]
IN THE COURT OF APPEALS OF THE STATE OF IDAHO Docket No. 42383 STATE OF IDAHO, ) 2015 Unpublished Opinion No. 601 ) Plaintiff-Respondent, ) Filed: August 27, 2015 ) v. ) Stephen W. Kenyon, Clerk ) CYNTHIA EVON DENT, ) THIS IS AN UNPUBLISHED ) OPINION AND SHALL NOT Defendant-Appellant. ) BE CITED AS AUTHORITY ) Appeal from the District Court of the Third Judicial District, State of Idaho, Canyon County. Hon. George A. Southworth, District Judge. Judgment of conviction for possession of a controlled substance, affirmed. Sara B. Thomas, State Appellate Public Defender; Brian R. Dickson, Deputy Appellate Public Defender, Boise, for appellant. Hon. Lawrence G. Wasden, Attorney General; Jessica M. Lorello, Deputy Attorney General, Boise, for respondent. ________________________________________________ MELANSON, Chief Judge Cynthia Evon Dent appeals from her judgment of conviction for possession of a controlled substance, alleging her Fifth Amendment rights were violated at trial. For the reasons set forth below, we affirm. An officer was dispatched to Dent’s residence to perform a welfare check in response to a caller expressing concerns for Dent’s safety. While performing the welfare check, officers discovered two small plastic bags that appeared to contain methamphetamine. After admitting she was the sole occupant of the residence, Dent was placed under arrest for possession of methamphetamine, and the officer asked Dent if she had anything illegal on her person that the officer “needed to know about.” Dent disclosed that she possessed a methamphetamine pipe, which she provided to the officer. The pipe and the small plastic bags tested positive for 1 methamphetamine. Dent was charged with possession of a controlled substance. I.C. § 37-2732. She pled not guilty and proceeded to trial. At trial, an officer was questioned about the small plastic bags that were found while searching Dent’s residence. The following exchange took place between the prosecutor and the officer: Q. Did you ask [Dent] about those baggies? A. I did. Q. What did she say about those baggies? A. Initially stated that she didn’t have--she didn’t know what they were. Thereafter, I had read [Dent] her . . . rights and she didn’t want to talk about them. Dent did not object to the officer’s comment. A jury found Dent guilty of possession of methamphetamine. Dent appeals, alleging that her Fifth Amendment rights were violated when the officer commented on her exercise of her right to remain silent. Generally, issues not raised below may not be considered for the first time on appeal. State v. Fodge, 121 Idaho 192, 195, 824 P.2d 123, 126 (1992). Idaho decisional law, however, has long allowed appellate courts to consider a claim of error to which no objection was made below if the issue presented rises to the level of fundamental error. See State v. Field, 144 Idaho 559, 571, 165 P.3d 273, 285 (2007); State v. Haggard, 94 Idaho 249, 251, 486 P.2d 260, 262 (1971). In State v. Perry, 150 Idaho 209, 245 P.3d 961 (2010), the Idaho Supreme Court abandoned the definitions it had previously utilized to describe what may constitute fundamental error. 
The Perry Court held that an appellate court should reverse an unobjected-to error when the defendant persuades the court that the alleged error: (1) violates one or more of the defendant’s unwaived constitutional rights; (2) is clear or obvious without the need for reference to any additional information not contained in the appellate record; and (3) affected the outcome of the trial proceedings. Id. at 226, 245 P.3d at 978. Dent alleges the officer’s testimony (that Dent did not want to talk about the small plastic bags after being read her Miranda 1 rights) violated her unwaived Fifth Amendment right against self-incrimination. The Fifth and Fourteenth Amendments to the United States Constitution, as well as Article I, Section 13 of the Idaho Constitution, guarantee a criminal defendant the right not to be compelled to testify against himself or herself. State v. Ellington, 151 Idaho 53, 60, 1 See Miranda v. Arizona, 384 U.S. 434 (1966). 2 253 P.3d 727, 734 (2011). The right to remain silent bars the prosecution from commenting on a defendant’s invocation of that right. Id. A prosecutor may not use evidence of post-arrest, post- Miranda silence for either impeachment purposes or as substantive evidence of guilt because of the promise present in a Miranda warning. Id. If a prosecutor is allowed to introduce evidence of silence, for any purpose, then the right to remain silent guaranteed in Miranda becomes so diluted as to be rendered worthless. State v. Parker, 157 Idaho 132, 147, 334 P.3d 806, 821 (2014); State v. White, 97 Idaho 708, 714-15, 551 P.2d 1344, 1350-51 (1976). The state argues that the officer’s comment was not intended to imply guilt and, therefore, did not violate Dent’s Fifth Amendment rights. The state explains that its purpose in questioning the officer about the small plastic bags was to establish that the bags were the basis of arrest, which led to Dent disclosing the pipe that was on her person. While the prosecutor may not have intended the line of questioning to create an implication of guilt based upon Dent’s invocation of her right to remain silent, we are not persuaded that the officer had any other purpose for making the comment. Had the officer testified that Dent “stated that she didn’t have--she didn’t know what they were” and left out the second comment that “thereafter, I had read [Dent] her Miranda rights and she didn’t want to talk about them,” the testimony would have had the same evidentiary value without commenting on Dent’s silence. The comment about Dent’s silence did not establish a connection between Dent and the baggies, which led to Dent’s arrest. In a similar case, when an officer testified that the defendant’s interview terminated because the defendant remained silent, the Idaho Supreme Court held that the state provided “no convincing reason why the jury needed to be informed of how the second interview terminated.” Parker, 157 Idaho at 147, 334 P.3d at 821. The same applies here. The state has provided no convincing reason why the jury needed to be informed that Dent refused to speak about the plastic bags after being read her Miranda rights. Even if the officer’s comment was unsolicited, the officer’s actions are imputed to the state. See Ellington, 151 Idaho at 61, 253 P.3d at 735. Because Dent has shown that her Fifth Amendment right was violated, which is clear from the record without the need to reference any additional information, she has established the first two prongs of the Perry fundamental error analysis. 
The remaining question is whether Dent met her burden of proving that the officer’s comment affected the outcome of the trial. Dent claims there is a reasonable possibility that the 3 officer’s comment affected the outcome of the trial because one of the jurors could have concluded, absent the implication of guilt, that the baggies had been left in her house by another person without Dent’s knowledge. According to Dent, there is a reasonable possibility that the officer’s comment may have swayed at least one juror who might have thought the state did not prove Dent’s knowledge, with regard to the plastic bags, beyond a reasonable doubt. Assuming Dent is correct, she still has not proven that the officer’s comments affected the outcome of the trial. After she was told she was under arrest, Dent openly disclosed that she had a methamphetamine pipe on her person. The existence of that pipe, which contained methamphetamine residue, was sufficient for the jury to find her guilty of possession of methamphetamine. Accordingly, even if Dent was prejudiced with regard to her knowing possession of the plastic bags, she was not prejudiced with regard to the methamphetamine pipe she possessed. Therefore, Dent has not met her burden of proving that the officer’s improper comment affected the outcome of the trial. Dent has met her burden of proving that her Fifth Amendment rights were violated at trial. However, Dent did not show that the violation affected the outcome of the trial. Therefore, Dent’s judgment of conviction for possession of a controlled substance is affirmed. Judge GUTIERREZ and Judge GRATTON, CONCUR. 4
Mid
[ 0.582887700534759, 27.25, 19.5 ]
(10 June - 12:39 PM) I am trying to get a biobank up and running for stool, urine, and blood samples from patients. These samples will be used for future studies of the microbiome and molecular biomarkers. We are still figuring out the best way to go about doing this; if anyone has information or has been involved in such work, please message me. "I am among those who think that science has great beauty. A scientist in his laboratory is not only a technician: he is also a child placed before natural phenomena which impress him like a fairy tale."
Mid
[ 0.6236080178173721, 35, 21.125 ]
The AstrOlympics project consists of a series of posters, videos, and a website. All of these materials are free to download and use. Science educators may contact the project via the website to request a small number of available hard copies. AstrOlympics will also be distributed through NASA and International Astronomical Union networks. The 2018 Olympic Games will be held in South Korea between February 9-25. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations. AstrOlympics provides brief explanations of the physical concepts and then compares examples from common everyday experiences, Olympic events, and discoveries from space made with Chandra and other observatories. For example, the rotation section compares the spin of an ice skater to that of a washing machine and to the rotation of a spinning dead star.
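To make the rotation comparison concrete, here is a small Python sketch (the figures are rough illustrative values I chose, not numbers from the AstrOlympics materials):

# Compare rotation rates in revolutions per second.
def rev_per_sec(rpm: float) -> float:
    return rpm / 60.0

skater = rev_per_sec(180)    # a fast ice-skater spin, roughly 180 rpm
washer = rev_per_sec(1200)   # a typical washing-machine spin cycle
pulsar = 716.0               # fastest known millisecond pulsar, ~716 Hz

print(f"skater: {skater:.0f} rev/s, washer: {washer:.0f} rev/s, pulsar: {pulsar:.0f} rev/s")
print(f"the pulsar spins about {pulsar / washer:.0f}x faster than the washer")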
Mid
[ 0.576344086021505, 33.5, 24.625 ]
Introduction ============ Living donor liver transplantation (LDLT) has been adopted as the preferred method of treatment for adults with end stage liver disease due to shortage of the available cadaveric donors \[[@B1]\]. In LDLT, injury to the hepatic arterial system could lead to severe complications like hepatic artery thrombosis (HAT), which may result in graft loss in the recipient or significant reduction in the blood supply to the remaining liver in the donor or both \[[@B2]\]. Holbert et al. \[[@B3]\] had reported that the segment 4 of the liver is at a greater risk for development of HAT as a post-transplant complication. Michels \[[@B4]\] had defined the artery to the segment 4 of the liver as the middle hepatic artery (MHA). Hence it may be suggested that a detailed knowledge regarding the anatomy of the MHA could have considerable implications in the preservation/reconstruction of the MHA, which could reduce the incidence of HAT as a postoperative complication of LDLT. The purpose of this paper was to characterize the origin of the MHA in detail and classify the variations observed through the anatomical study of the cadaveric livers, and also to analyze the clinical significance (if any) of the findings in relation to postoperative vascular complications in LDLT. Materials and Methods ===================== The study was conducted in the Department of Anatomy, Lady Hardinge Medical College and Associated Hospitals, New Delhi, India. The author obtained ethical approval from the Ethics Committee of the above mentioned institution. The livers used in this study were retrieved from human cadavers aged 55-78 years (male, 77; female, 48), obtained from the clinical wards of associated Smt. Sucheta Kriplani Hospital, New Delhi, India. Prior to procurement of the cadavers written informed consent was obtained from the family members/relatives in each case. Only those cadavers, with no history of any liver disease in their medical records, were selected for this study. Resections were performed very carefully, so as to include the liver, celiac trunk, left gastric artery (LGA), lesser omentum, superior mesenteric artery (SMA) and head of the pancreas, in each of the study specimen. A total of 125 adult livers, without macroscopic abnormalities were examined. According to Couinaud\'s \[[@B5]\] and Bismuth\'s \[[@B6]\] liver segmentation, the left medial sub-segment of the liver which lies medial to the falciform ligament, has been defined as the segment 4. Saxena et al. \[[@B7]\] quoting the works of Hjortsjo \[[@B8]\] and Mizumoto and Suzuki \[[@B9]\], had reported that topographically the quadrate lobe constitutes the major part of the left medial sub-segment of the liver. However the quadrate lobe has also been considered as a subdivision of the right anatomical lobe of the liver \[[@B10]\]. Hence it was postulated that the quadrate lobe could possibly be supplied by more than one branch from the hepatic arterial system. Dissection was started from the inferior aspect of the hepatic hilum and subsequently all the branches of the celiac trunk, SMA and the hepatic arterial system were identified. Meticulous dissection was performed in each specimen and the arterial branches supplying the quadrate lobe of liver were exposed by carefully removing the liver tissue surrounding these branches. The distribution of each arterial branch supplying the quadrate lobe was noted and subsequently the area supplied by these arteries was estimated. 
For the present study, that branch of the hepatic arterial system which formed the dominant arterial supply of the quadrate lobe of liver was defined as the artery to the hepatic segment 4 or the MHA. Further, an artery arising from the SMA and reaching into the right lobe of the liver was defined as the accessory right hepatic artery (aRHA), and an artery arising from the LGA and reaching into the left lobe of the liver was defined as the accessory left hepatic artery (aLHA). The MHA was identified in each of the liver specimen, the origin of the artery was traced, and the different variations observed in the origin of the MHA were schematically studied across all the study specimen. Results ======= At first the branching pattern of the celiac trunk and the SMA was noted in all the dissected specimens and in 123 cases (98.4%) classical trifurcation of the celiac trunk was observed, whereas in two specimens (1.6%) bifurcation of the trunk was observed (splenogastric trunk) with the common hepatic artery arising from the SMA (hepato-mesenteric trunk) ([Fig. 1](#F1){ref-type="fig"}). Subsequently the branches of the hepatic arterial system were taken into consideration and in general it was observed that the right hepatic artery (RHA) and the left hepatic artery (LHA) arose as branches of the proper hepatic artery. The LHA divided into two sub-branches; medial and lateral segmental arteries. The medial segmental artery (MSA) supplied the quadrate lobe and anterior region of the left lobe. The lateral segmental artery (LSA) further divided into superior and inferior divisions to supply the left lobe of liver. The inferior division of the LSA also supplied the quadrate lobe. The RHA divided into two sub-branches: anterior and posterior segmental arteries, which further divided into superior and inferior divisions to supply the right lobe of liver. The inferior division of the anterior segmental artery (ASA) also supplied the quadrate lobe of the liver. Hence it was observed that the quadrate lobe in all the livers was supplied by three arteries; MSA and LSA (sub-branches of the LHA) and the ASA (sub-branch of the RHA), with either the MSA or the ASA being the dominant one. Now the MHA was identified in each specimen as the dominant artery which supplied the quadrate lobe of liver and it was observed that the MHA arose as a sub-branch of LHA in fifty-nine specimens (47.2%) and as a sub-branch of RHA in sixty-six specimens (52.8%) ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). This was followed by the identification of the accessory hepatic arteries when present in the study specimens. It was noted that the aLHA was present in sixteen cases (12.8%), aRHA in seventeen cases (13.6%) and both the accessory hepatic arteries were present in three cases (2.4%) ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). Based on the variations observed in the origin of the MHA, while taking into consideration the presence of accessory hepatic arteries (aLHA/aRHA/both), in the present study the hepatic arterial anatomy was classified into six types ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). In type I hepatic arterial configuration, the dominant artery to the quadrate lobe was ASA and in type II it was MSA. Type I and type II hepatic arterial configurations were observed in 37.6% and 33.6% cases respectively and they are the normal anatomical pattern observed in most of the adult livers, without the presence of any accessory hepatic arteries. 
In type III hepatic arterial configuration, the dominant artery to the quadrate lobe was MSA, the aRHA was present and it was observed in 13.6% of livers. In type IV, the dominant artery to the quadrate lobe was ASA, the aLHA was present and it was noted in 11.2% of livers. In type V, the dominant artery to the quadrate lobe was ASA, both the accessory hepatic arteries were present (aRHA and aLHA) and it was observed in 2.4% of the livers. In type VI hepatic arterial configuration, the dominant artery to the quadrate lobe was ASA, common hepatic artery (CHA) was arising from the hepato-mesenteric trunk, aLHA was present and it was noted in 1.6% of the livers ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). It was further observed that in seventeen specimens (13.6%) under study, in the presence of aRHA, MHA arose as a sub-branch of LHA (type III in present study) and in nineteen specimens (15.2%), in the presence of aLHA (irrespective of the presence of aRHA), MHA arose as a sub-branch of RHA (type IV to type VI in the present study) ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). Discussion ========== The observations of the present study with regards to the branching pattern of the celiac trunk are different from most of the previous reports ([Table 2](#T2){ref-type="table"}) \[[@B11], [@B12], [@B13], [@B14], [@B15]\]. However the findings of the present study are in accordance with the more recent studies like Katsume et al. \[[@B16]\] and Ugurel et al. \[[@B17]\] ([Table 2](#T2){ref-type="table"}). It is noteworthy that the observations of the present study are different from the findings of Chitra \[[@B18]\] and Prakash et al. \[[@B19]\] which similar to mine were conducted on Indian cadavers ([Table 2](#T2){ref-type="table"}). Such disparity might be attributed to the difference in the sample size of the cadavers and the difference in the study population groups. In the present study bifurcation of the celiac trunk were observed in two livers (1.6%), where the celiac trunk gave rise to spleno-gastric trunk and the CHA arose from SMA (as the hepato-mesenteric trunk) ([Fig. 1](#F1){ref-type="fig"}). Such variation in the branching of the celiac trunk corresponds to type 5 as per the classification proposed by Adachi \[[@B14]\]. It may be highlighted that my findings regarding variations in the branching of celiac trunk (type VI in this study) ([Table 2](#T2){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}) are in accordance with the observations of Hiatt et al. \[[@B20]\] who had reported on the basis of 1,000 cases of liver transplantation, the incidence of hepato-mesenteric trunk (CHA arising from SMA) as 1.5%. Further it may be noted that the incidence of hepato-mesenteric trunk as reported in the present study is close to the findings of Nayak and Vasudeva \[[@B21]\] who had observed that the CHA arose from SMA in 3% cases. Arterial supply to the hepatic segment 4 was first described by Healey et al. \[[@B22]\] who had reported that the artery was arising in equal proportions from the RHA and the LHA. Michels \[[@B4]\] had defined the artery going to the segment 4 as the MHA that courses in the umbilical fossa and also observed that the artery was arising from RHA and LHA in equal proportions ([Table 3](#T3){ref-type="table"}). Since then majority of the authors have defined the artery to the segment 4 of the liver as the MHA. Onishi et al. 
\[[@B23]\] had reported that MHA was arising predominantly from the LHA (61.5%) and in most of the remaining cases from the RHA (27.5%). Futara et al. \[[@B24]\] noted that the MHA was present in 47.3% of cases and was arising in equal proportion from the LHA (20%) and RHA (20%). Kishi et al. \[[@B25]\] observed that MHA was arising from the LHA in 62.5% cases and from the RHA in 37.5% of the cases, which was similar to the findings of Onishi et al. \[[@B23]\]. Wang et al. \[[@B26]\] had defined MHA as the artery which arose from its artery of origin at the hepatic hilum and gave off branch/branches to the hepatic segment 4 at the umbilical fissure. Accordingly they had reported the presence of MHA in 103 cases (71%), and in contrast to previous studies, the MHA arose predominantly from the RHA (58.3%) and in most of the remaining cases from LHA (36.9%) ([Table 3](#T3){ref-type="table"}). Yoshimura et al. \[[@B27]\] had described the artery to the hepatic segment 4 as the left medial artery (LMA). They had classified the LMA into three types: type I arising from the distal part of the LHA on reaching the umbilical portion of the portal vein (37.2%), type II arising from the proximal part of the LHA before reaching the umbilical portion of the portal vein (35.8%), and type III arising from RHA (27%). Kamel et al. \[[@B28]\] identified the artery supplying the hepatic segment 4 in 39 cases and it arose from the RHA in 62.5% patients. Jin et al. \[[@B29]\] had reported that the artery to segment 4 was arising from the RHA in 53.2% liver specimen, and from the LHA in 32.3% cases ([Table 3](#T3){ref-type="table"}). To summarize the observations made by different authors in the previous studies, it may inferred that the origin of the MHA showed a high degree of variability and in majority of the cases, the artery was arising from either the LHA or the RHA. In accordance with the observations made by different authors in the previous studies, it was noted in the present study that the origin of the MHA showed considerable variability and arose from either LHA or the RHA ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). Moreover the findings of this study were similar to the observations of Kamel et al. \[[@B28]\], Jin et al. \[[@B29]\], and Wang et al. \[[@B26]\] in that the MHA in this study arose predominantly as a branch of RHA (52.8% specimens) and from the LHA (47.2% specimens) in the remaining cases ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). The incidence of aLHA arising as a branch of LGA as observed in the present study ([Table 4](#T4){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}) is higher than what has been reported previously ([Table 4](#T4){ref-type="table"}) \[[@B20], [@B30], [@B31]\]. However the incidence of aRHA arising as a branch of SMA in the present study is close to recent reports but considerably higher than the findings of Daseler and Anson \[[@B30]\] ([Table 4](#T4){ref-type="table"}). The occurrence of both aLHA and aRHA as observed in the present study is very much similar to the findings of Hiatt et al. \[[@B20]\] but higher than the observations of Koops et al. \[[@B31]\] ([Table 4](#T4){ref-type="table"}). Liver surgery is largely an anatomic exercise and Couinaud \[[@B32]\], a French surgeon and anatomist made significant contributions towards understanding the hepatic arterial configuration. 
According to Couinaud \[[@B33]\], embryologically there are 3 lobes in the early stage of hepatic formation, each supplied by an embryonic artery of its own. The lateral sector (segment 2) is supplied by the embryonic LHA, the medial and anterior sectors (segments 3, 4, 5, and 8) by the embryonic MHA and the posterior sectors (segments 6 and 7) by the embryonic RHA. The embryonic LHA is derived from LGA, the embryonic MHA from the CHA and embryonic RHA from the SMA ([Fig. 2](#F2){ref-type="fig"}) \[[@B33]\]. In the present study, six types of hepatic arterial configuration were observed and it was noted that the MHA arose as a sub-branch of either RHA or LHA ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}), both of which happens to be the branches of CHA, which happens to be the artery from which MHA arises in the embryonic life \[[@B33]\]. This observation may explain the variations in the origin of MHA, as documented by different authors in previous studies ([Table 3](#T3){ref-type="table"}) and as observed by the author of the present study ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}). Saxena et al. \[[@B7]\] reported that the quadrate lobe belongs functionally to the left lobe of the liver, hence during right lobe LDLT, preservation of the LHA and the MHA is critical for adequate blood supply to the remaining left lobe in the donor \[[@B34]\]. Similarly during left lobe LDLT, reconstruction of LHA and MHA, ensures adequate blood supply and thereby viability of the graft \[[@B26]\]. Subsequently it may be suggested that when MHA arises as a sub-branch of LHA in the presence of aRHA (type III in present study) ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}), during either right lobe/left lobe LDLT, the preservation/reconstruction of MHA as per requirement would require no modification in the standard surgical procedure. Thus the risk of intra-operative injury to MHA would be less, thereby reducing the chances of development of post-operative HAT \[[@B26]\]. However when MHA arises as a sub-branch of RHA in the presence of aLHA (type IV to type VI in present study) ([Table 1](#T1){ref-type="table"}, [Fig. 1](#F1){ref-type="fig"}), during either right lobe/left lobe LDLT, the preservation/reconstruction of the MHA would necessitate a complex surgical technique. During right lobe LDLT, as the right lobe graft would include the RHA, hence it would be very difficult to preserve the MHA in the donor, which is essential for the adequate blood supply and hence survival of the remaining left lobe in the donor. Similarly during left lobe LDLT, the left lobe graft should include the MHA along with the LHA for proper reconstruction of both the arteries and survival of the graft in the recipient. As the RHA has to be preserved for the survival of the remaining right lobe in the donor, hence it would be very difficult to harvest the MHA in the left lobe graft, thereby threatening the survival of the graft in the recipient. Hence it may be opined that in the presence of an aLHA in the donor, during right lobe/left lobe LDLT, the risk of intra-operative injury to the MHA is very high, which could be associated with an increased incidence of post-operative HAT, which in turn may lead to reduced blood supply to the quadrate lobe of liver. Recent literature suggests that there has been a preference among surgeons to use livers with an accessory hepatic artery for LDLT. Sakamoto et al. 
\[[@B35]\] had opined that using left sided liver graft with an aLHA in the donor would be more advantageous for arterial reconstruction in the recipient, as compared to grafts with the normal LHA, due to thicker and longer arterial branches in the accessory form of the artery. Aramaki et al. \[[@B36]\] however advocated the use of right sided grafts from donors with an aLHA and left sided grafts from donors with an aRHA, as this would reduce the incidence of postoperative vascular complications in donors. Sakamoto et al. \[[@B35]\] had performed left lobe LDLT in the presence of an aLHA in 24 cases and reported the occurrence of hepatic arterial occlusion in 3.2% of the cases. Aramaki et al. \[[@B36]\] have documented the use of right liver graft in the presence of an aLHA in 16 cases and no vascular complication was reported in any of the donors. However the author of the present study may opine that it could be useful to recognize the variations in the origin of MHA in the donor, particularly in the presence of aLHA, in order to reduce the risk of vascular complications during right/left lobe LDLT. The author sincerely acknowledge that available modern surgical techniques does enable surgeons to adopt strategies that is appropriate for most of the anatomical variations in the hepatic vascular system, nevertheless preoperative evaluation of the hepatic arterial configuration with the help of three dimensional (3D) angiography from multidetector-row computed tomography could enable full appreciation of the variations in the hepatic arterial system in the donor \[[@B37]\]. The author does accept that there are limitations to the conclusions drawn from this study, as it has been conducted on cadaveric liver specimen. Hence cautious interpretation is suggested with regards to the observations made in this study. In summary, based on the observations made from the cadaveric study, it was evident that in the presence of an aLHA in an adult donor, the MHA would arise as a sub-branch of RHA. This configuration could lead to difficulty in preservation of the MHA (essential for the survival of the remaining left lobe in the donor) during right lobe LDLT and in harvesting the MHA (essential for the survival of the left lobe graft in the recipient) during left lobe LDLT. Thus presence of an aLHA in the donor could possibly be associated with an increased risk of intra-operative injury to the MHA during right/left lobe LDLT. This may subsequently lead to serious post-operative vascular complication like HAT, as it has been reported that hepatic segment 4 is selectively at a greater risk of developing HAT as compared to other segments and majority of the authors have described MHA as the artery going to supply the hepatic segment 4. The author express heartfelt gratitude to all the senior consultants, resident doctors and technicians of the Department of Anatomy, Lady Hardinge Medical College & Smt. Sucheta Kriplani Hospital, New Delhi, India, for their unconditional support and co-operation throughout the study. The author is grateful to the authorities of Lady Hardinge Medical College for funding this study. ![Illustrative representation of the types of hepatic arterial configuration observed in the study based on the variations in the origin of middle hepatic artery (MHA), taking into consideration the presence of accessory hepatic arteries. 
AA, abdominal aorta; aLHA, accessory left hepatic artery; aRHA, accessory right hepatic artery; ASA, anterior segmental artery; CHA, common hepatic artery; CT, celiac trunk; GDA, gastro-duodenal artery; HMT, hepato-mesenteric trunk; LGA, left gastric artery; LHA, left hepatic artery; LSA, lateral segmental artery; MHA, middle hepatic artery; MSA, medial segmental artery; PHA, proper hepatic artery; PSA, posterior segmental artery; RHA, right hepatic artery; SA, splenic artery; SMA, superior mesenteric artery.](acb-47-188-g001){#F1} ![Illustration showing the origin of the three hepatic arteries in embryonic life with the respective Couinaud\'s segments in the liver (represented by numbers) supplied by each of the three hepatic arteries. In this figure, Couinaud segmentation system was followed, which is based on the distribution in the liver of both the portal vein and the hepatic veins. Fissures of the three hepatic veins (portal scissurae) longitudinally divide the liver into four sectors. The planes containing the right and left portal pedicles (hepatic scissurae) transversely divide the sectors into eight segments. AA, abdominal aorta; CA, celiac artery; CHA, common hepatic artery; eLHA, embryonic left hepatic artery; eMHA, embryonic middle hepatic artery; eRHA, embryonic right hepatic artery; HS, hepatic scissura; IVC, inferior vena cava; LF, left fissure; LGA, left gastric artery; LHV, left hepatic vein; LLPV, left lobar portal vein; MF, median fissure; MHV, middle hepatic vein; PV, portal vein; RF, right fissure; RHV, right hepatic vein; RLPV, right lobar portal vein; SA, splenic artery; SMA, superior mesenteric artery.](acb-47-188-g002){#F2} ###### Variations in the dominant arterial supply to the quadrate lobe of liver as observed in the present study ![](acb-47-188-i001) Values are presented as number (%). In the present study that branch of the hepatic arterial system which formed the dominant arterial supply of the quadrate lobe of liver was defined as the middle hepatic artery (MHA). RHA, right hepatic artery; LHA, left hepatic artery; aRHA, accessory right hepatic artery; aLHA, accessory left hepatic artery; CHA, common hepatic artery. ###### Variations in the branching pattern of celiac trunk as reported by previous authors and as observed in the present study ![](acb-47-188-i002) Values are presented as number (%). ###### A summary of the origin of middle hepatic artery (artery to hepatic segment -4) as detailed by the previous authors ![](acb-47-188-i003) RHA, right hepatic artery; LHA, left hepatic artery; PHA, proper hepatic artery; SMA, superior mesenteric artery; GDA, gastroduodenal artery; CHA, common hepatic artery; MDCT, multidetector computerised tomography. ^\*^Dual type, in this type the artery to segment 4 was arising as 2 principal arteries stemming from 2 different origins. ###### Incidence of accessory hepatic arteries as reported by previous authors and as observed in the present study ![](acb-47-188-i004) Values are presented as number (%). aLHA, accessory left hepatic artery; LGA, left gastric artery; aRHA, accessory right hepatic artery; SMA, superior mesenteric artery.
Mid
[ 0.62565445026178, 29.875, 17.875 ]
Are you glad Emma and Ethan are finally giving in to their feelings? Something tells me Sutton isn't too pleased. On the Sept. 12 episode of The Lying Game, Emma (Alexandra Chando) decided not to go to homecoming because Laurel (Allie Gonino) was grounded, but ended up changing her mind when Char (Kirsten Prout) and Mads (Alice Greczyn) forced her to run for queen. And because Emma is such a good person, she convinced the Mercers to let Laurel go to the dance with Justin (Randy Wayne). But the evening took an interesting turn when Emma discovered a picture of Mr. Mercer, Mr. Rybak, and the twins' birth mom from high school — together! Meanwhile, Nisha (Sharon Pierre-Louis) revealed to Mr. Rybak that Eduardo (Rick Malambri) and Mads kissed, but got thrown off her pedestal when she lost her queendom to Emma. In two climactic ending scenes, Emma kissed Ethan (Blair Redford) in front of the entire school and we saw a shot of the twins' mom in a hospital room painting a picture of a night sky — and two little girls. I thought Emma handled the whole homecoming problem very well, as she successfully made everyone happy — even Ethan. I totally give her props for that. Sutton, on the other hand, is being so immature. She asked Emma to pretend to be her, so she had no right to be angry with Emma when she ran for homecoming queen, or even when she made out with her boyfriend. Emma's just following orders — from a girl she barely knows, no less. Sutton's also abusing her relationship with Thayer (Christian Alexander) by kissing him out of envy. She's already been awful to Ethan by not telling her friends about him. I'm surprised he could stand that kind of behavior for so long. She's so catty! As usual, Mr. Rybak is being a freak. He's so possessive of Mads, and I hope Eduardo tells her everything he heard when he was hiding in her closet! He really gives me the chills whenever he's on screen or even when he speaks. (Such a creeper!) I thought it was weird he didn't chastise Eduardo after receiving the picture. I guess he has some secret up his sleeve that he wants Eduardo to be part of. And after seeing the secret picture, I'm really starting to think Mr. Rybak is Sutton and Emma's dad. He seemed to be pretty friendly with Annie Hobbs in the photo, and maybe he's trying to keep some scandal from leaking out. Obviously, that can't be the whole mystery but that idea may be the start of something bigger. I'm also getting more curious about what The Lying Game actually is. That's the title, but we're on the fifth episode and it still hasn't been explained.
Mid
[ 0.615577889447236, 30.625, 19.125 ]
GMAT Problem Solving Next steps In a certain calculus class, the ratio of the number of mathematics majors to the number of students who are not mathematics majors is 2 to 5. If 2 more mathematics majors were to enter the class, the ratio would be 1 to 2. How many students are in the class?
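One standard way to set it up (the multiplier k below is introduced here just for the worked solution): let the class have 2k mathematics majors and 5k non-majors, so the current ratio is 2 to 5 for every value of k. Adding 2 majors leaves the non-majors unchanged, so (2k + 2)/(5k) = 1/2. Cross-multiplying gives 4k + 4 = 5k, hence k = 4. The class therefore currently has 2k = 8 majors and 5k = 20 non-majors, for a total of 28 students. Check: 8/20 = 2/5, and after the two new majors arrive, (8 + 2)/20 = 10/20 = 1/2.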
Mid
[ 0.618736383442265, 35.5, 21.875 ]
[Warning: Mild spoilers about Season 5 of Ray Donovan ahead.] In past seasons, Ray Donovan had dark comedic elements that made the family crime drama somewhat unique in the ways it balanced lightness with its heaviness. While the Showtime series nev… A struggling actor and crook moves to Los Angeles, hoping to score some movie roles, where he meets a gay PI at a party. When a series of female bodies turn up, they find themselves investigating the murders.
Mid
[ 0.5511811023622041, 35, 28.5 ]
Pringle welcomes the start of 2016 General Election Campaign Independent TD Thomas Pringle welcomed the announcement of the dissolution of the 31st Dáil and the start of the General Election 2016 campaign saying he looks forward to meeting as many people as possible throughout the Donegal constituency in the weeks to follow. ‘It might be the shortest election campaign in Irish history but it’s been a long time for people in Donegal living under the anti-rural policies of Fine Gael and Labour. My core message to people on the campaign trail will echo what constituents have been telling me all along; that we want a community-centred society with an economy that serves the people, not people serving an economy’ says Pringle. ‘To generate this kind of society I will prioritise political reform by holding a referendum to keep water in public ownership, to enshrine economic, social and cultural rights into our constitution as well as people-initiated referenda. These referenda will start the process of changing the culture of our government placing the interests of the people first, not corporate interests.’ ‘The next Dáil must implement rural-friendly policies like broadband roll-out prioritising remote areas, rural-friendly job creation in sectors like biomass alongside a moratorium on wind farming to protect communities. We need to see a restoration of income supports and social welfare payments to combat fuel, food and child poverty rates which have sky-rocketed since FG/Labour made drastic cuts to state services.’ ‘People are dealing with greater uncertainty over access to healthcare in Donegal. We can address this by increasing incentives to retain medical staff in the county and by restoring GP supports as part of the development of a primary care model with universal healthcare a central component. I want to see much more resources provided for elderly mental health services, disability supports and all frontline services that were drastically cut by FG/Labour.’ ‘Other big concerns for people include access to childcare and educational supports like SNAs and resource hours in schools, greater disability access in public transport and the upgrading of road infrastructure to attract jobs to Donegal. These are all issues I have continuously campaign for and will continue to address if re-elected to the next Dáil.’ ‘Ireland needs to fight harder for our fair share of natural resources at the EU level in relation to fishing quotas. We deserve a fairer share of our natural resources which could in turn create local jobs in Donegal.’ ‘It has been an honour to serve as an Independent TD for Donegal since 2011 and with the support of the people of Donegal I will be able to continue to represent the county where I will keep Donegal on the political agenda’ concludes Pringle.
High
[ 0.6744730679156901, 36, 17.375 ]
Congenital myopathy with type 2A muscle fiber uniformity and smallness. We describe a 12-year-old girl with congenital myopathy. ATPase histochemical reactions and immunocytochemical analysis of muscle fiber-type composition with monoclonal antibodies against slow, fast (2A and 2B) and fetal myosin demonstrate that this congenital disease is characterized by type 2A muscle fiber uniformity and smallness. This is an unusual feature for a congenital myopathy in which the fiber type predominance, when present, is confined to type I.
High
[ 0.673521850899742, 32.75, 15.875 ]
Two United Nations monitors of children's rights are set to call on families living in subdivided homes, and on other underprivileged and special-needs children, for a glimpse at their life in Hong Kong. It is the first overseas trip the pair are making in their capacity as members of the 18-strong UN Committee on the Rights of the Child, and was made possible by the invitation of a local NGO. The visit takes place ahead of the Children's Rights Convention in Geneva next month, where the Hong Kong government is expected to present a report on the topic, and will then receive comments and suggestions from the UN. Committee chairwoman Kirsten Sandberg and rapporteur Maria Herczog, an expert researcher, arrived on Monday for the week-long stay, and are due to meet more than 40 groups. "It will be very interesting to meet the children and talk with them directly, to hear about their experiences and their views about being children in Hong Kong," Sandberg said yesterday before the first of their meetings. The invitation to visit, issued by the non-profit Hong Kong Committee on Children's Rights, was the first the pair had received since getting elected a few years ago to the UN committee. Herczog said it was rare for NGOs to invite UN members, as they had to shoulder all the expenses of the visit. "It is also important for us to meet as many professionals as we can who are working with children and providing children's services, who wouldn't come to Geneva to meet us otherwise." Last night, close to 50 children and teenagers from 12 groups met the pair, drawing attention to wide-ranging issues including the financial pressure of buying textbooks, ethnic-minority rights, suicides related to school stress and hopes of getting a decent home. The children read out speeches that they had prepared themselves. "My wish is to live with my mum and little sister and brother in a permanent home," said eight-year-old Hilda Cheung, who lives at the Caritas family crisis centre in Kowloon Bay to avoid violence at home. Hilda's sister, five-year-old Yoyo, said: "I'd like it if my mum gets to sleep in a real bed, and we can all watch TV together." Numan Gharib, 16, talked about the difficulties in learning Chinese despite being born and raised in Hong Kong, because of a lack of official support. "We would like the government to not focus just on the Chinese language, and to allow ethnic minorities to attend university." A few groups talked about an overemphasis on grades and exam results, as well as a lack of job diversity and prospects for the future. Both Sandberg and Herczog said the visit would help them understand the real situation for children in Hong Kong. Herczog said she was aware of how undeveloped early-childhood services and educational programmes were in Hong Kong. In February, a few Hong Kong NGOs presented reports to the UN at a closed-door Geneva meeting. The Hong Kong committee said in its report that one in every four children in the city lived in poverty.
Mid
[ 0.644859813084112, 34.5, 19 ]
---
abstract: |
  This paper reports the spectral and timing analyses of the quiescent low-mass X-ray binary (qLMXB) U24 observed during five archived [[*Chandra*]{}]{}/ACIS exposures of the nearby globular cluster NGC 6397, for a total of 350 ks. We find that the X-ray flux and the parameters of the hydrogen atmosphere spectral model are consistent with those previously published for this source. On short timescales, we find no evidence of aperiodic intensity variability, with 90% confidence upper limits during five observations ranging between $<$8.6% rms and $<$19% rms, in the 0.0001–0.1 Hz frequency range (0.5–8.0 keV); and no evidence of periodic variability, with maximum observed powers in this frequency range having a chance probability of occurrence from a Poisson-deviated light curve in excess of 10%. We also report the improved neutron star (NS) physical radius measurement, with statistical accuracy of the order of $\sim$10%: ${\mbox{$R_{\rm NS}$}}= 8.9{\mbox{$^{+ 0.9}_{- 0.6}$}}{\hbox{$\,{\rm km}$}}$ for ${\mbox{$M_{\rm NS}$}}= 1.4{\mbox{$\,M_\odot$}}$. Alternatively, we provide the confidence regions in mass–radius space as well as the best-fit projected radius ${\mbox{$R_{\infty}$}}= 11.9{\mbox{$^{+ 1.0}_{- 0.8}$}}{\hbox{$\,{\rm km}$}}$, as seen by an observer at infinity. The best-fit effective temperature, ${\mbox{$kT_{\rm eff}$}}= 80{\mbox{$^{+ 4}_{- 5}$}}{\mbox{$\,{\rm eV}$}}$, is used to estimate the neutron star core temperature which falls in the range $T_{\rm core} = \left( 3.0 - 9.8 \right) {\mbox{$\times 10^{7}$}}{\mbox{$\rm\,K$}}$, depending on the atmosphere model considered. This makes U24's radius the third most precisely measured NS radius among qLMXBs, after those in $\omega$ Cen and in M13.
author:
- 'Sebastien Guillot$^1$, Robert E. Rutledge$^1$, Edward F. Brown$^2$'
bibliography:
- 'biblio.bib'
title: 'Neutron Star Radius Measurement with the Quiescent Low-Mass X-ray Binary U24 in NGC 6397'
---

Introduction
============

The emission from low-mass X-ray binaries in quiescence (qLMXBs) is routinely studied to provide useful constraints on the physical models of the interior of neutron stars (NSs). The low luminosity (${\mbox{$10^{32}$}}$–${\mbox{$10^{33}$}}{\mbox{$\,{\rm erg\,{\mbox{$\,{\rm s^{-1}}$}}}$}}$, 4–5 orders of magnitude lower than the outburst luminosities) of these objects was first observed in the post-outburst stages of the transient LMXBs Cen X-4 and Aql X-1 [@vanparadijs87], and initially interpreted as a thermal blackbody emission powered by some low-level mass accretion onto the NS surface [@verbunt94]. In an alternate explanation for the energy source of qLMXBs, the luminosity is provided by the energy released during accretion episodes by pressure-sensitive nuclear reactions (electron captures, neutron emission, and pycnonuclear reactions) in the NS deep crust. The theory of deep crustal heating (DCH; @brown98) describes how the accreted matter piles up at the top of the NS surface, forcing the matter underneath to deeper layers of the crust, and how the energy is deposited in the crust. 
The resulting nuclear reaction chain releases $\sim1.5{\mbox{$\,{\rm MeV}$}}$ per accreted nucleon (see [@gupta07] and [@haensel08] for details about crustal heating models), and gives rise to a time-average luminosity directly proportional to the time-averaged accretion rate: $$\langle L\rangle = 9{\mbox{$\times 10^{32}$}}\,\frac{\langle \dot{M} \rangle}{10^{-11}{\mbox{$\rm\,{\mbox{$\,M_\odot$}}\, {\mbox{$\,{\rm yr^{-1}}$}}$}}}\,\frac{Q}{1.5{\mbox{$\rm\,MeV\,amu^{-1}$}}}{\mbox{$\,{\rm erg\,{\mbox{$\,{\rm s^{-1}}$}}}$}}\label{eq:dch}$$ where $Q$ is the average heat deposited in the NS crust per accreted nucleon. It was also suggested that the observed thermal spectrum of qLMXBs is the result of the energy deposited in the crust, heating the NS core during outbursts and thermally re-radiating away from the crust through the NS atmosphere on core-cooling timescales [@brown98]. It is thought that this atmosphere is composed of pure hydrogen since heavy elements gravitationally settle on short timescale ($\sim$ seconds, @romani87 [@bildsten92]) once the accretion from the low-mass evolved companion star onto the NS surface shuts off after an outburst. Models of NS H atmosphere [@rajagopal96; @zavlin96; @mcclintock04; @heinke06], now routinely used for qLMXBs, imply emission-area radii consistent with the entire area of the NS, compared to the ${\mbox{$\,^{<}\hspace{-0.24cm}_{\sim}\,$}}1{\hbox{$\,{\rm km}$}}$ emission-area radii in the previously imposed blackbody approximation [@rutledge99]. Spectral fitting of qLMXBs with such models also leads to the determination of the projected radius (as observed from infinity) defined by ${\mbox{$R_{\infty}$}}= {\mbox{$R_{\rm NS}$}}g_{\rm r}^{-1} = {\mbox{$R_{\rm NS}$}}\left(1 - 2G{\mbox{$M_{\rm NS}$}}/{\mbox{$R_{\rm NS}$}}c^{2}\right)^{-1/2}$, where  is the physical radius of the NS. In the spectra of some qLMXBs in the field of the Galaxy (for example Cen X-4, @rutledge01a, and Aql X-1, @rutledge01b), an additional power-law component, dominating the spectrum above $2{\mbox{$\,{\rm keV}$}}$ and unrelated to the H-atmosphere thermal emission, is observed. Proposed interpretations of this power-law tail include residual accretion onto the NS magnetosphere [@grindlay01a; @cackett05], shock emission via the emergence of a magnetic field [@campana00b], or an intrabinary shock between the winds from the NS and its companion star [@campana04b]. However, recent analyses of the quiescent emission of LMXBs have shown that variations in the non-thermal component are correlated to the variations in the thermal component. This suggests the presence of a variable low-level accretion on the NS (for the LMXB XTE J1701$-$462, @fridriksson10, and for the LMXB Cen X-4, @cackett10). Another characteristic of qLMXBs is the expected lack of strong variability on long or short timescales since their emission is dominated by the thermal radiation from the interior of the NS. While the thermal component is not expected to display intensity variability, other emission mechanisms (like those mentioned in the previous paragraph) may be responsible for intensity and spectral variations [@brown98]. However, recent outburst episodes can generate variations in the quiescent thermal luminosity on days to years timescales, as observed for the LMXB KS 1731-260 (@rutledge02c, following the models described in @ushomirsky01, see also @brown09). 
The DCH/H-atmosphere interpretation has been applied to a large number of historical transient LMXBs and provided radius measurements from high signal-to-noise (S/N) spectra. The following list is, to the best of our knowledge, exhaustive: 4U 1608-522 [@rutledge99], 4U 2129+47 [@rutledge00], Cen X-4 [@campana00a; @rutledge01a], Aql X-1 [@rutledge01b], KS 1731-260 [@rutledge02c], XTE J2123$-$058 [@tomsick04], EXO 1747$-$214 [@tomsick05], MXB 1659$-$29 [@cackett06], 1M 1716$-$315 [@jonker07a], 1H 1905+000 [@jonker07b], 2S 1803$-$245 [@cornelisse07], 4U 1730$-$22 [@tomsick07], EXO 0748$-$676 [@degenaar09], and XTE J1701$-$462 [@fridriksson10]. However, the 10%–50% systematic uncertainty in the distances to qLMXBs in the field directly affects the radius measurement uncertainty. Obtaining precise constraints on the dense matter equation of state (EoS) is the observational motivation for measuring the radii of NSs, requiring $\sim 5\%$ accuracy on the radius to be useful for this purpose [@lattimer04; @steiner10]. The known distances to globular clusters (GCs) and their expected overabundances of X-ray binaries [@hut92] have motivated the search for qLMXBs in the core of GCs. There are 26 GC qLMXBs spectrally identified so far (see @heinke03c and @guillot09a for two complementary lists). However, some of them have poorly constrained radius and temperature measurements. This can be due to low count statistics and/or a large amount of galactic absorption (for example ${\mbox{$N_{\rm H}$}}=1.2\times10^{22}{\mbox{$\rm\,atoms{\mbox{$\,{\rm cm^{-2}}$}}$}}$ for the GC Terzan 5) in their direction which alters the low-energy end (0.1–1 keV) of the spectra where the H-atmosphere spectrum of qLMXBs peaks. Therefore, their identification is regarded as less secure. Most qLMXBs in GCs have low-S/N spectra, and therefore, have rather large uncertainties on their radius measurements ($\sim15\%$ or more), precluding their use for EoS constraints. So far, only a few qLMXBs (in $\omega$ Cen, @gendre03a, in M13, @gendre03b, and X7 in 47 Tuc, @heinke06) have spectra with S/N high enough to provide useful constraints on the dense matter EoS [@webb07; @steiner10]. The close proximity of the globular cluster NGC 6397 ($d \approx 2.5{\mbox{$\,{\rm kpc}$}}$, @harris96 [@hansen07; @strickler09]), the moderately low galactic absorption[^1] (${\mbox{$N_{\rm H}$}}= 0.14{\mbox{$\times 10^{22}$}}{\mbox{$\rm\,atoms{\mbox{$\,{\rm cm^{-2}}$}}$}}$, noted ${\mbox{$N_{\rm H,22}$}}=0.14$ hereafter) in its direction makes it a useful target for the spectral identification of qLMXBs. The discovery observation of the qLMXB U24 [^2] in NGC 6397 provided modest constraints on the NS projected radius: ${\mbox{$R_{\infty}$}}= 4.9{\mbox{$^{+ 14}_{- 1}$}}{\hbox{$\,{\rm km}$}}$ [@grindlay01b Gr01 hereafter]. The proximity of U24 to the GC core requires the use of the [*Chandra X-ray Observatory*]{}’s angular resolution to positionally and spectrally separate the qLMXB from other sources in the crowded GC core. U24 lies at $d_{\rm c}=6.8\,r_{\rm c} \approx 20 \arcsec$ from the GC center (core radius $r_{\rm c} = 0.05\arcmin$ and half-mass radius $r_{\rm HM} = 2.33\arcmin$, NGC 6397 is a core-collapse cluster). The reported effective temperature of U24 was ${\mbox{$kT_{\rm eff}$}}= 57$–$92{\mbox{$\,{\rm eV}$}}$ (90% confidence interval, Gr01). This paper presents the spectral and timing analyses of five archived deep [*Chandra*]{}-ACIS observations of U24, located in GC NGC 6397. 
These long exposures provide the high-S/N data necessary to confirm the qLMXB classification of U24 by obtaining a precise radius estimation. The lack of variability also supports this classification. The organization of this paper is as follows. Section \[sec:red\] describes the data reduction and the analyses. Section \[sec:results\] presents the results of the analyses. Section \[sec:discuss\] provides a discussion of the results and Section \[sec:conclusion\] is a short conclusion.

[rrrrr]{}\[t\]
79 & 2000 Jul 31 15:31:33 & 48.34 & ACIS-I3 (FI) & F\
2668 & 2002 May 13 19:17:40 & 28.10 & ACIS-S3 (BI) & F\
2669 & 2002 May 15 18:53:27 & 26.66 & ACIS-S3 (BI) & F\
7460 & 2007 Jul 16 06:21:36 & 149.61 & ACIS-S3 (BI) & VF\
7461 & 2007 Jun 22 21:44:15 & 87.87 & ACIS-S3 (BI) & VF\

Data Reduction and Analysis {#sec:red}
===========================

[ccccccc]{}\[t\]
79 & 17$^{\rm h}$40$^{\rm m}$41.421$^{\rm s}$ & 0.02 & $-53\deg 40\arcmin 04.73\arcsec$ & 0.02 & – & Gr01\
79 & 17$^{\rm h}$40$^{\rm m}$41.459$^{\rm s}$ & 0.6 & $-53\deg 40\arcmin 04.47\arcsec$ & 0.6 & 128$\sigma$ & This work\
2668 & 17$^{\rm h}$40$^{\rm m}$41.489$^{\rm s}$ & 0.6 & $-53\deg 40\arcmin 04.38\arcsec$ & 0.6 & 152$\sigma$ & This work\
2669 & 17$^{\rm h}$40$^{\rm m}$41.485$^{\rm s}$ & 0.6 & $-53\deg 40\arcmin 04.53\arcsec$ & 0.6 & 149$\sigma$ & This work\
7460 & 17$^{\rm h}$40$^{\rm m}$41.486$^{\rm s}$ & 0.6 & $-53\deg 40\arcmin 04.60\arcsec$ & 0.6 & 302$\sigma$ & This work\
7461 & 17$^{\rm h}$40$^{\rm m}$41.488$^{\rm s}$ & 0.6 & $-53\deg 40\arcmin 04.54\arcsec$ & 0.6 & 228$\sigma$ & This work\

Observations, Source Detection, and Count Extraction {#sec:obs}
----------------------------------------------------

We analyze one archived [*Chandra*]{}/ACIS-I and four archived [*Chandra*]{}/ACIS-S observations of NGC 6397 (Table \[tab:Obs\]). The source detection and the data analysis are performed using the CIAO v4.1.1 package [@fruscione06]. The event files (level=1) are re-processed with the latest calibration files from CALDB v4.1 [@graessle07 with the latest effective area maps, quantum efficiency maps, and gain maps], as recommended in the CIAO Analysis Thread “*Reprocessing Data to Create a New Level=2 Event File*” to include the up-to-date CTI corrections (charge-transfer inefficiency). The re-processed event files are analyzed including counts in the 0.5–8.0 keV energy range. The full-chip light curves do not show evidence of background flares in any of the five observations, allowing use of the full exposure time of each observation. The low flux of the source of interest, U24, allows neglecting pile-up; the count rate of $\sim 0.06$ photons per frame corresponds to a pile-up fraction of less than 2% [^3]. For each observation (ObsID), the source detection is performed with the CIAO [wavdetect]{} algorithm. An exposure map is created using the task [mkexpmap]{} prior to the source detection. The [wavdetect]{} exposure threshold *expthresh* is set to 0.1 and the wavelet detection scales are set to *scales = “1.0 2.0 4.0 8.0”*. Thirty-five sources are detected ($\sigma>3$) on ObsID 2668, 37 sources on ObsID 2669, 66 sources on ObsID 7460, 48 on ObsID 7461, and 37 on the ACIS-I observation ObsID 79. This paper is solely focused on the radius measurement and timing analysis of the qLMXB U24. While the source detection is performed over the whole ACIS chip, the following analysis pertains only to the source U24. Counts are extracted with the CIAO script [psextract]{} around the source position in a circular region of radius $3\arcsec$, which ensures that 98% of the enclosed energy fraction at 1 keV is included [^4]. 
The closest detected source, located at a $10.6\arcsec$ distance from U24, has 21.6 counts (background subtracted) within $1.5\arcsec$. It contributes $\ll0.04$ contamination counts within the extraction radius of U24 (on the longest observation). The background is extracted from an annulus centered at the qLMXB position with inner radius $5\arcsec$ and outer radius $30\arcsec$. Other detected sources within the background annulus are excluded with a $5\arcsec$ radius region, which eliminates 99.8% of source counts in the background region. For the deepest observation (ObsID 7460), 15 counts from other sources are within the extracted background (which contains 6187 counts). In other words, these constraints ensure that $\sim 0.25\%$ of the background counts are due to other sources. Finally, the extraction radius (containing 98% of the ECF) does not require applying a correction to the flux. Following the CIAO Science Thread *“Creating ACIS RMFs with mkacisrmf”*, the response matrix files (RMFs) are recalculated prior to the spectral analysis since the RMFs obtained from [psextract]{} are not suited for ACIS observations with focal plane temperature of –120°C (the usual [mkrmf]{} command does not use the latest calibration available in the case of –120°C ACIS imaging data). In addition, the ancillary response files (ARFs) are also recalculated using the energy grid of the newly obtained RMFs. Overall, the extracted spectra, together with the RMFs and ARFs, are used for the spectral analysis. In those spectra, the effect of background counts can be ignored. Indeed, in the worst case (for ObsID 7460), the number of expected background events accounts for 2.4% of the total number of counts in the extracted region (78.0 background counts out of a total of 3188 counts), so that the background is neglected for the spectral analysis.

Spectral Analysis {#sec:spectra}
-----------------

For each of the five observations, two spectral files are created, one with unbinned events (for fitting with the Cash-statistics, @cash79) and one with binning (for fitting with the $\chi^2$-statistics). For the latter, the bin width in the 0.5–1.5 keV energy range matches the energy resolution of the ACIS-S3 chip, i.e., $\sim$0.15 keV. Above 1.5 keV, four wider bins (0.3 keV, 0.6 keV, and two 3 keV wide spectral bins) are created. In some cases of low count statistics, the last 2 or 3 bins are grouped together to maintain a minimum of 20 counts per bin. The main criterion for the creation of the spectral bins is the energy resolution of the detector, but the 20 counts minimum is imposed to ensure approximate Gaussian uncertainty in each bin. Such a binning avoids an artificially small reduced-$\chi^2$, and conserves the validity of $\chi^2$-statistics. Spectral fitting is performed with the software *XSPEC* v12.5.1 [@arnaud96] using the publicly available model of NS H-atmosphere [nsatmos]{} [@mcclintock04; @heinke06]. The model assumes non-magnetic NSs and has been computed for a range of surface gravity $g=(0.1-10){\mbox{$\times 10^{14}$}} {\mbox{$\,{\rm cm\,s^{-2}}$}}$. For the normalization parameter, [nsatmos]{} uses the emitting fraction of the NS surface. It is kept fixed to unity in this work; in other words, the whole NS surface emits. The distance parameter is held fixed as well at the value of NGC 6397, $d=2.5{\mbox{$\,{\rm kpc}$}}$ [@hansen07; @strickler09]. The NS mass is assumed to be $1.4{\mbox{$\,M_\odot$}}$. 
Finally, the galactic absorption is taken into account using the [phabs]{} multiplicative model, with ${\mbox{$N_{\rm H}$}}$, the hydrogen column density parameter, set to ${\mbox{$N_{\rm H,22}$}}= 0.14$. The errors on the best-fit parameters (${\mbox{$R_{\rm NS}$}}$, ${\mbox{$kT_{\rm eff}$}}$) are calculated using the command [error]{} in *XSPEC* with 90% confidence or using the command [steppar]{}. Confidence contours in mass–radius space are obtained with the [steppar]{} command with both the mass and the radius as free parameters. The results of the spectral analysis are presented in Section \[sec:spec\_res\].

Variability Analysis {#sec:variability}
--------------------

For each of the five observations, we perform two analyses to search for source variability on timescales shorter than the duration of the observations, and one analysis for timescales spanning the time between the first and last observations.

[**Power Density Spectrum (PDS)**]{}. The data from each observation were binned into a discrete light curve, with time bin size equal to the time resolution used in the observation ($\Delta T=3.24104\sec$). A standard fast-Fourier transform (FFT) of the discrete data into frequency space was produced [@press95], using an open-source FFT algorithm [@frigo98]. This produced a Fourier transform of the data, covering the frequency range $1/T-0.15427{\mbox{$\rm\,Hz$}}$, where $T$ is the duration of the observation, and $0.15427{\mbox{$\rm\,Hz$}}$ is the Nyquist frequency, with discrete frequency resolution $1/T$, and a total of $N/2$ frequency bins, where $N=T/\Delta T$ is the number of time bins in the light curve. The resulting Fourier transform was then converted into a PDS, where the power $P_j$ in each frequency bin $j$ is $P_j= a_j^2 + b_j^2$, where $a_j$ and $b_j$ are the real and imaginary parts of the Fourier component associated with a frequency $f_j=j/T$. The data were then normalized according to a standard prescription for such analyses [@leahy83].

[**Short-Timescale ($<$1 day) Variability**]{}. Short-term variability in each ObsID is assessed by visual inspection of the five source light curves. In addition, a Kolmogorov–Smirnov (K-S) test [@press95] is performed to quantify the consistency of the temporal distribution of counts in each observation with a constant count rate. More specifically, the integrated-count light curve is compared to a linear distribution using a K-S test. A low K-S probability would indicate that the integrated light curve is significantly different from a linear distribution, demonstrating the presence of variability on the timescale of the observation.

[**Long-Timescale ($\sim$months-years) flux variability**]{}. Long-term variability is investigated to determine if the flux remained constant over the course of the five observations, between 2000 and 2007. This is performed by adding a multiplicative constant parameter to the spectral model, and by fitting the five spectra simultaneously. The constant is kept fixed at a value $c=1$ for one spectrum (ObsID 79) and as a free and untied parameter for the remaining four spectra while the remaining spectral parameters (${\mbox{$N_{\rm H}$}}$, ${\mbox{$R_{\rm NS}$}}$, ${\mbox{$kT_{\rm eff}$}}$) are assumed to be the same across all observations. Best-fit $c$ values statistically consistent with unity would demonstrate that the source flux remained constant on the timescale of the five observations. 
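For reference, the chance probabilities quoted in the results below follow from a standard property of Leahy-normalized powers of Poisson noise (a textbook result sketched here for completeness, not taken from this paper): each power is distributed as $\chi^{2}$ with 2 degrees of freedom, so a single trial exceeds a power $P$ with probability $e^{-P/2}$, and a search over $N_{f}$ independent frequency bins yields

$$\mathcal{P}\left(>P_{\rm max}\right) = 1 - \left(1 - e^{-P_{\rm max}/2}\right)^{N_{f}}.$$

For example, a maximum power $P_{\rm max}=21.8$ over $N_{f}=7526$ bins (the values found below for ObsID 79) gives $\mathcal{P} \approx 13\%$, fully consistent with pure Poisson noise.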
Results {#sec:results} =======    Positional Analysis {#sec:pos} ------------------- The position of the source reported in the discovery observation (ObsID 79) is R.A.=17$^h$40$^m$41.421$^s$ and ${\rm decl.}=-53\deg 40\arcmin 04.73\arcsec$ (Gr01). The authors corrected for the systematic [*Chandra*]{} uncertainty in the pointing (0.6) by cross-identifying cataclysmic variables on [*Chandra*]{} and [*Hubble Space Telescope*]{} (HST) observations. The positional uncertainty on the source is subarcseconds and consists in the residual error from the correction and the statistical uncertainty from the detection algorithm. For comparison, the source detection in all five observations presented here led to the positions listed in Table \[tab:PosU24\]. It is concluded that the position of U24 is consistent with that reported previously, within the 0.6 [*Chandra*]{} systematic uncertainty. No correction for the systematic uncertainty is performed here since the identification of U24 is free of source confusion and since the purpose of this paper focuses on the spectral analysis. Spectral Analysis {#sec:spec_res} ----------------- [ccccccc]{}\[h\] 79 & (0.14) &10.7 & 73 & 1.50 & Cash: 256.7 (100%) & –\ & 0.09 & 6.8 & 97 & 1.27 & Cash: 254.2 (0%) & –\ & (0.14) &10.1 & 75 & 1.49 & [$\chi^2_\nu$/dof (prob.) = [0.76]{}/[7]{} (0.62)]{} & $\leq$ 8.8%\ & 0.10 & 7.2 & 93 & 1.30 & [$\chi^2_\nu$/dof (prob.) = [0.60]{}/[6]{} (0.73)]{} & –\ 2668 & (0.14) & 7.2 & 95 & 1.47 & Cash: 213.3 (100%) & –\ & 0.13 & 7.1 & 96 & 1.44 & Cash: 213.1 (0%) & –\ & (0.14) & 7.8 & 88 & 1.44 & [$\chi^2_\nu$/dof (prob.) = [1.81]{}/[9]{} (0.61)]{} & $\leq$ 3.9%\ & 0.13 & 7.2 & 94 & 1.38 & [$\chi^2_\nu$/dof (prob.) = [1.97]{}/[8]{} (0.46)]{} & –\ 2669 & (0.14) &10.0 & 76 & 1.52 & Cash: 159.0 (98%) & –\ & 0.11 & 7.0 & 96 & 1.36 & Cash: 157.7 (0%) & –\ & (0.14) & 9.9 & 77 & 1.51 & [$\chi^2_\nu$/dof (prob.) = [0.63]{}/[8]{} (0.74)]{} & $\leq$ 4.2%\ & 0.12 & 7.2 & 93 & 1.37 & [$\chi^2_\nu$/dof (prob.) = [0.66]{}/[7]{} (0.70)]{} & –\ 7460 & (0.14) & 9.9 & 75 & 1.35 & Cash: 367.0 (100%) & –\ & 0.11 & 6.9 & 94 & 1.18 & Cash: 355.1 (0%) & –\ & (0.14) &10.1 & 74 & 1.35 & [$\chi^2_\nu$/dof (prob.) = [2.32]{}/[8]{} (0.02)]{} & $\leq$ 4.7%\ & 0.11 & 7.0 & 93 & 1.19 & [$\chi^2_\nu$/dof (prob.) = [2.14]{}/[7]{} (0.04)]{} & –\ 7461 & (0.14) & 6.7 &100 & 1.37 & Cash: 291.0 (0%) & –\ & 0.14 & 6.6 &101 & 1.38 & Cash: 291.0 (0%) & –\ & (0.14) & 6.1 &104 & 1.37 & [$\chi^2_\nu$/dof (prob.) = [0.97]{}/[7]{} (0.45)]{} & $\leq$ 3.3%\ & 0.17 &10.7 & 74 & 1.57 & [$\chi^2_\nu$/dof (prob.) = [1.03]{}/[6]{} (0.41)]{} & –\ All 5 & (0.14) & 8.9 & 80 & 1.39 & Cash: 1289.9 (100%) & –\ & 0.12 & 6.9 & 95 & 1.28 & Cash: 1276.1 (69.2%) & –\ & (0.14) & 9.3 & 78 & 1.39 & [$\chi^2_\nu$/dof (prob.) = [1.49]{}/[45]{} (0.02)]{} & $\leq$ 3.7%\ & 0.12 & 7.2 & 92 & 1.29 & [$\chi^2_\nu$/dof (prob.) = [1.16]{}/[44]{} (0.03)]{}& –\ The spectral analysis is performed using the Cash-statistics on the unbinned data (neglecting the background) and using -statistics on the grouped data. Results are presented in Table \[tab:results\], listing the best-fit parameters for Cash and -statistics, freezing and thawing . All reported errors are 90% confidence. The -statistics null hypothesis probability confirms the viability of the fitted model while the Cash-statistics assumes that the model describes the data. Nonetheless, the use of Cash-statistics provides smaller uncertainties on the best-fit parameters, specifically for the NS radius measurements. 
A simultaneous fit is also performed using all five spectra, therefore increasing the count statistics and providing better constraints on the best-fit parameters (Figure \[fig:allspec\]). Prior to that, the source long-term variability is inspected, as described in Section \[sec:variability\]. The best-fit values (with Cash-statistic) for the multiplicative constants described in Section \[sec:variability\] are $c=1.02{\mbox{$^{+ 0.09}_{- 0.10}$}}$, $c=1.04{\mbox{$^{+ 0.09}_{- 0.10}$}}$, $c=0.92{\mbox{$^{+ 0.06}_{- 0.07}$}}$, and $c=0.95{\mbox{$^{+ 0.07}_{- 0.08}$}}$, for the observations ObsID 2668, ObsID 2669, ObsID 7460, and ObsID 7461, respectively, and $c=1$ (fixed) for ObsID 79. All best-fit values are statistically consistent with unity (within $1.5\sigma$), indicating that the source flux did not vary on long timescales. This allows for simultaneous spectral fitting with the constant multiplier fixed at the value $c=1$ for all five spectra. The upper limit of a power-law contribution to the total flux is also estimated. To do so, a power-law component with fixed photon index $\alpha = 1$ is added to the NS atmosphere model and the flux of this component using the upper limit of the power-law normalization parameter is measured. It is found that the power-law contribution accounts for $\leq 3.7\%$ of the total flux (90% confidence upper limit), when estimated from the simultaneous fits. Upper limits on the contribution of a power-law component for the individual spectra are also indicated in Table \[tab:results\]. There is some evidence for a count deficiency between 0.8 and 1.2 keV on ObsID 7460 given the assumed spectrum and instrument calibration (see Figure \[fig:spec7460\]). We parameterize this apparent dip in the spectrum with a [notch]{} component. The best-fit [notch]{} central energy is $E=0.96{\mbox{$^{+ 0.03}_{- 0.02}$}}{\mbox{$\,{\rm keV}$}}$, and the best-fit width is $W=42\pm17{\mbox{$\,{\rm eV}$}}$. The added component improves the statistics, [$\chi^2_\nu$/dof (prob.) = [1.35]{}/[6]{} (0.23)]{}, without altering the best-fit NS temperature and radius. We also investigate the statistical significance of this deficiency. Using the method described in a previous work [@rutledge03], we estimate the probability of observing such a deviation from a continuum model. First, the observed spectrum is convolved with the energy redistribution of the ACIS-S detector. Then, using a Monte Carlo approach, it is shown that the convolved spectrum does not exceed the 99% confidence limits envelope obtained from 10,000 simulated spectra (Figure \[fig:envelope\]). They were created using the best-fit [nsatmos]{} model (${\mbox{$R_{\rm NS}$}}= 8.9{\hbox{$\,{\rm km}$}}$, ${\mbox{$kT_{\rm eff}$}}= 80{\mbox{$\,{\rm eV}$}}$, ${\mbox{$M_{\rm NS}$}}= 1.4{\mbox{$\,M_\odot$}}$, $d=2.5{\mbox{$\,{\rm kpc}$}}$, and ${\mbox{$N_{\rm H,22}$}}=0.14$, see Table \[tab:results\]). The maximum deviation corresponds to a 98.2% confidence, which does not constitute sufficient evidence to claim the detection of an absorption line, but which should be investigated in more detail with higher S/N observations. For completeness, the results of spectral fits with other models are provided. Using an absorbed simple blackbody model, the fit to the spectra (all five binned spectra) is not statistically acceptable: [$\chi^2_\nu$/dof (prob.) = [3.51]{}/[45]{} ($2{\mbox{$\times 10^{-14}$}}$)]{} with ${\mbox{$N_{\rm H,22}$}}=0.14$ fixed, and [$\chi^2_\nu$/dof (prob.) = [1.631]{}/[44]{} ($5{\mbox{$\times 10^{-3}$}}$)]{} with ${\mbox{$N_{\rm H}$}}$ allowed to vary. 
A thermal bremsstrahlung model is also fit to the spectra, leading to a statistically acceptable fit ([$\chi^2_\nu$/dof (prob.) = [1.45]{}/[45]{} (0.03)]{}) with best-fit parameter $kT = 391\pm10{\mbox{$\,{\rm eV}$}}$.

Variability Analyses
--------------------

We find no evidence of broadband variability as a function of frequency in the PDS. For broadband variability, uncertainties were assigned to each frequency bin equal to the square root of the theoretical variance in the power, appropriate to the assumed normalization [@leahy83], and we derived 90% confidence ($2\sigma$) upper limits on the root-mean-square (rms) variability. The data were then rebinned logarithmically and the resulting PDS were fit with a model of a constant power [^5], plus a power-law component in which the power scales $P_j\propto f_j^{-\alpha}$, and the power-law slope was held fixed at $\alpha=1$. This model was fit to the data using a Levenberg–Marquardt $\chi^2$ minimization technique [@press95], to find the best-fit model parameters. The resulting 90% confidence upper limits on the broadband variability, in a frequency range $0.0001$–$0.10{\mbox{$\rm\,Hz$}}$ (used for each observation, to ease comparison of limits, although the longer observations are sensitive to variability at frequencies below this range) and across the full [*Chandra*]{}/ACIS energy range (0.5–8.0 keV) were: $<12\%$ (ObsID 79), [$\chi^2_\nu$]{}=0.63 (26 dof); $<19\%$ (ObsID 2668), [$\chi^2_\nu$]{}=1.5 (27 dof); $<11\%$ (ObsID 2669), [$\chi^2_\nu$]{}=1.32 (26 dof); $<$6.4% (ObsID 7460), [$\chi^2_\nu$]{}=0.98 (15 dof); $<$8.6% (ObsID 7461), [$\chi^2_\nu$]{}=0.98 (9 dof). We find no evidence of periodic variability. For ObsIDs 79, 2668, 2669, 7460, and 7461 respectively, we find maximum (normalized) powers of $P_{\rm max}=$21.8, 17.0, 15.3, 17.7, and 23.7 which, with a number of frequency bins of 7526, 4383, 4154, 23072, and 13812, correspond to respective probabilities of chance occurrence in all cases of $>$10%. Visual inspection of the light curves did not reveal any variation in the source count rates. Short-term variability was also investigated in a more quantitative way by comparing the integrated light curve with a linear distribution. None of the five observations showed an integrated light curve significantly different from a linear distribution. More specifically, the calculated K-S probabilities were 65% (ObsID 79), 62% (ObsID 2668), 95% (ObsID 2669), 51% (ObsID 7460) and 16% (ObsID 7461). Therefore, we find no evidence of intensity variability on the timescale of the integration times, i.e., ${\mbox{$\,^{<}\hspace{-0.24cm}_{\sim}\,$}}1{\mbox{$\rm\,day$}}$.

Discussion {#sec:discuss}
==========

${\mbox{$R_{\infty}$}}$ Calculation
------------

Producing realistic constraints on the dense matter EoS requires obtaining values of ${\mbox{$R_{\infty}$}}$ relying on as few assumptions as possible. Keeping the mass fixed for the model fitting is therefore not appropriate for that purpose. We estimate the value of ${\mbox{$R_{\infty}$}}$ by permitting ${\mbox{$M_{\rm NS}$}}$ to vary, and calculating the contours of constant model probability resulting from the fits in a mass–radius space (Figure \[fig:contours\]). This is done using the [steppar]{} command in *XSPEC*. The 90% confidence regions of ${\mbox{$R_{\rm NS}$}}$ and ${\mbox{$M_{\rm NS}$}}$ are obtained from the 90%-contour in the mass–radius space: ${\mbox{$R_{\rm NS}$}}= 9.7{\mbox{$^{+ 0.9}_{- 0.8}$}}{\hbox{$\,{\rm km}$}}$ and ${\mbox{$M_{\rm NS}$}}= 1.13{\mbox{$^{+ 0.47}_{- 0.32}$}}{\mbox{$\,M_\odot$}}$. The best-fit value of the projected radius is therefore ${\mbox{$R_{\infty}$}}=11.9{\hbox{$\,{\rm km}$}}$. 
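As a consistency check on these numbers (arithmetic added here, not part of the original fits): the projected radius follows from the best-fit pair through the redshift factor defined in the Introduction,

$$R_{\infty} = R_{\rm NS}\left(1 - \frac{2 G M_{\rm NS}}{R_{\rm NS} c^{2}}\right)^{-1/2} = 9.7\,{\rm km}\left(1 - \frac{2.95\,{\rm km} \times 1.13}{9.7\,{\rm km}}\right)^{-1/2} \approx 12.0\,{\rm km},$$

using $2GM_{\odot}/c^{2} \approx 2.95{\hbox{$\,{\rm km}$}}$; this agrees with the quoted ${\mbox{$R_{\infty}$}}=11.9{\hbox{$\,{\rm km}$}}$ to within rounding.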
The calculation of the uncertainties is complicated by the fact that the distribution of ${\mbox{$R_{\rm NS}$}}$ and ${\mbox{$M_{\rm NS}$}}$ is not symmetric around the best-fit values (i.e., not Gaussian). Moreover, ${\mbox{$R_{\rm NS}$}}$ and ${\mbox{$M_{\rm NS}$}}$ are highly correlated, as shown by the crescent shape of the contour in M-R space (Figure \[fig:contours\]). Therefore, the calculation of the uncertainties on ${\mbox{$R_{\infty}$}}$ using Gaussian normal error propagation is not valid. We describe two methods to obtain the uncertainty on ${\mbox{$R_{\infty}$}}$. The projected radius and its uncertainties can be obtained from a tabulated version of the [nsa]{} spectral model [@zavlin96]. However, this model is less adapted than the [nsatmos]{} or [nsagrav]{} models [@webb07] because it was calculated for a single value of the surface gravity $g = 2.43{\mbox{$\times 10^{14}$}}{\mbox{$\,{\rm cm\,s^{-2}}$}}$ while the other two models consider a range of values. Nevertheless, the best-fit ${\mbox{$R_{\infty}$}}$ value with this model is: ${\mbox{$R_{\infty}$}}= 12.1 {\mbox{$^{+ 1.5}_{- 0.9}$}}{\hbox{$\,{\rm km}$}}$ (consistent with the value calculated in the previous paragraph), for a temperature ${\mbox{$kT_{\rm eff}$}}=76{\mbox{$^{+ 2}_{- 3}$}}{\mbox{$\,{\rm eV}$}}$ ([$\chi^2_\nu$/dof (prob.) = [1.54]{}/[44]{} (0.02)]{}). A second method to estimate the uncertainties involves a geometric construction, reading the error region of ${\mbox{$R_{\infty}$}}$ graphically off the M-R contours (Figure \[fig:contours\]). For that, we choose to use the line of constant surface gravity that goes through the point $\left(R,M\right) = \left(0{\hbox{$\,{\rm km}$}},0{\mbox{$\,M_\odot$}}\right)$ and the point of best fit in M-R space. This line intersects the 90% contour at the points $\left(R,M\right) = \left(9.048{\hbox{$\,{\rm km}$}},1.045{\mbox{$\,M_\odot$}}\right)$ and $\left(R,M\right) = \left(10.39{\hbox{$\,{\rm km}$}},1.245{\mbox{$\,M_\odot$}}\right)$. These two points correspond to the values ${\mbox{$R_{\infty}$}}=11.15{\hbox{$\,{\rm km}$}}$ and ${\mbox{$R_{\infty}$}}=12.92{\hbox{$\,{\rm km}$}}$ which are, respectively, estimates of the lower and upper 90% confidence uncertainties on ${\mbox{$R_{\infty}$}}$, assuming a constant value of the surface gravity. Therefore, the projected radius and its estimated 90% confidence uncertainties are: ${\mbox{$R_{\infty}$}}=11.9{\mbox{$^{+ 1.0}_{- 0.8}$}}{\hbox{$\,{\rm km}$}}$. With the achieved uncertainty, U24 becomes the third best radius measurement of a NS among the population of GC qLMXBs, after the ones in $\omega$ Cen [@gendre03a] and in M13 [@gendre03b].

Error budget
------------

The high S/N spectra and the precise radius measurements obtained in the work presented here can be used to constrain the EoS of dense matter. A high precision on the NS radius is mandatory to exclude some of the existing nuclear dense matter EoSs and provide the necessary constraints to understand the behavior of such matter. However, other sources of error come into play in this type of measurement. To quantify the total uncertainty on the radius measurement presented here, we estimate the contribution of each source of error into an error budget, including the distance to the GC NGC 6397, uncertainties intrinsic to the model used, systematic and statistical uncertainties. In those references where these uncertainties are discussed (for example, @heinke06), only two of the three uncertainties we discuss here (distance and detector systematics) are addressed. 
No work that we can find in the literature discusses the impact of the uncertainty in the spectral model on derived model parameters; therefore, we do so here.

- The distance to the GC was recently measured using two independent methods. The analysis of the CO white dwarf (WD) sequence from deep observations in an outer field of NGC 6397 led to a distance of $2.54\pm0.07{\mbox{$\,{\rm kpc}$}}$ [@hansen07]. More recently, using CO WDs in central regions of the cluster, the distance was calculated to be $2.34\pm0.13{\mbox{$\,{\rm kpc}$}}$ [@strickler09]. The weighted mean of these two measurements is $2.50\pm0.06{\mbox{$\,{\rm kpc}$}}$, corresponding to a 2.4% uncertainty. The unknown line-of-sight position of U24 within NGC 6397 accounts for $<0.1\%$ of the distance uncertainty, which can be neglected compared to the GC distance uncertainty.

- The calculation of the spectral model [NSATMOS]{} also contributes to the total uncertainty on the measured radius. However, the previous works describing the model do not provide a discussion of the fractional uncertainties in intensity due to convergence during the calculation of the spectral model [@mcclintock04; @heinke06]. Therefore, it is not possible to evaluate the errors of the resulting spectra. The cited reference for similar models [[NSA]{} and [NSAGRAV]{}, @zavlin96] only provides information on the temperature calculation convergence, which does not permit an estimation of the uncertainty error in the modeled intensity as a function of energy.

- The statistical uncertainties are those quoted in Table \[tab:results\] (90% confidence). This includes the 3% systematic uncertainty of the detector calibration, taken into account using the “[systematic 0.03]{}” command in *XSPEC*.

Consequently, the distance uncertainty (2.4%) is the only quantifiable error not taken into account in the radius measurement obtained from spectral fitting. It is therefore added in quadrature to the systematic and statistical uncertainties to obtain the total quantifiable uncertainty in the radius measurement. For example, the upper bound uncertainty limit of ${\mbox{$R_{\rm NS}$}}$ was 10.1% and is 10.4% when accounting for the distance uncertainty. The lower bound uncertainty limit was 6.7% and becomes 7.1%. Consequently, the physical radius is ${\mbox{$R_{\rm NS}$}}= 8.9{\mbox{$^{+ 0.9}_{- 0.6}$}}{\hbox{$\,{\rm km}$}}$ (for ${\mbox{$M_{\rm NS}$}}= 1.4 {\mbox{$\,M_\odot$}}$) while the estimated radiation radius is ${\mbox{$R_{\infty}$}}= 11.9{\mbox{$^{+ 1.0}_{- 0.8}$}}{\hbox{$\,{\rm km}$}}$, when considering the sources of uncertainty listed above. In conclusion, the distance uncertainty of NGC 6397 alone minimally affects the current uncertainty on the radius measurement.

Core temperature calculation
----------------------------

The best-fit temperature and physical radius of the NS can be used to determine the interior temperature. This calculation is model dependent and due to the uncertainties in the deep atmosphere composition (the depth of the H/He transition in particular), two different models are considered here. The first one assumes a layer of helium down to a column depth $y=1{\mbox{$\times 10^{9}$}}{\mbox{$\,{\rm g{\mbox{$\,{\rm cm^{-2}}$}}}$}}$ with a pure layer of iron underneath. The second model considers a thin layer of He down to $y=1{\mbox{$\times 10^{4}$}}{\mbox{$\,{\rm g{\mbox{$\,{\rm cm^{-2}}$}}}$}}$ with a mixture dominated by rp-processes. These two alternatives take into account the extremal values for the core temperature, for a fixed effective temperature. 
The calculation, described in a previous work [@brown02], was performed with a fixed mass ${\mbox{$M_{\rm NS}$}}= 1.4 {\mbox{$\,M_\odot$}}$ and a fixed radius ${\mbox{$R_{\rm NS}$}}= 8.9{\hbox{$\,{\rm km}$}}$, which corresponds to the best-fit value. The effective temperature used was ${\mbox{$kT_{\rm eff}$}}= 80 {\mbox{$^{+ 4}_{- 5}$}}{\mbox{$\,{\rm eV}$}}$. The calculation is performed down to a column depth of $1{\mbox{$\times 10^{14}$}}{\mbox{$\,{\rm g{\mbox{$\,{\rm cm^{-2}}$}}}$}}$, since the star is nearly isothermal in deeper layers. For the first model, the resulting interior temperature (at $y={\mbox{$10^{14}$}}{\mbox{$\,{\rm g{\mbox{$\,{\rm cm^{-2}}$}}}$}}$) is $T_{\rm core} = \left( 3.37{\mbox{$^{+ 0.36}_{- 0.41}$}} \right) {\mbox{$\times 10^{7}$}}{\mbox{$\rm\,K$}}$. The second model leads to the value of interior temperature $T_{\rm core} = \left( 8.98{\mbox{$^{+ 0.81}_{- 0.98}$}} \right) {\mbox{$\times 10^{7}$}}{\mbox{$\rm\,K$}}$. Overall, if it is assumed that the H/He transition depth is unknown, the core temperature is in the range of extreme values: $T_{\rm core}=\left(3.0-9.8\right){\mbox{$\times 10^{7}$}}{\mbox{$\rm\,K$}}$.

Conclusion {#sec:conclusion}
==========

We have performed the spectral analysis of five archived [*Chandra*]{} observations of the qLMXB U24 in the GC NGC 6397. The $\sim$350 ks of integration time available permitted us to obtain high S/N spectra and improve the radius measurement. More specifically, the simultaneous spectral fitting of all five observations, using an NS H atmosphere, allowed us to provide constraints on the NS radius with $\sim$10% statistical uncertainty (90% confidence). This confirmed the qLMXB nature of the X-ray source. Therefore, the measured NS properties are ${\mbox{$R_{\rm NS}$}}=8.9{\mbox{$^{+ 0.9}_{- 0.6}$}}{\hbox{$\,{\rm km}$}}$ and ${\mbox{$kT_{\rm eff}$}}= 80{\mbox{$^{+ 4}_{- 5}$}}{\mbox{$\,{\rm eV}$}}$, for ${\mbox{$M_{\rm NS}$}}= 1.4{\mbox{$\,M_\odot$}}$, and assuming an NS with an atmosphere composed of pure hydrogen. The estimated interior temperature lies in the range $T_{\rm core} = \left( 3.0-9.8 \right) {\mbox{$\times 10^{7}$}}{\mbox{$\rm\,K$}}$. In the 0.5–10 keV range, the flux corresponding to these best-fit parameters is $F_{\rm X} = \left( 1.39{\mbox{$^{+ 0.02}_{- 0.06}$}} \right) {\mbox{$\times 10^{-13}$}}{\mbox{$\,{\rm erg\,{\mbox{$\,{\rm cm^{-2}}$}}\,{\mbox{$\,{\rm s^{-1}}$}}}$}}$, equivalent to a luminosity of $L_{\rm X} = \left(1.04{\mbox{$^{+ 0.01}_{- 0.05}$}}\right) {\mbox{$\times 10^{32}$}}{\mbox{$\,{\rm erg\,{\mbox{$\,{\rm s^{-1}}$}}}$}}$ at a distance of 2.5 kpc. The spectra did not show evidence for a power-law component, as inferred from the upper limit (3.7%) on its contribution to the total flux. The results of this analysis were consistent with those of the discovery observation (Gr01); the reported NS radius and temperature were ${\mbox{$R_{\infty}$}}= 4.9{\mbox{$^{+ 14}_{- 1}$}}{\hbox{$\,{\rm km}$}}$ and ${\mbox{$kT_{\rm eff}$}}= 57$–$92{\mbox{$\,{\rm eV}$}}$. No optical counterpart was detected on the [*HST*]{} observations, with a limiting magnitude of $M_{V} > 11$ (Gr01). No variability was observed on long timescales. Therefore, unless an outburst (for which there is no evidence) happened between the observations, between 2000 and 2002 or between 2002 and 2007, we conclude that the source remained in its quiescent stage since the discovery observation. It is worth noting that none of the GC qLMXBs discovered in quiescence so far have been seen in outburst. 
Moreover, an outburst happening between the available observations would have had an impact on the observed luminosity and intensity variability would have been detected [@ushomirsky01; @rutledge02c]. The search of short-timescale variability ($<1{\mbox{$\rm\,day$}}$) did not reveal any such variability. Finally, a PDS analysis was performed and did not demonstrate evidence of periodic variability in the frequency range $0.0001$–$0.10{\mbox{$\rm\,Hz$}}$. The lack of intensity variability on various timescales further supports the classification of the source. In conclusion, the qLMXB U24 in the GC NGC 6397 adds to the list of qLMXBs suitable to place constraints on the dense matter EoSs with a best-fit projected radius of ${\mbox{$R_{\infty}$}}= 11.9{\mbox{$^{+ 1.0}_{- 0.8}$}}{\hbox{$\,{\rm km}$}}$. R.E.R. is supported by an NSERC Discovery grant. E.B. is supported by NASA/ATFP grant NNX08AG76G. The results presented have made use of data from archived observations available at the High Energy Astrophysics Archive Research Center Online Service, provided by the NASA GSFC. Finally, the authors would like to thank the referee for useful remarks that led to the improvement of this article. [^1]: From http://cxc.harvard.edu/toolkit/colden.jsp using the NRAO data [@dickey90]. [^2]: The source name U24 was used in the discovery paper (Gr01) and will be used throughout this work. [^3]: [[*Chandra*]{}]{} Observatory Proposer Guide v12.0, Figure 6.18, December 2009 [^4]: [[*Chandra*]{}]{} Observatory Proposer Guide v12.0, Figure 6.7, December 2009 [^5]: The constant accounts for the expected Poisson noise power in the PDS, which is approximately 2 in the PDS normalization used; however, we observed the Poisson level for each of the PDS to be suppressed, to a value of $\sim1.9$, due to instrumental dead-time effects. To correct for this, on average, in the rms variability upper limits derived here, the measured rms limits were increased by a factor $\langle A \rangle/2.0$, where $\langle A \rangle$ is the best-fit Poisson power level.
Mid
[ 0.6277056277056271, 36.25, 21.5 ]
The primary goal of this project is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications. Some examples include: exception handling The libunwind API makes it trivial to implement the stack-manipulation aspects of exception handling. debuggers The libunwind API makes it trivial for debuggers to generate the call-chain (backtrace) of the threads in a running program. introspection It is often useful for a running thread to determine its call-chain. For example, this is useful to display error messages (to show how the error came about) and for performance monitoring/analysis. efficient setjmp() With libunwind, it is possible to implement an extremely efficient version of setjmp(). Effectively, the only context that needs to be saved consists of the stack-pointer(s).
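To make the introspection use case concrete, here is a minimal call-chain walker built on the documented local-unwinding entry points (unw_getcontext, unw_init_local, unw_step); this is a sketch of typical usage rather than part of the API specification above, with error handling trimmed for brevity. Link with -lunwind.

    #define UNW_LOCAL_ONLY      /* select the faster, same-process-only variant */
    #include <libunwind.h>
    #include <stdio.h>

    /* Print the call-chain of the calling thread, one frame per line. */
    static void show_backtrace(void)
    {
        unw_cursor_t cursor;
        unw_context_t context;

        unw_getcontext(&context);            /* capture the current machine state */
        unw_init_local(&cursor, &context);   /* begin local (same-process) unwinding */

        while (unw_step(&cursor) > 0) {      /* walk one call-frame up the chain */
            unw_word_t ip, sp, off;
            char name[256];

            unw_get_reg(&cursor, UNW_REG_IP, &ip);
            unw_get_reg(&cursor, UNW_REG_SP, &sp);
            if (unw_get_proc_name(&cursor, name, sizeof(name), &off) == 0)
                printf("%s+0x%lx  ip=0x%lx sp=0x%lx\n",
                       name, (unsigned long)off, (unsigned long)ip, (unsigned long)sp);
            else
                printf("??  ip=0x%lx sp=0x%lx\n",
                       (unsigned long)ip, (unsigned long)sp);
        }
    }

Because unw_step is called before the first frame is read, the listing starts at show_backtrace's caller. The same cursor, driven by unw_set_reg and unw_resume, is the mechanism an efficient setjmp()-style non-local goto builds on.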
High
[ 0.664935064935064, 32, 16.125 ]
[From cardiac catheterization to the study of left ventricular function. Importance of relaxation and diastole in ischemic cardiopathy]. Some of the most recent developments on left ventricular function derived from Brutsaert's work on triple control of contraction and relaxation, and on the new division of the different phases of the cardiac cycle, are reviewed. This allows a re-assessment of the different hemodynamic parameters obtained from cardiac catheterization, which were then applied to relaxation and diastole and to coronary artery disease.
High
[ 0.7065693430656931, 30.25, 12.5625 ]
I don't know if he does it on purpose or not but like I've been saying for the last 5 or 6 years Eli is the last person on the team we have to worry about. I know I'm getting ahead of myself since we play such a tough schedule but I really think this can be a very very special year with all the talent we have on this roster. As always it'll come down to who can stay healthy but if we don't lose anyone major I feel very confident we're going to have a great drunken time watching the team this year. Tom Coughlin is very pleased with Ramses Barden this spring. "I think he’s figured it out and that’s a good first step," Coughlin said on @SiriusXMNFL. "He’s had some really good days in practice and done a really good job. I really do think that he’s right there. He’s got to do it with consistency. There can’t be any shortcomings. You’re a big receiver. You must excel in the green zone (and) as a blocker. I do feel good about where he is. I really feel good that he feels good about where he is. I feel like it’s a good sign." Coughlin is never a coach to give a lot of praise so if he's doing it for Barden I'd say that's a very good thing. Even if he can become a red (green?) zone threat that would be pretty useful. Plus as we've seen, having a lot of receiver depth is becoming an underrated necessity in a pass-happy league so hopefully he can give us that. For those of you who were scared about the Bernard signing and what it possibly meant for Austin: Quote: Patricia Traina Fewell on what makes Marvin Austin different than the other DTs they have: "He has legit speed and quickness and get-off. I'm going to have to learn to use Marvin a little bit differently than I use Linval Joseph, and Shaun Rogers, and Rocky Bernard because I think that (Austin) can really cause some offensive linemen some problems." Sounds like WR Brandon Collins is going to be someone to watch when the preseason games roll around. Also, Brandon Bing's name has been mentioned a good amount by the beat writers so we can throw him into that Tier C CB depth that will be fighting it out. Can Bing play safety? It would certainly be easier to make room for him on the Roster if he could beat out Jackson or Sash instead of trying to beat out Johnson, Tryon and the other CB whose name escapes me. I doubt we keep more than 9 DBs on the 53 man roster and I feel more confident in our CBs than our Safety depth. Quote: Can Bing play safety? It would certainly be easier to make room for him on the Roster if he could beat out Jackson or Sash instead of trying to beat out Johnson, Tryon and the other CB whose name escapes me. I doubt we keep more than 9 DBs on the 53 man roster and I feel more confident in our CBs than our Safety depth. Naa he's a CB. S will be Rolle, Phillips, Sash, and then Stevie Brown has the inside track to the 4 spot right now. Don't forget Will Hill and Janzen Jackson are fighting for that spot also. Deon Grant can always come back to the mix if the coaches don't like what they're seeing out of the group.
Right there, best DE group in the NFL. JPP is a top 5 DE and going into his 3rd year. Tuck is the warrior, tremendous all around, and Osi is still, IMO, a top 5 pass rusher in the league. A group with the potential to create a LOT of turnovers and wreak so much havoc. DT: Canty, Linval, Austin, Bernard, Rogers: Bernard and Rogers are guys who are good in short yardage and good depth guys. Canty was so crucial for us last season, over looked by many. Linval has all the talent and has great flashes, think he can really make a step. Austin has that devastating pass rush potential. Yes, he's been out of football for 2 years, but I'm excited to see what he can do. LB: Blackburn, Rivers, Boley, Williams: We can have SO MUCH rotation here. Run stuffer/best LB'er ever Chase Blackburn out there can call signals, as can Boley. Boley was so great last year, and he, Williams and Rivers make up for a great group of cover LB'ers. CB: TT, Webster, Prince, Hosley: Throw in a bunch of talented guys such as Bing and Blackmon who can come in and contribute (who ever wins camp battles) and this is a DEEP and talented group. Webster is a top 10 (perhaps 5) CB in the league. We know what TT can do, but coming off injury it's going to be interesting. Prince is finally healthy and we know how talented he is, along with Hosley. S: Might be our weakest position defensively because of no Grant and I'm not sure I trust Sash. KP and ROlle is a great duo, we love 'em. Hopefully with TT back, we won't need as many 3 safety sets. Oh yeah, and then there's the wildcard...which is funny because he's a joker. Kiwanuka. I love him and if Fewell uses him right, he's a force. He played actually pretty well at LB last season and vastly improved in coverage...and also is still one helluva DE. Gotta love Kiwi and the amount of weapons we can throw at people. Fewell needs to keep the foot on the gas. We've got a plethora of pass rushers in Osi, JPP, Tuck, Austin, Kiwi. Some big run stuffers up front with Chase behind them, and a ton of cover LB'ers where we can mix and match the opponents sets. Use rotation to keep guys fresh. I'm really, really excited for this season __________________We ALL bleed scarlet New York Giants Super Bowl 46 Champs UNITED: I actually attend the college I root for Quote: Originally Posted by PalmerToCJ BTW, if it's 3rd and 97... I'm throwing a screen pass to Brian Leonard and he will convert. We have tons of talent on defense. Always have. It all comes down to Fewell and how he uses it. I would temper my expectations of our DTs though. Canty was a beast last year, but he's coming off a knee injury that concerns me. Joseph is solid, and can become a beast, but we haven't seen him take that step yet. Austin is a wild card, I don't know what to expect from the guy. Bernard is depth. Rogers could be fat and washed up, or he can be decent depth to Joseph. We don't know. So our DT core can be anywhere from great to mediocre. It all depends on how things shape up. I just rewatched some clips on nfl.com, and I had to just say this before I can start focusing on the new season: We won the freaking Super Bowl!!! YES! It's so awesome to rewatch the sound fx clips, and rewatch Deion and Marshall faulk interview Coughlin after the victory. I am so happy I got to experience such a special season, particularly because I didn't watch the whole season in 2007, just the SB. It was SB XLII that made me a Giants-fan, but this feels like my first real championship as a fan of the team. 
I'm so happy to have been able to be a part of the ride as a fan.
Mid
[ 0.647450110864745, 36.5, 19.875 ]
Predictors of recurrent sickness absence among workers having returned to work after sickness absence due to common mental disorders. The aim of this study was to investigate whether sociodemographic, disease-related, personal, and work-related factors - measured at baseline - are predictors of recurrent sickness absence (SA) at 6 and 12 months follow-up among workers who returned to work after SA due to common mental disorders (CMD). Based on a cluster-randomized controlled trial, this prospective study comprised 158 participants, aged 18-63 years, with partial or full return to work (RTW) and an occupational physician-diagnosed CMD. Data on predictors were collected with questionnaires and administrative data. The outcome was the incidence of recurrent SA (i.e., decreased work for ≥30% of contract hours due to all-cause SA regardless of partial or full RTW) at 6 and 12 months follow-up. Longitudinal logistic regression analysis with backward elimination was used. We found that company size >100 [odds ratio (OR) 2.59, 95% confidence interval (95% CI) 1.40-4.80] and conflicts with the supervisor (OR 2.21, 95% CI 1.21-4.04) were predictive of recurrent SA. Having ≥1 chronic diseases decreased the risk of recurrent SA (OR 0.54, 95% CI 0.30-0.96). Two work-related and one disease-related factor predicted the incidence of recurrent SA among workers with CMD. Healthcare providers can use these findings to detect and help workers who have returned to work and are at higher risk for recurrent SA. Furthermore, future interventions to prevent recurrent SA could focus on supervisor conflicts.
High
[ 0.681404421326397, 32.75, 15.3125 ]
Q: Storing application settings in Python I am looking for a smart way to store my settings in Python. All I want to do is to set some parameters and use them in the code. I found this Best practices for storing settings and this Storing application settings in C# but I don't think that helps me to solve my problem. I want to store some parameters, for example age, weight and name, and access them easily. Is there any best practice?

A: I think one way to do that is to store your parameters in a class. Like this:

class Settings:
    # default settings
    name = 'John'
    surname = 'Doe'
    age = 45
    gender = 'Male'

Another way is to store these variables in a configuration file and then read from it. But I find the former a lot easier because you can access the values more easily in your code. Like this:

from settings import *

settings = Settings()
username = settings.name
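A minimal sketch of the configuration-file approach mentioned in the answer, using the standard-library configparser module; the file name settings.ini and its [user] section are illustrative, not from the question:

import configparser

config = configparser.ConfigParser()
config.read('settings.ini')  # read() silently skips files that don't exist

# fallback= supplies a default when the file, section, or key is missing
name = config.get('user', 'name', fallback='John')
age = config.getint('user', 'age', fallback=45)

print(name, age)

The class approach keeps settings as plain Python attributes; the file approach lets users change settings without editing code.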
High
[ 0.6573426573426571, 35.25, 18.375 ]
NEW YORK (AP) - The Latest on Purdue Pharma’s Swiss bank transfers (all times local): 10:15 p.m. A spokesperson for Mortimer D.A. Sackler has defended $1 billion in previously unknown transfers from OxyContin maker Purdue Pharma to his family, saying they were “perfectly legal and appropriate in every respect.” New York state’s attorney general contends that the Sackler family, which owns OxyContin maker Purdue Pharma, used the overseas bank accounts to conceal the transfer of millions of dollars from the company to themselves. New York on Friday asked a judge to enforce subpoenas of companies, banks and advisers to Purdue and the Sackler family. The spokesperson says the attorney general’s contention is an attempt to “torpedo a mutually beneficial settlement that is supported by so many other states and would result in billions of dollars going to communities and individuals across the country that need help.” The spokesperson says the transfers occurred a decade ago. ___ 4:47 p.m. New York state’s attorney general contends that the family that owns OxyContin maker Purdue Pharma used Swiss bank accounts to conceal the transfer of millions of dollars from the company to themselves. New York on Friday asked a judge to enforce subpoenas of companies, banks and advisers to Purdue and the Sackler family. The state said it has already documented $1 billion in transfers between those parties. The filing was made in a New York court. It follows decisions by that state and others to reject a tentative settlement announced this week with Stamford, Connecticut-based Purdue.
Mid
[ 0.544303797468354, 32.25, 27 ]
On the mobility of vacancy clusters in reduced activation steels: an atomistic study in the Fe-Cr-W model alloy. Reduced activation steels are considered as structural materials for future fusion reactors. Besides iron and the main alloying element chromium, these steels contain other minor alloying elements, typically tungsten, vanadium and tantalum. In this work we study the impact of chromium and tungsten, being major alloying elements of ferritic Fe-Cr-W-based steels, on the stability and mobility of vacancy defects, typically formed under irradiation in collision cascades. For this purpose, we perform ab initio calculations, develop a many-body interatomic potential (EAM formalism) for large-scale calculations, validate the potential and apply it using an atomistic kinetic Monte Carlo method to characterize the lifetime and diffusivity of vacancy clusters. To distinguish the role of Cr and W we perform atomistic kinetic Monte Carlo simulations in Fe-Cr, Fe-W and Fe-Cr-W alloys. Within the limitation of transferability of the potentials it is found that both Cr and W enhance the diffusivity of vacancy clusters, while only W strongly reduces their lifetime. The cluster lifetime reduction increases with W concentration and saturates at about 1-2 at.%. The obtained results imply that W acts as an efficient 'breaker' of small migrating vacancy clusters and therefore the short-term annealing process of cascade debris is modified by the presence of W, even in small concentrations.
High
[ 0.656565656565656, 32.5, 17 ]
Model # 12805 Internet #203635378 Rachael Ray Porcelain II 2-Piece Cookware Set in Purple Order online and have your items delivered by truck from a local Home Depot store. Select the delivery time that is most convenient for you. How Does It Work? Start by building your Shopping Cart. Select items for Express Delivery from Store and check availability for the delivery ZIP. During Checkout, choose an available delivery date and time window from the scheduling calendar. Place your order. Your items will arrive during the delivery window selected. How Much Does Delivery Cost? The delivery fee is the same regardless of the size, weight or number of items in your order. Your final price will be based on the delivery options you select during Checkout. Standard service includes an All Day time window on the date of your choosing. Items will be delivered to an area outside the house or job site. 2-hour and 4-hour time windows and Threshold delivery service are available for an additional cost. What Service Levels Are Available? Outside Delivery: Service includes delivery to area outside the house which is accessible by delivery equipment. Threshold Delivery: Service includes delivery across your first doorway or threshold (i.e. garage, backyard, deck, first room of the home). Can I Track My Order? Yes! On the day of delivery, you can track your truck's location and estimated arrival time in My Account. Your driver will also call you 30 minutes prior to arrival. If you are unable to receive your items in person, you can choose to have the delivery left unattended during Checkout (Outside Delivery service only). Product Overview There can never be too many durable skillets in the kitchen, and with this twin pack it’s easy to stock up on two useful sizes of this versatile cookware shape. By having more than one skillet, sauteing a side portion of snap peas or browning a batch of sweet Italian sausages for a sausage and pumpkin soup is always a cinch. Rachael Ray Hard Enamel cookware provides superior performance in fun colors and durable materials. This twin pack will add twice the style to any kitchen and make it easy to tackle recipes of any size. The Non-Stick interiors provide impeccable food release and make cleaning up a breeze. The sturdy construction promotes even heating, helping to reduce hot spots that can burn foods, and the enamel exteriors are not only colorful, but durable too. The rubberized handles are comfortable to grasp and dual riveted for extra strength so moving from one kitchen station to another can be done with confidence. Oven safe to 350°F.
Low
[ 0.51409978308026, 29.625, 28 ]
3.7 Million Comments Later, Here's Where Net Neutrality Stands Now, we wait. The window for the public to weigh in on how federal rule-makers should treat Internet traffic is closed, after a record 3.7 million comments arrived at the FCC. The Sunlight Foundation analyzed the first 800,000 and found that fewer than 1 percent were opposed to net neutrality enforcement. The principle of net neutrality generally means that all Internet traffic is treated equally. But whether the weight of popular opinion can overcome the significant lobbying heft of Internet service providers fighting against stronger net neutrality rules is a huge question mark. An analysis by San Francisco-based data firm Quid found that Verizon alone spent $100 million to lobby Congress on net neutrality since 2009. (That kind of money could buy you 793 houses or 4 million bottles of Maker's Mark.) What's On The Table The proposal before the five-member Federal Communications Commission, led by Chairman Tom Wheeler, would allow broadband providers such as Time Warner and Verizon to engage in "commercially reasonable" traffic management. That means they could potentially charge content companies (like Netflix) to get their content to you faster — paid prioritization, or "fast lanes." Internet advocates want equal treatment of all Internet traffic, and some, like advocacy group Free Press, have pushed for a reclassification of the Internet as a public utility under Title II, which could give the FCC more enforcement power to keep the Internet open. A broad coalition of telecom and cable companies has opposed this, arguing it would create unnecessary obstacles to broadband deployment. Reclassification would be "unlawful, unnecessary and profoundly unwise," the National Cable and Telecommunications Association wrote in its filing. What's Happening Now FCC members are on Capitol Hill this week, answering questions from Congress about plans for regulating the Internet. The Senate Judiciary Committee held a hearing on the topic Wednesday, featuring questioning from strong net neutrality supporters Patrick Leahy, D-Vt., the committee chairman; and Sens. Al Franken, D-Minn., and Richard Blumenthal, D-Ct. Wheeler is testifying Wednesday afternoon to a House panel, and while the topic focus is "the needs of small business and rural Americans," net neutrality is likely to come up. The Wait Wheeler has said that he hopes the FCC will approve a proposal before the end of 2014, which gives the commission a few more months to sift through the comments, meet with stakeholders and reconvene on this issue. In the meantime, there's plenty of reading the tea leaves of Wheeler's public comments and moves. One issue that's come up in this interim period is how to treat mobile connections. The Mobile Issue When policymakers last addressed this issue in 2010, they decided to treat your mobile connection to the Internet differently from the one that goes to your home or office, giving mobile carriers an exemption from net neutrality enforcement. But as our mobile phones have become the ever-present computing devices in our lives — ones we are more and more addicted to — it's beginning to not make sense that mobile and home connections are regulated as two different Internets. Wheeler has indicated he's reconsidering this. He said in a speech last week that the lines are blurring: People want connectivity everywhere they go. 
That suggests the principle of net neutrality could be extended to mobile broadband, too. All his statements, however, are merely his opinions at any given time. Making the lasting rules on this issue will require some hustling if commissioners are committed to doing it by the end of the year. One thing's certain: Millions are watching. Editor's Note: NPR's legal counsel has filed a comment to the FCC on the net neutrality proposal. You can read it here.
High
[ 0.6649484536082471, 32.25, 16.25 ]
Athletics News Release Union baseball drops to No. 17 in latest rankings JACKSON, Tenn. - 4/12/2011 - The Union (Tenn.) University baseball team fell four spots to No. 17 in the latest NAIA National Rankings. Union has been as high as No. 7 this season, but has posted a 5-5 record in its last 10 games to fall to No. 17. However, the Bulldogs are currently riding a 3-game winning streak after a 3-2 win over William Carey (Miss.) earlier today. Union remains in the hunt for a conference title, tied for second at 11-4 with No. 10 Cumberland (Tenn.) and Martin Methodist (Tenn.). Trevecca Nazarene, which is receiving votes in the latest rankings, has a 12-3 conference record. Union has just three conference weekends left before the tournament. Union will travel to Mid-Continent (Ky.) this weekend and then will play at Freed-Hardeman next weekend. Union will close out the conference season at home versus Trevecca Nazarene on April 28-29. The TranSouth Conference Baseball Tournament will be at Union’s Fesmire Field this season on May 2-5.
Mid
[ 0.646017699115044, 36.5, 20 ]
This paper offers a new interpretation of John Austin’s views both on assertion and on adverbs, as a result of which an expressivist thesis concerning the semantics for action sentences is advanced. First, Austin’s analysis of assertion based on various, specific assertive forces and his remarks on adverbs are systematically connected in order to obtain assertive schemata for action sentences. Finally, those schemata are put to work as the expression of inferential commitments implicit in argumentative practices of different sorts (exculpatory, justificatory and illustrative), in the deployment of which both logical contrariety and contradiction are exploited.
High
[ 0.687898089171974, 33.75, 15.3125 ]
Best adjustable dumbbells 2019

Welcome to our breakdown of the best adjustable dumbbells currently available on the market. Finding time for strength and conditioning can be tricky. If you are like me you probably like to spend your precious free evenings training Jiu-Jitsu or whatever other hobby you have. However, there is always time for a quick workout at home. A good workout can be done in 30 minutes and will leave you feeling great after. When working out like this you do not need a lot of fancy equipment. Even just a couple of dumbbells can get you a decent full body workout. The best thing about adjustable dumbbells is that they do not take up much space and the best ones are as good as any equipment you will find in the gym.

Bowflex SelectTech 552

My favorite thing about this product is the speed at which you can adjust these dumbbells. I like to do circuit training in my workouts so the speedy adjustment ensures no time is lost. The ease of adjustment makes my workout flow seamlessly as my rhythm is never interrupted by having to prepare the weights. There are 15 different levels of resistance, which covers 95% of the exercises in my program. The Bowflex SelectTech 552 comes with a rack for the dumbbells which is handy for storage. As is the case with most adjustable dumbbells the Bowflex are neat and space efficient, which is important if you have limited space for your workout. This product is also very good value for money when compared to other similar offerings. The ease of use and durability make this product a solid investment. The main negative point I have in relation to these dumbbells is that I found myself using the max weight of 52.5lbs a lot. This will not be a problem for most but if you are a bit bigger you may find that you will need other weights for those exercises where you require a heavier load. Reading reviews online I saw some other people complain about the limited lifespan of the plastic part that holds the weights in place. However my Bowflex SelectTech 552 seems to have a metal clip that holds the weight in place, so it seems as if this original design flaw has been fixed. Overall I have to say that I really love these dumbbells and I am sure that they will continue to feature heavily in my workouts.

Pros
Ease of adjustment is perfect
Space efficient
Well designed
Great value
Comes with easy to use app

Cons
Max 52.5 lbs may not be enough for advanced lifters

CAP Barbell 40-pound adjustable dumbbell

The CAP 40 dumbbell set is designed for use by the weight lifting beginner. They have a simple design and are easy to make up. They are perfect for people looking to do bicep curls, tricep extensions, shoulder presses, or some light chest presses. When you pick them up you can straight away feel the quality. The plates are made from cast iron and seem very durable. The handles are as comfortable as you would expect from iron bars. It is simple enough to add and remove plates and the spin lock collars lock them in place. Very importantly, the spin locks on these dumbbells do not loosen during the exercise, which is usually my main issue with adjustable sets. The plastic storage case allows me to keep the dumbbells neat and organised. The case itself is durable and looks well in my workout room. The main negative point I have is that these dumbbells are not quite 40lbs as advertised. When I weighed them they came to 37lbs. This is not a major problem as 37lbs is adequate for the exercises I am doing, but it is annoying that it is not advertised correctly. 
Pros
Rust free plates
Durable cast iron finish
Easy to store
Budget friendly

Cons
Total weight is actually 37lbs

Unipack Adjustable Dumbbells - 200lbs

The Unipack adjustable dumbbells are a pretty straightforward offering. 200lbs of steel for you to lift up and down! They are a no-frills product that does not boast lots of fancy features. 200lbs is a lot of weight and should be sufficient for the needs of 90% of people who want to lift weights at home. I personally prefer to do chest presses with dumbbells as opposed to a barbell so these weights are perfect for me as there is no danger of reaching the max weight anytime soon. The alternative option to buying a product like this is to order lots of different dumbbells which will clog up any room. The locking screw generally works well but it can take some time to fix if you are doing different exercises. So far after a couple of months of use I have had no issues with the weights coming loose. In terms of price these dumbbells are very reasonable. They compare well with other similar options and are highlighted as a preferred option on Amazon for this reason.

Pros
Heavy dumbbells suit any exercise
Solid build
Good value

Cons
Paint tends to flake
Lock mechanism can be a bit time consuming

Power Block Elite Dumbbells

The first thing you will notice about the PowerBlocks is the unusual design. The rectangular shape makes them look more like a builder's toolbox than a set of dumbbells. However, as soon as you start using them you will quickly appreciate the genius of the design. Overall they are compact, balanced and feel great to use. Adjusting the weight by 10lbs is instantaneous. Smaller adjustments are a little bit trickier but still easily done. To adjust you simply pull the pin out and slap it back in. Overhead, incline, and flat bench presses are very stable. The shorter length allows the weights to come closer together at the top of movements. The plate sections never wobble or rattle. Presses, curls and extensions all feel smooth and balanced. As soon as you pick up the PowerBlock you will notice the quality. They are built with solid materials which ensures a long lifespan. Unlike other dumbbells they are not composed of iron, which makes rusting impossible. For many people looking in this price range it comes down to whether to go for the PowerBlock or the Bowflex. My own personal preference is the PowerBlock Elite as the adjustment time is quicker. I also prefer the feel and overall look of the PowerBlock. In saying that, the Bowflex is an excellent product that you will not go wrong with. The PowerBlock set the standard for adjustable dumbbells. If you buy these dumbbells you will not be disappointed!

Omnie 200 lbs adjustable dumbbells

The Omnie adjustable dumbbells come in 3 different weights: 65lb, 105lb, or 200lbs. The polished chrome finish of the Omnie will look great in any home gym. The finish is corrosion proof, which ensures that the dumbbells are less likely to rust. In terms of feel, the handles are textured which makes for a comfortable and solid grip. When using them you never feel like the dumbbells will slip. Like the other screw-fix dumbbells on this list the process of changing weights can be a bit annoying. This is especially the case if you are doing a circuit. The screw itself works well and the rubber ring ensures that the screw fixes tight and does not rattle when lifting. 
Overall these are well-made dumbbells that will last forever.

Pros
Durable and sturdy
Corrosion free
Different weight options

Cons
Annoying to adjust

Conclusion

So there you have it. There are many options out there but the ones listed above tend to be the most popular for a reason. When comparing the best adjustable dumbbells you should be looking at each product based on the following 5 categories:

Adjustment mechanism
Price
Durability
Storage
Look and Feel

For me the adjustment mechanism is the most important, as there is nothing worse than an adjustable weight that is awkward to change. The PowerBlock product has really nailed this problem and for me that makes it the best adjustable dumbbell available on the market today.

Breaking Grips is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com.
Mid
[ 0.572769953051643, 30.5, 22.75 ]
Montgomery County commissioners squabble over new positions While Montgomery County commissioners continued to debate several new positions presented during Tuesday's meeting, the court approved a transfer of one employee that is drawing concern from a few residents as well as a review by the District Attorney's Office. In a 3-1 vote, commissioners approved the transfer of Precinct 2 Commissioner Charlie Riley's wife Deanne from the Sheriff's Office to Precinct 5 Constable David Hill's Office. Precinct 3 Commissioner James Noack was the lone nay vote, while Commissioner Riley recused himself. The move follows Riley's motion Nov. 22 to create the position. However, he maintains he was not aware his wife would be considered for the job when the court approved the item. While the position is considered new, Hill explained that the position is actually an upgrade from a position that was vacated by an employee in September. In addition to approving Deanne Riley's transfer, in two 4-1 votes, the court approved a new lieutenant position for the Precinct 3 Constable's Office at a salary of $125,000 annually, and a temporary maintenance tech position for the Conroe North Houston Regional Airport. Precinct 1 Commissioner Mike Meador was the lone nay vote on the Precinct 3 constable position, while Noack voted against the temporary airport tech position. Precinct 5 Constable's office Hill spoke to the court Tuesday in defense of Commissioner Riley explaining he approached the commissioner about upgrading the position to help with a "gang of paperwork" since the workload in the office has increased. Hill said the other constable's offices have office managers and his office will benefit from someone in that position as well. "I approach the court today to hire a lady who worked for me 15 years ago who I know personally, who I know her work habits and knowledge of county operations. … I know what she can do in our office to facilitate as an office manager and free our people to get back on the streets," Hill said. "I request we hire Deanne Riley to that position." Deanne Riley started working for the county in 2002. The Montgomery County DA's Office confirmed Monday that it received and was reviewing a complaint regarding Commissioner Riley's move to create the position ultimately filled by his wife. According to information obtained by The Courier, agenda items for the court agenda are due the Wednesday before the court's meeting. For the Nov. 22 meeting, that date was Nov. 16. However, on Nov. 17, a draft agenda was released to commissioners, and it did not include the request by Hill regarding the upgraded position. When the official agenda was posted Nov. 18, Hill's request was included. Sheriff-elect Rand Henderson confirmed the schedule for interviews for his new administration was sent Nov. 21. Deanne Riley along with Paige Pangarakis, who is being transferred from MCSO to Commissioner Riley's office, declined those interviews with Henderson. Pangarakis, who has served as the administrative manager for Chief Deputy Randy McDaniel, works in the same office with Deanne Riley. Henderson said he received Deanne Riley's resignation letter Nov. 30. Montgomery County officials confirmed Tuesday that while there is no written policy regarding vacant positions, neither the Precinct 5 position nor the new position in Riley's office were posted as open positions and no applicants were received except for Deanne Riley and Pangarakis, respectively. 
According to the women's payroll change request forms, Deanne Riley's transfer to Hill's office is effective Jan. 1, and Pangarakis' transfer to Commissioner Riley's office is effective Dec. 31. In their new positions, Deanne Riley will have her same title, while Pangarakis will be an administrative assistant. Deanne Riley will be taking a salary cut from around $68,000 to $58,000 a year. Hill had requested the $58,000 position, replacing a $43,000-a-year position and making up the difference with his allotted budget. Pangarakis' salary will be cut from around $55,000 to $48,000 annually. Precinct 3 Constable's Office In a 4-1 vote, commissioners agreed to create a lieutenant position. Meador was the lone nay vote. Constable Ryan Gable said James Sumner is tapped to fill the position. Sumner has 35 years in law enforcement, most recently as assistant chief with the Harris County Sheriff's Office under interim Sheriff Ron Hickman. Hickman, who was appointed to fill the spot after former Sheriff Adrian Garcia left for an unsuccessful bid as mayor in May 2015, lost his run for sheriff to Ed Gonzalez. Noack praised Gable for his presentation for the new position, noting it was the "exact way" to demonstrate the need for a new position. However, following the presentation by Gable, Meador questioned why the position was not presented during the budget hearings in July. The county's budget cycle started Oct. 1. "Constable, did you not see this need in the budget session," Meador asked. "Why wait four months after the (budget) process? … I'm just having a problem making this step four or five months after the budget process." Gable said he did not anticipate the growth of his office. "We understand it is a big request, especially at this time," said Gable, who believes the move is an opportunity to get Sumner with his extensive background. Budget impact would be about $94,000 pro-rated for 2016 and $125,500 for 2017. Gable requested an additional $55,000 for a vehicle, but the commissioners asked if that could be put on hold. Henderson said the MCSO could help in "finding" a vehicle for the position if that was a "sticking point" for the court. "I agree that is a great hire," Henderson said of Sumner. Conroe North Houston Regional Airport Airport Director Scott Smith requested that the court approve a temporary maintenance tech position for the month of January. According to Smith, a longtime staff member is retiring and this would allow his replacement to start and work closely to learn the position. The court approved the request in a 4-1 vote with Noack being the lone nay vote. Noack questioned Smith regarding numerous budget amendments made since the court approved the budget in September. According to Noack, the court has approved more than $100,000 in budget amendments since October. "I am not approving any more positions in 2017; we set a budget and we need to stick to it," Noack said. "I welcome my court members to join me to hold line."
Low
[ 0.497041420118343, 31.5, 31.875 ]
About Otaku Coin Association Planning Partners

Otaku Coin Planning Partner Anime!Anime!

A news website launched in 2005 specializing in anime information. Built around the concept of “being the cheerleaders for anime and the anime industry,” it covers not just the latest anime series but all kinds of anime-related information, such as series aimed towards kids and families, famous series from yesteryear, the anime business, as well as news about anime overseas. It also features unique stories, such as interviews with voice actors and creators, as well as pieces on the new charms displayed by different works or voice actors. It also has exciting content that everyone can enjoy, such as a page featuring the latest anime with consent from the rights holders, as well as articles in collaboration with anime characters written under the supervision of the anime’s staff. The website also manages Anime!Anime! Biz, a sister website specializing in information about the anime business and other industry news.

Advisor / Partner AnyPay Inc.

AnyPay Inc. was established in June 2016 by Shinji Kimura, the company representative and the serial entrepreneur who led Atlantis and Gunosy to their success. Its mission is to “realize a society enveloped by technology.” AnyPay Inc. has released the bill-splitting app “paymo” and the payment service “paymo biz.” With its business developments in FinTech and Kimura’s cryptocurrency investment expertise, AnyPay Inc. launched its ICO consulting service in September 2017, which has helped companies around the world such as Bread and Drivezy.

Otaku Coin Planning Partner asura film co., ltd.

Our company is a production studio that delivers consistently high-quality work, handling production, compositing, and VFX effects in-house. We always try to assemble the best team of directors and operators to meet the requirements of any client's order. Our production work covers not only Japanese animation and cartoons, but also short films such as promotional videos (PV) and music videos (MV), as well as animation made with digital motion tools and puppet tools. As a next step, we will start up 3DCG projects in the near future. Award History: ADFEST2017, FILM CRAFT LOTUS, ANIMATION, SILVER, [SAMURAI NOODLES "THE ORIGINATOR"] / Tokyo Anime Pitch Grand Prix 2018 [Krocchi the Streetcat]

Otaku Coin Planning Partner whomor Inc.

whomor Inc. (head office: Chuo-ku, Tokyo) is an IT startup handling entertainment content that was founded six years ago by a manga artist. With the vision of "bringing excitement to the world through creativity," whomor engages in creating new initiatives and content not limited to existing methods. In partnership with 6,000 creators, whomor creates character illustrations for video games, promotions using manga that precisely describe problems in client businesses, and original manga works.

Otaku Coin Planning Partner Tokyo Otaku Mode

Through its Facebook page with over 20 million likes and its e-commerce site, the Tokyo Otaku Mode Shop, Tokyo Otaku Mode (TOM) delivers the latest news and products from Japanese anime, manga, games, music, and fashion to fans around the world. Also, through the TOM Shop, TOM receives goods related to anime and manga directly from official makers and sells them overseas. TOM also designs and distributes original merchandise with creators worldwide to customers in over 130 countries. Using the know-how it has developed in international business, TOM offers services to support overseas business expansions and inbound businesses. 
With a mission of “Making the world happy through otaku culture,” TOM provides new value to the world’s entertainment market in the hopes of improving the lifestyles of Japanese subculture fans around the world.

Otaku Coin Planning Partner Toyo Institute of Art and Design

The Toyo Institute of Art and Design is an art and design vocational school that offers courses in subjects such as illustration, design, 3DCG, and manga, as well as topics ranging from traditional painting techniques to the latest technological advancements in art preservation and repair. We strive to develop creators with the mottos of “to train people aiming for creative jobs” as well as “to train professionals who can flourish in the real world.” We aren’t limited to just teaching students the techniques and knowledge behind art and design, but we connect that knowledge to employment and place great importance on continuous support. The Toyo Institute of Art and Design reached its 70th anniversary in May 2016.

Otaku Coin Planning Partner Honey's Anime

An anime, manga, and video game news site for overseas readers that began operation in November 2014. It is currently available in English and Spanish. In addition to delivering news directly procured from industry insiders in Japan, the US, and Spanish-speaking countries as quickly as possible to readers, Honey’s Anime has gained popularity with users for its plentiful variety of articles, including those written from the unique perspective of the reporter, reports from events all over the world, and interviews with those in the industry. Since its launch three years ago, it has grown to become one of the world’s leading sites in the genres of anime, manga, and video games, and its number of readers continues to increase.

Otaku Coin Planning Partner Honeyfeed

Honeyfeed is an English CGM-type light novel site that began operation in January 2017. The social light novel platform is a place where English-speaking users who have become unsatisfied by simply viewing and subscribing to anime and manga can express their surging creative urges. The service sprang forth from Honey’s Anime readers lamenting that there are few places to express their original anime- and manga-like stories. Honeyfeed is an oasis for hardcore otaku with a pleasant atmosphere in which they can elevate their expressive power and potential through mutual user feedback and more.

Otaku Coin Planning Partner Rakuten Collection

Rakuten Collection is an online character lottery service that began operation in May 2016. Under the concept of “Shopping is Entertainment!” the service produces unexpected encounters with characters fans like and aims to offer a new experience through character goods. Products mainly include anime and game character goods, along with idol, artist, and athlete merch, and the service is widely expanding its limited-edition products for fans to enjoy, regardless of genre.

Otaku Coin Planning Partner bacoor Inc.

Established in February 2017, with locations in Kobe and Ho Chi Minh City. Deals mainly in research on Ethereum blockchains and smart contracts, as well as application development that uses those technologies. Has more than 30 engineers working in both Japan and Vietnam. Its main product is HB Wallet, an Ethereum wallet application that has been downloaded over 200 thousand times around the world. With just one app, users can safely manage not just Ethereum, but over 10,000 different kinds of ERC20 Tokens. 
Also manages the website REAL BOOST, which uses blockchain and smart contract technologies for crowdfunding. Its beta version has been live since December 2017.

Otaku Coin Planning Partner double jump.tokyo

A company established on April 3, 2018, specializing in the development of blockchain games. Established by members with know-how in the development and management of numerous games (mobile social games, PC online games, home consoles, etc.) as well as a deep understanding of platforms and finance, including blockchain technology and virtual currency. A subsidiary of DLE Inc.

Otaku Coin Planning Partner HAKUHODO

Founded in 1895, Hakuhodo continues to innovate everyday life under the unwavering philosophies of "Consumer Conception" and "Partnerism." Professionals with high creativity organize teams to help clients solve problems in all areas, from management to business to social issues, as well as advertising. Hakuhodo anticipates changes in the marketing environment and aims to become the world's leading marketing company, one that improves the business value of clients with integrated marketing management skills.

Otaku Coin Planning Partner TokyoFigure

Established in June 2016. Manages Tokyo Figure (https://tokyofigure.jp/), a Japanese online shop that designs, manufactures, and sells figures and other anime goods. Based on the concept of "Superior beauty and excitement! From Japan to the world, and from the world to Japan!" the company works hard every day to supply great products and emotions to fans around the world by participating in anime, doujinshi, and figure exhibition events. Tokyo Figure teams up with excellent figure manufacturers, makers, and prototype-creators to design and sell products that are faithful and true to the anime or game's original illustrations, capturing the very essence of the series.

Otaku Coin Planning Partner Tudukimi

Tudukimi is an event that began in September 2016 and is held every season to showcase promotional videos of new anime. The event, which is held every three months just before the start of the new broadcast season, is organized around official footage exhibited by license holders, and countless fans proclaim, "Since attending Tudukimi my anime watch list has increased!" The event is simulcast around the world through Nico Nico Live, Bandai Channel, Facebook Live (in English-speaking countries), Bilibili Douga (in Chinese-speaking countries), and other platforms. With over 150,000 viewers, the event continues to evolve as global content.

Otaku Coin Planning Partner AKIHABARA AREA TOURISM ORGANIZATION

A joint-venture company that operates tourist sites based on a network of over 400 stores under a joint investment by major companies in Akihabara. The Akihabara Area Tourism Organization plans to develop a base for spreading information from Akihabara to the world through a wide range of areas including campaigns throughout Akihabara, event development, media management including visuals and signage, and managing a tourism center.

Planning Advisors

Advisor / CEO of qdopp Inc. Leo Akahoshi

Leo Akahoshi studied abroad in New York during middle school and in Syria during university. He engaged in investment banking services for foreign companies prior to working at a domestic securities company. He then moved to GREE, Inc. in 2012 and oversaw domestic and overseas investments, acquisitions, and new business planning. He founded qdopp Inc. 
and assumed the position of CEO in 2014, where he manages both Honey's Anime, an overseas news site for anime, manga, and games, and Honeyfeed, an English novel submission site. He also engages in overseas advertising of otaku content and business consulting, using his knowledge of the overseas otaku market.

Advisor / CEO of SAKURAS Co., Ltd. Masayuki Ikegami

Born in Yugoslavia, Masayuki Ikegami became a US resident during middle school, where he won an award for programming. During his days as a student, he was passionate about programming. After graduating from the Department of Science and Technology at Keio University, he entered the foreign business consulting firm A.T. Kearney. After consulting for a variety of businesses, he took part in launching the DeNA and NTT Docomo joint venture Everystar Co., Ltd. and was inaugurated as its CEO. There, he created a variety of mega-hit works including “Ōsama Game (King's Game)” and “Doreiku,” and helped spark a smartphone novel boom. He resigned from the position in March 2015 and in April of that year founded SAKURAS Co., Ltd. He currently teaches online business strategies at the Globis University Graduate School of Management in addition to engaging in digital strategy consulting for broadcasters and publishers.

Advisor / Shogakukan Managing Director Nobuhiro Oga

Born in 1983 in Tokyo, Nobuhiro Oga entered VIZ Media, LLC in the US after graduating from graduate school. Experienced in the business of licensing Japanese content, he is engaged in the development of digital distribution businesses. He was inaugurated as the president of Shogakukan in 2012, his current position as of 2017.

Advisor / Blockchain Accountant, Certified Public Accountant Hitoshi Kakizawa

Worked in the corporate sales division of Mizuho Bank, where he learned about the many complaints clients had with their banks. After joining Deloitte Touche Tohmatsu LLC, he audited companies preparing to list on the stock market and worked with over 15 IPOs in 3 years. In 2015, he took part in Deloitte Tohmatsu Venture Support’s FinTech team and worked as a “blockchain accountant.” He has organized seminars about ICOs, supported cryptocurrency and blockchain ventures, consulted for blockchain verification tests, developed audit procedures and conducted audits for cryptocurrency exchanges, and assisted with the implementation of in-house operation management systems. In October 2017, he joined a blockchain venture as a business developer, the second Japanese person to do so. Provides private consultation for ICO projects and is the adviser for several cryptocurrency/blockchain projects.

Virtual YouTuber / Otaku Coin Ambassador Kizuna AI

Kizuna AI is a virtual YouTuber who uploads videos from a white space almost daily, and has drawn the attention of fans both in Japan and around the world. Her channel has over one million subscribers and over 55 million views. Her dream is to become the world’s number one YouTuber. Her interests include egosurfing and interacting with human beings. She recently celebrated her first anniversary since she began releasing videos on December 1, 2016. Her direct way of speaking and abundance of emotions convey a humanity atypical of "AI" personas.

Advisor / Keio Research Institute at SFC, Senior Researcher Kenji Saito

Received his Master’s degree in Engineering (Computer Science) from Cornell University in 1993 and his PhD in Policy and Media from Keio University for his research on digital currencies in 2006. 
He was specially appointed as a media research instructor at the same university, and became a Senior Researcher at the Shonan Fujisawa Campus in 2014. In 2016, he became the CSO (Chief Science Officer) at BlockchainHub Inc., and the Representative Director at Beyond Blockchain (non-profit) in 2017. He specializes in the Internet and society.

Advisor / Founder & CEO of Consensus Base Inc. Hiroshi Shimo

CEO of blockchain specialist Consensus Base Inc. Has been involved with blockchains for a long time and has the know-how and experience of performing verification tests for dozens of major firms, such as SoftBank, Daiwa Securities Group, and the Japan Exchange Group. Currently offers ICO consulting, ICO packages, trading cards, and other services through his own company. Co-published books about Bitcoin and Ethereum with NEC and has authored many books and articles about blockchains. Also serves as a blockchain committee member for the Ministry of Economy, Trade, and Industry. He continues to stand at the forefront of blockchain technology as an engineer and is actively working to spread information about the technology.

Advisor / Journalist Tadashi Sudo

Sudo was born in Mexico and raised in Yokohama. He is active in covering, reporting, and writing about animation around the world. He also conducts research on the animation business. After leaving his corporate position, he created the information website “Anime!Anime!” in 2004, which became one of the most famous of its kind in Japan. In 2009, he created “Anime!Anime! Biz,” a website with information about the anime business, and served as its editor. In 2012, Sudo transferred the websites he was managing to IID, Inc. In July 2017, he left “Anime!Anime!” and began to work independently. Some of his most notable works include the animation section of “Digital Content White Paper,” “Anime Industry Report,” which he composed himself, and “Who’s going to create anime from now on? A silent revolution caused by Chinese capital and online distribution” (Seikaisha Shinsho).

Advisor / Sprout Inc. CEO Seigen Takano

Founded Sprout Inc. by gathering a group of consultants and researchers with extensive knowledge of cybersecurity, built around security engineers known as “white hat hackers.” While investigating and researching the hidden weaknesses behind “zero-day attacks,” the latest cyberspace trends, and cyber terror techniques, he uses the knowledge gained from this work to support the cybersecurity of other businesses and governmental authorities. He also runs BugBounty.jp, a bug bounty program that connects businesses with white hat hackers. Recently he published a book called Dark Web, about the cyber underground market deeply tied to cryptocurrencies (published July 2016 by Bungeishunju).

Advisor / Founder & CEO of Poppin Games Japan Co., Ltd. Hisashi Tsujimura

Born in Akita Prefecture and graduated from the Tokyo University of Foreign Studies with a degree in French. Hisashi previously worked at Japanese gaming giant DeNA, where he spent many years in business planning before becoming the Assistant Director of the international division. In 2010, he moved to Silicon Valley and was appointed as SVP of Marketing at DeNA Global and Director of Internet Marketing at ngmoco. He established Poppin Games Japan in July 2012 and is the current CEO. He has 10+ years of experience in the tech industry including EC, games, advertising, investment, and blockchains. 
Advisor / Oculus Founder Palmer Luckey

Palmer Luckey founded Oculus VR when he was 19 years old and designed the Oculus Rift, a virtual reality headset, which started a global VR movement. Facebook went on to acquire Oculus VR in 2014. In March 2017, Luckey left Oculus VR and its parent company, Facebook. He is currently active as an entrepreneur with a new startup developing technology for the defense industry. Luckey is a fan of Japanese VR, as well as Japanese anime. In 2017, he participated in an anime event titled “Machi Asobi” in Tokushima, Japan, cosplaying as a video game character, which was widely covered in the media.

Advisor / Anime Critic Ryota Fujitsu

Born in 1968. Ryota Fujitsu has been working as a freelance writer since 2000, after working as a journalist for local newspapers and an editor for weekly magazines. He currently runs serial articles on Anime!Anime! and Nizista. He also writes for magazines, web articles, DVD booklets and more. He also gives “Anime wo Yomu” (“Read” anime) lectures at the Asahi Culture Center Shinjuku every third Saturday.

Advisor / CEO of GIFTED AGENT Jun Kawasaki

After graduating from middle school, he worked on developing automatic trading systems for FX and stocks, as well as logistics portal sites. When he was 17 he participated in a venture to create a realtime Q&A service before selling his enterprise. He then created multiple IT ventures, most notably Tokyo Otaku Mode, before selling them or passing them off to other executives. Currently, he is the CEO of GIFTED AGENT and works on common OS projects for blockchain use. He has appeared in the media several times, such as on NHK's "Today's Close-Up," and is currently active as one of the top members of the worldwide blockchain community.
Mid
[ 0.573033707865168, 31.875, 23.75 ]
Comparison of the manual Mycobacteria Growth Indicator tube and the Etest with the method of proportion for susceptibility testing of Mycobacterium tuberculosis. Clinical microbiology laboratories should provide reliable results on susceptibility testing of Mycobacterium tuberculosis to different agents. The manual Mycobacteria Growth Indicator Tube (MGIT) and Etest were compared to the method of proportion (MOP) for susceptibility testing of 88 clinical isolates of M. tuberculosis against isoniazid (INH), rifampin (RIF), streptomycin (STR) and ethambutol (EMB). Isolates were recovered from different patients and were identified at species level by PCR and hybridization. Resistance to INH was detected in 20.5, 29.5 and 12.5% of the isolates, followed by STR resistance (19.3, 26.1 and 1.1%), RIF (9.1, 4.5 and 5.7%) and EMB (2.3, 11.4 and 2.3%) by the MOP, MGIT and Etest, respectively. Sensitivity of the manual MGIT ranged from 37.5% for RIF resistance to 100% for EMB, while Etest sensitivity ranged from 5.9% for STR to 62.5% for RIF. MOP remains the method of choice, with the manual MGIT showing superior sensitivity at detecting resistance to INH, STR and EMB compared to the Etest.
Mid
[ 0.6430260047281321, 34, 18.875 ]
Q: Merge two arrays like this I have two arrays like this:

$doctor = Array ( [0] => 4 [1] => 5 [2] => 8 [3] => 35 [4] => 41 [5] => 42 )

$clinic = Array ( [0] => 1 [1] => 3 [2] => 9 [3] => 15 [4] => 19 [5] => 20 )

Now I want to combine these arrays like this, alternating elements:

$all = Array ( [0] => 4 [1] => 1 [2] => 5 [3] => 3 [4] => 8 [5] => 9 [6] => 35 [7] => 15 [8] => 41 [9] => 19 [10] => 42 [11] => 20 )

I have tried this but it is not my expected output:

$all = array_merge($doctor, $clinic);

Any solution? Thanks

A: You can use a for loop to do that:

$all = [];
for ($i = 0; $i < 6; $i++) {
    $all[] = $doctor[$i];
    $all[] = $clinic[$i];
}

If the arrays don't have the same length, try:

$doctor_size = sizeof($doctor);
$clinic_size = sizeof($clinic);
$all = [];
$size = $doctor_size;
if ($doctor_size < $clinic_size) {
    $size = $clinic_size;
}
for ($i = 0; $i < $size; $i++) {
    if (isset($doctor[$i])) {
        $all[] = $doctor[$i];
    }
    if (isset($clinic[$i])) {
        $all[] = $clinic[$i];
    }
}
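A shorter alternative, not from the original answer: passing null as the callback to array_map() zips the two arrays into pairs, which array_merge() can then flatten. It pads with null when the lengths differ, so a filter step is needed; this sketch assumes the data contains no legitimate null values and PHP 7.4+ for the arrow function:

$pairs = array_map(null, $doctor, $clinic);   // [[4, 1], [5, 3], [8, 9], ...]
$all = array_merge(...$pairs);                // flatten the pairs in order
$all = array_values(array_filter($all, fn($v) => $v !== null)); // drop padding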
Mid
[ 0.5756929637526651, 33.75, 24.875 ]
Numerical simulation of Electron Energy Loss Spectroscopy using a Generalized Multipole Technique. We numerically simulate low-loss Electron Energy Loss Spectroscopy (EELS) of isolated spheroidal nanoparticles, using an electromagnetic model based on a Generalized Multipole Technique (GMT). The GMT is fast and accurate, and, in principle, flexible regarding nanoparticle shape and the incident electron beam. The implemented method is validated against reference analytical and numerical methods for plane-wave scattering by spherical and spheroidal nanoparticles. Also, simulated electron energy loss (EEL) spectra of spherical and spheroidal nanoparticles are compared to available analytical and numerical solutions. An EEL spectrum is predicted numerically for a prolate spheroidal aluminum nanoparticle. The presented method is the basis for a powerful tool for the computation, analysis and interpretation of EEL spectra of general geometric configurations.
High
[ 0.7026239067055391, 30.125, 12.75 ]
Q: Toggle Visibility in jQuery I can do this in JS, but I'm beginning to use jQuery and would prefer to develop skills for that. I have a reminder message [#CheckboxReminder]: "Tick this checkbox to indicate you have checked your data". I then want to hide the reminder message when the checkbox [#IsConfirmed] is ticked, and restore it to its original state if it is then unchecked. The page will be rendered with the reminder message set to a class of either Visible or Hidden; if the user has recently marked their data as "checked" the message will be hidden (but the user is welcome to check the box again if they wish). I believe that jQuery toggle() can do this, but I read that it depends on the way that the #CheckboxReminder style is set to indicate visibility, and I have also read that toggle() could get out of sync with the #IsConfirmed checkbox - e.g. with rapid double clicking. Maybe I should have toggle(FunctionA, FunctionB) and have FunctionA set the checkbox state, and FunctionB unset it - rather than allowing the click to set it? What's the best way to code this please? In case useful, here's an example of what the HTML might look like:

<p>When the data above is correct please confirm <input type="checkbox" id="IsConfirmed" name="IsConfirmed"> and then review the data below</p> ... <ul> <li id="CheckboxReminder" class="InitHide OR InitShow">If the contact details above are correct please make sure that the CheckBox is ticked</li> <li>Enter any comment / message in the box above</li> <li>Then press <input type="submit"></li></ul>

A: Just check if the checkbox is indeed checked or not before showing/hiding the message:

$('#IsConfirmed').click(function() {
    if ($(this).is(':checked'))
        $('#CheckboxReminder').hide();
    else
        $('#CheckboxReminder').show();
});

The above will fire each time the checkbox is clicked, but since we first check whether the checkbox is indeed checked or not it will avoid false positives.
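A slightly tighter variant, not part of the original answer: binding to the change event also catches keyboard toggling, and jQuery's .toggle() accepts a boolean show/hide argument, so the if/else collapses to one line. Assumes jQuery 1.7+ for .on(); on older versions, .change() can be used instead:

$('#IsConfirmed').on('change', function () {
    // hide the reminder when the box is checked, show it when unchecked
    $('#CheckboxReminder').toggle(!this.checked);
});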
High
[ 0.6746987951807221, 31.5, 15.1875 ]
--- Description: Emitted as TRUE by a container item to indicate that its child items do not need to be enumerated by an indexer if the container item has not changed since the last incremental index verification crawl. ms.assetid: 8bb487b9-4a51-4a6b-939e-946a8aad85de title: System.Search.IsClosedDirectory ms.topic: article ms.date: 05/31/2018 --- # System.Search.IsClosedDirectory Emitted as **TRUE** by a container item to indicate that its child items do not need to be enumerated by an indexer if the container item has not changed since the last incremental index verification crawl. This contributes to the optimization of indexer performance. In these container cases (examples are an e-mail with attachments or a compressed file with a .zip name extension), child items are inseparable from their parent item. For instance, if the [System.DateModified](./props-system-datemodified.md) property of a contained item changes, then the System.DateModified value of the container changes to match. Also, if a parent item is deleted, all of the child items are deleted as well. Therefore, if the container has not changed, the indexer knows that nothing within it has changed and does not need to open the container to examine the contents individually. ## Windows 10, version 1703, Windows 10, version 1607, Windows 10, version 1511, Windows 10, version 1507, Windows 8.1, Windows 8, Windows 7, Windows Vista ``` propertyDescription    name = System.Search.IsClosedDirectory    shellPKey = PKEY_Search_IsClosedDirectory    formatID = 0B63E343-9CCC-11D0-BCDB-00805FCCCE04    propID = 23    SearchInfo       InInvertedIndex = false       IsColumn = false    typeInfo       type = Boolean ``` ## Remarks PKEY values are defined in Propkey.h. > [!IMPORTANT] > If an item emits **TRUE** for this property, each of its child items must emit the [System.Search.IsFullyContained](./props-system-search-isfullycontained.md) property as **TRUE** to prevent those items from being deleted from the index.   ## Related topics <dl> <dt> [propertyDescription](./propdesc-schema-propertydescription.md) </dt> <dt> [searchInfo](./propdesc-schema-searchinfo.md) </dt> <dt> [labelInfo](./propdesc-schema-labelinfo.md) </dt> <dt> [typeInfo](./propdesc-schema-typeinfo.md) </dt> <dt> [displayInfo](./propdesc-schema-displayinfo.md) </dt> <dt> [stringFormat](./propdesc-schema-stringformat.md) </dt> <dt> [booleanFormat](./propdesc-schema-booleanformat.md) </dt> <dt> [numberFormat](./propdesc-schema-numberformat.md) </dt> <dt> [dateTimeFormat](./propdesc-schema-datetimeformat.md) </dt> <dt> [enumeratedList](./propdesc-schema-enumeratedlist.md) </dt> <dt> [drawControl](./propdesc-schema-drawcontrol.md) </dt> <dt> [editControl](./propdesc-schema-editcontrol.md) </dt> <dt> [filterControl](./propdesc-schema-filtercontrol.md) </dt> <dt> [queryControl](./propdesc-schema-querycontrol.md) </dt> </dl>    
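A minimal consumer-side sketch, not part of the original page, showing how this property might be read from an item's property store in C++; the file path is illustrative and error handling is trimmed:

```
#include <windows.h>
#include <propsys.h>
#include <propkey.h>

int main()
{
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    IPropertyStore *pStore = nullptr;
    // Path is a placeholder; any shell item can be used.
    HRESULT hr = SHGetPropertyStoreFromParsingName(
        L"C:\\samples\\archive.zip", nullptr, GPS_DEFAULT, IID_PPV_ARGS(&pStore));
    if (SUCCEEDED(hr))
    {
        PROPVARIANT pv;
        PropVariantInit(&pv);
        if (SUCCEEDED(pStore->GetValue(PKEY_Search_IsClosedDirectory, &pv)))
        {
            // VT_BOOL with VARIANT_TRUE means the indexer may skip
            // enumerating this container's children while the container
            // itself is unchanged.
            BOOL fClosed = (pv.vt == VT_BOOL && pv.boolVal == VARIANT_TRUE);
            (void)fClosed;
            PropVariantClear(&pv);
        }
        pStore->Release();
    }
    CoUninitialize();
    return 0;
}
```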
Low
[ 0.534278959810874, 28.25, 24.625 ]
Tangled Up In Me It is amazing that after everything I think I’ve learned, I’m still so f**king stupid. Who didn’t see this coming? I knew it would happen but chose to go against my better judgment. I gave into the “feelings” because someone said what I wanted to hear. The Devil, at least to the best of my knowledge, was playing a tricky little game. He had his fiddle going and I couldn’t keep up. I said all the things I felt and know better than to let out of my mouth. Now, The Devil is away doing God knows who. This is the point in my dilemma where I should head straight for a cocktail. Let me explain: The Devil is a DJ and for work he travels around for a few weeks at a time. Last Friday he informed me that he would be leaving the next day. I was surprised, considering the fact that he wasn’t planning on hanging out that night. I asked if I would be seeing him before he left. I wasn’t pleased when he replied via text that he would see if he could get home early enough for me to see him. His flight was early and bed was the only thing he planned on doing, so it was implied. Out of irritation (spite), I invited the Fireman I had met earlier in the week to come over and watch a movie. It was about 2:00a.m. when we were getting down to business and my phone began to buzz. “r u up” Two minutes later… “hope so” Two minutes later… “i guess not” Before even reading the messages I knew who it was. I told the Fireman that he had to go, not giving much of an explanation. I hustled him out the door pulled on a hoodie and all but ran the eleven blocks to The Devil’s house. When I got there we went straight to sleep. Suddenly it dawned on me that he had probably already had somebody in his bed before sending the texts. I was somewhere between confused and pissed. The messages he sent seemed so eager for me to come over, yet upon arrival it seemed as if there was little, if any, interest in my presence. The next morning, with a peck on the lips and a shove out the door, The Devil said he would see me in two weeks. The first day he was gone I refused to contact him first. I thought this would be an effective way to see if there was more between us than just a condom. Nothing. The second day I received an email referring to some pictures on my webpage. “Um, where did you get those pictures my boy?” I wouldn’t have paid it much attention had he not tacked on, “my boy”. That threw me into a whirlwind of questions. Two more days followed with no sign of life from The Devil. I finally sent a text to see what was going on. He told me he lost his job and was on vacation until Monday. I haven’t heard from him since. I know we’re not a couple but after what he said I thought there was something more to it. Clearly, I’m an idiot. What have I learned? I’ve learned that my love life is best left to my bitter and cynical side. When I think with my heart, or anything lower, I can’t see clearly.
Mid
[ 0.5800865800865801, 33.5, 24.25 ]
Filed 11/7/13 P. v. Williams CA2/2

NOT TO BE PUBLISHED IN THE OFFICIAL REPORTS

California Rules of Court, rule 8.1115(a), prohibits courts and parties from citing or relying on opinions not certified for publication or ordered published, except as specified by rule 8.1115(b). This opinion has not been certified for publication or ordered published for purposes of rule 8.1115.

IN THE COURT OF APPEAL OF THE STATE OF CALIFORNIA
SECOND APPELLATE DISTRICT
DIVISION TWO

THE PEOPLE, Plaintiff and Respondent, v. JESSICA MARIE WILLIAMS, Defendant and Appellant.

B238508 (Los Angeles County Super. Ct. No. BA348603)

APPEAL from a judgment of the Superior Court of Los Angeles County. Sam Ohta, Judge. Affirmed.

Thomas T. Ono, under appointment by the Court of Appeal, for Defendant and Appellant.

Kamala D. Harris, Attorney General, Dane R. Gillette, Chief Assistant Attorney General, Lance E. Winters, Assistant Attorney General, Steven D. Matthews and Roberta L. Davis, Deputy Attorneys General, for Plaintiff and Respondent.

******

Appellant Jessica Marie Williams appeals from the judgment after her conviction by jury of the attempted willful, deliberate and premeditated murder of Joshua Earles (Pen. Code, §§ 187, subd. (a), 664; count 1),1 the first degree murder of Fenton Brown (§ 187, subd. (a); count 2), and the unlawful possession of a firearm by a felon (§ 12021, subd. (a)(1); count 3). The jury found true the allegations that appellant personally and intentionally discharged a firearm which proximately caused great bodily injury and death (§ 12022.53, subds. (b)-(d)), and the offenses were committed for the benefit of a criminal street gang (§ 186.22, subd. (b)(1)(C)). The trial court sentenced appellant to state prison for a total term of 75 years to life plus life.

Appellant contends that (1) the trial court erred by denying her motion to dismiss based on the prosecution's failure to notify the defense of a witness's deportation and by excluding the deported witness's statement to the police; (2) her Wheeler/Batson2 motion was erroneously denied; and (3) the trial court abused its discretion by denying her Pitchess3 motion. Finding no error, we affirm the judgment.

1 All further statutory references are to the Penal Code unless otherwise indicated.
2 People v. Wheeler (1978) 22 Cal.3d 258 (Wheeler); Batson v. Kentucky (1986) 476 U.S. 79 (Batson).
3 Pitchess v. Superior Court (1974) 11 Cal.3d 531 (Pitchess).

FACTS

Mona Sanders met appellant in November 2007. The two developed an intimate relationship and appellant often spent the night at Sanders's house. Appellant was a member of the Eight Tray Hoovers gang and her moniker was "Groove." She wore jeans and tank tops. She wore her hair in braids and "looked like a male." Sanders was associated with the Westside Trouble gang which was friendly with the Eight Tray Hoovers. Appellant purchased a black Chevy Caprice but the car was registered to Sanders because appellant did not have a driver's license.

Early in the morning of May 29, 2008, appellant called Connie Aldridge and asked her to buy some bullets for her. Later that night, Sanders, Aldridge, a man known as "Max," and appellant drove in the Chevy Caprice to the Big 5 Sporting Goods store in Inglewood. Sanders did not know Max but saw him with appellant in the past. Aldridge purchased a box of Remington .40-caliber Smith and Wesson bullets and gave them to appellant.4 Both Sanders and appellant drove the Caprice and usually parked it in front of Sanders's house. Sometime after the purchase of the bullets and prior to her arrest, appellant asked Sanders to start parking the car at the back of the house.

4 Electronic records obtained from the Inglewood Big 5 store showed a sale of one box of Remington .40-caliber Smith and Wesson bullets at 8:52 p.m. on May 29, 2008.

On May 30, 2008, Joshua Earles was walking from his house towards the corner of 104th Street and South Manhattan Place when an older model black car pulled up behind him. The car was an "old school Caprice or . . . Impala" and looked "like an old cop car." The passenger had braided hair and wore a New York Yankees baseball cap backwards. The passenger asked Earles where he was from. As Earles started to back up, the passenger, using a black handgun with brown grips, shot at him. Earles ran away but was struck by four bullets and suffered injuries to his chest, right shoulder, and left leg. Officer Gui Juneau of the Los Angeles Police Department (LAPD) responded to the scene of the Earles shooting and recovered 10 shell casings.

On June 2, 2008, Jonathan McKeone was inside his house when he heard a gunshot coming from the intersection of 67th Street and Vermont Avenue. He looked out the window and saw a person backing up toward a black car and shooting towards Vermont Avenue. The shooter was dressed in a white T-shirt with dark pants, and wore a baseball cap backwards. The car was parked under a streetlight and McKeone saw the shooter and another person get into the car and drive westbound on 67th Street past his house. At trial, McKeone testified that he could not tell if the shooter was male or female because he only saw the shooter from the side.5

5 In a pretrial statement, McKeone told the police the shooter was male.

On June 2, 2008, Carlos Grenald was inside his house near 67th Street and Vermont Avenue when he heard approximately eight gunshots. He went to his front door and heard what sounded like a male voice yell "Hoover." He heard two car doors close and then saw a dark colored sedan speed westbound on 67th Street past his house. Grenald walked to the corner of the block and found 19-year-old Fenton Brown crawling on the ground. He could see gunshot wounds to Brown's arms. He yelled at other people who were beginning to gather at the scene to call 9-1-1. Brown told Grenald that he was coming from the liquor store two blocks away and had been in an altercation with some Bloods gang members at the liquor store.

LAPD Officer Jessie West and his partner were the first officers to respond to the scene of the Brown shooting. Brown had multiple gunshot wounds and his clothing was saturated with blood. He was having difficulty breathing and asked Officer West if he was going to die. Brown told Officer West that he was standing on the corner of 67th Street and Vermont when two African-American females wearing T-shirts approached him and asked "Where are you from?" Brown responded he was "not from anywhere" and did not "bang." One of the women pulled out a semi-automatic firearm and began shooting at Brown. While he was running away he looked over his shoulder and saw both women fleeing in the direction of a black car. Brown suffered six gunshot wounds and died approximately 30 minutes later at the hospital. LAPD Detective Linda Heitzman processed the crime scene and recovered 10 shell casings.

On June 3, 2008, at approximately 6:55 p.m., LAPD Officer Nicholas Hartman and his partner Officer Prodigalidad, accompanied by Deputy Probation Officer Chon, were patrolling in a black and white police car on 81st Street near Hoover Avenue. Officer Hartman saw appellant walking down the street in the opposite direction. Appellant turned into a courtyard and started walking faster after she looked over her shoulder towards the police car. When the police officers stopped the car to speak with appellant, she sprinted away from them. The officers gave chase and Officer Hartman observed appellant take a blue steel semiautomatic gun with brown grips from her waistband and throw it over a chain-link fence. Appellant was arrested and the gun which had one .40-caliber round in the chamber and 10 in the magazine was retrieved. At the time of her arrest, appellant was wearing a New York Yankees baseball hat commonly worn by the Neighborhood Crips, a rival gang of the Eight Tray Hoovers. Appellant asked Officer Hartman if he liked her "nap" hat.6 Appellant also wore a belt buckle with the letter "H" which stood for "Hoovers." Appellant had three bindles of rock cocaine, a cell phone, and car keys in her pocket. The car keys were for a 1991 Chevy Caprice that was parked close to the area where appellant was detained.

6 "Nap" is a derogatory term used to refer to members of the Neighborhood Crips gang.

LAPD firearm examiner Rafael Garcia determined that the shell casings recovered from the Earles shooting and the shell casings recovered from the Brown shooting were fired from the gun that appellant discarded at the time of her arrest.

The prosecution's gang expert, Officer Hartman, testified he was assigned to the 77th Division Gang Enforcement Detail and was responsible for the Eight Tray Hoovers gang. He explained that a gang member acquires status within the gang by committing crimes, especially violent crimes. It was dangerous for a gang member to be seen by rival gang members in the rival gang's territory. In gang culture, a "mission" involved a plan to commit a crime and then the execution of the plan. Driving into a rival gang's territory and shooting someone would be a typical gang "mission." That type of crime showed the community that the shooter and his or her gang were dangerous and powerful.

The Eight Tray Hoovers gang, also known as the 83rd Hoovers gang, had approximately 200 members and was one of eight active cliques within the larger Hoovers gang. Their primary activities included murders, robberies, narcotic sales, weapons violations, carjackings, burglaries, identity thefts, and shootings. Officer Hartman opined that appellant was a member of the Eight Tray Hoovers based on a number of factors: her admitted membership, her gang tattoos which included "8" on her left tricep and "3rd" on her right tricep, as well as "Fuck" on her right shoulder, and "Napps" on her left shoulder, and the circumstances of the shootings and her arrest.

When asked a hypothetical question based on the facts of this case, Officer Hartman opined that the shootings were committed for the benefit of and in association with a criminal street gang. The shootings benefitted the Eight Tray Hoovers by demonstrating the gang's power over rival gangs and by causing fear and intimidation in the community. The area where Earles was shot was claimed by the Rollin' 100's gang, an affiliate of the Neighborhood Crips, which was a mortal enemy of the Eight Tray Hoovers. The Neighborhood Crips identified with the New York Yankees logo and an Eight Tray Hoovers gang member would wear a New York Yankees baseball cap so the shooter could blend into the surroundings in Rollin' 100's territory. The east side of the street where Brown was shot was claimed by the 65 Menlo Gangster Crips while the west side was claimed by the 67 Neighborhood Crips. Both Crips gangs were allies of each other and rivals of the Eight Tray Hoovers.

No evidence was presented on behalf of appellant.

DISCUSSION

I. Appellant's Motion to Dismiss and Motion to Admit Deported Witness's Statement

A. Contention

Appellant contends that the denial of her motion to dismiss based on the prosecution's alleged failure to immediately notify the defense of a witness's deportation violated her federal constitutional rights to compulsory process and due process by depriving her of the favorable testimony of a material witness. Appellant also contends the court erred in excluding the deported witness's hearsay statement to the police.

B. Background

Attached to appellant's motion for dismissal was a declaration in which defense counsel alleged that on June 11, 2008, Jose Ricardo De Lao told LAPD Detective Bertha Durazo that he witnessed the June 2, 2008 Brown shooting and that it was committed by two African-American men. Defense counsel was appointed on November 19, 2008, and understood that discovery of witnesses' addresses was generally not provided in gang cases until trial. Nevertheless, defense counsel made written requests for De Lao's address on January 12, 2009, March 31, 2009, and again on October 26, 2009. On December 22, 2009, when the defense investigator met with Detective Durazo to interview civilian witnesses, she informed him that De Lao had been deported to Mexico in August 2008. The defense investigator contacted various United States Immigration and Customs Enforcement offices to locate De Lao, but his efforts were unsuccessful.

On April 26, 2011, a hearing on the motion to dismiss was held. Detective Durazo testified that she was aware that Brown told officers at the scene that two females shot him. When she interviewed De Lao on June 11, 2008, he told her that two men committed the murder. De Lao provided his employment and residence information to Detective Durazo. De Lao was not in custody at that time, lived and worked in the area, and gave no indication to Detective Durazo that he intended to move away from the area. The case against appellant was filed in November 2008. On July 8, 2009, when Detective Durazo was serving subpoenas for the preliminary hearing, she learned that De Lao had been taken into custody on a narcotics-related charge and deported. Detective Durazo was not aware of De Lao's immigration status.

After Detective Durazo testified, defense counsel conceded that he had not shown "sufficient misconduct on the part of law enforcement based on the record presented to the court" that warranted dismissal. Defense counsel asked the trial court to permit him to use De Lao's statement at trial because it was reliable hearsay and material to the defense. The trial court denied the motion to dismiss on the ground that appellant had failed to show misconduct by the prosecution or police. With respect to the motion to admit De Lao's statement, defense counsel cited Chambers v. Mississippi (1973) 410 U.S. 284 (Chambers). The trial court found Chambers dealt with a "unique" situation and distinguished it from the "standard" situation presented in this case. De Lao's statement did not fall within any exception to the hearsay rule and the trial court denied the motion to admit it.

C. Analysis

1. Motion to Dismiss

Under the "compulsory process" clauses of the federal and state Constitutions, a defendant has a constitutional right to compel the testimony of a witness who has evidence favorable to the defense. (People v. Jacinto (2010) 49 Cal.4th 263, 268–269.) To prevail on a claim of prosecutorial violation of the right to compulsory process, the defendant must establish that the prosecution engaged in conduct that was entirely unnecessary to the proper performance of its duties, the conduct was a substantial cause of the loss of the witness's testimony, and the defendant must show that the testimony could have been material and favorable to the defense. (In re Martin (1987) 44 Cal.3d 1, 31–32.) When reviewing appellant's claim that her compulsory process rights were violated, we use the standard generally applied to issues involving constitutional rights; i.e., we defer to the trial court's factual findings if supported by substantial evidence, and independently review whether a constitutional violation has occurred. (See People v. Cromer (2001) 24 Cal.4th 889, 894, 900–901; People v. Seijas (2005) 36 Cal.4th 291, 304.)

Appellant's contention fails because substantial evidence supports the trial court's finding that appellant failed to show any prosecutorial misconduct. De Lao's deportation was handled by a federal government agency and Detective Durazo first learned of it in July 2009, 11 months after it had occurred. The discovery laws did not require the prosecution to provide any prosecution witness's address until 30 days before trial. (§§ 1054.1 & 1054.7.) The prosecution informed the defense of De Lao's deportation in December 2009, approximately one year and four months prior to trial. Appellant claims the prosecution's delay in informing the defense was a substantial cause in denying her a meaningful opportunity to locate De Lao. The prosecution played no role in the deportation of De Lao, and appellant has not shown how learning of the deportation five months earlier would have enabled her to locate De Lao.

Furthermore, appellant failed to show that De Lao's testimony was "material and favorable to [her] defense, in ways not merely cumulative to the testimony of available witnesses." (United States v. Valenzuela-Bernal (1982) 458 U.S. 858, 873.) Much of De Lao's statement was consistent with other prosecution testimony. His description of the driver matched appellant's age, his description of the car was similar to appellant's car, and he noted that the driver who shot Brown wore a cap. De Lao identified the assailants as men but this testimony was cumulative to McKeone's pretrial statement to police that a man committed the June 2 shooting and to Grenald's testimony that the voice of the assailant who shouted "Hoovers" sounded male. Additionally, evidence at trial indicated that appellant dressed and looked like a male at the time of the shootings.

Appellant's reliance on People v. Mejia (1976) 57 Cal.App.3d 574 (Mejia), is misplaced. In Mejia, the court upheld dismissal of a felony prosecution when percipient witnesses arrested with defendant were unavailable to testify because they had been released to immigration officials and deported. The court stated at page 580: "Generally speaking the People may select and choose which witnesses they wish to use to prove their case against a defendant. They are not, however, under principles of basic fairness, privileged to control the proceedings by choosing which material witnesses shall, and which shall not, be available to the accused in presenting his defense." As previously noted, the prosecution played no role in De Lao's deportation and Mejia is inapposite.

Appellant asserts that regardless of any lack of bad faith by the prosecution, there was Brady7 error. Appellant cannot establish any element of a Brady claim. She does not assert a typical Brady violation, "involv[ing] the discovery, after trial, of information which had been known to the prosecution but unknown to the defense." (United States v. Agurs (1976) 427 U.S. 97, 103, disapproved on another ground in United States v. Bagley (1985) 473 U.S. 667, 676–683.) Nor does she claim that true impeachment evidence, that is, evidence tending to cast doubt on the credibility of a testifying witness, was withheld. De Lao's deportation does not assist appellant's claim because it did not hurt the prosecution's case or help the defense. (People v. Morrison (2004) 34 Cal.4th 698, 714.) Nor was it material to appellant's defense because it was not reasonably probable that earlier disclosure of the deportation would have caused a different result. Appellant tried unsuccessfully for one year and four months to procure De Lao's presence at trial and did not show how knowing about the deportation five months earlier would have produced a different result.

7 Brady v. Maryland (1963) 373 U.S. 83 (Brady).

2. Motion to Admit De Lao's Statement

We review the trial court's rulings on the admission of evidence for abuse of discretion. (People v. Waidla (2000) 22 Cal.4th 690, 724.) Evidence of out-of-court statements offered to prove the truth of the matter stated is hearsay, but such evidence is admissible if it qualifies under an exception to the hearsay rule. (Evid. Code, § 1200, subd. (a); People v. Lewis (2008) 43 Cal.4th 415, 497.) Appellant does not identify any Evidence Code exception to the hearsay rule that is relevant to this case but argues that "'exceptions to the hearsay rule'" may also be found in "'decisional law.'" Appellant contends that De Lao's statement was reliable and crucial to establish her innocence and should have been admitted pursuant to Chambers, supra, 410 U.S. 284 to preserve her due process right to present a defense.

In Chambers, a defendant in a murder trial called a witness who had previously confessed to the murder. (Chambers, supra, 410 U.S. at p. 294.) After the witness repudiated his confession on the stand, the defendant was denied permission to examine the witness as an adverse witness based on Mississippi's "'voucher' rule" which barred parties from impeaching their own witnesses. (Id. at pp. 294–295.) Mississippi did not recognize an exception to the hearsay rule for statements made against penal interests, thus preventing the defendant from introducing evidence that the witness had made self-incriminating statements to three other people. (Id. at pp. 297–299.) The United States Supreme Court noted that the State of Mississippi had not attempted to defend or explain the rationale for the voucher rule. (Ibid.) The court held that "the exclusion of this critical evidence, coupled with the State's refusal to permit [the defendant] to cross-examine [the witness], denied him a trial in accord with traditional and fundamental standards of due process." (Id. at p. 302.)

In People v. Ayala (2000) 23 Cal.4th 225 (Ayala), the California Supreme Court considered whether the defendant "had either a constitutional or a state law right to present exculpatory but unreliable hearsay evidence that is not admissible under any statutory exception to the hearsay rule." (Id. at p. 266.) The defendant relied on Chambers and argued the trial court had "infringed on various constitutional guaranties when it barred the jury from hearing potentially exculpatory evidence." (Ayala, supra, at p. 269.) Ayala rejected the defendant's argument and held that "'[f]ew rights are more fundamental than that of an accused to present witnesses in his own defense. [Citations.] [But i]n the exercise of this right, the accused, as is required of the State, must comply with established rules of procedure and evidence designed to assure both fairness and reliability in the ascertainment of guilt and innocence.' [Citation.] Thus, '[a] defendant does not have a constitutional right to the admission of unreliable hearsay statements.' [Citations.] Moreover, both we [citation] and the United States Supreme Court [citation] have explained that Chambers is closely tied to the facts and the Mississippi evidence law that it considered. Chambers is not authority for the result defendant urges here." (Ayala, supra, 23 Cal.4th at p. 269.)

Appellant argues that De Lao's statement bears persuasive assurances of trustworthiness and therefore its admission is compelled. But the United States Supreme Court has clarified that Chambers "does not stand for the proposition that the defendant is denied a fair opportunity to defend himself whenever a state or federal rule excludes favorable evidence." (United States v. Scheffer (1998) 523 U.S. 303, 316.) The Court went on to explain that, by its ruling, it was not signaling a diminution in the validity or respect normally accorded to the states regarding their rules of criminal procedure and evidence, but only that, given the unique facts of that case, the court had found the defendant there had been deprived of a fair trial. (Chambers, supra, 410 U.S. at pp. 302–303.) The circumstances of this case did not approach those of Chambers where constitutional rights directly affecting the ascertainment of guilt were implicated. The trial court did not apply the hearsay rule "mechanistically to defeat the ends of justice" (Chambers, supra, 410 U.S. at p. 302) and we find no abuse of discretion.

II. Appellant's Wheeler/Batson Motion

Appellant, who is African-American, contends the prosecutor improperly exercised a peremptory challenge against an African-American prospective juror on the basis of race.

A party violates both the California and United States Constitutions by using peremptory challenges to remove prospective jurors solely on the basis of group bias, i.e., bias presumed from membership in an identifiable racial, religious, ethnic, or similar group. (Wheeler, supra, 22 Cal.3d at pp. 276–277; People v. Lancaster (2007) 41 Cal.4th 50, 74; Batson, supra, 476 U.S. at pp. 96–98.) A party who believes his opponent is doing so must timely object and make a prima facie showing of exclusion on the basis of group bias. (Wheeler, supra, at p. 280.) A prima facie showing requires that the party make as complete a record as possible, show that the persons excluded belong to a cognizable group, and produce evidence sufficient to permit the trial judge to draw an inference that discrimination has occurred. (Lancaster, supra, at p. 74; Johnson v. California (2005) 545 U.S. 162, 170.)

If a prima facie case is shown, the burden shifts to the other party to show that the peremptory challenge was based upon "specific bias," i.e., one related to the case, parties, or witnesses. (Wheeler, supra, 22 Cal.3d at pp. 276, 281–282.) This showing need not rise to the level of a challenge for cause. (Id. at pp. 281–282.) Although a party may exercise a peremptory challenge for any permissible reason or no reason at all, implausible or fantastic justifications are likely to be found to be pretexts for purposeful discrimination. (People v. Huggins (2006) 38 Cal.4th 175, 227 (Huggins); Purkett v. Elem (1995) 514 U.S. 765, 768.) The trial court must then make a sincere and reasoned attempt to evaluate the explanation for each challenged juror in light of the circumstances of the case, trial techniques, examination of prospective jurors, and exercise of challenges. (People v. Fuentes (1991) 54 Cal.3d 707, 718.) It must determine whether a valid reason existed and actually prompted the exercise of each questioned peremptory challenge. (Id. at p. 720.) The proper focus is the subjective genuineness of the nondiscriminatory reasons stated by the prosecutor, not on the objective reasonableness of those reasons. (People v. Reynoso (2003) 31 Cal.4th 903, 924.)

Neither Wheeler nor Batson overturned the traditional rule that peremptory challenges are available against individual jurors whom counsel suspects of bias even for trivial reasons. (People v. Montiel (1993) 5 Cal.4th 877, 910, fn. 9.) "To rebut a race– or group–bias challenge, counsel need only give a nondiscriminatory reason which, under all the circumstances, including logical relevance to the case, appears genuine and thus supports the conclusion that race or group prejudice alone was not the basis for excusing the juror." (Ibid.) "[T]he issue comes down to whether the trial court finds the prosecutor's race-neutral explanations to be credible. Credibility can be measured by, among other factors, the prosecutor's demeanor; by how reasonable, or how improbable, the explanations are; and by whether the proffered rationale has some basis in accepted trial strategy." (Miller-El v. Cockrell (2003) 537 U.S. 322, 339.) "In assessing credibility, the court draws upon its contemporaneous observations of the voir dire. It may also rely on the court's own experiences as a lawyer and bench officer in the community, and even the common practices of the advocate and the office who employs him or her." (People v. Lenix (2008) 44 Cal.4th 602, 613 (Lenix).)

Prospective Juror No. 6 (Juror No. 6) told the court he lived in Ladera Heights, was single, and had no prior jury experience. He was a college student majoring in criminal justice and aspired to work in law enforcement. When he was five or six years old in the mid 1990's, his half-brother was convicted of felony assault. He did not know "too much" about the conviction and it did not affect how he thought about law enforcement. When the court asked if the jurors were familiar with criminal street gangs, Juror No. 6 stated that when he was in high school he knew gangs were "around" and he knew members of African-American gangs at his high school but was not friends with them and did not have any contact with them at the time of trial. He said he could limit himself to the gang evidence presented at trial and not insert his own knowledge of gangs into his decision making in the case. Juror No. 6 became aware of the Eight Tray Hoovers when he was in high school but was not friends with any members of that gang or other gangs that were either affiliated with or enemies of the Eight Tray Hoovers. He said he had been approached or "banged on" once or twice outside of school but nothing happened, and he had never been asked to join a gang. He indicated he understood circumstantial evidence, acknowledged that any witness can possibly lie, and felt he was "an independent person" and would not change his mind about his view of the case even if the other 11 jurors disagreed with him.

The prosecutor exercised his fifth peremptory challenge against Juror No. 6 and defense counsel made a Wheeler/Batson motion stating that Juror No. 6 was "one of the only two African-Americans in the room." The trial court explained that defense counsel was using the wrong standard and asked him to set forth the basis for the motion. The trial court found defense counsel made a prima facie showing with respect to Juror No. 6 and asked the prosecutor to explain why he excused him. The prosecutor replied, "The reason I excused Juror No. 6 is precisely some of the reasons that the defense attorney I guess thought he would be a good juror. He is by far the youngest person in the group. I question whether or not he has enough life experience for a case of this magnitude. He didn't appear to be very mature in the way he answered the questions and the way he responded to questions. The fact that he is a student taking criminal justice classes makes me nervous because I don't know what he's being taught about the law. He also had a half brother who I think had been convicted of an assaultive crime. And the last, but certainly not least, is the fact that he went to high school, was aware of a number of gang members, he even specifically had had contact or had knowledge of Eight Tray Hoovers." The prosecutor concluded that he would have excused Juror No. 6 for any one of those reasons, but especially when considered collectively. The court denied the Wheeler motion finding there was no discriminatory purpose and stated, "The reasons stated by [the prosecutor] are race neutral reasons. And they're supported by the record as given by the statements of the juror in court to answers of questions posed."

Appellant argues the trial court did not make a "sincere and reasoned evaluation of the proffered third step justifications." Because Wheeler motions call upon trial judges' personal observations, we view their rulings with considerable deference, provided that the trial court makes a sincere, reasoned effort to evaluate the justifications offered. (Lenix, supra, 44 Cal.4th at pp. 613–614.) Where deference is due, the trial court's ruling is reviewed for substantial evidence. (Huggins, supra, 38 Cal.4th at p. 227.) In discussing Batson analysis the United States Supreme Court stated, "'"First, a defendant must make a prima facie showing that a peremptory challenge has been exercised on the basis of race[; s]econd, if that showing has been made, the prosecution must offer a race-neutral basis for striking the juror in question[; and t]hird, in light of the parties' submissions, the trial court must determine whether the defendant has shown purposeful discrimination."' [Citations.]" (Snyder v. Louisiana (2008) 552 U.S. 472, 476–477 (Snyder).) Snyder also noted, "The trial court has a pivotal role in evaluating Batson claims. Step three of the Batson inquiry involves an evaluation of the prosecutor's credibility, [citation], and 'the best evidence [of discriminatory intent] often will be the demeanor of the attorney who exercises the challenge,' [citation]." (Id. at p. 477.)

Here, the prosecutor provided a race-neutral reason for excusing Juror No. 6. The trial court evaluated the prosecutor's explanation and found it credible. The important point was the trial court's opinion of the "subjective genuineness" of the nondiscriminatory reasons stated by the prosecutor, "not . . . the objective reasonableness of those reasons." (People v. Reynoso, supra, 31 Cal.4th at p. 924.) A prosecutor's "explanation need not be sufficient to justify a challenge for cause." (People v. Turner (1994) 8 Cal.4th 137, 165, overruled on another point in People v. Griffin (2004) 33 Cal.4th 536, 555, fn. 5.) Even a hunch is sufficient, so long as it is not based on impermissible group bias. (Turner, supra, at p. 165.) What mattered here was not whether the prosecutor articulated a highly persuasive ground for excusing Juror No. 6, but that the ground was race-neutral and the trial court assessed the prosecutor's explanation and concluded it was subjectively genuine. The trial court had the benefit of its contemporaneous observations of both voir dire and the prosecutor's demeanor as he explained his reason for excusing Juror No. 6.

Citing Miller-El v. Dretke (2005) 545 U.S. 231 (Dretke), appellant argues that this court should employ comparative analysis; in other words, to compare Juror No. 6 to jurors who were not excused to determine whether the prosecutor's expressed reasons were pretextual. Dretke does not compel a different result. There, the high court held that if a prosecutor's stated reason for striking a member of a cognizable group applies equally to an "otherwise-similar" juror who is not a member of the cognizable group, then that is "evidence tending to prove purposeful discrimination to be considered on Batson's third step." (Dretke, supra, at p. 241.) Appellant points out that some of the other jurors shared Juror No. 6's familiarity with gangs, or also had family members arrested. However, none of the seated jurors had the same combination of characteristics as Juror No. 6–young and immature, currently enrolled in criminal justice courses, had a relative who was convicted of a violent offense and was familiar with African-American gangs, including appellant's gang. On this record, therefore, appellant's comparative analysis is unreliable and fails to demonstrate purposeful discrimination. The fact that we might reasonably derive an inference of discriminatory intent from a comparative analysis does not mean that a Wheeler/Batson motion was incorrectly denied. (Lenix, supra, 44 Cal.4th at pp. 627–628.) Therefore, a comparative analysis does not compel a conclusion that the trial court erred in accepting the prosecutor's stated reasons for excusing the prospective challenged juror.

III. Appellant's Pitchess Motion

Appellant contends the trial court abused its discretion by denying her Pitchess motion. She asserts she presented a sufficient specific factual scenario to establish a plausible factual foundation for her allegations of police officer misconduct.

Appellant's Pitchess motion referred to the portion of the police report narrating the circumstances of her arrest. The report indicated that LAPD Officers Hartman and Prodigalidad and Probation Officer Chon observed appellant "remove a blue steel handgun with brown wooden grips from her waistband and . . . . throw the handgun over a wall" that was covered with green foliage. The gun was recovered by Officers Hartman and Prodigalidad immediately following appellant's arrest. A declaration signed by defense counsel and attached to appellant's Pitchess motion challenged her connection to the handgun: "She denied having a firearm on her possession to law enforcement. The defendant believes that these officers have lied about seeing her toss this gun. She continues to deny possession of the recovered firearm." The police report attached to appellant's Pitchess motion included details of appellant's postarrest statement in which she stated that while running from the police she tried to discard the "narco" in her possession, but was unable to get it "out of her right pants coin pocket."

The trial court denied the Pitchess motion, stating, "The factual scenario offered by counsel can be characterized as a mere denial." The court noted that the Pitchess motion claimed the police officers lied only about appellant throwing the firearm away. The only factual account of the incident was incorporated in the police report. The police officer's version of events as to how the chase occurred, what was found on appellant, and appellant's explanation why she ran from the police was uncontroverted.

The sole and exclusive means by which citizen complaints against police officers may be obtained are the Pitchess procedures codified in sections 832.7 and 832.8 and Evidence Code sections 1043 and 1045. (Brown v. Valverde (2010) 183 Cal.App.4th 1531, 1539.) A Pitchess motion must include, among other things, an affidavit showing good cause for the discovery sought. (Evid. Code, § 1043, subd. (b)(3); Galindo v. Superior Court (2010) 50 Cal.4th 1, 12.) "To show good cause as required by [Evidence Code] section 1043, [the] declaration in support of a Pitchess motion must propose a defense or defenses to the pending charges" and "articulate how the discovery sought may lead to relevant evidence or may itself be admissible direct or impeachment evidence [citations] that would support those proposed defenses." (Warrick v. Superior Court (2005) 35 Cal.4th 1011, 1024 (Warrick).) The declaration "must also describe a factual scenario supporting the claimed officer misconduct." (Ibid.) The threshold showing of good cause required to obtain Pitchess discovery is "relatively low." (City of Santa Cruz v. Municipal Court (1989) 49 Cal.3d 74, 83, 94.) We review Pitchess orders under the abuse of discretion standard. (People v. Hughes (2002) 27 Cal.4th 287, 330.)

Contrary to appellant's assertion, she did not make a good cause showing by merely denying the relevant specific fact alleged in the officers' report. Because the police report described the actions of Officers Hartman, Prodigalidad, and Probation Officer Chon during the chase and arrest, it was incumbent on appellant to present a specific factual scenario different from the scenario presented in the police report. The officers reported seeing appellant throwing a blue steel handgun over a wall. Appellant denied ever having a gun but did not offer an alternative factual scenario regarding what her specific actions were (e.g., she made no throwing motion at all, she threw some other object over the wall, or she threw some narcotics over the wall, etc.). On appeal, appellant contends that her postarrest statement that she tried to throw away the narcotics in her pocket sufficiently provides an alternative plausible factual scenario to explain the officers' alleged observation of her throwing the handgun. But, this contention has no merit. Appellant stated she was unsuccessful in removing the narcotics from her pants pocket, therefore she never made a throwing motion. Appellant did not allege the officers planted the gun and lied about having seen her throw it. (See People v. Thompson (2006) 141 Cal.App.4th 1312, 1317 (Thompson) [court rejected defendant's explanation because it did not present a factual account of the scope of the alleged police misconduct].) Because appellant's Pitchess motion was, as the trial court concluded, simply a denial of the officers' report when she could and should have instead presented a specific, plausible, alternative factual scenario of officer misconduct, she did not make the good cause showing required for an in camera review of documents. (Warrick, supra, 35 Cal.4th at pp. 1023–1026.) Appellant did not present a specific factual scenario of police officer misconduct that might or could have occurred and was both internally consistent with and supportive of her defense. (Id. at p. 1026.)

Appellant contends the trial court misapplied "Warrick and its progeny" and improperly required appellant's factual scenario to be credible rather than plausible. The circumstances in this case are not of the type referred to in Warrick for which a mere denial of the officer's report may suffice. (Warrick, supra, 35 Cal.4th at pp. 1024–1025.) As the trial court noted, this case is similar to Thompson, in which the defendant was required to do more than merely deny the officer's report. In Thompson, the defendant was standing near a street and sold cocaine base to an undercover police officer who gave him two marked $5 bills. (Thompson, supra, 141 Cal.App.4th at p. 1315.) Fellow "buy" team officers heard and saw the exchange and then other uniformed officers arrested the defendant after the transaction was complete and found the marked bills on the defendant. (Ibid.) In his Pitchess motion, the defendant asserted the officers planted evidence, acted dishonestly, and committed other misconduct. (Thompson, supra, at p. 1317.) The supporting declaration of his counsel stated that "'the officers did not recover any buy money from the defendant, nor did the defendant offer and sell drugs to the undercover officer.' The 'officers saw defendant and arrested him because he was in an area where they were doing arrests.' When 'defendant was stopped by the police and once they realized he had a prior criminal history they fabricated the alleged events and used narcotics already in their possession and attributed these drugs to the defendant.' The charges 'are a fabrication manufactured by the officers to avoid any type of liability for their mishandling of the situation and to punish the defendant for being in the wrong area, at the wrong time and for having a prior criminal history. . . ."' (Ibid.) Thompson concluded the defendant's showing was insufficient because it was not internally consistent or complete. (Ibid.) The defendant "simply denied the elements of the offense charged." (Ibid.)

Because appellant, like the defendant in Thompson, did not provide an alternate version of the facts regarding her actions during the crucial event reported by the police officers (i.e., throwing a handgun over the wall) and did not otherwise dispute any other fact set forth in those reports, we, like Thompson, conclude appellant did not present a sufficient specific factual scenario of officer misconduct that was plausible considering the officer's report. (Thompson, supra, 141 Cal.App.4th at p. 1316; Warrick, supra, 35 Cal.4th at p. 1025.) In our view, appellant has not set forth a proposed defense, established a plausible factual foundation for the alleged officer misconduct, or articulated a valid theory as to how the requested information might be admissible at trial. Given the foregoing circumstances, appellant was not entitled to have the trial court review the requested records in camera to determine what information, if any, should be disclosed. (People v. Gaines (2009) 46 Cal.4th 172, 178–179.) The trial court did not abuse its discretion by denying appellant's Pitchess motion.

DISPOSITION

The judgment is affirmed.

NOT TO BE PUBLISHED IN THE OFFICIAL REPORTS.

_____________________, J.*
FERNS

We concur:

____________________________, P. J.
BOREN

____________________________, J.
ASHMANN-GERST

* Judge of the Los Angeles Superior Court, assigned by the Chief Justice pursuant to article VI, section 6 of the California Constitution.
Low
[ 0.514806378132118, 28.25, 26.625 ]
Q: Why does the variable echo "ECHO is off."?

Why do I get the output "ECHO is off." when I enter "test"? The variable userpw1 should hold it, but it just displays ECHO is off and the if statement won't work.

@echo off
cls
set /p userid1=
::----- user password -----
:liuserpw
cls
color 0a
echo =======================================
echo Hi
echo =======================================
echo.
echo ==== Now enter your Password ====
echo.
set /p %userpw1%=
set /p userpw2=<Users\password\%userid1%.txt
echo Loading file [OK]
echo File response [%userpw2%]
echo %userid1%
echo %userpw1%
:: Here it displays ECHO is OFF
echo.
pause
if %userpw1% == %userpw2% goto 2
:2
echo this is now working
pause

A: You mistyped: set /p %userpw1%= should be set /p userpw1=. The %...% form expands the (still empty) variable first, so no variable named userpw1 is ever set. Because userpw1 stays undefined, echo %userpw1% expands to plain echo with no parameters, and echo without parameters just reports the current echo state - hence "ECHO is off.".
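A follow-up hardening note, not part of the accepted answer: quoting both sides of the comparison keeps the if statement valid even when a variable is empty or contains spaces. A minimal sketch of the corrected flow, reusing the question's variable names:

@echo off
set /p userid1=Enter your user id:
set /p userpw1=Enter your password:
set /p userpw2=<Users\password\%userid1%.txt
rem Quoting both sides keeps the comparison well-formed for empty input
if "%userpw1%"=="%userpw2%" (
    echo this is now working
) else (
    echo Password mismatch
)
pause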
Mid
[ 0.616867469879518, 32, 19.875 ]
Mickie stumbles into the bar late at night and runs into the local coroner. Introducing Clay Knight, the Fairview police department’s coroner! Death in the Afternoon, also called the Hemingway or the Hemingway Champagne, is a cocktail made up of absinthe and Champagne invented by Ernest Hemingway. The cocktail shares a name with Hemingway’s book Death in the Afternoon. Death in the Afternoon is known for both its decadence and its high strength. Click on the image below to view the recipe for the drink! Let me know what you think in the comments below! REDDIT: We’re on reddit! Upvotes on Reddit are more than welcome if you enjoyed the comic/punchline.
Low
[ 0.459349593495934, 28.25, 33.25 ]
Spanish tech star files for bankruptcy Spanish wireless networks provider Gowex, at the heart of an accounting fraud scandal that has hit Spain's reputation among investors, has started insolvency proceedings, say lawyers. David Pollard reports
Low
[ 0.441758241758241, 25.125, 31.75 ]
Simplify ((o/(o/o**(-18)))/o*o)/(o/((o*o**6/o)/o)) assuming o is positive. o**(-14) Simplify (l*l**0*l**(-8))/(l/((l/l**(-1/2))/l))**21 assuming l is positive. l**(-35/2) Simplify (l**0)**(-1/10)/(l/l**0*l)**(6/11) assuming l is positive. l**(-12/11) Simplify ((z**(-9)/z)/z**0)/(((z*z**(6/7))/z*z*z)/((z/(z**(-5)/z))/z)) assuming z is positive. z**(-48/7) Simplify j**(-2)/j**(-2/17) assuming j is positive. j**(-32/17) Simplify (l**(-2))**(-22/7)/(l**0)**(-43) assuming l is positive. l**(44/7) Simplify c*c**0*c*c**(-4/5) assuming c is positive. c**(6/5) Simplify ((g*g**3)/(g/(g/(g*g**5/g*g))))/(g*g/(g*g**(-5))*g*g*g/g**0) assuming g is positive. g**(-11) Simplify z**39/(z*z*z**(-1/51)) assuming z is positive. z**(1888/51) Simplify ((((t**(-5)*t)/t)/t)/t**(-8))**(-41/5) assuming t is positive. t**(-82/5) Simplify (t**14*t**(-7))**26 assuming t is positive. t**182 Simplify (g**(-8)/g**(-1/37))**(-40) assuming g is positive. g**(11800/37) Simplify (v**0/v*v**3*v*v)**50 assuming v is positive. v**200 Simplify (m/(m**(-11)*m)*m)**(-10) assuming m is positive. m**(-120) Simplify (y/(y*y**(-2/13)*y)*y)**(-2/9) assuming y is positive. y**(-4/117) Simplify (z/(z/(z**(4/3)/z)*z*z))**(22/7) assuming z is positive. z**(-110/21) Simplify ((f/(f**(-9/4)/f)*f)/f)**(-11/7) assuming f is positive. f**(-187/28) Simplify ((s**(2/15)*s)/s**(-1/2))**(3/37) assuming s is positive. s**(49/370) Simplify f**(-21)/f**(-2/5) assuming f is positive. f**(-103/5) Simplify (x/(x/(x*x/x**(-1/4)*x*x)))**17/((x/(x/(x/(x**4*x)*x)))/(x*x**4)) assuming x is positive. x**(321/4) Simplify ((f*f*f*f**(-8)*f)/f**(-5))/(f**(2/5)/f)**34 assuming f is positive. f**(107/5) Simplify (t**(1/7)/t)/(t/(t**(-2/9)/t)) assuming t is positive. t**(-194/63) Simplify (x*x*x**0/x)/x**(-4/5) assuming x is positive. x**(9/5) Simplify (y**(1/2)/y)**(-2/31) assuming y is positive. y**(1/31) Simplify ((j/(j*(j*j**0)/j*j)*j)/(j*j*j**1))**(-15) assuming j is positive. j**45 Simplify (b**(-5))**(-7) assuming b is positive. b**35 Simplify (h*h**(3/7)*h**(1/7))/(h**(-1/2)*h)**(3/22) assuming h is positive. h**(463/308) Simplify (r**3*r**(1/2)*r)**(-5/6) assuming r is positive. r**(-15/4) Simplify q*q**(-33)*q*q**(7/10) assuming q is positive. q**(-303/10) Simplify x/x**6*x**4*(x**(-4/3)*x*x)/(x**(-2)*x) assuming x is positive. x**(2/3) Simplify n**16/n*n**(-27) assuming n is positive. n**(-12) Simplify o/(o*o**6/o)*o**(-5) assuming o is positive. o**(-10) Simplify (p*p**0/p*p*p)/p*p*p/p**(5/4)*p assuming p is positive. p**(11/4) Simplify (d*d*d**(-1))**(-1/45)*(d**(2/5)/d)**(-7/6) assuming d is positive. d**(61/90) Simplify (a**(2/13)*a**3)**(-4) assuming a is positive. a**(-164/13) Simplify (f*f*f**(1/9)*f)/(f**(2/21)/(f/((f/(f**(2/5)/f*f))/f))) assuming f is positive. f**(1706/315) Simplify ((c**(1/3))**(-2/21))**26 assuming c is positive. c**(-52/63) Simplify (l**(-1/3))**(1/4)/(l**(-1))**(-27) assuming l is positive. l**(-325/12) Simplify ((f/(f**8/f))/f*f)**0 assuming f is positive. 1 Simplify (t**(-2/25)/t)/t**(3/7) assuming t is positive. t**(-264/175) Simplify z**19/((z*z/(z*z**5))/z*z*z) assuming z is positive. z**22 Simplify ((n*n/(n*n**(1/3)))/(n*n/(n*n/(n*n**1*n*n*n)*n)))/((n/n**(3/4))/(n*n**2)) assuming n is positive. n**(-7/12) Simplify ((k/(k/((k*k**(-3/4)*k)/k)))**(12/13))**(-7/10) assuming k is positive. k**(-21/130) Simplify (n*n**(-4)/n)/n**(-2/9) assuming n is positive. n**(-34/9) Simplify (v/(v*v*v**9)*v*v/((v*v*v**(2/3))/v*v*v))**(-1/8) assuming v is positive. v**(35/24) Simplify (o**(-12/7))**(-25) assuming o is positive.
o**(300/7) Simplify ((t/(t*t*t*t/(((((t*t**(1/5)*t*t)/t)/t)/t)/t)))/((t*t**(-2/7))/t))/(t/((t**1/t)/t*t)*t**(-4/9)) assuming t is positive. t**(-1282/315) Simplify (n/(n*n**(-1/3)))/((n/n**9*n)/n) assuming n is positive. n**(25/3) Simplify (k/k**6)**12 assuming k is positive. k**(-60) Simplify (d**(-4/17))**(2/7) assuming d is positive. d**(-8/119) Simplify (((v/(v*v/(v**5/v)))/v)/v**(-8))/(v**(-2/9)/(v/((((v/v**6)/v)/v*v*v)/v))) assuming v is positive. v**(155/9) Simplify n/(n/(n**(-4)*n))*n*n/n**(6/7) assuming n is positive. n**(-13/7) Simplify ((q/(q**(-4)*q*q))**5)**(1/11) assuming q is positive. q**(15/11) Simplify (k/((k*k**5/k*k)/k)*k*k)/(k**1*k)*(k**(-1/3))**(-10/9) assuming k is positive. k**(-98/27) Simplify q*q**(1/28)*q/(q/q**18) assuming q is positive. q**(533/28) Simplify (i**14*i)/(i/i**(-8/11)) assuming i is positive. i**(146/11) Simplify (k**(1/13)/k)**(-19) assuming k is positive. k**(228/13) Simplify ((j*j/j**1*j)/j)**(-35) assuming j is positive. j**(-35) Simplify (x**(-10)*x)/x*x**7 assuming x is positive. x**(-3) Simplify (j/(j/(j*j**(1/5)))*j)/(j*j**14) assuming j is positive. j**(-64/5) Simplify (j**(-3)*j*j*j**9*j)/(j*j**(1/4))**(-5) assuming j is positive. j**(61/4) Simplify ((r/((r**(4/9)/r)/r)*r*r)/r*r*r*(r/r**(2/15))/r)/(r/(r**(-1/2)/r))**(-22) assuming r is positive. r**(2719/45) Simplify ((r/r**(1/3))/r**6)**(4/27) assuming r is positive. r**(-64/81) Simplify (i**(-5)*i)/(i**(2/7)*i)*(i/(i/(i**(-1)/i))*i)**(-18/13) assuming i is positive. i**(-355/91) Simplify g**(-3/16)/(g*g/(g/g**(-1/35))) assuming g is positive. g**(-649/560) Simplify u*u**(3/8)*u/(u*u**(-40)) assuming u is positive. u**(331/8) Simplify (s**(-2/3)*s)**26 assuming s is positive. s**(26/3) Simplify g/(g/g**36)*g*g**(-12) assuming g is positive. g**25 Simplify z*z**(1/4)*z*z**(-5)*((z**(-1/3)*z)/z)/(z/(z*z**2*z*z)) assuming z is positive. z**(11/12) Simplify (((g/(g**(-4)/g))/g)/g)/(g*g**(-2)) assuming g is positive. g**5 Simplify ((o/o**(-3))**(-21))**(3/2) assuming o is positive. o**(-126) Simplify (t**(-3/16)*t**17)**(2/3) assuming t is positive. t**(269/24) Simplify c**(-1/34)*(c/c**(-5/9))/c assuming c is positive. c**(161/306) Simplify (x**0)**(1/8)/(x**4/(x*x**(2/3)*x)) assuming x is positive. x**(-4/3) Simplify ((j**(1/4))**(-9/4))**(1/43) assuming j is positive. j**(-9/688) Simplify (u**(-1/3))**(14/3) assuming u is positive. u**(-14/9) Simplify ((q**(-2/7))**(2/15))**(3/29) assuming q is positive. q**(-4/1015) Simplify ((k/(k**(-1/3)*k))/(k/(k*k**(-3/7))))**(15/2) assuming k is positive. k**(-5/7) Simplify (o**(1/14))**28 assuming o is positive. o**2 Simplify (w**(-5))**(-49) assuming w is positive. w**245 Simplify (q**(-1/2))**9/(((q**(-1/3)*q)/q)/(q/q**(-3/2))) assuming q is positive. q**(-5/3) Simplify (c**(-2/47)/c)/(c*c*c**(-1/4)*c) assuming c is positive. c**(-713/188) Simplify (f/(f/f**(-1/8)))/(f*f*f/(f**3*f))*(f**2)**(-39) assuming f is positive. f**(-617/8) Simplify ((m/(m**(-2/5)*m))**(-10))**29 assuming m is positive. m**(-116) Simplify (q/(q*(q/(q**1/q))/q))/q**(-3/10)*(q**(2/7))**(-44) assuming q is positive. q**(-859/70) Simplify (w**(-6/5)/w**(4/7))/(w*(w/(w/w**(-1/3)))/w)**31 assuming w is positive. w**(899/105) Simplify d**18/d**(2/15) assuming d is positive. d**(268/15) Simplify ((j*j**6*j)/j**(-5))/(j**1)**24 assuming j is positive. j**(-11) Simplify ((i*i/i**16)/i)/(i*i*i**17) assuming i is positive. i**(-34) Simplify l**(-3/2)*l*l/(l/(l/l**1)*l)*(l/l**(-6/7))/((l**(-6)/l)/l) assuming l is positive. 
l**(117/14) Simplify ((p/((p/p**1)/p))**(1/9))**(-48) assuming p is positive. p**(-32/3) Simplify v**16*v**4 assuming v is positive. v**20 Simplify ((n**(-2/19)*n*n)/(n/(n**(-12)/n)))**37 assuming n is positive. n**(-8510/19) Simplify ((m**(3/5))**(12/7))**(-18) assuming m is positive. m**(-648/35) Simplify b**(-4)*b**(4/9) assuming b is positive. b**(-32/9) Simplify c**(1/4)*c*c**(2/9) assuming c is positive. c**(53/36) Simplify ((m*m**(1/5))/(m**(1/4)/m))/((m*m**(1/2)*m)/(m*m**(-4/3))) assuming m is positive. m**(-53/60) Simplify (w*w/w**(1/31)*w*w)/(w/(w*w/(w*w*w/w**7))*w) assuming w is positive. w**(247/31) Simplify (t/(t/(t/t**6)))**(10/13) assuming t is positive. t**(-50/13) Simplify s**(2/7)*s/(s*s/s**(-11)*s*s) assuming s is positive. s**(-96/7) Simplify (i*i/((i*i/(i/(i**6*i)))/i)*i)/i**20 assuming i is positive. i**(-24) Simplify ((o/o**(-5/2))/o*o**(-1))/(o/(o/(o/o**(-3)))*o*o/(o*o/o**(1/4))) assuming o is positive. o**(-11/4) Simplify (o**(-1))**(3/11)/(o**8/(((o*o**(5/2)*o)/o*o)/o*o)) assuming o is positive. o**(-83/22) Simplify s**(3/13)/((s*(s*s**(-17)*s)/s)/s*s) assuming s is positive. s**(198/13) Simplify (k/k**(-8/11))/k**(1/5) assuming k is positive. k**(84/55)
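Any of the identities above can be spot-checked mechanically. A minimal sketch in Python using sympy (assumed available, and not part of the original exercise set); the positive=True flag mirrors the "assuming ... is positive" condition:

from sympy import symbols, Rational

l = symbols('l', positive=True)  # positivity lets sympy combine fractional powers
expr = (l * l**0 * l**(-8)) / (l / ((l / l**Rational(-1, 2)) / l))**21
print(expr)  # prints l**(-35/2), matching the answer given above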
Low
[ 0.525798525798525, 26.75, 24.125 ]
---
layout: 'default'
hljs: 'light'
component: 'confirm'
prop: 'labels'
propType: 'p'
label: '{ Object }'
---
<section class="blue">
    <div class="content">
        <div class="grid two">
            <div class="column">
                <h1>Confirm Dialog</h1>
                A confirm dialog is often used if you want the user to verify or accept something. When a confirm dialog pops up, the user will have to click either "OK" or "Cancel" to proceed.
            </div>
            <div class="right column">
                <%- @partial('ad') %>
            </div>
        </div>
    </div>
</section>
<section class="lic">
    <div class="content">
        Looking for a commercial license? Keep your source code proprietary and <a href="https://www.uplabs.com/posts/alertifyjs" target="_blank">Buy a Commercial License Today!</a>
    </div>
</section>
<section class="dark">
    <div class="content">
        <!--Settings-->
        <div class="segment has-menu">
            <%- @partial('nomotion') %>
            <%- @partial('segment',false, @getDataItem(@document.component, @document.prop)) %>
        </div>
        <!--//Settings-->
        <%- @partial('menu', true, @getDataItem(@document.component, @document.prop)) %>
    </div>
</section>
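The page above documents the confirm dialog's labels property. For context, a usage sketch assuming the standard AlertifyJS dialog API; the title, message, and label strings are arbitrary examples, not taken from this page:

// Override the confirm dialog's button labels via the 'labels' property.
alertify.confirm(
    'Delete item',
    'This cannot be undone. Continue?',
    function () { alertify.success('Deleted'); },   // OK callback
    function () { alertify.error('Cancelled'); }    // Cancel callback
).set('labels', { ok: 'Yes, delete', cancel: 'Keep it' });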
Low
[ 0.49230769230769206, 20, 20.625 ]
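The row above is the source of an AlertifyJS documentation page for the confirm dialog, specifically its `labels` property (front matter: component 'confirm', prop 'labels'). A minimal usage sketch follows, assuming AlertifyJS is loaded on the page as a global; the dialog text and the "Accept"/"Deny" strings are placeholders, and the { ok, cancel } shape follows the library's labels setting, so verify it against the AlertifyJS version you actually ship:

```typescript
// Sketch only: assumes the AlertifyJS script and CSS are already on the page,
// so `alertify` exists as a browser global (declared loosely here for TypeScript).
declare const alertify: any;

alertify
  .confirm(
    "Confirm Dialog",                 // title (placeholder)
    "Proceed with this action?",      // message (placeholder)
    () => alertify.success("OK"),     // runs when the user accepts
    () => alertify.error("Cancel")    // runs when the user cancels
  )
  // The `labels` property this page documents: rename the two buttons.
  .set("labels", { ok: "Accept", cancel: "Deny" });
```

Without the .set(...) call, the buttons keep the default "OK"/"Cancel" labels described in the page copy.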
{ "type": "slimespawnedinvuln", "shortdescription": "Slime", "description": "It's all sticky and warm...ew.", "categories": ["slime"], "parts": ["body"], "animation": "slime.animation", "dropPools": [], "baseParameters": { "scripts": [ "/monsters/monster.lua", "/monsters/fu_bossExtraResistsHandler.lua", "/monsters/fu_monsterPercentRegen.lua" ], "bossExtraResistsValue": 100, "bossExtraResistsOverride": true, "fuMonsterRegenPercent": -15, "fuMonsterRegenUseSeconds":true, "behavior": "monster", "deathBehavior": "monster-death", "behaviorConfig": { "damageOnTouch": true, "targetQueryRange": 20, "targetOnDamage": true, "keepTargetInSight": true, "keepTargetInRange": 50, "targetOutOfSightTime": 5.0, "foundTargetActions": [{ "name": "action-aggrohop" }], "hostileActions": [{ "name": "action-hop", "cooldown": 0.0, "parameters": { "verticalSpeed": 20, "horizontalSpeed": 10, "hopSequence": 1, "timeBetweenHops": 0.0, "windupTime": 0.1, "landTime": 0.1, "hopAwayFromWall": false, "wallVerticalSpeed": 35 } }], "periodicActions": [{ "name": "action-hop", "cooldown": 0.0, "parameters": { "verticalSpeed": 25, "horizontalSpeed": 10, "hopSequence": 3, "timeBetweenHops": 0.25, "hopAwayFromWall": true } }], "followActions": [{ "name": "approach-teleport", "parameters": {} }, { "name": "action-hop", "cooldown": 0.0, "parameters": { "verticalSpeed": 20, "horizontalSpeed": 10, "hopSequence": 1, "timeBetweenHops": 0.0, "windupTime": 0.1, "landTime": 0.1, "hopAwayFromWall": false, "wallVerticalSpeed": 35 } } ], "deathActions": [{ "name": "action-spawnmonster", "parameters": { "offset": [0.5, 0], "monsterType": "microslimespawnedinvuln", "replacement": true } }, { "name": "action-spawnmonster", "parameters": { "offset": [-0.5, 0], "monsterType": "microslimespawnedinvuln", "replacement": true } } ] }, "touchDamage": { "poly": [ [-0.6875, -0.375], [-0.4375, -0.625], [0.4375, -0.625], [0.6875, -0.375], [0.6875, 0.25], [0.4375, 0.5], [-0.4375, 0.5], [-0.6875, 0.25] ], "damage": 10, "teamType": "enemy", "damageSourceKind": "lash", "knockback": 20, "statusEffects": [] }, "metaBoundBox": [-4, -4, 4, 4], "scale": 1.0, "movementSettings": { "collisionPoly": [ [-0.6875, -0.375], [-0.4375, -0.625], [0.4375, -0.625], [0.6875, -0.375], [0.6875, 0.25], [0.4375, 0.5], [-0.4375, 0.5], [-0.6875, 0.25] ], "mass": 1.0, "walkSpeed": 5, "runSpeed": 5, "jumpSpeed": 5 }, "bodyMaterialKind": "organic", "knockoutTime": 0.3, "knockoutAnimationStates": { "damage": "stunned" }, "deathParticles": "deathPoof", "knockoutEffect": "", "statusSettings": { "statusProperties": { "targetMaterialKind": "organic" }, "appliesEnvironmentStatusEffects": false, "appliesWeatherStatusEffects": true, "minimumLiquidStatusEffectPercentage": 0.1, "primaryScriptSources": [ "/stats/monster_primary.lua" ], "primaryScriptDelta": 5, "stats": { "grit": { "baseValue": 1.0 }, "maxHealth": { "baseValue": 12 }, "protection": { "baseValue": 0.2 }, "healthRegen": { "baseValue": 0.0 }, "poisonStatusImmunity": { "baseValue": 1.0 }, "slimestickImmunity": { "baseValue": 1.0 }, "slimeImmunity": { "baseValue": 1.0 }, "blacktarImmunity": { "baseValue": 1.0 }, "lavaImmunity": { "baseValue": 1.0 }, "tarImmunity": { "baseValue": 1.0 }, "powerMultiplier": { "baseValue": 1.0 } }, "resources": { "stunned": { "deltaValue": -1.0, "initialValue": 0.0 }, "health": { "maxStat": "maxHealth", "deltaStat": "healthRegen", "defaultPercentage": 100 } } }, "mouthOffset": [0, 0], "feetOffset": [0, -8], "capturable": false, "captureHealthFraction": 0.01, "nametagColor": [64, 200, 255], 
"captureCollectables": { "fu_monster": "slime" } } }
Low
[ 0.526901669758812, 35.5, 31.875 ]
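The row above is a Starbound-style monster definition in JSON, and its gameplay-relevant numbers sit several levels deep. The sketch below reads them back out; the filename and the Node.js runtime are illustration-only assumptions, not anything the config specifies:

```typescript
// Minimal sketch: load the monster config and summarize its key fields.
// Assumes Node.js and that the JSON above is saved as "slimespawnedinvuln.monster".
import { readFileSync } from "fs";

const cfg = JSON.parse(readFileSync("slimespawnedinvuln.monster", "utf8"));
const base = cfg.baseParameters;

console.log(cfg.type);                                       // "slimespawnedinvuln"
console.log(base.statusSettings.stats.maxHealth.baseValue);  // 12
console.log(base.touchDamage.damage);                        // 10

// On death the slime splits: each deathAction spawns a replacement monster.
const spawns = base.behaviorConfig.deathActions.map(
  (a: { parameters: { monsterType: string } }) => a.parameters.monsterType
);
console.log(spawns); // ["microslimespawnedinvuln", "microslimespawnedinvuln"]
```

The negative fuMonsterRegenPercent (-15, applied per second given fuMonsterRegenUseSeconds) suggests the spawned slime decays rather than regenerates, though that behavior lives in the referenced Lua scripts, not in this file.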
GRFCfansDonoavan.jpg Grand Rapids FC fans are hoping to persuade Landon Donovan to host a camp in Grand Rapids. (MLive file (left), AP file (right)) The fan base for Grand Rapids FC isn't shy about pursuing a visit here from soccer star Landon Donovan. A couple of Twitter followers trolled Donovan before Tuesday's Women's World Cup soccer match against Germany, and ended up with a nice exchange with the legendary U.S. soccer standout. "I just saw a tweet that he was looking for suggestions as to where he should bring a soccer camp in the coming years," said Mark De Haan, a recent Calvin College grad who had the initial tweet. "To me, it seemed like Landon was looking to bring the camp to a strong soccer community, which Grand Rapids has shown that it has. So, I just thought I'd send a tweet at him about GRFC and the strong community support. I didn't expect him to respond as enthusiastically as he did, but that sums up the response to GRFC in general, it has been more than anyone really expected." We'll keep you posted if the efforts result in a Donovan camp coming to the area. @landondonovan Come to Grand Rapids, MI. First year minor league club @grandrapidsfc has 4000+ fans at home games! http://t.co/UpOjVzBZdf — Mark De Haan (@markdh92) June 30, 2015 @landondonovan @markdh92 @grandrapidsfc We'll even let you be the Grand Marshal of the Grand Army march from the tailgate to Houseman field! — Gretchen Starr (@soccerstarr17) June 30, 2015 Grand Rapids FC, which has been drawing crowds between 4,000 and 5,000, is home 7:30 p.m. Friday at Houseman Field against RWB Adria from Chicago. Pete Wallner covers sports for MLive/Grand Rapids Press. Email him at [email protected] or follow him on Twitter, Facebook or Google+.
Mid
[ 0.5793650793650791, 36.5, 26.5 ]
703 F.2d 574 Coombs v. Merit Systems Protection Bd. 82-7182 UNITED STATES COURT OF APPEALS Ninth Circuit 2/16/83 1 M.S.P.B. PETITION FOR REVIEW DISMISSED
Low
[ 0.41573033707865104, 23.125, 32.5 ]
<?php class EpiTemplate { /** * EpiTemplate::display('/path/to/template.php', $array); * @name display * @author Jaisen Mathai <[email protected]> * @param string $template * @param array $vars * @method display * @static method */ public function display($template = null, $vars = null) { $templateInclude = Epi::getPath('view') . '/' . $template; if(is_file($templateInclude)) { if(is_array($vars)) { extract($vars); } include $templateInclude; } else { EpiException::raise(new EpiException("Could not load template: {$templateInclude}", 404)); } } /** * EpiTemplate::get('/path/to/template.php', $array); * @name get * @author Jaisen Mathai <[email protected]> * @param string $template * @param array $vars * @method get * @static method */ public function get($template = null, $vars = null) { $templateInclude = Epi::getPath('view') . '/' . $template; if(is_file($templateInclude)) { if(is_array($vars)) { extract($vars); } ob_start(); include $templateInclude; $contents = ob_get_contents(); ob_end_clean(); return $contents; } else { EpiException::raise(new EpiException("Could not load template: {$templateInclude}", 404)); } } /** * EpiTemplate::json($variable); * @name json * @author Jaisen Mathai <[email protected]> * @param mixed $data * @return string * @method json * @static method */ public function json($data) { if($retval = json_encode($data)) { return $retval; } else { $dataDump = var_export($data, 1); EpiException::raise(new EpiException("json_encode failed for {$dataDump}", 404)); } } /** * EpiTemplate::jsonResponse($variable); * This method echoes JSON data in the header and to the screen and returns. * @name jsonResponse * @author Jaisen Mathai <[email protected]> * @param mixed $data * @method jsonResponse * @static method */ public function jsonResponse($data) { $json = self::json($data); header('X-JSON: (' . $json . ')'); header('Content-type: application/x-json'); echo $json; } } function getTemplate() { static $template; if($template) return $template; $template = new EpiTemplate(); return $template; }
Mid
[ 0.565905096660808, 40.25, 30.875 ]
WASHINGTON, May 15, 2013 /PRNewswire-USNewswire/ -- Human cloning for any purpose is inconsistent with the moral responsibility to "treat each member of the human family as a unique gift of God, as a person with his or her own inherent dignity," said the chairman of the Committee on Pro-Life Activities of the U.S. Conference of Catholic Bishops (USCCB). "Creating new human lives in the laboratory solely to destroy them is an abuse denounced even by many who do not share the Catholic Church's convictions on human life," said Cardinal Sean O'Malley, OFM Cap., of Boston. He said this way of making embryos will also be taken up by people who want to produce cloned children as "copies" of other people. "Whether used for one purpose or the other, human cloning treats human beings as products, manufactured to order to suit other people's wishes." He added, "A technical advance in human cloning is not progress for humanity but its opposite." Cardinal O'Malley's statement responded to the news May 15 that researchers in Oregon have succeeded in producing cloned human embryos and obtained their embryonic stem cells. He added that the researchers' goal of producing genetically matched stem cells for research and possible therapies is already being addressed by scientific advances that do not pose the same moral problems. The news that researchers have developed a technique for human cloning is deeply troubling on many levels. Over 120 human embryos were created and destroyed, to produce six embryonic stem cell lines. Creating the embryos involved subjecting healthy women to procedures that put their health and fertility at risk. And the researchers' alleged goal, producing genetically matched stem cells for research and possible therapies, is already being addressed by scientific advances that do not pose these grave moral wrongs. Creating new human lives in the laboratory solely to destroy them is an abuse denounced even by many who do not share the Catholic Church's convictions on human life. Also, this means of making embryos for research will be taken up by those who want to produce cloned children as "copies" of other people. Whether used for one purpose or the other, human cloning treats human beings as products, manufactured to order to suit other people's wishes. It is inconsistent with our moral responsibility to treat each member of the human family as a unique gift of God, as a person with his or her own inherent dignity. A technical advance in human cloning is not progress for humanity but its opposite.
Low
[ 0.533613445378151, 31.75, 27.75 ]