text
stringlengths 8
5.74M
| label
stringclasses 3
values | educational_prob
sequencelengths 3
3
|
---|---|---|
Slot machine games On FunnyGames.us you'll find the best collection of Slot Machine games! You'll find no less than 103 different Slot Machine games, such as Bells on Fire & The Fruits Slot Machine. You feeling lucky today? Don't let that feeling pass! Take a seat and enjoy the ride. Press the handle and make sure you hit those 'wilds'. What's that now? Did you just hit the jackpot? Wow we are dealing with a high roller here. | Low | [
0.29443690637720404,
13.5625,
32.5
] |
The background of the invention will be set forth in two parts. 1. Field of the Invention This invention relates to delay lines, and more particularly to bulk acoustic wave delay lines. 2. Description of the Prior Art The usefulness of propagating elastic wave energy in solids has been known for many years. Utilizing this technology, such devices which store and delay signals have been developed to a relatively high degree. Many texts are presently available which thoroughly describe the history and advancements of this art, such as, for example, "Ultrasonic Methods in Solid State Physics" Rohn Truell, Charles Elbaum and Bruce B. Chick, Academic Press, 1969. Probably the greatest interest in the field of bulk wave devices has been in bulk acoustic wave delay lines. Unlike surface acoustic wave delay lines in which most of the energy propagating along an elastic surface is converted to electromagnetic wave energy upon reaching a state-of-the-art transducer, only about 10% of the propagating bulk wave energy is converted at an output transducer, the rest being reflected back toward the input transducer. This relatively strong reflected wave is again reflected at the input transducer and is incident on the output transducer to produce a relatively strong signal known generally as the triple-transit signal. Although there was at first much interest in bulk acoustic wave devices because they are more adaptable for operation in the multi-gigahertz range as compared to surface acoustic wave devices (usually limited to about 500 MHz), the problem of the triple-transit signal has caused a decrease in such interest. In attempts to overcome spurious multiple transit signal problems resulting from reflections from the crystal end faces, it has been found that these unwanted signals are attenuated or suppressed through careful design utilizing several effects: A. attentuation -- if the main signal is attenuated .alpha..tau. dB, then the triple transit signal is attenuated an additional 2.alpha..tau. dB. PA1 B. diffraction loss due - to spreading. PA1 C. tilting the end faces of the crystal to cause phase cancellation and beam walk-off. PA1 D. acoustic matching of the transducer in order to reduce the acoustic reflection. Generally, all these effects are utilized to some extent in order to obtain what has been considered to be a reasonable value of triple-transit suppression, where triple-transit suppression is defined as the ratio of the main delayed signal to the triple-transit spurious signal. Attenuation is important in order to reduce this echo problem because it more greatly affects the multiple path triple-transit echo signal than the single transit main signal. However, the increase of attenuation per unit time of the propagating energy in the solid is extremely frequency dependent. That is, the attenuation .alpha. in dB per microsecond is proportional to f.sup.2, where f is the frequency of the propagating acoustic wave energy. Thus, a 5 dB attenuation of this type at 5 GHz will be approximately 20 dB at 10 GHz. This f.sup.2 dependence is more completely discussed in such articles as, "On the Absorption of Sound in Solids" by A. Akhieser in The Journal of Physics (USSR), Vol. 1, page 277 (1939), and in, "Absorption of Sound in Insulators" by T. O. Woodruff and H. Ehrenreich, in Physical Review, Vol. 123, page 1553 (1961). Several techniques have been developed in order to lessen the overall frequency dependent attenuation and thereby increase the bandwidth of the device. A technique has been developed in an effort to reduce the overall dependent attenuation. This scheme utilizes mode converting surfaces which convert a longitudinal or compressional bulk wave to a shear or transverse bulk wave. This technique uses the well-known principle that longitudinal waves can be converted to shear waves by reflection off of a free surface disposed at a predetermined angle with respect to the incident wave. Delay lines utilizing this scheme use a transducer to generate and receive the longitudinal waves while the substantial part of the propagating acoustic wave energy traverses the device in the shear wave mode. This scheme, along with the cooling technique, substantially reduces the f.sup.2 bandwidth problem but, as mentioned before, increases the triple-transit problem. It should, therefore, be evident that a new technique which will maintain a relatively wide bandwidth characteristic in a bulk delay line while providing a relatively high triple transit signal suppression would constitute a significant advancement in the art. | Mid | [
0.581018518518518,
31.375,
22.625
] |
WATCH: Two Women From Germany Are Killing It With Their Excellent Bollywood Dubsmash Videos Two women, who seem to absolutely love Bollywood, have been on a roll posting their dubsmash videos on social media for a while now. So, why are people loving it? Well, for one, these women are Germans who reside in Hamburg and have no connection with India except for an overwhelming love for it's pop culture. 'Padosan' is one of their favourite movies, and Shah Rukh is their 'love'. In a video compilation posted by DubMash on Facebook, the women who go by the usernames grzmotbilska and breshna on Twitter and Instagram, are seen saying some of Bollywood's funniest dialogues. | Low | [
0.49260042283298106,
29.125,
30
] |
Common questions about the management of gastroesophageal reflux disease. Common questions that arise regarding treatment of gastroesophageal reflux disease (GERD) include which medications are most effective, when surgery may be indicated, which patients should be screened for Barrett esophagus and Helicobacter pylori infection, and which adverse effects occur with these medications. Proton pump inhibitors (PPIs) are the most effective medical therapy, and all PPIs provide similar relief of GERD symptoms. There is insufficient evidence to recommend testing for H. pylori in patients with GERD. In the absence of alarm symptoms, endoscopy is not necessary to make an initial diagnosis of GERD. Patients with alarm symptoms require endoscopy. Screening for Barrett esophagus is not routinely recommended, but may be considered in white men 50 years or older who have had GERD symptoms for at least five years. Symptom remission rates in patients with chronic GERD are similar in those who undergo surgery vs. medical management. PPI therapy has been associated with an increased risk of hip fracture, hypomagnesemia, community-acquired pneumonia, vitamin B12 deficiency, and Clostridium difficile infection. | High | [
0.674884437596302,
27.375,
13.1875
] |
Q: Django annotate on several fields related to same model I have two models: class Account(models.Model): ... class Transaction(models.Model): .... account = models.ForeignKey(Account) source_account = models.ForeignKey(Account, null=True) I need to display the number of transactions for each of a user's accounts. Django's annotate seemed like the proper tool for this task. I did: queryset = models.Account.objects.filter(user=self.request.user) queryset.annotate(transactions_count=Count('transaction')) This gives the correct number for transactions with account field set to the predicate account but leaves out transactions where source_account is set to the predicate account. Using the Django shell I am able to do something like: accounts_count = user_transactions.filter(Q(account=account)|Q(source_account=account)).count() This gives the correct answer. Is there something I am doing wrong? Can someone point me in the correct direction. Any assistance is highly appreciated. A: I would set related_name to your ForeignKey fields. Then it's a bit easier to work with them. So for example in your models let's set: class Transaction(models.Model): ... account = models.ForeignKey(Account, related_name='transactions') source_account = models.ForeignKey(Account, null=True, related_name='source_transactions') then can do something like: queryset = models.Account.objects.filter(user=self.request.user).annotate(transactions_count=(Count('transactions')+Count('source_transactions')) it would work without the naming too, it's just more readable and easier. The main point is adding the two Count as one field in annotate. The best approach for these types of problems is to imagine them in raw SQL and then try to mimic it in Django ORM. (in raw sql you would also simply just add two columns like SELECT (a.col + a.col2) AS count | High | [
0.700831024930747,
31.625,
13.5
] |
2018 Thai League 3 Play-off Round The Thai League 3 Play-off Round is the last promotion quota of Thai League 2 and determines which club will be the champion of 2018 Thai League 3. The Thai Football clubs that are the champion and runner-up of the 2018 Thai League 3 Upper Region and the champion and the runner-up of the 2018 Thai League 3 Lower Region, compete in the 2018 Thai League 3 Play-off Round. In this round, home and away matches are played against each team, and the clubs that get the highest total scores are promoted to Thai League 2. The away goals rule is used in this tournament. 3rd Position of Play-off round |- |} 1st Position of Play-off round |- |} Winner References Official T3 Play-off rule External links official web of T3 Play-off round Category:Thai League 3 Category:2018 in Thai football leagues | High | [
0.673170731707317,
34.5,
16.75
] |
Advantages of ivermectin at a single dose of 400 micrograms/kg compared with 100 micrograms/kg for community treatment of lymphatic filariasis in Polynesia. In April and October in 1991-1993, 5 supervised single doses of ivermectin were given to inhabitants aged > or = 3 years in a Polynesian district: the first 3 treatments were with 100 micrograms/kg and the 2 latter with 400 micrograms/kg. At each treatment, about 97% of the eligible population (899) were treated and blood samples were collected just before treatment from 96% of the 613 inhabitants aged > or = 15 years. Following the 5 successive treatments, adverse reactions were observed in, respectively, 23.8, 13, 6.2, 13.6 and 7.9% of the microfilariae (mf) carriers, and in less than 1% of amicrofilaraemic subjects. Neither the frequency nor the intensity of adverse reactions was significantly different between single doses of 100 micrograms/kg and 400 micrograms/kg. Although the geometric mean microfilaraemia (GMM) was reduced, the mf carrier prevalence remained unchanged before and after 3 mass treatments with 100 micrograms/kg (21.4 and 20.7% respectively), and the mf recurrence rate 6 months after each dose of 100 micrograms/kg was roughly stable (respectively, 34.3%, 21.6% and 31.2% of the initial GMM). In contrast, after one dose round of 400 micrograms/kg, the mf carrier prevalence decreased significantly to 14.9% (P < 10(-6)), and the mf recurrence rate dropped to 9.9% (P < 10(-3)) of the initial GMM. These results confirm the safety and the effectiveness of 400 micrograms/kg of ivermectin for lymphatic filariasis control. | High | [
0.669565217391304,
38.5,
19
] |
Modulation of ACh release by presynaptic muscarinic autoreceptors in the neuromuscular junction of the newborn and adult rat. We studied the presynaptic muscarinic autoreceptor subtypes controlling ACh release and their relationship with voltage-dependent calcium channels in the neuromuscular synapses of the Levator auris longus muscle from adult (30-40 days) and newborn (3-6 and 15 days postnatal) rats. Using intracellular recording, we studied how several muscarinic antagonists affected the evoked endplate potentials. In some experiments we previously incubated the muscle with calcium channel blockers (nitrendipine, omega-conotoxin-GVIA and omega-Agatoxin-IVA) before determining the muscarinic response. In the adult, the M1 receptor-selective antagonist pirenzepine (10 micro m) reduced evoked neurotransmission ( approximately 47%). The M2 receptor-selective antagonist methoctramine (1 micro m) increased the evoked release ( approximately 67%). Both M1- and M2-mediated mechanisms depend on calcium influx via P/Q-type synaptic channels. We found nothing to indicate the presence of M3 (4-DAMP-sensitive) or M4 (tropicamide-sensitive) receptors in the muscles of adult or newborn rats. In the 3-6-day newborn rats, pirenzepine reduced the evoked release ( approximately 30%) by a mechanism independent of L-, N- and P/Q-type calcium channels, and the M2 antagonist methoctramine (1 micro m) unexpectedly decreased the evoked release ( approximately 40%). This methoctramine effect was a P/Q-type calcium-channel-dependent mechanism. However, upon maturation in the first two postnatal weeks, the M2 pathway shifted to perform the calcium-dependent release-inhibitory activity found in the adult. We show that the way in which M1 and M2 muscarinic receptors modulate neurotransmission can differ between the developing and adult rat neuromuscular synapse. | High | [
0.6980198019801981,
35.25,
15.25
] |
--- abstract: 'Absolute normalisation of the LHC measurements with $\cal{O}$(1%) precision and their relative normalisation, for the data collected at variable centre-of-mass energies, or for variable beam particle species, with $\cal{O}$(0.1%) precision is crucial for the LHC experimental programme but presently beyond the reach for the general purpose LHC detectors. This paper is the third in the series of papers presenting the measurement method capable to achieve such a goal.' address: - | LPNHE, Pierre and Marie Curie University, CNRS-IN2P3, Tour 33, RdC,\ 4, pl. Jussieu, 75005 Paris, France. - ' Institute of Teleinformatics, Faculty of Physics, Mathematics and Computer Science, Cracow University of Technology, ul. Warszawska 24, 31-115 Kraków, Poland.' - ' Institute of Nuclear Physics PAN, ul. Radzikowskiego 152, 31-342 Kraków, Poland.' author: - 'M. W. Krasny, J. Chwastowski , A. Cyzand K. S[ł]{}owikowski' title: | [**Luminosity Measurement Method for the LHC:\ Event Selection and Absolute Luminosity Determination**]{}[^1] --- Introduction {#sec:Introduction} ============ The requirements for the luminosity measurement precision at the LHC are often misunderstood. According to the present paradigm [@Mangano], 2% precision is the benchmark and the ultimate target for the LHC experiments. Such a target may soon be reached by the method based on van der Meer scans for which $\delta L /L = \pm 3.7~\%$ has already been achieved [@vdM]. The main argument underlying such a paradigm reflects the present precision of theoretical calculations of the cross sections for hard parton processes. These calculations are based on the perturbative, leading-twist QCD framework which allows only to approach the experimental precision target. Thus, as long as a significant reduction of uncertainties of both the matrix elements and the parton distribution functions (PDFs) is beyond the reach of the available calculation methods and DIS experiments, the impact of further reduction of the experimental luminosity error would hardly improve the present understanding of parton collisions. In our view, there are at least four reasons to push the luminosity measurement precision frontier as much as possible. 1. Precision observables, based on the data collected at variable c.m.-energies have been proposed for the precision measurement programme of the Standard Model parameters at the LHC [@krasnySMparameters], and for the experimental discrimination of the Higgs production processes against the SM ones [@krasnyHiggs]. These observables are specially designed to drastically reduce their sensitivities to the theoretical and modeling uncertainties limiting the overall measurement precision of the canonical ones. Their measurement precision is no longer limited by the theoretical and modeling uncertainties but by the precision of the relative luminosity measurements at each of the collider energy settings. 2. The general purpose LHC experiments, ATLAS and CMS, and the specialised experiment, LHCb, measure the same parton cross sections in different regions of the phase-space. Relative normalisation of their results determine the ultimate LHC precision for those of the measurements which are based on a coherent analysis of their data. The best example here is the measurement of $\sin^2(\theta _W)$ associated with concurrent unfolding of the valence and sea quarks PDFs. 3. The LHC collides proton beams while the Tevatron collides proton and anti-proton beams. The $W$ and $Z$ boson differential cross sections measured at these two colliders constrain the flavour dependent PDFs within the precision of the relative LHC and Tevatron luminosity measurements because the theoretical uncertainties on the hard partonic cross sections largely cancel in the cross section ratios. 4. A fraction of the LHC running time is devoted to collisions of ions. The precision of the relative normalisation of the $pp$ interaction observables with respect to the corresponding ion-ion ones is crucial for a broad spectrum of measurements ranging from the classical measurements of the shadowing effects to more sophisticated ones including the experimental studies of the propagation of W and Z bosons in the vacuum and in the hadronic matter [@krasny_jadach]. For all the above and many other reasons any future improvement of the luminosity measurement precision will be directly reflected in the increased precision of the interpretation of the LHC measurements in terms of the Standard Model (SM) and Beyond the Standard Model (BSM) processes, in particular if specially designed precision observables are used. The method of the luminosity measurement proposed in this series of papers attempts to push the precision frontier as much as possible at hadron colliders. Its ultimate goal is to reach the precision of $\cal{O}$(1%) for the absolute normalisation of the LHC measurements and of $\cal{O}$(0.1%) for the relative normalisation of the data recorded at variable centre-of-mass energies or different beam particle species (protons, ions). It is based on the detection of those of the electromagnetic collisions of the beam particles in which they can be treated as point-like leptons. For these collisions the corresponding cross sections can be calculated with a precision approaching the one achieved at lepton colliders. In this series of papers such collisions are tagged by the associated production of the unlike-charge electron-positron pairs. In the initial paper [@first] of this series we have selected the phase-space region where the lepton pair production cross section is theoretically controlled with precision better than 1%, is large enough to reach a comparable statistical accuracy of the absolute luminosity measurement on the day-by-day basis and, last but not the least, its measurement is independent of the beam emittance and of the Interaction Point (IP) optics. Collisions of the beam particles producing lepton pairs in this phase-space region cannot be triggered and selected by the present LHC detectors. A new detector must be incorporated within one of the existing general purpose LHC detectors to achieve this goal. In our second paper [@second] the performance requirements for a dedicated luminosity detector were discussed. In the present paper we discuss the luminosity measurement procedure based on the concrete luminosity detector model and on the present host detector performance parameters. We evaluate quantitatively the systematic precision of the proposed luminosity measurement method. This paper is organised as follows. In Section 2 the model of the luminosity detector is introduced. Section 3 specifies the performance requirements for the host detector. The on-line selection procedures for the luminosity events are proposed in Section 4 and optimised in Section 5. Section 6 presents the rates of the signal and background events at the consecutive stages of the event selection process. The methods of monitoring of the instantaneous, relative luminosity are described in Section 7. Section 8 introduces the luminosity measurement methods for the low, medium and high luminosity running periods of the LHC machine. Merits of the dedicated runs are discussed in section 9. Section 10 introduces a novel and simple method of calculating the absolute cross sections for any sample of the user-selected events. Finally, Section 11 is devoted to the evaluation of the systematic uncertainties of the proposed luminosity measurement method. Luminosity Detector Model ========================= In the studies presented below the model of the luminosity detector is specified in terms of a minimal set of its required output signals. Its concrete hardware design is open and depends solely on the specific constraints imposed by the host detector construction and by the wishes of the host detector collaboration. The luminosity detector can be realised using one of the available particle tracking technologies and does not require dedicated R&D studies. The D0 fibre tracker [@D0tracker] can serve as an example in most of its functional aspects with a notable exception of the radiation hardness issue which would need to be addressed anew in the LHC context. The geometry of the proposed luminosity detector model was presented in [@second]. The detector fiducial volume consists of two identical cylinders placed symmetrically with respect to the beam collision point. Each cylinder is concentric with the proton-proton collision axis defined in the following as the $z$-axis of the coordinate system. The cylinders have the following dimensions: the inner radius of 48 cm, the outer one of 106 cm and the length of 54.3 cm. They occupy the regions between $z_f^{right} = 284.9$ cm and $z_r ^{right}= 339.2$ cm and $z_f ^{left}= -284.9$ cm and $z_r ^{left}= -339.2$ cm. Each cylinder contains three layers (the $z_1, z_2, z_3$ planes) providing the measurements of the charged particles hits. These planes are positioned at the distances of $z_1 = 285.8$, $z_2 = 312.05$ and $z_3 = 338.3$ cm from the interaction point. Each of the luminosity detector $z$-planes is segmented into 3142 $\phi$-sectors providing the azimuthal position of the hits[^2]. Such a detector could occupy the space foreseen for the TRT C-wheels - a sub-component of the ATLAS detector which has not been built. The basic role of the luminosity detector is to provide, for each charged particle $j$, produced in $pp$ collisions, the “track segment", specified in terms of the three azimuthal hit positions, $i_{j,k}$, where $k \in \{1, 2, 3\}$ is the plane number and $j$ denotes the particle number. It is assumed that each reconstructed track segment has a “time-stamp". The time-stamp assigns the track segment to the ${\cal O}(1)$ ns wide time slot synchronised with the machine clock[^3]. Further timing requirements, discussed in detail in [@second], such as adding a precise, sub-nanosecond relative timing for each of the $\phi$-hits, for a better level 1 (LVL1) trigger resolution of the $z$ position of the origin of the track segments, are not required in the base-line detector model. The time stamped track segments and their associated hits are the input data for the algorithmic procedures allowing to select, within the 2.5$\mu s$ LVL1 trigger latency of the host detector, the “luminosity measurement bunch crossings" and “monitoring bunch crossings". The subsequent event selection algorithms are based solely on the host detector signals. The “Region of Interest" (ROI) [@ATLAS] mechanism is used to correlate, within the level 2 (LVL2) trigger latency, the luminosity detector data with the relevant data coming from the host detector. Host Detector Model =================== The ATLAS detector [@ATLAS] was chosen in our studies as the host detector. The following data coming from this detector are used in the LVL2 and Event Filter (EF) trigger algorithms: - the parameters of the tracks selected using the ROI mechanism – available within the LVL2 0.01 s long trigger latency. We recall here that the luminosity detector angular acceptance is covered fully by the acceptance of the ATLAS silicon tracker [@ATLAS], - the energies, the shape variables and timing of the electromagnetic/hadronic clusters in the LAr calorimeter in the ROI-restricted ($\phi, \eta$) regions – available within the LVL2 trigger latency, - the reconstructed hits in the LUCID detector [@ATLAS] – available within the LVL2 trigger latency, - the reconstructed hits in the BCM (Beam Current Monitor)[@ATLAS] – available within the 1 s long Event Filter (EF) trigger latency, - the precise parameters of the vertex-constrained tracks and the multiplicity of all the charged particles with $p_T > 0.4$ GeV/c produced within the range of pseudorapidity $|\eta| <2.5$ – available within the EF trigger latency. It is assumed further that the following host detector performance requirements will be fulfilled: - the pion/electron rejection factor of 10 at both the LVL2 and the EF trigger levels, - full identification of bunch crossings in which the charged particles are traversing the LUCID detector ($5.5 \leq |\eta| \leq 6.5$), - full identification of bunch crossings in which the charged particles are traversing the BCM ($3.9 \leq |\eta| \leq 4.1$) detector. In the studies presented below the host detector measurement and event selection biases are not taken into account. The LVL2 and the EF event selection efficiencies, the reconstruction and the electron/positron identification efficiencies, for the specified above values of the electron/pion rejection factors, are thus set to be equal 1. It is further assumed that the lepton pair momentum vectors are reconstructed with infinite precision. Detailed studies based on realistic host detector performance simulations, indispensable if the proposed method is endorsed by the host detector collaboration, are both external to the scope of this paper and, more importantly, of secondary importance. The luminosity measurement method presented in this series of papers was designed such that all the host detector measurement biases, resolution functions and efficiencies can be determined directly using the data coming from the host detector recorded event samples. The selection algorithms are discussed below for the two settings of the strength of a uniform solenoidal magnetic field: $B~=~0$ and $B~=~2$ T which are labeled respectively as $\bf{B0}$ and $\bf{B2}$. These two magnetic field configurations correspond to the zero current and the nominal current of the ATLAS central tracker solenoid. Event Selection Model {#sec:selection} ===================== Level 1 Trigger --------------- The luminosity detector trigger logic analyses the pattern of hits on a bunch-by-bunch basis. In the first step it selects only the “low multiplicity bunch crossing" candidates [@second] i.e. the crossings with less than $N_0$ hits in both the $z > 0$ and the $z < 0$ sections of the luminosity detector[^4]. In the second step, for each low multiplicity bunch crossing, the track segments are formed and time-stamp validated. The time-stamp validation of the track segments consist of assigning each of them to one of the following two classes: the “in-time segments" and the “out-of-time segments". These and the subsequent LVL1 selection steps are distinct for the $\bf{B0}$ and $\bf{B2}$ magnetic field configurations. ### $\bf{B0}$ Case {#sec:evseltrigB0} In the $\bf{B0}$ case a track segment is formed by any combination of the hits, $i_1$, $i_2$ and $i_3$, in the three detector planes satisfying the following requirements: $$min(| i_1 - i_3 |, | i_1 - i_3 +3142 |, | i_3 - i_1 +3142 | ) < i_{cut},$$ AND \ $$\left\{\frac{i_3+i_1}{2}+ i_{cut} > i_2 > \frac{i_3+i_1}{2}-i_{cut} {\rm\ \ \} \underline{{\rm\ if\ } | i_1 - i_3 | < 1571 \ }}\right.,$$ OR \ $$\left.\frac{i_3+i_1 + 3142}{2} + i_{cut} < i_2 < \frac{i_3+i_1 +3142}{2}-i_{cut} \right.\\$$ $$\left. \underline{{\rm\ if\ } | i_1 - i_3 | > 1571, ~ i_1 + i_3 < 3142\ }\right.,$$ [ ]{} $$\centerline{OR}\\$$ $$\left.\frac{i_3+i_1}{2} - 1571+ i_{cut} < i_2 < \frac{i_3+i_1}{2} -1571-i_{cut} \right.\\$$ $$\left. \underline{{\rm\ if\ } | i_1 - i_3 | > 1571,~ i_1 + i_3 \geq 3142\ }\right\}.$$ The $i_{cut}$ parameter value is driven by the luminosity detector thickness expressed in units of the radiation length. The in-time segments have the time stamps within the time window of $\Delta t_{B0}$ width and an off-set of $t0_{B0}$ with respect to the bunch crossing time stamp. All the other track segments are assigned to the out-of-time class[^5]. In the $\bf{B0}$ case the width of the time window reflects both the longitudinal LHC bunch size and the radial size of the luminosity detector. A bunch crossing is selected by the luminosity detector LVL1 trigger as the “2+0" candidate if there are exactly two in-time track segments specified in terms of the hit triples: $i_{1,1}$, $i_{1,2}$, $i_{1,3}$ ($i_{2,1}$, $i_{2,2}$, $i_{2,3}$) in the left (right) detector part and no in-time track segments in the opposite right(left) one. The “coplanar pair" bunch crossing candidates are those of the “2+0" ones in which the hit positions in the first and in the third plane satisfy the following conditions: $$\left\{ \begin{array}{ccl} (i_{1,1}+i_{1,3}) \leq 3142 &\wedge& (i_{2,1}+i_{2,3}) < (i_{1,1}+i_{1,3}) + a_0 \\ (i_{1,1}+i_{1,3}) \leq 3142 &\wedge& (i_{2,1}+i_{2,3}) > (i_{1,1}+i_{1,3}) + b_0 \\ (i_{1,1}+i_{1,3}) > 3142 &\wedge& (i_{2,1}+i_{2,3}) < (i_{1,1}+i_{1,3}) - b_0 \\ (i_{1,1}+i_{1,3}) > 3142 &\wedge& (i_{2,1}+i_{2,3}) > (i_{1,1}+i_{1,3}) - a_0. \end{array} \right. \label{eq:eqline}$$ The above conditions represent the algorithmic, LVL1 trigger implementation of the coplanar particle pair selection procedure for the $\bf{B0}$ magnetic field configuration discussed in [@second]. A bunch crossing is selected as the “silent bunch crossing" candidate if there are no in-time track segments in both the left and right side of the luminosity detector. ### $B2$ Case {#sec:evseltrigB2} In the $\bf{B2}$ case a track segment is formed by any combination of the hits $i_1$, $i_2$, $i_3$ in the three detector planes which satisfy the requirements outlined below: $$min(| i_1 - i_3 |, | i_1 - i_3 +3142 |, | i_3 - i_1 +3142 | ) < i_{cut}+i_{helix},$$ AND \ $$\left\{\frac{i_3+i_1}{2}+ i_{cut} > i_2 > \frac{i_3+i_1}{2}-i_{cut} {\rm\ \ \} \underline{{\rm\ if\ } | i_1 - i_3 | < 1571 \ }}\right.,$$ OR \ $$\left.\frac{i_3+i_1 + 3142}{2} + i_{cut} < i_2 < \frac{i_3+i_1 +3142}{2}-i_{cut} \right.\\$$ $$\left. \underline{{\rm\ if\ } | i_1 - i_3 | > 1571, ~ i_1 + i_3 < 3142\ }\right.,$$ [ ]{} $$\centerline{OR}\\$$ $$\left.\frac{i_3+i_1}{2} - 1571+ i_{cut} < i_2 < \frac{i_3+i_1}{2} -1571-i_{cut} \right.\\$$ $$\left. \underline{{\rm\ if\ } | i_1 - i_3 | > 1571,~ i_1 + i_3 \geq 3142\ }\right\}.$$ The magnetic field strength dependent $i_{helix }$ value drives the effective momentum cut-off of the charged particles reaching the luminosity detector. The in-time track segments have the time-stamps within the time window of $\Delta t_{B2}$ width and the off-set of $t0_{B2}$ ns with respect to the bunch crossing time-stamp. The timing parameters are different for the $\bf{B0}$ and $\bf{B2}$ cases because for a given charged particle its track length measured from the interaction vertex to the entry point into the fiducial volume of the luminosity detector is different in these two cases. For the equidistant spacing of the $z$-planes the track segment finding and the time-stamp validation algorithms remain invariant with respect to the change of the magnetic field configuration - its influence is restricted only to the parameters of the algorithms. As in the $\bf{B0}$ case, a bunch crossing is selected, by the luminosity detector LVL1 trigger, as the “2+0" candidate if there are exactly two in-time track segments in the left (right) detector part and no in-time track segments in the opposite one. However, in the $\bf{B2}$ case a supplementary condition is required to retain only the opposite charge particle tracks: $$( i_{1,1} - i_{3,1} \geq 0 ~~if~~ i_{1,2} - i_{3,2} < 0) ~~or~~ (i_{1,1} - i_{3,1} < 0 ~~if~~ i_{1,2} - i_{3,2} \geq 0).$$ The “coplanar pair" bunch crossing candidates are those of the “2+0" ones in which the hits $i_{1,1}$, $i_{1,3}$ $(i_{2,1}, i_{2,3})$ satisfy the following conditions: $$\left\{ \begin{array}{ccl} | i_{1,1} - i_{2,1}| &<& a_2\cdot( i_{1,1} - i_{1,3} + i_{2,1} - i_{2,3} )\\ | i_{1,1} - i_{2,1}| &<& b_2\cdot( i_{1,1} - i_{1,3} + i_{2,1} - i_{2,3})+e_2 \\ | i_{1,1} - i_{2,1}| &>& c_2\cdot( i_{1,1} - i_{1,3} + i_{2,1} - i_{2,3}) \\ | i_{1,1} - i_{2,1}| &>& d_2\cdot( i_{1,1} - i_{1,3} + i_{2,1} - i_{2,3})-f_2. \end{array} \right. \label{eq:diamond}$$ The above conditions define the LVL1 trigger algorithm for the coplanar particle pair selection procedure for the $\bf{B2}$ magnetic field configuration (see [@second] for detailed discussion). Similarly to the $\bf{B0}$ case, a bunch crossing is selected as the silent bunch crossing if there are no in-time track segments in both parts of the luminosity detector. LVL1 Trigger Bits ----------------- It is assumed that the following LVL1 trigger bits are sent by the luminosity detector to the Central Trigger Processor (CTP) of the host detector: - the “coplanar pair candidate" (CPC) trigger bit, - the “2+0" (two plus zero - TPZ) trigger bit, - the “silent bunch crossing" (SBC) trigger bit, - the “low multiplicity bunch crossing" (LMBC) trigger bit. They are assumed to be broadcasted on the bunch-by-bunch basis. For the CPC (TPZ)-trigger-selected bunch crossings the information on the $\phi$-sector positions of the two in-time track segments is delivered the host detector Level 2 trigger algorithms using the ROI mechanism. The CPC trigger is not prescaled at the CTP level. While the CPC-selected bunch-crossing data are used in the luminosity determination, the CTP-prescaled TPZ, SBC and LMBC ones, together with the CTP-selected random bunch crossings data, are used for a precision monitoring of the luminosity detector performance. In particular, in all the aspects which require correlating of the luminosity detector signals with the host detector ones on bunch-by-bunch basis. Level 2 Trigger --------------- The next step in the event selection chain is based entirely on the host detector LVL2 trigger algorithms. In the following we shall discuss only the LVL2 selection criteria for the LVL1-accepted coplanar pair candidate events, for which the CPC trigger bit was set to one[^6]. As before, the selection criteria are different for each of the two magnetic field configurations. ### $\bf{B0}$ Case {#bfb0-case} In the first step of the LVL2 selection algorithm chain a search for a narrow energy cluster of the total energy above 1 GeV is performed. The search is confined to the electromagnetic calorimeter $\phi$-sectors pointed out by the luminosity detector LVL1 ROI. If two electromagnetic clusters are found and if their timing, determined from the pulse-shapes in the channels belonging to clusters, is compatible with the track segment time-stamp the event is retained for the subsequent selection steps. The next step links the luminosity detector track segments to the Silicon Tracker (SCT) hits. If the linking is successful and if the corresponding SCT track segments cross each other in a space point within the proton bunch overlap IP region then the reduced $vertex$ acoplanarity $\delta\phi_r$ is recalculated using the SCT hits. Events are retained for further processing if $\delta\phi_r < \delta\phi_r^{cut}$ is fulfilled. The subsequent algorithm verifies if both clusters pass the electron selection criteria by analysing their lateral and longitudinal shape. Events passing all the above selection steps will be called in the following the LVL2 trigger “inclusive electron-positron pair" candidates. In the subsequent LVL2 step the LUCID detector signals are analysed and an event is selected as the LVL2 “exclusive electron-positron pair" candidate if there are no in-time particle hits in the LUCID tubes[^7]. ### $\bf{B2}$ Case The only difference in the selection steps of the LVL2 exclusive electron-positron pair candidates for the $\bf{B2}$ case is that the cluster energy cut is no longer imposed. This is because the equivalent cut, was already made by the LVL1 trigger. Event Filter ------------ The LVL2 selection of the exclusive electron-positron pair candidate events is subsequently sharpened at the Event Filter (EF) level. Similarly to the LVL1 and LVL2 cases the selection criteria for both configurations of the magnetic field slightly differ. ### $\bf{B0}$ case {#bfb0-case-1} In the first step of the EF selection algorithm chain events with any reconstructed particle tracks within the tracker fiducial volume other than the tracks of the electron pair candidate are rejected. Subsequently, the reduced [*vertex*]{} acoplanarity $\delta\phi_r$ is recalculated using the re-fitted vertex constrained values of the parameters of the lepton tracks and the $\delta\phi_r < \delta\phi_r^{cut}$ cut is sharpened. Next, the EF electron selection algorithms exploiting full information coming from both the tracker and the LAr calorimeter are run and a further rejection of hadrons mimicking the electron signatures is performed. Finally, the event exclusivity requirement is sharpened by demanding no particle hits in the BCM in a tight time window. ### $\bf{B2}$ Case There are only two differences in the selection steps of the exclusive lepton pair candidates in the $\bf{B2}$ field configuration case. These are: - a replacement of the exclusivity criterion using the reconstructed track segments by the corresponding one based on the reconstructed tracks with the transverse momentum $p_{T} > 0.4$ GeV/c, - a restriction of the transverse momentum of a pair to the region $p_{T,pair}~<~0.05$ GeV/c. Optimisation of the LVL1 Trigger ================================ Algorithm parameters -------------------- The LVL1 trigger optimisation goal is to determine an optimal set of the algorithm parameters. These parameters, defined in Section \[sec:evseltrigB0\] and \[sec:evseltrigB2\], specify: - the definition of the track segments, - the classification of the track segments into the in-time and out-of-time classes, - the coplanarity of particle pairs (at the interaction vertex). An optimal set of parameters maximises the rate of the exclusive coplanar pair candidate events while retaining the overall luminosity detector LVL1 trigger rate at ${\cal O}(1)$ kHz level. The latter restriction takes into account the present capacity of the ATLAS TDAQ system and assumes that at most $\sim 2\%$ of its throughput capacity can be attributed to the luminosity detector triggered events. For these events the event record length and the LVL2 and the EF filter processing times are significantly smaller than for any other physics triggers of the present host detector LVL1 trigger menu. Therefore, the strain on the LVL2 and EF throughputs is negligible. The LVL1 trigger algorithm parameters were optimized by simulating the selection process for large samples of the bunch crossings containing the signal and the background events. The LVL1 trigger algorithms assigned the 0 or 1 values to the CPC, SBC and LMBC LVL1 trigger bits for every bunch crossing[^8]. The following sets of parameters maximises the signal to background ration while retaining the overall luminosity detector LVL1 trigger rate below 2 kHz level: - track segment definition: $$i_{cut}=20, i_{helix }= 130,$$ - selection of in-time track segments: $$\Delta t_{B0}= 1.5~ns, \Delta t_{B2} = 4~ns, t0_{B0} = t0_{B2} = 19~ns,$$ - pair acoplanarity selection: $$a_0 = 3202, b_0 = 3082,$$ $$a_2 = 5.97, b_2 = 1.78, c_2 = 4.97,$$ $$d_2 = 30.97, e_2 = 953.4, f_2 = 6630.0.$$ The $i_{helix }= 130$ corresponds to the effective low-momentum cut-off for particles producing a track segment in the luminosity detector of 1 GeV/c. The $i_{cut}=20$ reflects the assumed thickness of the detector planes of 0.1$X_0$ each. The increase of the width of the in-time window, for the $\bf{B2}$ case reflects the dispersion of the helix length of the charged particle trajectories between the interaction vertex and the first plane of the luminosity detector. The optimal selection region of coplanar pairs is illustrated in Fig. \[plot15\] for a chosen set of selection parameters. (130,120) (0,65)[(0,0)\[lb\][ ]{}]{} (0, 3)[(0,0)\[lb\][ ]{}]{} (65,65)[(0,0)\[lb\][ ]{}]{} (65, 3)[(0,0)\[lb\][ ]{}]{} (95,62)[(0,0)\[cb\][**(b)**]{}]{} ( 33,62)[(0,0)\[cb\][**(a)**]{}]{} ( 33,-2)[(0,0)\[cb\][**(c)**]{}]{} (95,-2)[(0,0)\[cb\][**(d)**]{}]{} (110,121)[(0,0)\[cb\][**[B = 0 T]{}**]{}]{} ( 46,121)[(0,0)\[cb\][**[B = 0 T]{}**]{}]{} ( 46,59)[(0,0)\[cb\][**[B = 2 T]{}**]{}]{} (110,59)[(0,0)\[cb\][**[B = 2 T]{}**]{}]{} Note, that for the $\bf{B2}$ configuration a sizable fraction of the electron-positron pair signal events is rejected by the track segment validation criteria. In these events at least one of particles has the momentum lower than $\sim$1 GeV/c. The selection efficiency of the coplanar particle pairs is determined using the reduced acoplanarity, $\delta\phi_r$, variable which is defined as $$\delta\phi_r = \delta\phi/\pi,$$ with $$\delta\phi=\pi-min(2\pi-|\phi_{1}-\phi_{2}|,|\phi_{1}-\phi_{2}|),$$ where $\phi_{1}$, $\phi_{2}$ are the azimuthal angles of the particles at the interaction vertex. Algorithm efficiency -------------------- The efficiency of the LVL1 trigger algorithms of selecting the coplanar particle pairs using the above sets of parameters is illustrated in Fig. \[plot16\]. In this figure the reduced acoplanarity distributions are plotted for the signal and background events, the two magnetic field configurations and for the initial and the CPC-trigger selected samples events. In the $\bf{B0}$ case the LVL1 algorithms rejects majority of the background events while retaining the signal events. In the case of the $\bf{B2}$ field configuration a sizable reduction of the electron-positron pair selection efficiency is not related to the luminosity detector performance. It is almost entirely driven by the constraint on the luminosity detector position within the host detector fiducial volume giving rise to a large probability of bremsstrahlung of the hard photon by the electron or positron on the path between the collision vertex and the luminosity detector entry point. The efficiencies presented above correspond to the most pessimistic assumptions on the luminosity detector performance and on the dead material budget in front of the luminosity detector. Firstly, the base-line luminosity detector model was used. This model does not employ a precise relative timing in each of the three detector planes. Therefore, the event-by-event LVL1-trigger reconstruction of the position of the collision vertex, an option discussed in our previous paper [@second], was not made. Such a function would significantly improve the sharpness of the LVL1 acoplanarity algorithm. Secondly, the studies were made for the 0.9$X0$ of dead material in front of the luminosity detector giving a rise to large multiple scattering effects. Thirdly, and most importantly, the hard photon radiation was assumed to take place at the collision vertex, inducing significant loss of efficiency for the $\bf{B2}$ case. Therefore, the results in this section represents the most conservative estimate of the electron-positron pair event selection efficiency. Rates ===== In Figure \[plot17\] the integrated rates of the LVL1-accepted coplanar pair candidate events are plotted as a function of the upper limit on the pair $\delta\phi_r$ for the two magnetic field configurations, for the signal and the background events. The distributions for the background events are shown at all the stages of the event selection procedure. This plot was made for the machine luminosity of $L~=~10^{33}$ cm$^{-2}$ s$^{-1}$ distributed uniformly over all the available bunch crossing slots. (130,150) (0,65)[(0,0)\[lb\][ ]{}]{} (0, 0)[(0,0)\[lb\][ ]{}]{} (65,65)[(0,0)\[lb\][ ]{}]{} (65, 0)[(0,0)\[lb\][ ]{}]{} ( 33,63)[(0,0)\[cb\][**(a)**]{}]{} (95,63)[(0,0)\[cb\][**(b)**]{}]{} ( 33,-3)[(0,0)\[cb\][**(c)**]{}]{} (95,-3)[(0,0)\[cb\][**(d)**]{}]{} This figure demonstrates that the proton-proton collisions producing coplanar electron-positron pairs can be efficiently selected from the background of minimum bias events. A reduction of the signal rate seen for the $\bf{B2}$ field configuration as compared to the $\bf{B0}$ configuration is driven mainly by radiation of hard photons by the electrons/positrons passing the host detector dead material. The signal to noise ratio is the largest for the smallest reduced acoplanarity cutoff. It underlines the need for a fine $\phi$-segmentation of the luminosity detector. (130,70) (0,0)[(0,0)\[lb\][ ]{}]{} (65,0)[(0,0)\[lb\][ ]{}]{} ( 33,-5)[(0,0)\[cb\][**(a)**]{}]{} (95,-5)[(0,0)\[cb\][**(b)**]{}]{} In Figure \[plot18\] the ratio of the signal to the background rates at the EF selection stage is shown as a function of the machine luminosity for the three values of the upper limit of the reduced acoplanarity for the two settings of the detector magnetic field. These plots show that, already for the model of the base-line detector, the proposed event selection procedure assures a comfortable value of the signal to noise ratio over a large range of the machine luminosities. The drop of the signal to noise ratio for large luminosities is driven solely by a decreasing probability of the silent bunch crossing. For the $\bf{B2}$ case the signal to background ratio can be improved significantly by selecting events in a narrow bin of the reduced acoplanarity. (130,60) (0,0)[(0,0)\[lb\][ ]{}]{} (65,0)[(0,0)\[lb\][ ]{}]{} ( 33,-5)[(0,0)\[cb\][**(a)**]{}]{} (95,-5)[(0,0)\[cb\][**(b)**]{}]{} (130,60) (0,0)[(0,0)\[lb\][ ]{}]{} (65,0)[(0,0)\[lb\][ ]{}]{} ( 33,-5)[(0,0)\[cb\][**(a)**]{}]{} (95,-5)[(0,0)\[cb\][**(b)**]{}]{} In Figure \[plot19\] the signal rates for events passing the EF selection stage are shown as a function of the machine luminosity for the three values of the reduced acoplanarity cut. The rates drop significantly below the initial level of 1 Hz level discussed in [@first] but are sufficiently high to provide a statistically precise measurement of the luminosity. For the luminosities below $L\simeq 6\cdot 10^{32}$ s$^{-1}$cm$^{-2}$ the rate of the signal events rises with increasing luminosity. At higher luminosities the apparent rate drop is a consequence increasing average number of collisions per bunch crossing. For $L \simeq 6 \cdot 10^{32}$s$^{-1}$cm$^{-2}$ a 1% statistical precision can be achieved over the integrated time intervals of about 30 hours for the $\bf{B0}$ configuration and about 400 hours for the $\bf{B2}$ configuration. These time intervals could be decreased significantly if the LVL1 track acoplanarity tagging were made in vicinity of the collision vertex - a solution which is presently out of reach for the existing trackers of the LHC detectors but can be reconsidered while upgrading the LHC detector’s trackers for the high luminosity phase of collider operation. It is important to note that the direct extension of the proposed method beyond the luminosity value of $L= 10^{33}$s$^{-1}$cm$^{-2}$ requires at least one of the following three possible upgrades of the base-line luminosity detector: - adding a precise hit-timing measurement, - adding the z-planes to the luminosity detector and developing fast algorithms capable to determine the z-position of the track origins with a 1 mm precision within the LVL1 trigger latency, - adding the LVL1 trigger electron/pion rejection capacity functions. If such upgrades are not made, the absolute luminosity can be measured in the low and medium luminosity periods and subsequently “transported" to the high luminosity periods. Foundation of such a procedure is precise monitoring of the relative instantaneous luminosity over the whole range of the LHC luminosities and in fine time intervals. Monitoring of Instantaneous Luminosity ======================================= Goals ----- There are two basic reasons to measure the relative, instantaneous changes of the luminosity, $L(t)$, with the highest precision: - the method discussed here can reach the absolute luminosity precision target only for selected runs and selected bunch crossings and must be subsequently transported (extrapolated) to all the runs and bunch crossings, - the statistically significant samples of the electron-positron pair production events are collected over the time periods which are sizeably longer than the time scales of the changes in the detector recorded data quality (the electronic noise, the beam related noise, the event pile-up, the detector calibration and efficiencies, etc.) – the time evolution of the corresponding corrections must thus be weighed according to the instantaneous luminosity in the precision measurement procedures. Precise measurement of the relative instantaneous luminosity in fine time intervals is bound to be based on the rate of strong rather than electromagnetic interactions of the beam particles. In the scheme proposed in this paper the on-line luminosity is determined by counting the luminosity detector in-time track segments produced in strong interactions of the colliding protons. It is determined using solely the luminosity detector data and its local data acquisition system. The on-line luminosity is determined in ${\cal O}(1)$ minute intervals. The corresponding final off-line luminosity is then recalculated using the data recorded by the host detector. Counters -------- The following luminosity detector counters are proposed for a precision measurement of the instantaneous, relative on-line luminosities: 1. [**Track-Global**]{} - representing the mean number of the in-time track segments per bunch crossing both in the left and in the right side of the luminosity detector, 2. [**Track-Global-OR**]{} - representing the mean number of the in-time track segments per bunch crossing seen in both parts of the luminosity detector with the additional requirement that only those bunch crossings are considered for which there is at least one in-time track segment in either the left or in the right side of the luminosity detector, 3. [**Track-Global-AND**]{} - representing the mean number of the in-time track segments per bunch crossing in both sides of the luminosity detector with the additional requirement that only those bunch crossing are considered for which there is at least one in-time track segment in each of the luminosity detector sides, 4. [**Track-Event-OR**]{} - representing the fraction of bunch crossings with at least one in-time track segment either in the left or in the right half of the detector, 5. [**Track-Event-AND**]{} - representing the fraction of bunch crossings with at least one in-time track segment in each of the detector halfs, 6. [**Track-Left (Right)**]{} - representing the mean number of the in-time track segments per bunch crossing in the left (right) part of the luminosity detector, 7. [**Track-Event-OR-Left(Right)**]{} - representing the fraction of bunch crossings with at least one in-time track segment in the left (right) side of the luminosity detector, 8. [**Track-Sector($i_{L(R)})$**]{} - representing the mean number of the in-time track segments per bunch crossing in the $i_{L(R)}$–th $\phi$-sector of the left (right) side of the luminosity detector, 9. [**Track-Sector-OR($i_{L(R)})$**]{} - representing the fraction of bunch crossings with at least one in-time track segment in the $i_{L(R)}$–th $\phi$-sector of the left (right) side of the luminosity detector, 10. [**Track-Sector-Coinc($i_L,i_R)$**]{} - representing the mean number of the in-time track segments per bunch crossing in the $i_{L}$–th $\phi$-sector of the left detector side and in the $i_{R}$–th $\phi$-sector of the right detector side, 11. [**Track-Sector-Coinc-AND($i_L,i_R)$**]{} - representing the fraction of bunch crossings with at least one in-time track segment both in the $i_{L}$–th $\phi$-sector of the left detector side and in the $i_{R}$–th $\phi$-sector of the right detector side, 12. [**Track-SBC**]{} - representing the fraction of bunch crossings with no in-time track segments in each detector side. Subdividing the sample of track segments into 36 $\phi$-segment sub-samples allows to provide precise measurements over the whole range of the LHC luminosities ($10^{30} - 10^{34}$ cm$^{-2}$s$^{-1}$) independently of the number of pile-up collisions occurring within the same bunch crossing. The counting is done separately for paired, unpaired (isolated and non-isolated) and empty (isolated and non-isolated) bunch crossings. The counters proposed above, together with the corresponding hit-based and out-of-time track segments based counters, provide the necessary input data to determine also the instantaneous luminosity of the LHC in the whole range of the average number of collisions per bunch crossing, $< \mu >$ and in the full range of the dispersion of the bunch-by-bunch luminosity. The track segment based counters, contrary to the LUCID or BCM detector ones [@ATLAS_LUMI], are insensitive to the beam induced background and afterglow effects obscuring the extrapolation of the van der Meer scan luminosity to arbitrary data collection periods. Moreover, they can be precisely controlled in the off-line analysis of the host detector tracks traversing the luminosity detector volume. While the first five counters provide a fast diagnostic for the $L$ and $<\mu >$ dependent optimisation of the luminosity counting method, the following six counters are used directly by the instantaneous luminosity measurement algorithms. A detailed presentation of the counting algorithms is outside the scope of the present paper and will not be discussed here. The only aspect which may be elucidated is the extension of presently applied methods of the instantaneous luminosity measurement [@ATLAS_LUMI] to the full range of $< \mu >$. As the $< \mu >$ value increases the inclusive track counters are replaced, at first by counting of tracks separately in each of the $\phi$-sectors of both detector parts, and eventually, at the highest $< \mu >$, by counting of the left-right $\phi$-sector coincidences. For such a “step-by-step" procedure the unfolding of the number of interaction per bunch crossing $< \mu >$ is no longer necessary. Since the $L(t)$ value is determined locally by the luminosity detector algorithms it is, by definition, independent of the host detector dead time. Moreover, the relative luminosity can be monitored over the time periods when the host detector sub-components are in the stand-by mode. The precise off-line corrected $L(t)$ values can be determined for all the time periods for which at least the tracker and the calorimeter are in the data taking mode. The off-line correction factors can be determined using the sample of the host detector reconstructed tracks traversing the fiducial volume of the luminosity detector. The tracks are parasitically sampled with $\cal{O}$(1) kHz frequency using the sample of the host detector recorded events. In the method based on the track segments the event pile-up plays a positive role. It allows to increase the track sampling frequency and, as a consequence, to control the off-line correction factors in finer than ${\cal O}(1)$ minute time intervals. Absolute Luminosity =================== Low Luminosity Periods ---------------------- We shall consider first the case of the luminosity determination during the low instantaneous luminosity periods defined by the following condition on the average number of interaction per bunch crossing: $<\mu> \ll 1$. In these periods the probability of the silent bunch crossings is sufficiently large to base the luminosity determination on the measurement of the rate of the bunch crossings with exclusive production of electron-positron pairs. The integrated luminosity $L_{int}$ is calculated using the following formula: $$L_{int} = \sum_{t_i} \frac{N_s(t_i)\cdot(1- \beta(t_i))}{P^{silent} (t_i) \cdot Acc(t_i) \cdot \epsilon(t_i) \cdot \sigma_{e+e-} } \label{eq:lumi}$$ where, - $N_s(t_i)$ is the total number of the exclusive electron-positron pair candidates passing the LVL1, LVL2 and EF selection criteria which were recorded over the time interval $(t_i, t_i +\Delta t_i)$; - $\beta(t_i)$ is the fraction of the total number of the exclusive electron-positron pair candidates passing the LVL1, LVL2 and EF selection criteria which originate from the background strong interaction processes. This quantity is determined using a monitoring sample of the reconstructed TPZ trigger events. The rate of pairs created in strong interaction is measured in the $0.1 < \delta\phi_r < 0.3 $, where the contribution of the electromagnetic processes is negligible, and subsequently extrapolated to the signal $\delta\phi_r < \delta\phi_r^{cut}$ region. This extrapolation is insensitive to the particle production mechanism in strong interactions and can be performed in the model independent way. It is important to note that, as far as the silent bunch crossings are concerned, the time variation of $\beta(t_i)$ is very week as compared to the time variation of $N_s(t_i)$ and $P^{silent}(t_i)$. As a consequence only insignificant increase of the fraction of the host detector LVL1 band-width is required for the TPZ accepted events. The background sources other than those related to the genuine strong interactions processes are controlled using the unpaired and empty bunch crossings; - $P^{silent} (t_i)$ is defined as: $$P^{silent} (t_i) = \frac{R_{SBC}}{R_{BC}},$$ where $R_{BC}$ and $R_{SBC}$ are, respectively, the total number of paired bunch crossings and the total number of silent bunch crossings in the sample of paired bunch crossings within the time interval $(t_i, t_i +\Delta t_i)$; - $Acc(t_i)$ is the acceptance for the electron-positron pairs traversing the luminosity detector and satisfying the $ \delta\phi_r < \delta\phi_r^{cut}$ condition. The acceptance correction includes the detector smearing effects, the geometric acceptance of the luminosity detector, and all the dead material effects. The $Acc(t_i)$ values are determined in the model independent way using those of the particles produced are in recorded strong interaction collisions which traverse both the luminosity detector and the host detector tracker. The momentum scale, the detector smearing and the dead material effects, discussed in [@second], are directly measured using the abundant sources of electrons and positrons – the conversions of photons coming from the decays of neutral pions in the material of the beam pipe. The correction factors sensitive to the precise position of the electron (positron) track origin, are determined using the electron-positron pairs from Dalitz decays. The time variation of the acceptance due to an increase of the longitudinal emittance of the proton beam over the LHC run is controlled using the time evolution of the $z$-vertex distribution for the bulk of recorded events; - the efficiency $\epsilon(t_i)$ can be decomposed as follows: $$\epsilon(t_i) = \epsilon_{extr-}(t_i) \cdot\epsilon_{extr+}(t_i) \cdot\epsilon_{id-}(t_i) \cdot\epsilon_{id+}(t_i)\cdot P^{silent}_{LVL2/EF} (t_i),$$ where: $\epsilon_{extr+}(t_i) (\epsilon_{extr-}(t_i))$ is the efficiency of linking of the positive (negative) charge luminosity detector track segments to the vertex constrained, SCT/Pixel ones; $\epsilon_{id-}(t_i)$ $(\epsilon_{id+}(t_i))$ is the electron (positron) identification efficiency in the host detector LAr calorimeter; $P^{silent}_{LVL2/EF} (t_i)$ is the fraction of the luminosity detector silent bunch crossing with no LUCID (BCM) particle hits and no reconstructed charged particle tracks pointing the electron-positron pair vertex. The linking efficiency and the electron/positron identification efficiencies are determined using the full sample of recorded and reconstructed events. Their rate (${\cal O}(200)$ Hz) is sufficiently large for a precise control of the time dependence of these efficiencies. The $P^{silent}_{LVL2/EF}(t_i)$ values are determined using the CTP-prescaled fraction of the SBC triggered events. As in the previous case, the time variation of $P^{silent}_{LVL2/EF}(t_i)$ is by far less important than that of $N_s(t_i)$ or $P^{silent}(t_i)$; - $\sigma_{e+e-}$ is the total exclusive $e^+e^-$ pair production cross section. For exclusive coplanar pairs reconstructed in the fiducial volume of the luminosity detector this cross section is largely dominated by the cross section for peripheral collisions of the beam particles mediated by two photons [@first]. The acceptance and efficiencies depend upon the momenta of the electron and positron. This dependence and the corresponding integrations in eq. (\[eq:lumi\]) was dropped in the formulae for simplicity. The strength of presented method is that it is based on low $p_T$ electrons/positrons which are produced abundantly in the minimum bias collisions and recorded with the host detector, independently of the LVL1/LVL2 and EF class of events. These particles play the role of high precision calibration candles for the luminosity measurement procedures allowing to avoid almost completely the use of the Monte-Carlo based methods relying both on the modeling of the strong interactions and on the modeling of the luminosity detector performance. In addition, since the luminosity events are processed by the TDAQ system of the host detector, no corrections for the detector dead time and event losses at various stages of the data filtering process are required for the absolute normalisation of any recorded data sample. Medium Luminosity Periods ------------------------- In the phases of the LHC operation when the average number of $pp$ interactions per bunch crossing is $<\mu> = {\cal O}(1)$ the probability of an overlap of the electromagnetic and the strong interaction driven collisions becomes large and the losses of the $e^+e^-$ pair events by applying the exclusivity criteria at the LVL2 and EF levels need to be monitored with significantly higher precision and at much finer time intervals. The remedy is to extend the definition of the Silent Bunch Crossing, based so far exclusively on the luminosity detector signals, to a Global Silent Bunch Crossing (GSBC) based on the CTP coincidence of the SBC bit with the corresponding SBC bits coming from the LUCID and from the BCM detectors. The GSBC occurrence probability, $P_G^{silent}(t_i)$, would have to be monitored with precision similar to that of the instantaneous luminosity. Another, more elegant solution, is to multiplex selected LUCID and BCM LVL1 trigger signals and to send them as the input signals to the luminosity detector trigger logic. In this case, a care would have to be taken to position the luminosity detector trigger electronics racks in a place where the LUCID and BCM signals could arrive in-time. The luminosity formula \[eq:lumi\] remain valid for the medium luminosity periods. The only change with respect to the low luminosity case is to replace $P^{silent}(t_i)$ by $P_G^{silent}(t_i)$ and to replace $P^{silent}_{LVL2/EF}(t_i)$ by $P^{silent}_{EF}(t_i)$ representing the fraction of the global silent bunch crossings in which no reconstructed charged particle tracks pointing the lepton pair vertex were found within the tracker volume. High Luminosity Periods With Base-line Luminosity Detector ---------------------------------------------------------- For the phases of the LHC operation when the average number of $pp$ interactions per bunch crossing $<\mu> \gg 1$ the method based on the counting of $e^+e^-$ pairs in the restricted sub-sample of silent bunch crossings does not work any longer. There are two ways to proceed. An optimal one is to upgrade the capacities of the luminosity detector. This will be discussed in the next section. Another one, discussed below, uses the base-line detector and reorganises the data taking at the expense of a small reduction of the time-integrated luminosity. This method uses only a fraction of collisions for which the condition $<\mu> = {\cal O}(1)$ is fulfilled. The absolute luminosity is then extrapolated to the arbitrary data taking period using the measurement of the relative, instantaneous luminosity. There are three ways of collecting $<\mu> = {\cal O}$(1) data over the high luminosity running period of the LHC. 1. The first, obvious one, is based on dedicated machine runs with reduced luminosity per bunch crossing. If a fraction below 10% of the machine running time is devoted to such runs the effect of reduced overall luminosity on the physics results would be unnoticeable for the searches of rare events and very useful for the physics programs requiring large samples of events with single collisions per bunch crossing. This programme profits from the relatively large cross section for the electron-positron pair production. 2. In the second method the luminosity detector triggers are activated only at the end of the machine luminosity run when the currents of the beams decrease or the beam emittance increases such that the $<\mu> = {\cal O}(1)$ condition is fulfilled. The applicability of this method depends upon the beam life-time and the run-length of the collider. For the present running strategy, maximising the collected luminosity, only a small increase of the range is feasible. 3. The third method would require a special LHC bunch-train injection pattern in which one of the twelve bunch trains (4 $\times$ 72 bunches), reflecting the complete Linac, Proton Synchrotron Booster (PSB), Proton Synchrotron (PS) and Super Proton Synchrotron (SPS) cycle, contains bunches with a reduced number of protons. The reduction factor, $~ \sqrt{<\mu>}$, depends upon the average number of collisions per bunch crossing for the remaining eleven bunch trains. The luminosity detector triggers are proposed to be masked unless they are in coincidence with crossings of the low intensity bunches. If such running mode can be realised at the LHC[^9] the absolute and relative luminosities could be sampled over the same time periods. This could allow for a drastic reduction of all the relative, time dependent measurement uncertainties. More importantly, a concurrent storage of the highest possible intensity bunches with the low intensity bunches at the LHC would be beneficial for the LHC precision measurement programme. It would allow concurrent measurement of the pile-up effects in those of the physics observables which need to be measured over the large time span, thus inevitably over a large $<\mu>$ range. The above running scenario is technically feasible [@Myers], but requires a wide consensus of the four LHC experiments. In each of the above strategies the extrapolation of the absolute luminosity measured for the $<\mu>= {\cal O}(1)$ bunch crossings, to an arbitrary bunch crossing set and data collection period is derived from the following $\phi$-sector track counters: [**Track-Sector$(i_{L(R)})$**]{}, [**Track-Sector-OR$(i_{L(R)})$**]{}, [**Track-Sector-Coinc$(i_L,j_R)$**]{} and [**Track-Sector-Coinc-AND$(i_L,j_R)$**]{}). The reason for choosing the method based upon the $\phi$-sector track rates is to assure that the probability of an observation of a track segment in a restricted phase space per minimum bias event is sufficiently small to disregard the pile-up effects in the luminosity calculation algorithms in the whole luminosity range: ([**Track-Sector($i_{L(R)}$)**]{}$ \ll 1$ or [**Track-Sector-Coinc($i_L,j_R)$**]{} $\ll 1$). The statistical precision of this method is assured by the use of the mean values over all the $\phi$-sectors of the [**Track-Sector-OR($i_{L(R)})$**]{} or [**Track-Sector-Coinc-AND($i_L,j_R)$**]{}. High Luminosity Periods with Upgraded Luminosity Detector --------------------------------------------------------- In all the studies presented so far in this paper the model of the base-line detector was used. For the direct measurement of the absolute luminosity in large $<\mu>$ runs an upgrade of the detector functionalities is necessary. The detailed discussion of such an upgrade is beyond the present work scope. However, it is worth sketching already here the basic conceptual and hardware aspects of such an upgrade. The principal upgrade goal is to provide, within the LVL1 trigger latency, not only the luminosity detector in-time track segments but, in addition, the measured $z$-positions of their origin with ${\cal O}(1)$mm precision. The search of the coplanar particle pairs must be restricted, in large $<\mu>$ runs producing multiple vertices, only to the track segments pointing to the same vertex and satisfying the condition that no other luminosity detector track segment, except for the two coplanar tracks, is associated with it. Similarly, at the EF level, the luminosity events have to be selected only if there were no reconstructed host detector tracks pointing to the $e^+e^-$ pair vertex. It should be noted that the LUCID and the BCM detectors’ signals can no longer be used in the search process of the exclusive electron-positron pairs. The corresponding reduction of the rejection power of the background events based on the exclusivity criteria would have to be recuperated by a more efficient electron/pion recognition. On the other hand, the restriction of the luminosity measurement to only the silent bunch crossings would no longer be necessary. The rate of selected electron-positron pairs would increase significantly at the price of a smaller signal to noise ratio, which in turn could be compensated by a higher cut-off on the electron (positron) momentum leading to a better pion rejection than the one assumed in the base-line model. Two directions of upgrading the luminosity detector can be singled out. The first one is to use the detector technology which provides a high precision timing of the particle hits, such as the one being developed for the Roman Pot Project [@RomanPot]. The use of the hit timing was discussed in our earlier paper [@second]. In addition an extra radial segmentation of the luminosity detector would have to complement the hit-timing based backtracking by reconstructing the track segments in three rather than in two dimensions. Both can be achieve applying for example the micromegas technology [@micromegas]. The second one, is to use a hadron blind detector [@Charpak]. Note that in both cases a significant increase of the processing power of the LVL1 trigger FPGAs would be required. Merits of [**B0**]{} Runs ========================= All the algorithms and methods presented above can be used for the runs with the nominal strength of the host detector magnetic field and in those for which the solenoid magnetic field switched off. The merits of supplementing the standard [**B2**]{} configuration runs by the dedicated, [**B0**]{} ones are numerous. First of all, the luminosity measurement becomes almost insensitive to the radiation of photons by the electrons and positrons traversing the host detector dead material. The [**B0**]{} runs allow to cross-check the precision of the understanding of radiation effects which otherwise have to be monitored using the Dalitz pairs and photon conversions. Moreover, the complexity of the luminosity detector FPGA algorithms would be drastically reduced facilitating the applicability of the proposed method to the periods of high luminosity. This is because, only in the [**B0**]{} case, the algorithms reconstructing the vertices of the interactions occurring in the same bunch crossing can be decoupled from the acoplanarity algorithms. In addition, the EF exclusivity cut based on the reconstructed host detector reconstructed tracks is significantly more efficient in rejecting fake electron-positron pairs due to the strong interactions (all the charged particle tracks including a very low momentum ones could be reconstructed). It has to be stressed that the rate of the selected coplanar electron-positron pairs in the high luminosity periods reaches the ${\cal O}(1)$Hz level. Therefore, only insignificant fraction of the luminosity would be sacrificed for the absolute luminosity measurement runs. The propagation of the absolute luminosity from the [**B0**]{} runs to the [ **B2**]{} runs can be performed precisely and elegantly by a simultaneous measurement of the rate $Z$ bosons in the [**B0**]{} runs[^10] which can be subsequently used for the absolute luminosity extrapolation to any [**B2**]{} period of the duration larger than that required to collect ${\cal O}(10)$pb$^{-1}$ of the integrated luminosity. Note, that several systematic uncertainties of the absolute luminosity measurement cancel in the ratio of cross section for the coplanar electron-positron pair production and to that for the $Z$ boson production (among these are the LAr absolute scale error, remaining effects of electron radiation, etc..,) The advantage of the scheme described above comes from the extraordinary coincidence that in the fiducial volume of the proposed luminosity detector the rate of the electron-positron pairs produced in “elastic" collisions of point-like protons is comparable to the rate of the $Z$ bosons produced in the inelastic ones. Absolute Cross Sections ======================= The luminosity method proposed in this series of papers extends and simplifies the present techniques of the absolute normalisation of those of the distributions of physical observables which are derived from the data collected over a long time-period. The standard luminosity block based technique [@ATLAS_LUMI] remains valid. However, it is no longer indispensable. In the proposed method users could choose independently the optimal fraction of collected data satisfying the data quality criteria defined on the bunch-by-bunch rather than on the block-by-block basis. The only requirement is that the same criteria are used both for the user selected sample and for the corresponding sample of the electron-positron pair luminosity events. Instead of the list of valid luminosity blocks the user would be provided with the ready-to-use offline algorithms to calculate the integrated luminosity corresponding to the user-specific data quality criteria[^11]. The overall detector dead time and all the losses of the events at any stage of the data selection process which are independent of the data content (LVL2/EF processors’ timeouts, etc.) need no longer be monitored and accounted for - these effects are automatically taken into account in the coherent analysis of the two data samples: - the user selected sample, - the electron-positron pair luminosity event sample. The key simplification, proposed below, is to avoid altogether all the uncertainties on the rate of the silent bunch crossing and on the time dependent efficiencies of the $e^+e^-$ pair selection. The underlying trick is to assign each event of the user selected samples of bunch crossings into the following two samples: the first one containing all the user preferred events and the second one containing only those events for which there were no luminosity detector track segments other than those associated with selected $pp$ collision. The absolute luminosity is determined first for the latter class of events and subsequently extrapolated, using the measurement of the relative luminosity, to the full sample. For such a procedure the uncertainties in the monitoring of the silent bunch crossings cancel in the ratio of the numbers of accepted luminosity and the user-selected events. The electron-positron pair selection efficiencies for the luminosity events are calculated, within this method, using only the soft particles produced in the user-selected events. Thus all the efficiencies can be sampled in the same way for the luminosity and for the user selected events. Precise monitoring of the time dependent detector and the beam quality related features is thus no longer necessary. Systematic Uncertainties ======================== The overall luminosity measurement errors, are dominated by the systematic measurement uncertainties[^12]. The key element of the measurement strategy presented in this series of papers, allowing for a significant reduction of the systematic errors and for a precise control of the remaining ones, is the placement of the luminosity detector within the host detector tracker and calorimeter geometrical acceptance regions. The dominant systematic uncertainties, reflecting all the time dependent aspects of the machine and detector performances, can be controlled using the soft particle tracks and energy deposits in the sample of ${\cal O}(100)$ bunch crossings recorded by the host detector. The rate of soft particles is large enough to use only the preselected bunch crossing data for the monitoring[^13]. If the sample of the bunch crossings chosen for the monitoring is identical to the one selected for the measurement of any given observable than, by definition, both the luminosity measurement monitoring data and the user selected data are sampled concurrently. Such a concurrent sampling does not only reduce the systematic errors but, in addition, simplifies the procedure of the absolute normalisation of the measured observables by replacing the time dependent quantities, entering the luminosity master formula, by the quantities averaged over the selected sample of bunch crossings. This is one of the merits of the proposed method. It allows for a continuous improvement of the accuracy of the absolute measurements with the increase of the measurement time without a necessity of a precise book-keeping of all the time dependent detector and machine performance features. Background Subtraction ---------------------- The fraction of the total number of exclusive electron-positron pair candidates coming from the strong interaction background sources, $\beta(t_i)$, is determined using solely the monitoring data, bypassing all the modeling uncertainties of the minimum bias events. The upper bound of its initial (anticipated) systematic uncertainty was determined using the simulated PYTHIA events. This procedure, illustrated in Figure \[plot20\], mimics algorithmically the ultimate procedure based on the recorded events. The continuous line, depicted in Figure \[plot20\], represents a linear fit to the reduced acoplanarity distribution of the unlike charge particle pairs for those of the background events (PYTHIA) for which the TPZ LVL1 trigger bit was set to 1. This distribution was fitted in the $(0.1, 0.3)$ interval in which the contribution of the electron-positron pair events is negligible [@first], and then extrapolated down to $\delta\phi_r = 0$ using the fit parameters. The extrapolation result is represented by the dash-doted line in Figure \[plot20\]. In the next step, the $\delta\phi_r$ distribution for the like charge pairs satisfying the $TPZ = 1$ condition was fitted in the $(0.1, 0.3)$ interval and extrapolated to $\delta\phi_r = 0$ using the fit parameters. The extrapolation result is marked with the dotted line in Figure \[plot20\]. The upper bound of the background subtraction uncertainty corresponds to the spread of the distributions and is of the order of 0.4% for $\delta\phi_r < 0.05$[^14]. This plot proves that the extrapolation to the small acoplanarity region is insensitive to the total particle pair charge and reflects merely the phase-space for multi-particle production in strong interactions. Acceptance Correction --------------------- The systematic uncertainties on the acceptance, $Acc(t_i)$, are, as before, controlled using the host detector recorded data. The systematic effects are subdivided into two classes: the host detector effects and the luminosity detector effects. The contribution of the host detector effects such as: - the losses of the electrons and positrons on the way from the interaction vertex to the luminosity detector fiducial volume – due to hard photon radiation in the material of the beam pipe or of the host detector, - the biases in the reconstructed momentum scale of those of the host detector reconstructed charged particles which traverse the fiducial volume of luminosity detector, - the momentum resolution biases, - the biases in the absolute energy calibration of the LAr calorimeter (important only for the [**B0**]{} configuration), - the systematic shifts and resolution biases in the reconstructed azimuthal angles of particles at the interaction vertices, to the overall measurement error is significantly smaller that the contribution of the luminosity detector effects[^15]. The luminosity detector systematic errors were determined by simulating the full data selection and measurement chain with the biases introduced on one-by-one basis. The goal of these simulations was to quantify the impact of each of the luminosity detector systematic effects on the final systematic uncertainty of the measured luminosity. The results of these simulations can be summarised as follows: - the effect of the luminosity detector misplacement by $0.5$ cm with respect to the nominal $z$-collision point of the LHC bunches translates into a 0.3% luminosity bias, - the effect of decentering of the luminosity detector with respect to the beam axis ($x = y = 0$) by $1$ mm translates into a 0.8% luminosity bias, - the effect of the relative $\phi$-tilt of the luminosity detector planes with respect to each other by 1 mrad translates into a 0.1% luminosity bias, - the effect of misjudgement of the length of the LHC bunches by 1 cm around the central value of 7.5 cm translates into a 0.6% luminosity bias, - the effect of 0.1% uncertainty on the value of magnetic field in the volume of the host detector tracker and in the fiducial volume of the luminosity detector translates into a 0.4% luminosity bias. These results show that already for the initial geometrical survey of the luminosity detector, before applying the alignment corrections deduced form the monitoring data, these contributions are below a 1% level. Ultimately the luminosity detector contribution to the overall measurement uncertainty are expected to be driven by the monitoring precision of the length of the LHC bunches. The corresponding biases are expected to be smaller than 1%, provided that the length of the LHC bunches is controlled with a 10% precision. Efficiencies ------------ The systematic errors of the efficiencies of the electron/positron identification and of the efficiencies of linking of the luminosity detector tracks to the host detector tracks reflect the purity of the monitoring sample of small invariant mass electrons and positions pairs originated from photons converted in the beam pipe and from the Dalitz decays of neutral pions. A special care have to be taken to understand the dependence of the electron selection efficiency on the isolation of the electromagnetic cluster. For low and medium luminosity runs this can be done by pre-selecting only those of the minimum bias events which are characterised by a low multiplicity of particles traversing the fiducial volume of the luminosity detector. For the high luminosity runs the efficiency of electron identification decreases and must be monitored in the instantaneous luminosity dependent way. The impact of the precision of monitoring of $P^{silent}_{LVL2/EF}(t_i)$ on the luminosity measurement systematic uncertainty is expected to be negligible in the low luminosity periods in which the $P^{silent}_{LVL2/EF}$ value is approaching 1. For medium luminosities this quantity needs to be precisely monitored using the dedicated samples of random events. The monitoring precision and the corresponding precision of the luminosity measurement depend only on the fraction of the total host detector throughput which can be allocated to these events. In general systematic errors reflecting the achievable precision of monitoring of the selection efficiencies can be properly estimated using only real data collected at the LHC. They are not expected to contribute significantly to the overall measurement uncertainty for the low luminosity periods. To which extent this can be achieved for the medium and high luminosity runs remains to be demonstrated. Theoretical Cross Section ------------------------- The following source of uncertainties on the theoretical calculations of $\sigma_{e+e-}$ were taken into account: - the uncertainty of the elastic form factors of the proton, - the uncertainty in inelastic form factors of the proton, in the resonance, photo-production, transition and in the deep inelastic regions, - the uncertainty in the strong and electromagnetic re-scattering cross sections, - the uncertainty in the higher order electromagnetic radiative corrections. The first two sources of systematic errors have been analysed and evaluated in our previous paper [@first]. It was found that for the $e^+e^-$ pairs in the fiducial volume of the luminosity detector fulfilling the requirement: $\delta\phi_r^{cut} < 0.01$ the present uncertainty on the elastic and inelastic proton form factors translates into a 0.3% precision of the cross section. The size of the re-scattering corrections was analysed and found to be smaller than 0.1% in the above kinematic region. At present, the largest contribution to the total uncertainty comes from the missing calculation of the higher order electromagnetic radiative corrections. Their contribution to the cross sections can reach 1%. There is, however, no other than a technical obstacle in calculations of these corrections – if requested they can be made to the precision significantly better than the form factors related uncertainties [@Skrzypek]. Conclusions and Outlook ======================= It has been demonstrated that the proposed method of the luminosity measurement has a large potential to provide the highest achievable precision at hadron colliders. It is based on the electromagnetic collisions of the beam particles in the kinematic regime where they can be treated as point-like leptons and, as a consequence, their collision cross sections can be calculated with a precision approaching the one achieved at the lepton colliders. The rate of a fraction of these collisions which can be selected and reconstructed using a dedicated luminosity detector is large enough to deliver better that 1% precision over the data collected over less than one month of the data taking with a nominal solenoid current, and a couple of days for the ${\bf B0}$ field configuration. The systematic measurement uncertainties can be controlled to a better than 1% precision, by using parasitically, samples of the host detector recorded events. The absolute luminosity measurement procedures are insensitive to the modeling of the collisions mediated by the strong interaction. The proposed method can be directly applied by the LHCb experiment which can take a full profit from the sizable Lorentz boost of the lepton-pair rest frame allowing to replace the electron-positron pairs by the unlike charge muon pairs which radiate less and can be identified more easily. Thus the LHCb and the host detector distributions could be cross-normalised to a high precision. The method presented in this series of papers can be directly used in the $pA$ and $AA$ collision modes of the LHC collider. For these collision modes the signal-to-background ratio increases, respectively, by $Z^2$ and $Z^4$ due to the nucleus charge coherence effects. Moreover, a significant increase of the signal rate allows to retain its high statistical precision. Last but not the least, the scaling of the collider energy dependent rate of the luminosity events can be theoretically controlled with a per-mille precision allowing to cross normalise the data taken at variable collision energies. This series of papers is only a first step towards its ultimate goal: an implementation of the proposed method in the LHC environment. The preliminary studies presented in this series of papers, and the use of the base-line detector model, are sufficient as a proof of feasibility of the method. Any further steps must be preceded by the acceptation of the method by one of the LHC collaboration. Should this happen the concrete detector design using the host detector preferred technology would be the next step. A conservative approach would be to design a detector for the measurement of the absolute luminosity in the dedicated low/medium luminosity periods. An ambitious programme must have as a target an upgrade the present base-line detector concept such that it can be directly used in the high luminosity period of the machine operation. This is anything but easy but the gain in the precision of all the LHC measurements makes it worth an effort. [999]{} M. Mangano, Motivations and precision targets for an accurate luminosity determination, an opening talk at the CERN Lumi-days workshop, CERN, 13-14 Jnuary 2011. ATLAS Collab., ATLAS-CONF-2011-116, 19 August 2011. M. W. Krasny, F. Fayette, W. Placzek, A. Siodmok, Eur. Phys. J. [**C51**]{} (2007) 607 (2007) and hep-ph/0702251,\ F. Fayette, M.W. Krasny, W. Placzek, A. Siodmok, Eur. Phys. J .[**C63**]{} (2009) 33 and arXiv:0812.2571 \[hep-ph\],\ M. W. Krasny, F. Dydak, F. Fayette, W. Placzek, A. Siodmok, Eur. Phys. J. [**C69**]{} (2010) 379 and arXiv:1004.2597 \[hep-ex\]. M. W. Krasny, Acta Phys.Polon. [**B42**]{} (2011) 2133 and arXiv:1108.6163v1 \[hep-ph\]. M. W. Krasny, S. Jadach, W. Placzek, Eur. Phys. J. [**C44**]{} (2005) 333 and hep-ph/0503215. M. W. Krasny, J. Chwastowski and K. S[ł]{}owikowski, Nucl. Instrum. Meth. [**A584**]{} (2008) 42. M. W. Krasny, J. Chwastowski, A. Cyz, and K. S[ł]{}owikowski, Luminosity Measurement Method for the LHC: The Detector Requirements Studies, June 2010, arXiv:1006.3858 \[physics.ins-det\]. ATLAS Collab., G. Aad et al., J. Inst. [**3**]{} (2008) S08003, ATLAS Collab., CERN-LHCC-2003-022. D0 Collab., V.M. Abazov et al., Nucl. Instrum. Meth. [**A565**]{} (2006) 463. S. P. Baranov, O. Dunger, H. Shooshtari and J.A.M. Vermaseren, LPAIR - A Generator for Lepton Pair Production. Proceedings of Physics at HERA, vol. 3, (1992) 1478. T. Sjöstrand, P. Edén, C. Friberg, L. Lönnblad, G. Miu, S. Mrenna and E. Norrbin, Computer Phys. Commun. [**135**]{} (2001) 238. R. Brun et al., Geant 3.21, CERN Program Library Long Writeup W5013,\ Geant4 Collab., S. Agostinelli et al., Nucl. Instrum. Meth. [**A506**]{} (2003) 250,\ Geant4 Collab., J. Allison et al., IEEE Trans. Nucl. Science [**53**]{} (2006) 270. The ATLAS Collab., G. Aad et al., Eur. Phys. J. C (2011) 71: 1630. S. Myers, private communication Proceedings of the Workshop on Fast Timing Detectors: Electronics, Medical and Particle Physics Applications, November 29 – December 1, 2010, eds. J. Chwastowski, P. Le Du, C. Royon, Acta. Phys. Polonica B Proceedings Supplement, vol. 4, no. 1, 2011. Y. Giomataris, Ph. Rebourgeard, J-P. Robert and G. Charpak, Nucl. Instr. Meth. [**A376**]{} (1996) 29. M. Skrzypek and S. Jadach, private communication. Y.Giomataris, G. Charpak., Nucl. Instr. Meth. [**A310**]{} (1991) 589. [^1]: This work was supported in part by the programme of co-operation between the IN2P3 and Polish Laboratories No. 05-117, Polonium Programme No. 17783NY and Polish Grant No. 665/N-CERN-ATLAS/2010/0. [^2]: Adding the pseudorapidity segmented $z$-planes (or equivalently $\phi$-tilted planes) would certainly be useful, in particular for the periods of the highest LHC luminosities. Since, the main purpose of this paper is a proof of principle, these aspect will not be discussed further here. The presented luminosity detector model can be thus considered as the model of the base-line detector satisfying a minimal set of requirements. [^3]: Note that this requirement may lead to adding, if necessary for a chosen technology, an extra, coarse $\phi$-granularity timing plane in front of the three highly segmented hit planes. [^4]: The $N_0$ parameter depends upon the processing power of FPGA-based electronics. [^5]: For a detailed discussion of the timing of the luminosity detector signals see [@second]. [^6]: The LVL2 and EF processing of the monitoring events will be discussed later while addressing the precision of the proposed luminosity measurement method. [^7]: The LUCID “particle hit" is expected to be set at a sufficiently high discriminator threshold to be as much as possible noise-free. [^8]: The electron-positron pair signal events were generated with the LPAIR [@LPAIR] generator. This generator was upgraded to suit our needs (see [@first] for details). For the simulations of the minimum bias events the PYTHIA [@PYTHIA] event generator was used. As discussed in details in our earlier papers [@first], [@second], the studies were based on simplified methods of particle tracking in the detector magnetic field, parametrised simulation of their multiple scattering in the dead material, and on conservative estimation of the effects of the photon radiation by electrons. [^9]: Such a running mode is anything but easy in the presence of coherent bunch interaction effects which depend on the bunch charge. The maximal acceptable dispersion range of the bunch intensities would have to be determined by the LHC machine experts. There are several consequences for such a running scenario. For example, one has to take into account here that the four LHC experiments are running at the same time, and that the colliding bunch partners are different in different Interaction Points (IPs) of the LHC – it is worth stressing here that the distance between the ATLAS and CMS IPs is half of the LHC ring circumference, and that the IPs at which the bunches of different intensity interact are those of the ALICE and LHCb experiments for which maximising the collected luminosity is of secondary importance. [^10]: The invariant mass of the electron-positron pair is determined in the [**B0**]{} runs using the angles of the leptons and the LAr energy deposits linked to the lepton tracks. [^11]: Physicists using the data coming from different detector components may prefer different data quality criteria. In addition, depending on their tasks they may reject algorithmically, again on bunch-by-bunch basis, a sample of selected bunch crossings properties e.g. reject the bunch crossings with interactions of the halo particles within the detector volume. [^12]: For example, for the machine luminosity of $L= 10^{33}$s$^{-1}$cm$^{-2}$ the sampling time to reach the 1 % statistical precision of the luminosity measurement is three days for the [**B0**]{} runs and one month for the [**B2**]{} runs. [^13]: Of the order of 10$^{5}$ electrons (positrons) coming from the photon conversions and crossing the luminosity detector volume are recorded over one minute. [^14]: Note that the pairs used in the luminosity measurement contribute to the like sign pair sample only if the charge of one of the particles charge is wrongly reconstructed. For the low momentum particles the probability of the particle charge misidentification is small enough to be neglected. The sample of like sign pairs represents, thus, a pure background sample. [^15]: The performance precision targets for the host detector can be relaxed by about an order of magnitude with respect to those necessary for the precision measurements of the parameters of the electroweak models such as e.g. the mass of the W bosons [@krasnySMparameters]. | Mid | [
0.592760180995475,
32.75,
22.5
] |
It’s so exciting seeing your little ones grow up, and with this colourful traditional wooden height chart you’ll enjoy being able to keep track of all their little growth spurts! Beautifully painted with a variety of colourful characters, this wooden animal height chart is perfect for the nursery door! Made from lightweight wood with a convenient hanging recess on the back. Materials & Dimensions Height: 100cm. Width: 0.9cm. Length: approx. 10.5cm. Composition: Plywood. Hole in back for hanging on wall. Delivery Information Subject to availability, we aim to dispatch within 5 working days of receiving your order. UK Deliveries Standard Delivery - £4.95, 2 day delivery after dispatch.Next Day Delivery - £7.95, can only be guaranteed on orders placed by 12pm Monday-Thursday. International Deliveries Eire and Europe - £9.95, Royal Mail International.Rest of the World - £14.95, Royal Mail International. Goods may be returned in saleable condition for refund or exchange within one month of purchase, using the returns form enclosed with each order. Goods are returned at your own cost, but will be refunded if the return is due to an error on our part. Email me when available Please select size. http://www.houseofbruar.com/wooden-height-chart-cg15021dinosaur/CG15021DINOSAURTraditional Wooden Height Chart/images/products/medium/CG15021DINOSAUR.jpg19.95GBP/present-shop/childrens/traditional-toys/It’s so exciting seeing your little ones grow up, and with this colourful traditional wooden height chart you’ll enjoy being able to keep track of all their little growth spurts! Beautifully painted with a variety of colourful characters, this wooden animal height chart is perfect for the nursery door! Made from lightweight wood with a convenient hanging recess on the back. | High | [
0.6843373493975901,
35.5,
16.375
] |
2 Bdrm available at 411 Duplex Avenue, Toronto Located in the heart of midtown Toronto, residents in these luxurious midtown Toronto apartments for rent enjoy the inside walk to the Yonge & Eglinton subway station. It is also just minutes from the cosmopolitan downtown core of Toronto. With an almost perfect Walk Score of 98 and Transit Score of 95 the combination of shopping within the Yonge-Eglinton Centre (411 Duplex and 33 Orchardview), including a 24 Hour grocery store, great restaurants, nightclubs, theatres, schools, daycare centres and great parks, make Yonge Eglinton Apartments the ideal place to call home. Ask our friendly staff about our beautiful rooftop patio and sundeck. What's included in your rent Water, Heat Have Questions? Contact the Property Manager: (416) 486-5728 Building Features New Luxury Penthouse Available; visit www.caprent.com/luxury/penthouse All Major Credit Cards Accepted Via RentMoola Open, spacious living areas featuring with large closets, ceramic tile tub surround, private balcony Inside connection to shopping centre with movie theatre, fitness club, drug store, music store and grocery store plus many others Steps to TTC Subway station Secure building with Enterphone system, as well as on site professional and friendly staff Underground parking available Roof top deck and Patio Indoor Pool Neighbourhood Features Walk Score of 98: lively neighbourhood including bars, nightclubs, pubs, cafes, and restaurants Transit Score of 95: Inside walk to Yonge & Eglinton subway 10 minutes to downtown 15-minute drive to highways 401 and 404 45 minutes to Pearson International Airport Financial institutions: Bank of Montreal, TD Canada Trust, CIBC, RBC Schools and local park with swing set, slides and playground in the area 15 minutes to Sunnybrook Hospital (Bayview, north of Eglinton) Police: Division 53 located at Duplex and Eglinton 416-808-5300 Yonge Eglinton Medical Centre across the street North Toronto Memorial Community down the street Northern District Public Library across the street | High | [
0.698412698412698,
35.75,
15.4375
] |
Water clarification is well known throughout a number of industries. Various physical means have been used to remove particulate matter dispersed in a bulk liquid phase. Examples of common particulate separation techniques include filtration, settling, desalting, electrochemical techniques, centrifugation, flotation, and the like. Such separation processes can often be made more efficient by the use of coagulating and flocculating agents. Coagulation may be defined as the stabilization of colloids by neutralizing the forces that keep the colloidal particles dispersed or separated from each other in the wastewater. Cationic coagulants are often used to provide positive electrical charges to the colloidal particles to neutralize the negative charge on the particles. As a result, the particles collide to form larger particles called flocs. Flocculation, on the other hand, refers to the action of polymeric treatments in the formation of bridges between the flocs to thereby form large agglomerates or clumps. Anionic and cationic polymers are commonly employed as flocculants to agglomerate the flocs so that the agglomerates will float and not settle. Once suspended in the wastewater, they can be removed via sedimentation, filtration, or other separation techniques. Commonly employed cationic coagulants such as those based on polydiallyldimethylammonium chloride (PDADMAC) are disclosed for example in U.S. Pat. No. 3,288,770. Additionally, cationic copolymers such as those based on acrylamide copolymers with cationic repeat units such as quaternary ammonium acrylates dimethylaminoethylacrylate methyl chloride (AETAC) or dimethylaminoethylmethacrylate methyl chloride (METAC) are often used. In those situations in which quaternary ammonium salt moieties are present in polymers that are employed as cationic coagulants, the anionic counter ion to the cationic nitrogen is often a chloride ion. These chloride ions are corrosive, and when excessive amounts of same are found in the wastewater, corrosion of metal surfaces in contact with the water can occur. Additionally, environmentally based requests to limit the amount of total dissolved solids (TDS) present in effluents have been increasing over the years. Inorganic ions that are measured as part of the TDS discharge include chloride ions. Many industries and municipal wastewater facilities must comply then with new TDS standards; thus raising concern for chloride content in such discharge. TDS also presents an issue for water reuse of treated wastewater. | High | [
0.6602409638554211,
34.25,
17.625
] |
All pictures are for presentation reasons only, for this reason the products that you obtain may conflict in outlines, saturation or other trademarks. While retaining the major features and details, we have the power to adjust, remove or boost small trademarks. Without the initial packaging, we may also recommend to stow away or transport the spare parts or associated equipment. All pictures are for demonstration purposes only, therefore the products that you receive may differ in shape, colour or other features. We reserve the rights to change, remove or improve minor features while retaining the major features and specs. We may also choose to stock or transport the spares or accessories without the original packaging. Quality Warranty:Worth assurance for all competent spare parts and accessories sold, will be provided in which case the spare parts and accessories are utilized on our machinery and setup by our technician or under our experts's supervision. In which case spare parts or accessories have been fitted into other company's machines or put together without our consent, the worth warranty will be void except when pre-approved by the company.We provide a quality warranty for qualified spares and accessories sold only when the spares or accessories is used on our machines and installed by our technician or under our technician's supervision. The quality warranty is void if spares or accessories have been fitted into 3rd party machines or installed without our acknowledgment ( unless pre-approved by management and stated above ) . Free Installation:No costs be requested for diagnosis and new service parts or components fitting for machinery purchased from us. After the new spare parts or accessories has been fitted by our specialist, they will insure that the clients machines is 100% working and well tested. We may suggest to supply technical services online or send a technician to supply technical support for other companies machinery. Unfortunately, we cannot, when concerning 3rd-party machines, guarantee which steps (if any) will be taken, or guidance (if any) will be provided by our technicians.Diagnosis and new spares or accessories installation is free of charge for machines sold by us. Our technicians will ensure that your machine is fully functional and well tested after the new spares or accessories have been installed.For 3rd-party machines, we may choose to provide technical advice remotely or send technician to provide technical assistance. We cannot however guarantee actions taken, or advice given by technicians regarding 3rd party machinery. Money Back Guarantee:You will be presented with a 100% refund should our specialist remotely recommend or proposes a redundant - reinstatement or component add-on for machines that we have sold, including for unused or spare spares or accessories as soon as you send the spare parts or associated equipment back in its initial quality.If our technician remotely diagnoses or proposes a spare replacement or accessory add-on for machines that we have sold, and for unused or redundant spares or accessories, we will provide you with a 100% refund on credit note once you send the spares or accessories back with 30 days from purchase in its original condition. Call-out Installation & Travel Cost:Would you prefer that we travel to your premises for a spare- restoration or add-on fitting for machinery that we have sold, we shall not charge you a labour fee, but only for the traveling expenses and accommodation fees if compulsory. We choose that you organize for the specialist to get picked up and sent back. The existing rate payable, would you wish for us to travel to you, is R 6.00/km increased by the traveling distance between our workplace and your property.Should you prefer that we come to your premises for a spare-replacement or add-on installations for machines that we have sold, we will not charge you a labour fee, but only for the travel costs and accommodation fees if necessary.We prefer that you arrange that the technician picked up and sent back. If you would like us to travel to you, the current rate is R 6.00/km multiplied by the driving distance between our branch and your premises. | Mid | [
0.585567010309278,
35.5,
25.125
] |
Customer Reviews Customer Ratings & Reviews Sorry, No Product Reviews yet.Be the first to rate this item! Write Review Overall Shopping Experience: Customers gave us a 4.897 out of 5 Rating. Thank You! 5 out of 5 Will do businesses with company again. Fast, convenient service and hassle free. Product arrive in perfect condition. No worries here. - Mary Ann (Havre de Grace, MD) 5 out of 5 - Tanya8624 (Florida) 5 out of 5 Ordering was simple; shipping was free and ahead of projected timeframe. Thank you for making this process so easy and who can beat "free." - DG (Lewisberry PA) 5 out of 5 See top - Vietnam Warrior (W.Yarmouth Mass) 5 out of 5 - New Bride (CA) 5 out of 5 I was a little reluctant to order from aGarden Place due to the lack of reviews however I called their customer support number and spoke with Diana (she answered the phone, no voicemail or callbacks) who was professional, provided information about the company and the product. I then placed my order online and it was delivered to my front door within a few days, everything happened as Diana told me it would. I would definitely order from aGarden Place again. - briane (Sutton, NH) 5 out of 5 I couldn’t be more pleased with my order from aGardenPlace.com. The items arrived quickly, and I received the best price compared to all other vendors. I will be back! - AggieMOM (San Antonio, TX) 5 out of 5 I trust this website, and am 100% pleased with their service & professionalism. - Dana (Adamsville, Tennessee) Great experience on your website and with your shipping speed. - (Sun City Center FL) 5 out of 5 Outstanding Customer Service, great array of products. Could not be happier with the service I received. They go beyond what is the norm for excellent customer service! - Deb in Ohio (Springfield, OH) 5 out of 5 I was having trouble placing my order, my fault, they stepped in and corrected my errors and placed my order. Exceptional customer service. Went above and beyond what I expected. - Deb in Ohio (Springfield, OH) 5 out of 5 When I encountered problems from my own ignorance, they fixed the problem quickly. Very nice website and I will be back again. - Deb in Ohio (Springfield, OH) Description Tops are not threaded; they simply slide on and off the mount. Ez Vane weathervane products are crafted of 14-guage solid steel with a durable and scratch resistant baked on finish. The beautiful copper vein powder-coat finish gives this weathervane the appearance of old-time hammered copper at a fraction of the cost. Sealed ball bearings in the windcups keep them spinning freely with even the slightest breeze. Design tops are laser cut in one piece, minimizing welding and making them a unique addition to your home. | Low | [
0.529545454545454,
29.125,
25.875
] |
Street view looking at the east side of Delaware just north of 6th Street. Buildings in view include what was the National Bank of Commerce at 545-547 Delaware (building on the left) and the T. M. James... | Low | [
0.39712918660287005,
20.75,
31.5
] |
Discover the Magnificence of California's 45 Greatest Lakes 5 Star Rating System What makes a 5 Star Lake? Stars are based on the variety and quantity of services available. Low stars does not mean that the services provided at a particular lake are poor, only that they are limited. Low stars can often be a positive. Anglers seeking quiet waters probably don't want a 5-star boating lake full of jet skis and water skiers. Campers looking for a back-to-nature lake likely would want to avoid a lake ringed with resorts and motels. What makes the perfect lake? It all depends on what you are looking for. Which Lakes Earned the Most Stars? Shasta Lake - 28 Lake Tahoe - 26 Big Bear Lake - 24 Mammoth Lakes - 24 Lake Havasu - 23 Trinity Lake - 23 Lake Almanor - 23 Shaver Lake - 23 June Lakes Loop - 22 Lake Oroville - 21 These lakes earned high overall stars because they offer a wide variety of boating, camping and lodging. Find the best lake for you by exploring California's Greatest Lakes. Shore to Shore Coverage of Each of California's 45 Greatest Lakes Waterskiing, Wakeboarding, and every other kind of thrill-seeking adventure on the end of a tow rope. California lakes are home to some of the greatest excitement on water, from the glassy channels of Lake Havasu to the winding arms of Shasta Lake. Camping, Picnicking, Swimming, Sports, Hiking Choose the lake to suit your family. Some lakes like Big Bear Lake and Bass Lake seem as if they were planned just for families. At those lakes and many others you'll find a wide choice of comfortable campgrounds, easy to access boat rentals, sandy beaches, spectacular scenery, and a myriad of activities available around the shore. Many lakes have programs specially designed for children from water skiing camps to Junior Ranger programs. Jet Skis Ski Boats Sailboats Houseboats Classic Wooden Boats Whatever your pleasure, there's likely a great lake to suit your needs within easy driving distance. Some lakes buzz like hornets' nests on summer days with personal watercraft and wakeboarding boats, while other lakes spread in restful quiet with the only sounds the rustle of the wind in the trees and the swish of a paddle. Find the right lake. We have all the best ones right here on California's Greatest Lakes. Fishing is good at virtually every one of California's Greatest Lakes. Bass anglers favor the warmer lakes were the largemouth bass and stripers grow to trophy size and keep calling the fishermen back again and again. Other sportsmen would rather go after the colder water Kokanee salmon and trout. Some of the lakes are home to almost every species of fish found in the state while others like Eagle Lake specialize in one particular fish. You'll find the best lake for you right here. Every lake offers something different. Shasta has caves, Bass Lake is next to Yosemite, Folsom has bike and horse trails, Millerton has a casino for a neighbor, New Melones provides guided history and nature walks, Clear Lake is home to the Konocti Harbor Resort which hosts big name concerts, Big Bear offers skiing, mountain biking, and ziplines, Casitas has a water play area, and Havasu has the London Bridge. Check out the activities available at your favorite lakes. Every effort is made to provide accurate and up to date information, but we cannot be responsible for errors or changes that may have occurred since publication. Always confirm information with the service provider and check for any recent changes that may have been made. The information provided on this website is done so without warranty of any kind, either expressed or implied, including but not limited to, the implied warranties of merchantability and fitness for a particular purpose. | Mid | [
0.620689655172413,
33.75,
20.625
] |
players.engines.bms package ============================ Module contents --------------- .. automodule:: players.engines.bms :members: :undoc-members: :show-inheritance: | Mid | [
0.5532786885245901,
33.75,
27.25
] |
Effect of anti-smoking legislation on school staff smoking may dissipate over time. This study describes student perceptions of school staff smoking before and after implementation of legislation prohibiting smoking on school grounds. Students completed self-report questionnaires before (grade 6) and after (grade 7, 9 and 11) the law. The percentage of students reporting that school staff smoked in areas where smoking is forbidden was 19%, 32% and 33% in grade 7, 9 and 11, respectively. The mean(SD) score for the frequency with which students saw school staff smoking decreased after the ban but increased thereafter [2.5(1.1), 1.9(1.0), 2.4(1.1) and 2.3(1.1)] in grade 6, 7, 9 and 11, respectively [F(2.861,1662.229) = 45.350, P < 0.001]. These data suggest that the effect of the law dissipated over time. | Mid | [
0.580152671755725,
38,
27.5
] |
Commonwealth Games 2014: GiveMeSport top five performing athletes (1) After 11-days of competition that has seen 4500 athletes, from 71-nations, competing in 17-sports, a thrilling 20th Commonwealth Games came to an end last night in Glasgow, in what is being hailed as the greatest Commonwealth Games ever to be staged. Whether watching or competing in Scotland over the last week or so, few would argue the value of entertainment which has seen 261 of the most talented athletes be crowned Commonwealth Champions in their respective events, has been unprecedented to the previous 19-editions. Over the years critics have suggested the Queens Games, that take place every four-years, are out of touch with the modern, commercial and profit driven world of today's sport and is an event that doesn't feature very high up on the priority list of a number of the competing athletes. All I can say that if this mindset persists after the superb showpiece in Glasgow, that has seen no fewer than 142 Games records, including nine world records, then one wonders if it is the critics that are out of touch, and not the event itself. Scotland had never hosted a sporting event of this magnitude before and for those that thought that they couldn't pull it off, well you were hopelessly wrong. World class venues, typical Scottish hospitality and most importantly, a stunning 53 medals in all for the host nation, which saw them finish fourth in the overall medals table, are just some of the reasons that have contributed towards an enormously successful event. Another home nation, England, finished top of the medals table, dethroning the mighty Australia for the first time since 1986. England finished with an overall total of 174 medals, a staggering 58 of them gold, nine clear of the normal table toppers Australia. Lulu and Kylie Minogue brought the curtain down on 11-days of sporting excellence last night at Hampden Park, which featured Glasgow handing over the Games Baton to the 2018 hosts, Gold Coast City in Queensland, Australia, and GiveMeSport would like to express our congratulations to the hosts and all competing athletes on a phenomenal effort. Most importantly GiveMeSport would like to express our sincere congratulations on the medal winning performances of our very own athletes. Join us in applauding our athletes on their achievements during the 20th Commonwealth Games. Do YOU want to write for GiveMeSport? Get started today by signing-up and submitting an article HERE: http://gms.to/writeforgms Report author of article DISCLAIMER This article has been written by a member of the GiveMeSport Writing Academy and does not represent the views of GiveMeSport.com or SportsNewMedia. The views and opinions expressed are solely that of the author credited at the top of this article. GiveMeSport.com and SportsNewMedia do not take any responsibility for the content of its contributors. Want more content like this? Like our GiveMeSport Facebook Page and you will get this directly to you. | Mid | [
0.573333333333333,
32.25,
24
] |
Follow button ruins ecosystems? - munsays http://blog.kornar.com/?p=36 ====== kornarcom i actually don't agree with this, becuase i think a lot of apps have some sort of reputation, like youtube has views and likes, its the same with the followers number!! they need to be thier to make twitter, instagram and picplz succesfull! ------ munsays im not trying to put to disrepute the success of followers its been proven it works i am merely suggesting that the concentration based on it has significant consequences | Mid | [
0.544329896907216,
33,
27.625
] |
Does MITE Make Right? On Decision-Making Under Normative Uncertainty Brian Hedden 1 Introduction We are not omniscient agents. Therefore, it is our lot in life to have to make decisions without being apprised of all of the relevant facts. We have to act under conditions of uncertainty. This uncertainty comes in at least two kinds. First, there is ignorance of descriptive facts; you might be ignorant of the potential causal impacts of the various actions available to you. For instance, you might be unsure whether to give the pills to your headache-suffering friend because you are uncertain whether they are painkillers or rat poison. Second, there is ignorance of normative facts, or facts about whether a certain action or outcome is good or bad, permissible or impermissible, blameworthy or praiseworthy, etc.1 For instance, you might know exactly what would happen (descriptively speaking) if you (or your partner) had an abortion and what would happen if you didn't, and yet still be uncertain about whether having an abortion is a morally permissible thing to do.2 Due to the ubiquity of normative, and not just descriptive, uncertainty, we might want a theory that provides some guidance about how to take this normative uncertainty into account in deciding what to do. While I will be concerned with specifically moral uncertainty, much of what I say will carry over to other cases of normative uncertainty, such as uncertainty about what would be instrumentally rational to do or what it would be epistemically rational to believe. At this stage-setting phase, some terminology will be helpful. Let us say that what you objectively ought to do depends only on how the world in fact is, and not on how you believe the world to be. For Utilitarians, what you objectively ought to do is whatever will in fact maximize happiness, irrespective of your beliefs about what will maximize happiness. For non-consequentialists, what you objectively ought to do might depend, for instance, on facts about whether 1Moral non-cognitivists might resist talk of moral facts. But I do not take this paper to be committed one way or the other regarding moral cognitivism vs. non-cognitivism. I speak of moral facts merely for the sake of convenience. See Sepielli (2012) for discussion of how the problem of what to do under conditions of normative uncertainty arises even for non-cognitivists. 2This example is from Sepielli (2009). 1 some act would in fact cause an innocent person to die (thereby violating that person's rights), irrespective of your beliefs about whether it would cause an innocent person to die. And let us say that what you subjectively ought to do depends in some way on your descriptive beliefs about how the world is. For consequentialists, what you subjectively ought to do might be whatever will maximize expected world value (expected total happiness, for utilitarians), relative to your beliefs. And for non-consequentialists, what you subjectively ought to do might depend on whether you believe that some act would cause an innocent person to die. (Note that my usage of the term 'subjective ought' differs from that of some authors, who define what you subjectively ought to do as whatever you believe that you objectively ought to do. There are at least two ways in which my usage of the term differs from theirs. First, their usage is incompatible with an expectational account of what you subjectively ought to do, such as the consequentialist one just mentioned. Second, their usage makes the subjective ought simultaneously sensitive to both descriptive and normative uncertainty.) Neither the objective ought nor the subjective ought, on my usage, is sensitive to your moral uncertainty. We might then introduce a super-subjective ought and say that what you super-subjectively ought to do depends on both your descriptive uncertainty and your moral uncertainty. This gives us a tripartite distinction: Objective ought : Insensitive to your descriptive and moral uncertainty. Subjective ought : Sensitive to your descriptive uncertainty but insensitive to your moral uncertainty. Super-subjective ought : Sensitive to your descriptive and moral uncertainty. Before moving on, let me flag that making these distinctions does not commit one to the claim that the English word 'ought' is ambiguous or that it admits all and only these three readings. It may be, for instance, that modals are highly context-sensitive, so that different contexts can give rise to all sorts of readings of 'ought' claims. My aim here is simply to highlight some possible senses of 'ought' that may be of particular interest to normative theorists. The remainder of this paper is dedicated to evaluating the prospects for theorizing about decision-making under moral uncertainty. My evaluation, which is largely negative, has two parts. I begin by examining what has already emerged as the preeminent proposal for what you super-subjectively ought to do. This proposal takes the dominant theory of what you subjectively ought to do, namely expected value theory, and attempts to extend it to take into account your moral uncertainty as well. I argue that this proposal is unworkable. In the second part of my evaluation, I question whether we should want an account of decision-making under moral uncertainty in the first place. I tentatively suggest that a super-subjective ought has no important role to play in our normative theorizing and should thus be abandoned. There is no normatively 2 interesting sense of ought in which what you ought to do depends on your uncertainty about (fundamental) moral facts.3 In this respect, moral uncertainty is importantly different from descriptive uncertainty. 2 Does MITE Make Right? While theorizing about what you objectively and subjectively ought to do has a long and distinguished history, theorizing about the super-subjective ought, about decision-making under normative uncertainty, is still in its infancy. But already, a preeminent theory has emerged. This theory incorporates insights from the preeminent theory of the subjective ought, namely expected value theory. For this reason, it will be helpful to start with a brief overview of this theory. Suppose that we can represent your doxastic, or belief-like, state with a probability function P and that the value function V represents how good or bad different outcomes Oi are. If we are interested in what you morally ought to do, then V can be thought of as representing moral goodness in some way, while if we are interested in what you prudentially ought to do, then it can represent your own preferences or levels of happiness. Since we are concerned with morality, let us understand V in the former way. Then, we say that what you subjectively ought to do is to make-true the act-proposition with the highest expected moral value, defined thus: Expected Moral Value: EMV(A) = ∑ i P (Oi | A)V (Oi)4 Given the attractiveness of the expected value maximization framework for theorizing about the subjective ought, it is tempting to try to extend it to the super-subjective ought. If it is possible to represent all moral theories in expected value terms (this assumption will be questioned shortly5), then there is 3This is compatible with the idea that there may be natural language uses of 'ought' where the context is such as to give rise to a reading on which it is sensitive to your moral uncertainty. I am just denying that such a sense of 'ought' is important for purposes of normative theorizing. 4This is the formula for evidential expected value, rather than causal expected value. The debate over evidential decision theory and causal decision theory is an important one, but it is not our topic, and so I set it aside. 5See Sen (1982), Oddie and Milne (1991), Dreier (1993, Smith (2009), Colyvan, Cox, and Steele (2010), and Portmore (2011) among others, for discussion of this question outside the context of moral uncertainty. In my view, the considerations raised in section 2.2, among others, show that not all moral theories can be represented in expected value terms (or 'consequentialized,' insofar as this means being represented using a value function). More exactly, I take these considerations to show that not all moral theories can be represented in the same expected value maximization framework. Perhaps there are different modifications of the expected value framework that can helpfully represent different moral theories, but they cannot all be squeezed into the same framework. But that seems to be what is necessary in order to do the relevant trade-offs and aggregations needed to yield a theory about what on ought to do in light of one's moral uncertainty. 3 an apparently straightforward way in which to extend the expected value framework to deal with moral uncertainty as well. Expected moral value (EMV ) is an intratheoretical notion. When we take the expected moral value of an action on each moral theory and sum them up, weighted by the probability of each theory, we get an intertheoretical notion, which we can call the 'intertheoretic expectation.' Intertheoretic expectation: IE(A) = ∑ i P (Ti)EMVi(A) = ∑ i P (Ti) ∑ j P (Oj |A)Vi(Oj) Now, the proposal is that what you super-subjectively ought to do is to make-true the act-proposition with the highest intertheoretic expectation. Let us call this theory 'MITE,' for: Maximize InterTheoretic Expectation. MITE is a natural extension to the super-subjective ought of expected value theory as a theory of the subjective ought. Expected value theory evaluates an action by looking at how objectively good (or bad) an action would be in different states of the world and discounting that goodness by your degree of belief that that state of the world is actual. MITE evaluates an action by looking at how (subjectively) good (or bad) an action would be according to each moral theory you take seriously and discounting that goodness (or badness) by your degree of belief that that moral theory is correct. Versions of MITE have been defended by Lockhart (2000), Ross (2006), and Sepielli (2009, dissertation), and it has swiftly established itself as the dominant theory of the super-subjective ought. This is no accident. MITE has the attractive feature of taking into account both how confident you are in each moral theory and how good or bad the given act would be, according to each of those moral theories. (Later I will be questioning whether it makes sense to speak of how good or bad an act is, according to different moral theories, but for now I grant the intuition that such talk does make sense.) By contrast, a decision rule which just recommended acting in accordance with the moral theory to which you assign highest credence would ignore facts about the relative goodness or badness of acts according to the different moral views.6 You might be 51% confident that having an abortion would be slightly morally better than not having one, and 49% confident that having an abortion would be absolutely monstrous, but this decision rule would say that you should just go with the view you're 51% confident in. Similarly, a maximin-style decision rule which recommends ranking acts according to their worst possible moral badness and then performing the highest act in that ranking would ignore your differing levels of confidence in each moral theory. (In the next section, however, I will be questioning whether there are any grounds for making these sorts of comparisons between how good or bad a given act is, according to different moral theories.) 6This view is sometimes called the 'My Favorite Theory' view, and is defended by Gracely (1996) and Gustafsson and Torpman (2014). 4 For this reason, I would venture so far as to say that when it comes to trying to devise a formal theory of what you super-subjectively ought to do, MITE (or some slight variant thereof) is the only game in town. This is important, since if MITE ultimately fails, as I will argue it does, then this casts serious doubt on the prospects for coming up with any formal theory of what you super-subjectively ought to do. In 2.1 and 2.2, I consider two serious problems for MITE that show that its ambitions must be considerably scaled back. The first is the problem of intertheoretic value comparisons, first noted by Hudson (1989), Gracely (1996), and Lockhart (2000). To employ MITE, we must make precise comparisons of 'degrees of wrongness' across moral theories. I argue that there is no principled way to make these comparisons, unless we start off with a considerable number of judgments about what agents in various circumstances super-subjectively ought to do. Thus, MITE can at best aspire to take us from a smaller set of judgments about the super-subjective ought to a larger set of such judgments. The second problem is the impossibility of adequately representing certain sorts of moral theories, such as theories which distinguish between supererogatory and merely permissible acts, in expected value maximization terms, as MITE requires. If there are moral theories that cannot be squeezed into the expected value maximization framework that MITE presupposes, then MITE cannot say anything about what an agent who assigns any credence to such theories supersubjectively ought to do. Thus MITE cannot provide a general framework for decision-making under moral uncertainty. 2.1 Axiological Uncertainty and the Problem of Intertheoretic Value Comparisons Let us begin with a type of moral uncertainty which would seem to be naturally and fruitfully dealt with by MITE. Consider an agent who is certain that (maximizing) consequentialism is correct; that is, she is certain that one ought to maximize value. However, she is uncertain about what is of value. She doesn't know what the right axiology is. It would seem that we should be able to straightforwardly give her advice about what to do by calculating the expected moral values of the available actions, relative to the value function corresponding to each possible axiology, and summing up those expected moral values, weighted by her degree of belief that the corresponding axiology is correct, thus arriving at an intertheoretic expectation for each action. But even in this highly artificial case, we already run into problems. In particular, we run into the problem of calibrating value functions. As we know from decision theory, a preference ordering (satisfying certain axioms) over worlds and prospects (gambles) does not uniquely determine a value function. Instead, such a preference ordering only determines a value function which is unique at most up to addition of a constant and multiplication by a positive scalar.7 As such, if 7For the systems of von Neumann and Morgenstern (1944) and Savage (1954), if your preferences satisfy their axioms, you are representable as an expected utility maximizer with 5 the value function V represents a given set of preferences, so does the function aV + b, for real numbers a (> 0) and b. Axiologies generally only give us a preference ordering, but in order to apply the expected value framework to cases of axiological uncertainty, we need to fix on one value function corresponding to each axiology. And it is doubtful whether there is any principled reason for privileging any one function from axiologies to value functions over the other possible such functions. This is the problem of intertheoretic value comparisons. The thrust of this problem can be seen through an example which is wellknown from Parfit (1984). Even if one is certain that happiness is what matters, one can be uncertain about whether worlds are ranked by total happiness or by average happiness. This uncertainty will be important in situations where one has the option of implementing a policy which will increase the world's population, but at the cost of decreasing average happiness. In order to give guidance to the agent making this choice using MITE, we have to choose value functions to correspond to Totalism and to Averagism. However, it appears that any such choice will be arbitrary and have unintuitive consequences.8 Suppose we start with a simple proposal – for Totalism we let the value of a world be the total happiness in that world, while for Averagism we let the value of a world be the average level of happiness. Unfortunately, this will have the result that for most real-life cases where one can substantially increase population at the cost of decreasing average happiness, our framework will recommend doing what Totalism recommends unless the agent is overwhelmingly confident that Averagism is correct. Suppose that the agent has the choice of increasing the world's population from 6 billion to 24 billion people at the cost of halving the average happiness level. Let the present average happiness level be x (x > 0). Then, for Totalism, the difference between the expected moral value of increasing the world's population and the expected moral value of the status quo will be 24, 000, 000, 000 × (x/2) − 6, 000, 000, 000x = 6, 000, 000, 000x. For Averagism, the difference between the expected moral value of increasing the population and the expected moral value of the status quo is −(x/2). Crunching the numbers, maximizing intertheoretic expectation will recommend that the agent implement the population-increasing policy (i.e. doing what Totalism recommends) unless she is over 99.9999999916% confident that Averagism is right. But this seems crazy. We could perhaps improve things by representing Averagism not by the value function that assigns each world its average happiness as its value, but rather by a value function that assigns each world some large multiple of its average happiness as its value. But this proposal is not without its own problems. No matter what value functions we use to represent Averagism and Totalism, once we fix on proposed decrease in average happiness, Averagism will swamp a utility (or value) function that is unique up to positive linear transformation. In Jeffrey's (1983) system, the uniqueness condition for utility functions is more complicated, but nonetheless it is true that if V represents your preferences, so does aV + b (a > 0). 8William MacAskill recently informed me that he also uses Totalism and Averagism to illustrate this point, though he attributes it to Toby Ord. See MacAskill (2014, 93-4). 6 Totalism for smaller population increases while Totalism will swamp Averagism for larger population increases. This is perhaps natural enough. After all, in situations where one can increase population by decreasing average happiness, Totalism will say that the moral significance of the situation increases with the size of the possible increase in population, while Averagism will say that the moral significance of the situation does not depend on the size of the possible population increase. So we would expect Averagism to outweigh Totalism for small possible population increases, and we would likewise expect Totalism to outweigh Averagism for very large possible population increases. The problem is that representing Totalism and Averagism by particular value functions requires us to choose a point along the continuum of possible population increases where Totalism starts to outweigh Averagism (for a given reduction in average happiness). And any such choice will seem arbitrary and unmotivated. There is nothing in the moral theories themselves that tells us how to make intertheoretic value comparisons.9 Can we make any plausible non-question-begging stipulations about interthe9The astute reader may notice a structural similarity between the problem of intertheoretic value comparisons for MITE, and the familiar problem of interpersonal comparisons of utility for theories of social choice. One difference, however, is that we may have some grip on how to make interpersonal comparisons of utility that doesn't depend just on the functions that we'd get if we used Ramsey's (1931) method to construct utility functions for the individuals involved. For one, our shared biology may provide some grounds for calibration–it seems plausible that two people undergoing the same painful medical procedure, with each protesting as loudly as the other and displaying similar patterns of neuronal activity, perspiration, and other common indicators of discomfort, should be treated as suffering a similar level of disutility, at least for the purposes of social choice. While such considerations may help us ground interpersonal comparisons of utility, it's not obvious whether there's anything that could play a similar role in grounding intertheoretic comparisons of value. While I'll discuss a different method for attempting to solve this problem from Sepielli (2009) later in this section, in more recent work (Sepielli (2010, ch 4)), he offers a strategy that's somewhat analogous to the one I've just suggested might work in the case of interpersonal utility comparisons. He suggests that we might be able to appeal to conceptual connections between various normative concepts in order to ground intertheoretic value comparisons. Just as we might ground interpersonal utility comparisons by assuming that people in similar behavioral and neurological states are undergoing similar levels of disutility, we might ground intertheoretic value comparisons by assuming, for instance, that if two theories recommend similar degrees of blame for an act, that they each regard the reasons against that act as equally weighty. While Sepielli acknowledges that he hasn't provided a detailed, psychologically realistic account of the various conceptual connections between normative concepts of the sort he thinks would solve the problem of intertheoretic value comparisons, there are reasons for skepticism about the prospects for any such strategy. For example, two moral theories might disagree about how much we should blame somebody for acting in a certain way for reasons that have nothing to do with what they say about the reasons in favor of acting in that way (Gustafsson and Torpman (2014) and MacAskill (2014) also make this point). One theory might imply that we ought never blame anybody because it implies that justified blame would require contra-causal free will, while the other theory might be compatibilist about blame. Similar issues will also arise with consequentialist theories on which whether blame is recommended in a given circumstance depends not on the wrongness of the act in question, but rather on the consequences that would result from blaming. I raise this example to motivate skepticism that there is anything like a silver bullet that will allow us to determine that two theories must be interpreted as assigning some act equal value, so long as they agree on some other normative claim. 7 oretic value comparisons? In the remainder of this section, I look at three prominent proposals for doing so and find them wanting. Start with Lockhart (2000), who proposes a Principle of Equity among Moral Theories (PEMT), according to which all moral theories should be deemed to have the same amount of moral rightness at stake in any given situation. In each situation, the worst available actions according to each moral theory should be assigned the same (low) expected moral value, and similarly for the best available actions according to each moral theory. This is a version of the 'zero-one' rule, a proposal for solving the problem of interpersonal comparisons of utility by scaling each person's utility function to the zero-one interval. (Note that the PEMT will likely require us to use different value functions to represent a given moral theory in different choice situations.) Unfortunately, the PEMT is implausible (see Ross (2006) and Sepielli (2013)). It arbitrarily rules out the possibility of situations in which moral theories would seem to differ dramatically in how morally significant they consider the choice at hand. Consider again the case of Averagism and Totalism. We can imagine a scenario in which one has the option of creating on another planet a population of ten billion people who are all just slightly less happy than the average here on earth the difference between our average happiness and theirs is equivalent, say, to the difference between not having a hangnail and having one. The PEMT rules out by fiat the possibility of saying that this is a situation that carries far more weight for Totalism than for Averagism. Now, I am not claiming that this in fact is a situation that carries more weight for Totalism than for Averagism. After all, I am denying the possibility of making such intertheoretic value comparisons. My claim is simply that there is no intuitive support for the PEMT's claim that this is a situation that is equally weighty for Averagists and Totalists, and that more generally, moral theories cannot differ in how morally significant they consider a given choice to be.10 10Of course, we could modify the PEMT and instead stipulate that all moral theories should be treated as having the same maximum and minimum possible moral value at stake. That is, we consider the worst possible actions (not holding fixed a given choice situation) according to the various theories and make sure that they are all assigned the same (very low) expected moral value, and we also consider the best possible actions according to the competing theories and assign them all the same (very high) expected moral value. But this too is implausible. First, there is little reason to think that there will be worst and best possible actions for given moral theories, or even that expected moral value should be bounded for every moral theory (Sepielli (2013)). Certainly, utilitarians will likely think that possible acts grow better and better without bound as more and more happiness is created, and also that acts grow worse and worse without bound as more and more suffering is created. Second, some moral theories may just think that no possible situation can be terribly significant from a moral standpoint. Various moral nihilistic views hold that no acts are morally better than any others. Note, however, that such nihilistic theories are independently problematic for MITE, since some versions of decision theory prohibit all acts and outcomes being equally preferred. For instance, Savage's (1954) postulate P5 says that it is not the case that for all pairs of acts, one is at least as good as the other. One can also imagine slight deviations from moral nihilism which hold that no acts are are substantially morally better than any others (MacAskill (2014, 135)). It would be a distortion of what such a view says to represent it as being such that its best and worst possible acts have the same expected moral values as the utilitarian's best and worst possible acts, respectively. This is especially relevant for Ross (2006), who 8 Next consider an interesting proposal made by Sepielli (2009) (though Sepielli (2010) disavows it). Sepielli's approach relies on the existence of some background agreement among moral theories that will serve as a fixed point that we can use to make the requisite intertheoretic value comparisons.11 The idea is to find at least three actions or outcomes A, B, and C such that all of the moral theories the agent takes seriously agree that A is better than B, which is better than C and also agree about the ratio of the value difference between A and B and the value difference between B and C. We then stipulate that the value functions chosen to represent each moral theory must agree in the numbers they assign to A, to B, and to C. Consider Averagism and Totalism again. They agree about the one-person case. They agree that a world A where there is one person with happiness level 10 is better than a world B where the one person has happiness 4, which in turn is better than a world C where the one person has happiness 2. Moreover, they agree on the ratio of value differences between A and B, and B and C; they agree that the value difference between A and B is three times the value difference between B and C. So, on Sepielli's proposal, we just pick three numbers x, y, and z to serve as the values of A, B, and C for both Averagism and Totalism, with the constraints that x > y > z and x − y = 3 × (y − z). So, for instance, we can assign world A value 10, world B value 4, and world C value 2. And, having set down these values, we fill in the rest of Averagism's value function and the rest of Totalism's value function in the usual way. This proposal has some intuitive appeal, but it will not provide a general solution to the problem of intertheoretic value comparisons. First, there is no guarantee that there will always be even this minimal sort of background agreement among all of the moral theories to which the agent assigns some credence (Gustafsson and Torpman (2014)). Sepielli's approach to the problem of intertheoretic value comparisons will not work in these cases, and so MITE will not provide a fully general framework for decision-making under conditions of moral uncertainty. Worse, there are cases in which Sepielli's proposal will lead to contradiction.12 This problem can arise when theories agree on more than one ratio of value differences. Indeed, this will happen in the case of Averagism and Totalism. As noted, Averagism and Totalism agree about the ratio of value differences between A and B, and B and C. But they also agree about a lot of other ratios of value differences. Consider, for examples, worlds D, E, and F. World D contains two people, each with happiness level 10; world E contains two people, each with happiness level 4; and world F contains two people, each with happiness level 2. Averagism and Totalism agree that the degree to which employs MITE for the purpose of arguing that moral theories that hold that there is little moral difference between the acts available to us should be treated as false for the purposes of deliberation, since having some credence in such theories will not affect which act has highest intertheoretic expectation. This result is impossible if the PEMT or modifications thereof are adopted. 11Ross (2006) briefly considers a proposal like this. 12I recently learned that Gustafsson and Torpman (2014) independently sketched this sort of problem. 9 D is better than E is three times the degree to which E is better than F. Now, we cannot apply Sepielli's proposal both to A, B, and C and to D, E, and F without contradiction. Suppose that we start with A, B, and C. We'll set the values of A, B, and C as, say, 10, 4, and 2 (respectively) for both Averagism and Totalism. But then, Averagism and Totalism must differ in the values they assign to D, E, and F. Averagism must assign worlds D, E, and F values 10, 4, and 2 (respectively), while Totalism must assign D, E, and F values 20, 8, and 4 (respectively). Similarly, if we start by applying Sepielli's proposal to D, E, and F, Averagism and Totalism will agree on the values of D, E, and F but differ in the values they assign to A, B, and C. So, Sepielli's proposal leads to contradiction if we try to apply it both to A, B, and C and also to D, E, and F. More generally, contradiction threatens whenever moral theories agree about more than one ratio of value differences, for the constraints that result from applying Sepielli's proposal to one ratio of value differences may be incompatible with the constraints that result from applying it to a different one. Finally, consider a proposal which explicates intertheoretic value comparisons in terms of their practical implications. This strategy is explicit in Ross (2006) and Riedener (2015), and also hinted at in Sepielli (unpublished). Ross (2006, 763) outlines the strategy thus: [W]e can explicate intertheoretic value comparisons in terms of claims about what choices would be rational assuming that the ethical theories in question had certain subjective probabilities. Thus, to say that the difference in value between ordering the veal cutlet and ordering the veggie wrap is one hundred times as great according to Singer's theory as it is according to the traditional moral theory is to say, among other things, that if one's credence were divided between these two theories, then it would be more rational to order the veggie wrap than the veal cutlet if and only if one's credence in Singer's theory exceeded .01. But this proposal is circular, if MITE's ambition is to provide a framework which takes as input an agent's credences in moral theories (and credences about descriptive matters of fact) and outputs what the agent supersubjectively ought to do, without presupposing any facts about what agents super-subjectively ought to do in various situations (see also Gustafsson and Torpman (2014)). After all, Ross's proposal is to start with facts about what agents super-subjectively ought to do in certain cases and use those facts to reverse-engineer the desired intertheoretic value comparisons. But we could scale back MITE's ambitions. Instead of trying to use MITE to yield what agents super-subjectively ought to do given only their credences in moral theories, we could instead content ourselves with starting out with some facts about what agents super-subjectively ought to do in some circumstances (arrived at by some independent means, such as brute intuition) and then just using MITE to arrive at further facts about what agents super-subjectively ought to do in other circumstances. MITE could be thought of simply as a framework for imposing consistency on our judgments about what agents in different states of 10 uncertainty super-subjectively ought to do. This is how many decision theorists think of expected utility theory, as simply requiring a certain coherence among your preferences and decisions. If we scale back MITE's ambitions in this way, then Ross's observation does solve our problem. Riedener (MS) proves that if our judgments about what agents in various states of uncertainty super-subjectively ought to do obey certain decision-theoretic axioms, and if each moral theory's 'preferences' obey the same decision-theoretic axioms, then there is a choice of value functions to represent each moral theory such that an act A is super-subjectively better than B just in case the Intertheoretic Expectation (IE) of A is higher than that of B, relative to the aforementioned choice of value functions. I am unsatisfied. There is an analogy between Riedener's proof and Harsanyi's (1955) proposed solution to the problem of intertheoretic comparisons of utility. Harsanyi proves (with some supplemental assumptions, which I set aside) that if there are 'social preferences' that satisfy standard decision-theoretic axioms, and if each individual's preferences also satisfy those axioms, then there is a choice of individual utility functions such that the social preferences can be represented by a social utility function which is the weighed sum of those individual utility functions. Importantly, however, Harsanyi's theorem doesn't tell us how to pick an individual utility function to represent a given individual's preferences unless we already have the social utility function in hand. For this reason, Harsanyi's proposal leaves much to be desired. As one with Utilitarian sympathies (with utility understood as a representation of preferences), I would have liked to be told how to start off with individual's preferences and construct a social preference ordering therefrom, but I am instead told that if I start off with individual's preferences and a social preference ordering, then there is a way of fixing the zero point and scale of each individual's utility function such that social utility can be thought of as a weighted sum of individual utility. But I have no independent way of arriving at judgments about the social preference ordering. Insofar as I am a Utilitarian, I think that any facts about social betterness must be rooted in prior facts about individuals' preferences. I don't come up with judgments about social betterness through brute intuition, for instance. Similarly, I might want to be told how to start off with my credences in moral theories and use them to derive a verdict on what I super-subjectively ought to do, but instead the Ross/Sepielli/Riedener approach tells me that if I start off with credences in moral theories and facts about the 'preferences' of the super-subjective ought, then there is a way of fixing the zero point and scale of each moral theory's value function such that the super-subjective ought can be thought of as mandating IE-maximization relative to those choices of zero points and scales. But I have little or no independent grip on (alleged) facts about super-subjective betterness. I, for one, have few if any brute intuitions about what agents super-subjectively ought to do in a various cases (with the possible exception of extreme cases, such as where one assigns all but a vanishingly small probability to one theory's being true). And while this is simply an autobiographical report, I suspect that most readers will likewise find 11 themselves with few if any firm intuitions about what agents super-subjectively ought to do in various cases. Note that the case of ordinary decision theory is importantly different. Expected utility theory may just be a framework for imposing consistency on preferences, but it is still of some use since I come to the table with many preferences arrived at independently of thinking about expected utility theory. In sum, if MITE is understood modestly, as a framework for imposing consistency on our judgments about the super-subjective ought, it is of little value unless we start off with at least some such judgments which are arrived at by independent means. But I am skeptical of whether we can or do arrive at such independent judgments about the super-subjective ought. 2.2 Options and Non-EVM-Representable Theories Many moral theories cannot be represented in expected value maximization terms. For example, many moral theories hold that morality shouldn't be overly demanding. Morality gives us options.13 According to these views, some actions are supererogatory, while others are merely permissible. For instance, giving a large proportion of one's time and money to charity is a wonderful thing to do, but it isn't required. After all, these theorists say, morality leaves us space to pursue our own goals and projects.14 Options are a challenge for MITE because on the face of it, they seem to say that one needn't always maximize value, whereas MITE requires all theories to be put in an expected-value maximization framework.15 At first blush, these theories seem to differ from consequentialist theories not in their value theories, but in their decision rules. Some moral theories involving options, such as Slote's Satisficing Consequentialism (1984) are explicitly presented as differing from maximizing consequentialist views in employing a different decision rule.16 Now consider an agent who gives some credence to a moral theory which accepts a utilitarian axiology but gives the agent options. For instance, suppose that it says that while it's best to give as much money as possible to charity, one is only required to give away $1,000. How should defenders of MITE deal with this agent's state of uncertainty? If we represent this options theory using a utilitarian value function and then plug it into MITE, then we effectively ignore the fact that the theory says that 13The term 'options' comes from Kagan (1989). 14See Williams (116-117, from Smart and Williams (1973)) for a famous defense of this claim, presented as an argument against Utilitarianism. 15See Sepielli (2010) for further interesting discussion of various issues regarding moral uncertainty and supererogation, although his focus is considerably different from ours. 16The problem of supererogation is discussed by Lockhart (2000, ch. 5), but he takes the strategy of arguing against moral theories involving supererogation. I am inclined to agree with him that the true moral theory, whatever it is, will not involve supererogation. But in the context of defending a framework for decision-making under moral uncertainty, this move is beside the point. As long as an agent could reasonably have some credence in a options or supererogation theory (even if such a theory is in fact false), then a theory of the super-subjective ought must be able to say something about that case. 12 there are options! We would be ignoring the distinction between this optionsbased utilitarian view and standard utilitarianism. In the extreme case in which the agent is certain of that options theory, MITE would say that she is supersubjectively obligated to maximize total happiness and give as much money as possible to charity. This is the wrong result. So clearly some added complexity is required if MITE is to deal with options theories. We might try to somehow 'average' the relevant decision rules in aggregating her moral uncertainty. That is, in the case where an agent divides her credence between an options theory and a standard maximizing consequentialist theory, we might not only try to weight the different value functions by her credence in the corresponding theories, but also try to weight the different decision rules (maximization versus satisficing, for instance) by her corresponding credences. But it is doubtful whether any sense can be made of the notion of 'averaging' decision rules. What would it be, for instance, to average maximization with satisficing? A more promising approach for the defender of MITE would be to draw a distinction between different senses in which a theory might be associated with a value function. Suppose our options theory T makes various claims about which outcomes are better than others and by how much, and that these claims can be unified by representing T as endorsing a value function (as is the case for our options theory which accepts the utilitarian axiology). Call this value function T 's explicit value function. We have already seen that options theories cannot be interpreted as requiring that agents maximize value according to their explicit value functions (else there would be no supererogatory acts, according to the theory). However, perhaps it will be possible to represent theories like T as recommending that agents maximize expected value, so long as the value function whose expectation they're asked to maximize is not T 's explicit value function, but rather one reverse-engineered by looking at which actions T recommends in which choice-situations. Call this sort of reverseengineered value function an implicit value function. Will such implicit value functions always exist? Sepielli (2009) and Ross (2006) both suggest that arguments ultimately inspired by Ramsey (1931) show that they will. Roughly, the idea is that Ramsey showed that if an agent's preferences satisfy certain axioms, then they can be represented with a value function. So for any moral theory, we can just imagine an agent who always prefers to act as the theory recommends, and then use Ramsey's method to construct the implicit value function of the theory, which can then be used together with the other theories the agent takes seriously to generate intertheoretic expectations for actions. (What Sepielli and Ross are appealing to here is known as a Representation Theorem, which says that if an agent has preferences which satisfy such-and-such axioms, then she can be represented as a agent who maximizes expected value, relative to some probability-utility function pair < P,U > which is unique up to certain sorts of transformations, which differ depending on the axiom system in question. See von Neumann and Morgenstern (1944), Savage (1954), and Jeffrey (1983) for examples of Representation Theorems.) But there are reasons to doubt whether we really can represent options 13 theories using implicit value functions in this way. One main reason is that the options theory's preferences are likely to violate standard decision-theoretic axioms. In particular, the preferences of the options theory are likely to be negatively intransitive. That is, there will likely be acts A, B, and C such that neither of A and C is preferred to the other (in the sense that, given a choice between the two, neither is required), neither of B and C is preferred to the other, and yet A is preferred to B. For instance, let A be giving $1,000 to charity online (so that it arrives immediately), let B be giving $1,000 to charity by snail mail (so that it arrives after some delay), and let C be saving the money. Neither of A and C is preferred to the other, since both are permissible, and similarly for B and C. But A is preferred to B; if you're going to give to charity, you ought to choose the option that gets the money there more quickly if that requires no extra cost or effort. If the options theory has such negatively intransitive preferences, then it cannot be represented in EMV-maximization terms.17 Even setting this aside, it seems unlikely that any implicit value function assigned to an options theory would yield plausible results when plugged into MITE. For the implicit value function cannot assign the supererogatory act a higher expected moral value than the merely permissible one, for this would mean that in the limiting case where the agent is certain of that options theory, she would be required to perform the supererogatory act. And the implicit value function cannot assign the supererogatory and the merely permissible acts equal expected moral values, for then options theories can be easily swamped by other theories when we apply MITE to a morally uncertain agent. Consider an agent who gives some credence to an options theory which says that donating to charity is supererogatory while saving is merely permissible. The other theory 17This points merits some clarification. In an important and underappreciated paper, Oddie and Milne (1991) prove that in a certain sense of 'representation,' any moral theory whatsoever (subject to two constraints mentioned below) can be represented in EMV-maximization terms, relative to some agent-neutral value function. But their interpretation of what it is for a moral theory to be represented by another differs importantly from the interpretation that is relevant in the context of evaluating MITE. Oddie and Milne assume that each moral theory (i) has finitely many deontic categories (where deontic categories are things like supererogatoriness, obligatoriness, permissibility, wickedness, etc.), and (ii) that the moral theory gives a partial ordering of these deontic categories (supererogatoriness will be ranked higher than impermissibility, for instance). Then, they prove that for each such moral theory M , there is an agent-neutral value function V such that, if act A's deontic category is ranked at least as highly as act B's according to M, then the expected value of A is at least as great as the expected moral value of B, relative to value function V . But importantly, as Carlson (1995) notes, Oddie and Milne allow one moral theory to count as representing another even if the former does not even contain the same deontic categories as another. This is relevant because expected value theory as standardly interpreted employs just two deontic categories permissibility (corresponding to having maximal expected value) and impermissibility (corresponding to having sub-maximal expected value). So on Oddie and Milne's criterion of representation, a theory on which A is supererogatory and B is merely permissible is adequately represented by a value function which assigns greater value to A than to B and hence deems A to be obligatory and B to be impermissible. This may be fine for some purposes. But in the context of MITE, it is unacceptable, for it does not enable us to respect the original moral theory's distinction between the supererogatory and the merely permissible. In effect, squeezing the supererogation theory into the EMV-maximization framework needed for MITE obliterates distinctions that the theory deems to be of fundamental importance. 14 to which the agent assigns some credence is a mild egoist theory that says that saving is slightly better than donating. For the options theory, on the proposal under consideration, donating and saving have the same expected moral value. For the mild egoist theory, saving has a slightly higher expected moral value than donating. Applying MITE, we will get the result that the agent ought to save her money no matter what (non-zero, real-valued) credence she assigns to each theory. This is an implausible result. Even if the agent is overwhelmingly confident that donating is supererogatory and saving merely permissible, a tiny degree of confidence that saving is required will tip the balance in favor of saving, so long as we represent options theories as assigning supererogatory and merely permissible acts the same expected moral value. I am skeptical that there is any satisfactory way to squeeze options theories into MITE's expected value maximization framework.18 But even if I am wrong about the case of options, it is overwhelmingly likely that very many moral theories that are worth taking seriously will be unable to be squeezed into this framework. They will have 'preferences' that fail to satisfy the axioms of the relevant Representation Theorem (see MacAskill (2014)). Just to take one possible example, an absolutist moral theory, on which some acts (murder, say) are absolutely prohibited, might have 'preferences' which fail to satisfy the Continuity axiom of Von Neumann and Morgenstern's decision theory. Suppose that our absolutist moral theory says that murdering one person (M) is worse than the status quo (S), which is worse than rescuing one person (R). That is, M < S < R. Moreover, murdering is absolutely prohibited, which on this theory means that if you're uncertain whether some act would result in murdering someone or saving someone, it's wrong to do it. In particular, for any probability p, an act with probability p of resulting in M and probability 1−p of resulting in R is worse than the status quo S. This violates the Continuity axiom19, which says: Continuity : If A ≤ B ≤ C, then there exists some positive probability p such that: (p)A + (1− p)C ∼ B (where ∼ is the relation of indifference) So, an absolutist moral theory on which it is impermissible to run any risk at all of murdering someone, even for the sake of having a chance of rescuing someone will have 'preferences' which violate one of the standard axioms of decision theory. As a result, that absolutist moral theory's verdicts will not be representable by a value function.20 18Recently, Ben West suggested to me that it may be possible to represent options theories in EU-maximization terms using vector-valued value functions. I will not pursue this strategy here. 19An absolutist moral theory would also likely violate the Archimedean axiom adopted by many decision theories, which in effect says that no options are infinitely good or infinitely bad. See Sepielli (2009) and Smith and Jackson (2006) for further discussion of absolutist moral theories in the context of decision-making under moral (Sepielli) and descriptive (Smith and Jackson) uncertainty. 20One might simply reject the Continuity axiom (and the Archimedean axiom) and assign 15 Even if options-based moral theories, absolutist moral theories, and others whose preferences cannot be represented by a value function are ultimately false, it seems that insofar as any moral uncertainty at all is rationally permissible, it should be rationally permissible to assign some positive credence to one of these problematic types of moral theory. If so, then there are moral theories which it can be rational to take seriously and which are such that if you do take them seriously, MITE cannot say anything about what you super-subjectively ought to do. There is a general lesson here. MITE, and probably any plausible theory of the super-subjective ought, requires that the different moral theories in which an agent has some credence be translated into a common currency so as to allow them to be weighed up against each other.21 But moral theories differ radically, and often in deep, structural ways. There is no reason to think that all respectable moral theories, from consequentialism, to Kantianism, to absolutist theories, to Ross-style pluralist theories (perhaps involving incommensurability, or Chang's (1997) 'parity'), to virtue ethical theories, will all be amenable to being squeezed into a common framework, whether that common framework is an expectational decision-theoretic one, or something else entirely. This doesn't necessarily mean that not all moral theories can be put into some decisiontheoretic framework or other, but it is important to be careful about quantifier scope. It may be that, for each moral theory, there is some formal decisiontheoretic framework that can (in some sense) represent it,22) but I am deeply skeptical that there will be some formal decision-theoretic framework that can be used to represent each moral theory. Instead, different departures from orthodox expected value theory (the system of Savage (1954), say) will be needed for different moral theories; some may require infinite values, others may require sets of value functions, still others may require a non-maximizing rule, and so on. For some purposes, like coming up with a way to think about how that absolutely prohibited actions a negative infinite value. But then absolutist moral theories will swamp non-absolutist theories. It may be possible to attempt to avoid this swamping by representing Absolutist theories using a variety of technical devices, such as context-dependent value functions which, in any context, always assign values in such a way as to prohibit the absolutely prohibited action (Sepielli (2010)). Or perhaps surreal numbers will be of help (see Hàjek (2003) for discussion of surreal numbers in the context of decision theory). This technical moves may help the defender of MITE avoid uncomfortable conclusions when faces with absolutist theories e.g., that if you given any credence to an absolutist moral theory, it will swamp all other theories to which you give some credence in virtue of its involving infinite values and disvalues. But it is difficult to see how one would motivate a particular choice among the various technical devices that might be wheeled in to help deal with absolutist theories, and yet different choices will yield different recommendations from MITE in various situations. At any rate, the present point is simply that many moral theories would seem, on the face of it, to violate standard decision-theoretic axioms needed to get representation theorems off the ground. 21An exception is the view that one super-subjectively ought to take the theory in which one has highest credence, and then simply act on its basis. See Gracely (1996) and Gustafsson and Torpman (2014) for a defense of this approach. Unfortunately I do not have the space to argue against it here. 22See footnote 5 above for references to discussions of attempts to find decision-theoretic representations of various moral theories. 16 theory should say you ought to act under descriptive uncertainty, it may only be important that each moral theory be representable in some formal decisiontheoretic framework or other. But for the purpose of coming up with a formal framework for decision-making under moral uncertainty, it is crucial that each moral theory be representable in the same formal decision-theoretic framework (or common currency, as I put it earlier). And this, I am arguing, is not the case. 3 Whither the Super-Subjective Ought? In the previous two sections, I have argued that MITE is unlikely to succeed as a theory of what a morally uncertain agent super-subjectively ought to do. If my arguments are sound, what does that mean for the super-subjective ought? I see three possibilities. First, perhaps we just need to pull up our socks and continue the hard work of trying to devise an adequate decision theory for the super-subjective ought. This strikes me as unattractive. The problems I have raised seem like in-principle problems, not likely to be solved through technical subleties. Second, we might hold that there are facts about what one super-subjectively ought to do in most, or perhaps all, possible situations, but that these facts cannot be encapsulated in any formal or otherwise finitely statable theory. Perhaps there is little to be said by way of exceptionless principles, save for extreme cases (e.g., that if you are certain that A is not morally worse than B, and not certain that B is not worse than A, then you super-subjectively ought to do A).23 This would amount to a sort of particularism about the super-subjective ought. I have no compelling argument against this second option, but I want to explore a third, perhaps more radical, response. I want to suggest that perhaps there is no need to come up with a theory of the super-subjective ought, for the super-subjective ought has no clear role to play in our normative theoriz23Note that one who adopts the third option I consider (below), which denies the existence of a super-subjective ought, can still hold that there is something wrong with someone who is certain that A is better than B but then goes on to do B. But the explanation of what is wrong with that person will be different. If fundamental moral facts are a priori, then there is a sense in which one always ought to believe the true moral theory (though this this need not entail that one is blameworthy for having false moral beliefs, as Harman (2011) holds; the sense of ought may be purely epistemic, for instance). Then, if the true moral theory is one on which A is better than B, our imagined agent is criticizable for simply for acting wrongly, while if the true moral theory is one on which the A is not better than B, then our imagined agent is criticizable for having a moral belief that she ought not have. So in essence, we can account for what's wrong with an akratic agent by appealing (perhaps among other things) to a wide-scope norm stating that one ought to be such that if one believes one ought to do A, then one does A. But there are multiple ways ot satisfy such a wide-scope norm. One can make the antecedent of the embedded conditional false, or one can make the consequent true. In my view, if the moral belief refered to in the antecedent is false, then one ought to make the antecedent false (i.e. not have the false moral belief), while if that moral belief is true, then one ought to make the consequent true (i.e. perform the action that is in fact morally required). 17 ing. This discussion will be regrettably brief and speculative. I cannot show conclusively that the super-subjective ought has not role to be play in our theorizing. Instead, I proceed by looking at three main motivations for introducing the subjective ought to supplement the objective one, and then showing how we might resist the thought that these motivations carry over to motivate the introduction of a super-subjective ought. This discussion will clarify what kinds of commitments will likely have to be take on board by someone who wishes to adopt this third, more deflationist, response. Start by recapping three interrelated motivations for bringing in the subjective ought. First, what you objectively ought to do often depends on factors inaccessible to you. You might be in no position to know that the pills in your bottle are rat poison, and you justifiably take them to be painkillers. In this case, even though you objectively ought not given them to your friend, you are not in a position to know that you ought not do so. Second, and relatedly, the objective ought is insufficiently action-guiding. It does not give advice to the deliberating agent that she can effectively use to determine what to do. Third, non-culpable ignorance of the facts which determine what you objectively ought to do is typically an excusing factor. Suppose you give your friend the pills, and after taking them he writhes around on the floor foaming at the mouth, and then dies. While you helped cause his death, you are not blameworthy for it, since you were justifiably ignorant of the fact that the pills were rat poison. On the basis of these considerations, we then introduce the subjective ought, which is intended to (i) be such that what you subjectively ought to do doesn't depend on things inaccessible to you, (ii) is action-guiding, and (iii) links up more closely with blameand praiseworthiness than does the objective ought. (It is not clear that (i) should be regarded as a separate motivation, since it may be that the only grounds for wanting an ought which is always accessible to you is that accessibility is required for action-guidingness and blameworthiness.) The subjective ought is supposed to satisfy these demands by making what you ought to do depend on your credences in the relevant descriptive propositions, rather than on which of the relevant descriptive propositions are in fact true. Now, there are serious questions about whether the subjective ought really can satisfy these demands, especially in light of Williamson's (2000) AntiLuminosity Argument. If Williamson is right, then there are no conditions that are such that whenever they obtain, you are in a position to know that they obtain. Even the facts about your own doxastic state that determine what you subjectively ought to do may be inaccessible to you. And in a case where you are not in a position to know what your own beliefs or credences are, the subjective ought may not be fully action-guiding, and your self-ignorance might excuse you from any blame stemming from your failure to do what you subjectively ought to do. But set these issues aside. After all, my aim is not to defend the subjective ought but to oppose the super-subjective ought. What I now want to do is suggest that these considerations accessibility, action-guidingness, and links with blameand praiseworthiness might not carry over to motivate the introduction of a super-subjective ought to supplement the objective and subjective ones. 18 First, even if descriptive facts may often be inaccessible to you, it is not clear that normative facts are likewise inaccessible. If fundamental moral truths are a priori, then there is a sense in which any agent is in a position to know the moral truth. There is no in-principle obstacle to her coming to know the moral facts. Moreover, your evidence (whatever it is), will entail each of the fundamental moral truths By contrast, your evidence will often not entail, or even support, the true descriptive propositions that are relevant in a given decision situation. Of course, even if the fundamental moral truths are a priori, this does not mean that they are obvious. But it is not clear that we should demand a sense of ought on which what you ought to do depends only on factors that are obvious as opposed to merely knowable in some weaker sense. Admittedly, I have not argued that in fact fundamental moral truths are a priori. While I find this claim plausible (after all, the sorts of considerations typically given for or against particular moral theories tend to be of the a priori variety), it is certainly open to dispute. Some theorists might doubt that fundamental moral truths are even necessary (and it's unlikely that they would be contingent a priori), while others might hold that they are necessary a posteriori, in which case fundamental moral truths might be no more accessible than descriptive necessary a posteriori truths like the proposition that Hesperus is Phosphorus. Nevertheless, those theorists sympathetic to an a priori conception of ethics should hold that fundamental moral truths are unlike even very unobvious descriptive truths in being in-principle accessible. Second, consider the morally uncertain agent's felt need for some sort of guidance. Certainly, such an agent will wish she knew what morality demands of her, and she will often have reason to deliberate further (though if she must act now, she may need to simply make a decision and defer deliberation until later). But reasons to deliberate further may be ordinary, garden variety epistemic and moral reasons. We have epistemic reasons to deliberate about matters of great importance in our lives. And the true moral theory T , whatever it is, will often want the agent to deliberate further about morality, since deliberating (insofar as it is reliable) will lead her to beliefs which better approximate T , and (insofar as her motivational state is sensitive to her moral beliefs) this will lead her to act in accordance with T more often. So a theory of the super-subjective ought is not needed to account for why uncertain agents often ought to continue deliberating about morality (and indeed, it gives no special role to deliberation anyway). The super-subjective ought really aims to earn its keep by giving agents guidance about how to hedge their bets, morally speaking. That is, it tells them how to act so as to minimize their expected degree of wrongness. But there is a case to be made that a desire for guidance about how to engage in moral hedging involves an objectionable sort of moral fetishism, so that a morally good agent would not look to a theory of the super-subjective ought like MITE to guide her actions in the first place. Michael Smith (1994, 75) distinguishes between caring about morality de dicto and caring about morality de re: Good people care non-derivatively about honesty, the weal and woe 19 of their children and friends, the well-being of their fellows, people getting what they deserve, justice, equality, and the like, not just one thing: doing what they believe to be right, where this is read de dicto and not de re. Indeed, commonsense tells us that being so motivated is a fetish or moral vice, not the one and only moral virtue. Now, Smith uses the allegedly fetishistic character of de dicto concern for morality to argue against judgment externalism, the view that it is possible to judge that an action is morally required without being in any way motivated to perform that action. The details of Smith's anti-externalist argument needn't occupy us here, since the internalist/externalist debate is not our topic, and in any event I am persuaded by criticisms of Smith's argument by Shafer-Landau (1998), and Svavarsdóttir (1999), and especially Dreier (2000).24 But Harman (2011) and Weatherson (2013) have recently raised this moral fetishism objection against theories on which an agent's moral beliefs affect how she ought to act. It is easy to overstate the case, however (and I suspect that Harman, at least, has). An agent who feels the need to deliberate further about some moral matter needn't always be fetishistic. This is especially clear where the agent's deliberation concerns thick moral concepts like fairness or respect, rather than thin ones like wrongness or permissibility.25 And we do want agent's motivational states to somehow be sensitive to their beliefs about morality; else what is the point in debating moral matters? (Indeed, see Dreier (op cit) for discussion of how to explain why good, well-motivated agent's motivations are sensitive to their moral beliefs without attributing to them de dicto concern for morality.) So my narrow, and hopefully more cautious, claim is just that the kind of motivation involved in moral hedging is objectionably fetishistic, even if a felt need to deliberate further, and a general sensitivity of one's motivational state to one's beliefs about morality, are not. But reasons to deliberate further, or to have a motivational state that is responsive to beliefs about morality, can be accounted for without positing a super-subjective ought. Third, and finally, while it is quite clear that (non-culpable) ignorance of relevant descriptive facts often excuses you from blame, it is rather controversial whether (non-culpable) ignorance of fundamental moral facts likewise exculpates. Harman (2011) has recently argued that it does not.26 She argues that it is possible to come to have deeply false moral beliefs without having been epistemically irresponsible in any way (unless failure to know a priori facts itself constitutes epistemic irresponsibility), but that in such cases an agent who acts on those false moral beliefs still strikes us as blameworthy. As just one example, she considers people who protest at abortion clinics and yell at the women and 24See also Lillehammer (1997) for an argument that de dicto concern for morality needn't be fetishistic in the first place. By contrast, Dreier agrees that de dicto concern for morality is objectionably fetishistic but argues that we can explain the fact that good, strong-willed agents are motivated to act in accordance with their moral beliefs without attributing to them such de dicto concern for morality. 25This point is emphasized by Sepielli (unpublished). 26See Zimmerman (1997) and Rosen (2004) for defenses of the opposing view. 20 doctors going inside. Assume that abortion is morally permissible and that it 'is wrong to yell at women outside abortion clinics: these women are already having a hard time and making their difficult decision more psychologically painful is wrong' (458). While perhaps many of these particularly strident protesters have been epistemically irresponsible in coming to their beliefs, it is not plausible to think that this is the case for all of them. But nevertheless, these protesters are blameworthy for the distress they cause. Arpaly (2003), using Smith's de dicto/de re distinction, argues for the same conclusion, writing that: An action is blameworthy just in case the action resulted from the agent's caring inadequately about what is morally significant where this is not a matter of de dicto caring about morality but de re caring about what is in fact morally significant. Now, it is clear that we are not inclined to excuse Hitler, say, from blame simply on account of his erroneous moral beliefs (if indeed he believed he was acting rightly). But matters are less clear, and so the stance of Harman and Arpaly is less compelling, in cases where the stakes are smaller or where the moral ignorance or error is less egregious. Vegetarians typically do not have strong negative reactive attitudes when their friends or colleagues eat meat. But this may not be because the carnivores do not merit blame, but rather because we are generally disinclined to hold others to a higher standard than we hold ourselves. Often when we judge that someone acting wrongly, we do not blame them if we could easily see ourselves acting in that manner. And it is coherent to judge that someone is blameworthy despite not actually having a strong negative reactive attitude toward that person.27 So the fact that an agent's false moral beliefs may sometimes make us disinclined to blame her, not because moral ignorance is itself exculpatory, but rather because we could easily see ourselves being in her situation. Now, even if you are not convinced, and believe that (non-culpable) moral ignorance is exculpatory (whether always, often, or just sometimes), we can still resist the thought that this means there must be an ought that is sensitive to moral uncertainty. For it is possible for some factor to be exculpatory without there being an ought that is specially sensitive to that factor. For instance, if an agent commits a violent act, the fact that he had a brutal, abusive upbringing can excuse him from blame, or at least mitigate his blameworthiness. But that does not mean that there is a special ought which is sensitive to the degree to which one's upbringing was normal. There is no sense in which he ought to have done as he did. Similarly, it may be that moral ignorance is exculpatory without there being a super-subjective ought. 27Consider a case not involving false moral beliefs, but rather akrasia. I believe that I and other reasonably well-off people are morally required to give very large portions of our wealth to the distant needy, but do not have strong negative reactive attitudes towards people who don't do so, even when they also believe they are so obligated. That's because I myself don't give tons of money away! The people who don't give generously to charity (myself included) are blameworthy, even though few actually experience an attitude of blame toward them (even those who are convinced about our obligations toward the distant needy). 21 This concludes my tentative argument that the introduction of the supersubjective ought is unmotivated. Admittedly, it is not a water-tight case. Perhaps one of the three motivations for introducing the subjective ought does carry over to the case of moral uncertainty. Perhaps there are other possible motivations for introducing a super-subjective ought besides the three that I have considered here. But I hope at least to have done some softening-up work to suggest that before we become invested in solving the technical problems that face particular theories of decision-making under moral uncertainty, such as MITE, or accept a form of particularism about the super-subjective ought, we should get clearer about whether and why we wanted such a ought in the first place. Until a strong case is made that we need the super-subjective ought to play certain well-defined roles in our normative theorizing, we should be neither surprised nor worried when attempts to theorize about a super-subjective ought run into trouble. The default position should be that there are no rules for how to act in light of moral uncertainty; beliefs about descriptive matters make a difference to how you ought to act, while beliefs about moral matters do not. What you ought to do, in any moral sense of ought, depends on which moral theory is in fact true, not on your (possibly mistaken) beliefs about what morality requires.28 References Arpaly, N. 2003. Unprincipled Virtue: An Inquiry into Moral Agency. New York: Oxford University Press. Carlson, E. 1995. Consequentialism Reconsidered. Dordrecht: Springer. Chang, R. 1997. 'Introduction.' In R. Chang (ed), Incommensurability, Incomparability, and Practical Reasoning. Cambridge: Harvard University Press. Colyvan, M., Cox, D., and Steele, K. 2010. 'Modelling the Moral Dimensions of Decisions.' Noûs 44, 503-29. Dreier, J. 1993. 'Structures of Normative Theories.' The Monish 76, 22-40. Gracely, E. 1996. 'On the Noncomparability of Judgments Made by Different Ethical Theories.' Metaphilosophy 27, 327-332. Gustafsson, J. and Torpman, O. 2014. 'In Defence of My Favourite Theory.' Pacific Philosophical Quarterly 95, 159-74. Hàjek, A. 2003. 'Waging War on Pascal's Wager.' Philosophical Review 112, 27-56. 28Thanks to Caspar Hare, Stefan Riedener, and especially Daniel Greco, who collaborated on an ancestor of this paper. Thanks also to audiences at the 2010 Australasian Association of Philosophy Conference, Oxford University, and the 2014 Wisconsin Metaethics Workshop. This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation. 22 Harman, E. 2011. 'Does Moral Ignorance Exculpate?' Ratio 24, 443-68. Harsanyi, J. 1955. 'Cardinal Welfare, Individualistic Ethics, and Interpersonal Comparisons of Utility.' Journal of Political Economy 63, 309-21. Hudson, J. 1989. 'Subjectivization in Ethics.' Americal Philosophical Quarterly 26, 221-9. Jeffrey, R. 1983. The Logic of Decision. Chicago: University of Chicago Press. Kagan, S. 1989. The Limits of Morality. New York: Oxford University Press. -2012. The Geometry of Desert. New York: Oxford University Press. Lockhart, T. 2000. Moral Uncertainty and its Consequences. New York: Oxford University Press. MacAskill, W. 2014. Normative Uncertainty. Ph.D. thesis, University of Oxford. Oddie, G. and Milne, P. 1991. 'Act and Value: Expectation and the Representability of Moral Theories.' Theoria 57, 42-76. Parfit, D. 1984. Reasons and Persons. Oxford: Oxford University Press. Portmore, D. 2011. Commonsense Consequentialism. New York: Oxford University Press. Ramsey, F. 1931. 'Truth and Probability.' In his The Foundations of Mathematics and other Logical Essays. New York: Routledge. Riedener, S. 2015. Maximizing Expected Value under Axiological Uncertainty. Ph.D. thesis, University of Oxford. Rosen. G. 2004. 'Skepticism about Moral Responsibility.' Philosophical Perspectives 18, 295-313. Ross, J. 2006. 'Rejecting Ethical Deflationism.' Ethics 116, 742-68. Savage, L. 1954. The Foundations of Statistics. New York: John Wiley and Sons. Sen, A. 1982. 'Rights and Agency.' Philosophy and Public Affairs 11: 3-39. Sepielli, A. Unpublished. 'Normative Uncertainty, Intertheoretic Comparisons, and Conceptual Role.' University of Toronto. -2009. 'What to Do When You Don't Know What to Do.' Oxford Studies in Metaethics 4, 5-28. 23 -2010. Along an Imperfectly-Lighted Path: Practical Rationality and Normative Uncertainty. Ph.D. thesis, Rutgers University. -2012. 'Normative Uncertainty for Non-Cognitivists.' Philosophical Studies 160, 191-207. -2013. 'Moral Uncertainty and the Principle of Equity Among Moral Theories.' Philosophy and Phenomenological Research 86, 580-9. Slote, M. 1984. 'Satisficing Consequentialism.' Proceedings of the Aristotelian Society, Supplementary Volumes 58, 139-63. Smart, J. and Williams, B. 1973. Utilitarianism: For and Against. Cambridge: Cambridge University Press. Smith, M. 1994. The Moral Problem. Oxford: Blackwell. -2009. 'Kinds of Consequentialism.' In E. Sosa and E. Villanueva (eds), Philosophical Issues: Metaethics. New York: Wiley-Blackwell. Smith, M. and Jackson, F. 2006. 'Absolutist Moral Theories and Uncertainty.' Journal of Philosophy 103, 267-83. von Neumann, J. and Morgenstern, O. 1944. Theory of Games and Economic Behavior. Princeton: Princeton University Press. Weatherson, B. 2013. 'Running Risks Morally.' Philosophical Studies 167, 1-23. Williamson, T. 2000. Knowledge and its Limits. Oxford: Oxford University Press. Zimmerman, M. 1997. 'Moral Responsibility and Ignorance.' Ethics 107, 41026. | Mid | [
0.54713493530499,
37,
30.625
] |
Immunotherapy of cancer with dendritic cells loaded with tumor antigens and activated through mRNA electroporation. Since decades, the main goal of tumor immunologists has been to increase the capacity of the immune system to mediate tumor regression. Considerable progress has been made in enhancing the efficacy of therapeutic anticancer vaccines. First, dendritic cells (DCs) have been identified as the key players in orchestrating primary immune responses. A better understanding of their biology and the development of procedures to generate vast amounts of DCs in vitro have accelerated the development of potent immunotherapeutic strategies for cancer. Second, tumor-associated antigens have been identified which are either selectively or preferentially expressed by tumor cells and can be recognized by the immune system. Finally, several studies have been performed on the genetic modification of DCs with tumor antigens. In this regard, loading the DCs with mRNA, which enables them to produce/process and present the tumor antigens themselves, has emerged as a promising strategy. Here, we will first overview the different aspects that must be taken into account when generating an mRNA-based DC vaccine and the published clinical studies exploiting mRNA-loaded DCs. Second, we will give a detailed description of a novel procedure to generate a vaccine consisting of tumor antigen-expressing dendritic cells with an in vitro superior capacity to induce anti-tumor immune responses. Here, immature DCs are electroporated with mRNAs encoding a tumor antigen, CD40 ligand (CD40L), CD70, and constitutively active (caTLR4) to generate mature antigen-presenting DCs. | High | [
0.6965174129353231,
35,
15.25
] |
Bitterness prediction of H1-antihistamines and prediction of masking effects of artificial sweeteners using an electronic tongue. The study objective was to quantitatively predict a drug's bitterness and estimate bitterness masking efficiency using an electronic tongue (e-Tongue). To verify the predicted bitterness by e-Tongue, actual bitterness scores were determined by human sensory testing. In the first study, bitterness intensities of eight H(1)-antihistamines were assessed by comparing the Euclidean distances between the drug and water. The distances seemed not to represent the drug's bitterness, but to be greatly affected by acidic taste. Two sensors were ultimately selected as best suited to bitterness evaluation, and the data obtained from the two sensors depicted the actual taste map of the eight drugs. A bitterness prediction model was established with actual bitterness scores from human sensory testing. Concerning basic bitter substances, such as H(1)-antihistamines, the predictability of bitterness intensity using e-Tongue was considered to be sufficiently promising. In another study, the bitterness masking efficiency when adding an artificial sweetener was estimated using e-Tongue. Epinastine hydrochloride aqueous solutions containing different levels of acesulfame potassium and aspartame were well discriminated by e-Tongue. The bitterness masking efficiency of epinastine hydrochloride with acesulfame potassium was successfully predicted using e-Tongue by several prediction models employed in the study. | High | [
0.6800535475234271,
31.75,
14.9375
] |
Q: Pygame Space Invaders Bug '''There's a minor bug in this Space Invaders Code, everything works fine and well at the start , until all of a sudden sometimes, mostly at the end of the game, some of the enemy spaceships suddenly moves extremely fast and wayy faster than the other enemies, but follows the same path as the others..If I manage to shoot down the fast spaceships, more of them start moving at the very same fast speed, how it that??I have attached the code down below:''' import pygame import random import math from pygame import mixer # Initialize and create a Game Window pygame.init() screen = pygame.display.set_mode(size=(800,600)) #Caption and Icon pygame.display.set_caption('Space Invaders') icon = pygame.image.load('ufo.png') pygame.display.set_icon(icon) #Background background = pygame.image.load('background.png') #BackgroundSound mixer.music.load('background.wav') mixer.music.play(-1) #Player playerImg = pygame.image.load('player.png') playerX=360 playerY=470 playerX_change = 0 #Enemy enemyImg=[] enemyX = [] enemyY = [] enemyX_change = [] enemyY_change = [] num_of_enemies=6 for i in range(num_of_enemies): enemyImg.append(pygame.image.load('enemy.png')) enemyX.append(random.randint(0,736)) enemyY.append(random.randint(40,120)) enemyX_change.append(4) enemyY_change.append(20) #Bullet bulletImg = pygame.image.load('bullet.png') bulletX=0 bulletY=480 bulletX_change = 0 bulletY_change = -10 bulletstate = 'ready' # Bullet is 'ready' if its ready to shoot and 'fire' means a bullet is fired #Number of Bullets no_of_bullets=0 #Score score_value=0 Score_font = pygame.font.Font("freesansbold.ttf" , 32) textX=10 textY=10 #Number of bullets def Show_score(x,y): score=Score_font.render("Score: " + str(score_value) , True , (255,255,255)) screen.blit(score , (x,y)) def player(x,y): screen.blit(playerImg , (playerX , playerY)) def enemy(x, y, i): screen.blit(enemyImg[i] , (enemyX[i] , enemyY[i])) def bullet_fire(x,y): global bulletstate bulletstate='fire' screen.blit(bulletImg ,(x+16 , y+10)) def Collision(bulletX , bulletY , enemyX , enemyY): distance=math.sqrt((math.pow(enemyX-bulletX,2)) + (math.pow(bulletY-enemyY,2))) if distance<29: return True else: return False def game_over(): font1 = pygame.font.Font("BUBBLEGUMS.ttf" , 68) gameover=font1.render("GAME OVER " , True , (255,255,255)) screen.blit(gameover , (100,200)) def numberofbullets(): font2=pygame.font.Font("blow.ttf",35) number_of_bullets=font2.render("Number of Bullets Shot: " + str(no_of_bullets) , True , (0,255,255)) screen.blit(number_of_bullets , (170,375)) #Game Loop running=True while running: # BackgroundImage screen.blit(background ,(0,0)) for event in pygame.event.get(): if event.type == pygame.QUIT: running = False #Keystroke if event.type == pygame.KEYDOWN: if event.key ==pygame.K_LEFT: playerX_change -=6 if event.key == pygame.K_RIGHT: playerX_change +=6 if bulletstate=='ready': if event.key == pygame.K_SPACE: no_of_bullets+=1 bulletX=playerX bullet_fire(bulletX,bulletY) bullet_sound = mixer.Sound('laser.wav') bullet_sound.play() if event.type ==pygame.KEYUP: if event.key ==pygame.K_LEFT or event.key == pygame.K_RIGHT: playerX_change=0 #Checking for Boundaries of Spaceship playerX +=playerX_change if playerX>=736: playerX=736 elif playerX<=0: playerX=0 # Enemy Movement for i in range(num_of_enemies): if enemyY[i]>440: for j in range(num_of_enemies): enemyY[j]=2000 game_over() numberofbullets() break enemyX[i] +=enemyX_change[i] if enemyX[i]>735: enemyX_change[i]+=-4 enemyY[i] +=enemyY_change[i] elif enemyX[i]<0: enemyX_change[i]+=4 enemyY[i] +=enemyY_change[i] #Collision collision = Collision(bulletX, bulletY, enemyX[i], enemyY[i]) if collision: collision_sound = mixer.Sound('explosion.wav') collision_sound.play() bulletY=480 bulletstate='ready' enemyX[i]=random.randint(0,800) enemyY[i]=random.randint(40,120) score_value+=100 enemy(enemyX[i],enemyY[i] ,i) #Bullet Movement if bulletY<=0: bulletY=480 bulletstate='ready' if bulletstate=='fire': bullet_fire(bulletX, bulletY) bulletY+=bulletY_change player(playerX,playerY) Show_score(textX, textY) pygame.display.update() A: The problem of your code is that when your write this: enemyX[i] +=enemyX_change[i] if enemyX[i]>735: enemyX_change[i]+=-4 enemyY[i] +=enemyY_change[i] elif enemyX[i]<0: enemyX_change[i]+=4 enemyY[i] +=enemyY_change[i] every time the enemy reaches an edge enemyX_change[i] should be set to the opposite of what it was before, not by adding or subtracting 4, but directly by setting it to 4 or -4. Also you should set enemyX[i] to be 735 when the enemy reaches the left side and 0 when he reaches the right side. If you change that piece of code with this one everything should work fine: if enemyX[i]>735: enemyX[i] = 735 enemyX_change[i] = -4 enemyY[i] +=enemyY_change[i] elif enemyX[i]<0: enemyX[i] = 0 enemyX_change[i] = 4 enemyY[i] +=enemyY_change[i] | Low | [
0.480565371024734,
34,
36.75
] |
Laws in Manitoba that make it illegal for restaurant employers to force employees to foot the bill for dine-and-dashes get murky when it comes to the topic of tips. The issue of dine-and-dashing, and the act of getting employees to cover the cost, came to light when the CBC reported on a woman who felt discriminated against after being asked to pay up front at Kum Koon Restaurant, despite other patrons not having to do so. The restaurant owner told CBC that dining-and-dashing is a problem at his restaurant. "We have had some bad experiences in this area they come to eat and they just left without paying," Geoffrey Young told the CBC on Friday. "The serving personnel is responsible for this kind of thing and they have to pay." In Manitoba, it's illegal for employers to use employees' wages to cover dine-and-dashes. "You cannot take from people's wages anything to cover dine-and-dashes or other kinds of shortages that are felt for in the workplace," said Jay Short, manager of the Special Investigations Unit with Manitoba Employment. His group investigates restaurants' violations of employment laws. If an employee is expected to use their own money for those dine-and-dash costs (or any other loss), even to make up a difference, that would be against the law, he said. But things get complicated when it comes to tips, and whether tips are actually part of an employees' wages, he added. "Tips themselves are not necessarily part of a wage, and this is where it gets difficult for us, because it's a case-by-case investigation to determine if it was a wage or a tip," he said. He said managers need to be up front about how tips are used. When there's a clear, pre-designed system like a shared pot for tips that go to things like losses and other restaurant employees, it may be perfectly legal for managers to use that money to cover for those who feed and flee. "If that's the case, the tip isn't really part of their wage, it is belonging to the employer and that may be okay," he said. The owner of Kum Koon Restaurant was not clear how employees pay for dine and dishes- whether it's with tips or wages- and was not immediately available to respond to requests for comment on how his employees cover those costs. The CBC contacted several restaurants in Winnipeg to ask about their policies for dine-and-dashers. At many, like Moxie's, Earls, State-and-Main, the owners themselves pay for walk-outs so that employees are never on the hook for their customers' tabs. But several servers working elsewhere had horror stories of having to pay out of pocket. 'That's coming out of your pocket' Kendra Stuart, 22, had to pay $150 for a group of young men who fled without settling up when she was serving at a sports bar several years ago. She said it's not an uncommon practice in the city. "I find that wrong," said Stuart, who quit over the incident. "I went to another table, I turned back and they were all gone. First they said they were going for a smoke then I went outside and they were all gone." Kendra Stuart was once on the hook for her customers' $150 tab at a Winnipeg restaurant and had to use her tips to pay for it. Now she works at Earls, which has a fund for diners-and-dashers to protect the employees. (CBC) If that wasn't frustrating enough, she said it was even worse when her boss blamed her for it. "I had to tell my manager, I'm like, 'Look, their bill's at 150 bucks. I cant find them," she recalled. "He said, 'Well you need to be keeping an eye on your table ... He said 'That's coming out of your pocket'." Stuart said the bill came out of the tips she made that night, though her boss gave her the option of having the total come out in instalments from her paycheque instead. On another occasion at the same restaurant, Stuart said she chased down and caught a separate group of dine-and-dashers, to avoid having to cough up her own tip money to pay their tab. Tip money pools At Perkins Restaurant and Bakery, servers pay a dollar out of the tips they make each day into a walk-out fund. "That way they're not out of pocket and neither is the restaurant out of pocket," said Tony Nieuwhof, general manager at Perkins on Regent Avenue. But he feels that walk-outs are technically the server's responsibility. "They take payment at the table. They're the ones responsible for collecting payment." Short encourages restaurant employees to file a complaint if they feel their employers are unfairly taking their wages. He said in most cases restaurant owners just aren't aware of the laws and voluntarily comply on their own. Failure to comply can result in a fine from $500 to $10,000 and offenders are featured on a webpage. Stuart now works at Earls where she said her experience has been much more positive, especially because she's never expected to pay for her customers out of the tips she makes. According to managers, payment for dine-and-dashes comes out of a separate fund in the store's budget. "I love the people, I love the customers. You'll always get those few diner-and-dashers but the fact they have a fund for it really helps, knowing that they don't want you to run after these people and put [yourself] in danger," she said. "You want to work somewhere where you leave with money at the end of the night, not owing." If you have a complaint or tip about a restaurant in Manitoba, the province's Special Investigations Unit wants to hear from you. Call 204-945-3352 or 1-800-821-4307 toll free, or email [email protected]. Tips can remain anonymous. | Mid | [
0.554809843400447,
31,
24.875
] |
Bettel v Yim Bettel v Yim (1978), 20 O.R. (2d) 617 is a Canadian tort case from Ontario. The Court established that an individual is liable for all harm that flows from his or her conduct even where the harm was not intended. See also List of notable Canadian lower court cases Category:Canadian tort case law Category:1978 in Canadian case law Category:1978 in Ontario Category:Ontario case law Category:Personal injury | High | [
0.71261378413524,
34.25,
13.8125
] |
/*--------------------------------------------------------------------------------------------- * Copyright (c) Microsoft Corporation. All rights reserved. * Licensed under the MIT License. See License.txt in the project root for license information. *--------------------------------------------------------------------------------------------*/ // Do not edit this file. It is machine generated. { "goto.reference": "转到引用" } | Low | [
0.363844393592677,
19.875,
34.75
] |
--- abstract: 'An abstract would go here.' author: - | \ Mechanical Science and Engineering, University of Illinois at Urbana Champaign, Urbana, IL-61801\ Address 2 bibliography: - 'thoughts.bib' - 'refs\_pan.bib' title: 'Safe and Robust Control using Gaussian Process Regression and $\mathcal{L}_1$-Adaptive Control' --- List of keywords Questions to discuss with Chengyu - Does our observations make sense to you? These observations include the performance improvement (even under reduced sampling rate), robustness margin maintaining (if L1 filter is not changed) or improvement (if L1 filter bandwidth is reduced) - To remove the control input jumps induced by GP model update, we can either use a low-pass filter or rate limiter. Which one makes more sense? - Adaptive filter bandwidth for L1. Is it easy to implement? Any practical issues? - The concept of safe learning guaranteed by L1. After learning well, the closed-loop system can transform from an adaptive system to a non-adaptive system, which will have higher robustness margins. Does this make sense? - In practice, one can never learns all the uncertainties, especially when the uncertainties are time-varying, e.g. disturbances. We claim that we just learn the learn-able part while using L1 to cancel the remaining uncertainties. Does this make sense to you? Introduction Placeholder {#sec:intro} ======================== [@rawlings2009model] Transient bounds blah blah with learning blah blah Fast adaptation with slow learning Performance, via performance bounds, improve both with fast adaptation (hardware dependent) and learning (data dependent) An important point to mention, GP updates are produced at very slow rates, however, the fast adaptation in the $\mathcal{L}_1$ ensures that the closed loop system remains stable w/ computable bounds between every update of the GP. The manuscript is organized as follows: In Section \[sec:problem\_statement\] we present the problem statement. Section \[sec:problem\_statement\] presents the design of the $\mathcal{L}_1$ controller without learning, i.e., relying only on the fast adaptation. In Section \[sec:L1-GP\] we present the proposed methodology of incorporating GPR learned dynamics within the $\mathcal{L}_1$ architecture. Finally, Section \[sec:sims\] presents the numerical results. Notation {#subsec:notation} -------- We denote $\|\cdot\|_p$ as the $p^{th}$ norm defined on the space $\mathbb{R}^n$, $n \in \mathbb{N}$. Similarly, defined on the space $\mathbb{R}^{n \times m}$, $n,m \in \mathbb{N}$, $\|\cdot\|_p$ denotes the induced $p^{th}$ norm and $\|\cdot|$ denotes the $2$-norm. We denote by $\|\cdot\|_{\mathcal{L}_p}$ the $p^{th}$ norm on $\mathcal{L}_p$, the space of real-valued functions, where the domain and range of the functions will be omitted when clear from context. Given a positive scalar $\kappa$, we define by $\mathbb{X}_\kappa$ the compact set containing all $x \in \mathbb{R}^n$ satisfying $\|x\|_\infty \leq \kappa$. Similarly, we denote arbitrary compact subsets of $\mathbb{R}^n$ by $\mathbb{X}$. Given any $\xi \in \mathbb{N}$, we denote by $\mathbb{X}^\xi$ the discretization of the compact set $\mathbb{X}$ such that $\max_{x \in \mathbb{X}} \min_{x' \in \mathbb{X^\xi}}\|x - x'\| \leq \xi$. Similarly, we define $C(\xi,\mathbb{X})$ as the covering number of $\mathbb{X}$ with respect to $\xi$, i.e., the number of $\xi$-norm balls needed to completely cover $\mathbb{X}$. Problem Statement {#sec:problem_statement} ================= Consider the following Single-Input Single-Output (SISO) system: \[eqn:system\_dynamics\] $$\begin{aligned} \dot{x}(t) = &A_m x(t) + b(\omega u(t) + f(x(t))), \quad x(0) = x_0, \\ y(t) = & c^\top x(t),\end{aligned}$$ where $x(t) \in \mathbb{R}^n$ is the measured system state, $A_m \in \mathbb{R}^{n \times n}$ is a known Hurwitz matrix which specifies the desired closed-loop dynamics, $b,~c \in \mathbb{R}^n$ are known vectors, $\omega \in \mathbb{R}$ is the *unknown* input gain, $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is the *unknown* non-linearity, and $y(t) \in \mathbb{R}$ is the regulated output. We place the following assumptions: \[assmp:kernel\] The unknown non-linearity $f(x)$ is a sample from Gaussian process $\mathcal{GP}(0,K_f(x,x'))$ where the kernel $K_f:\mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$ is known. Furthermore, we assume that the kernel and its partial derivatives are Lipschitz on compact subsets of $\mathbb{R}^n$ with known Lipschitz constants. That is, there exist known $L_k(\mathbb{X})$ and $\partial L_k(\mathbb{X})$ such that $$L_k(\mathbb{X}) = \max_{x,x' \in \mathbb{X}}\left\| \nabla_x K_f(x,x') \right\| \quad \text{and} \quad \partial L_k(\mathbb{X}) = \max_{x,x' \in \mathbb{X}}\left\| \nabla_x^2 K_f(x,x') \right\|,$$ where $\nabla_x K_f(x,x') \in \mathbb{R}^n$ and $\nabla_x^2 K_f(x,x') \in \mathbb{R}^{n \times n}$ denote the gradient and the Hessian matrix of $K_f$ with respect to $x$, respectively. \[assmp:Lip\_bounds\] There exist known conservative bounds $L_f(\mathbb{X})$, $\partial L_f(\mathbb{X})$, and $B$ such that $$\left\| \nabla_x f(x) \right\|_\infty \leq L_f(\mathbb{X}), \quad \left\| \nabla_x^2 f(x) \right\|_\infty \leq \partial L_f(\mathbb{X}), \quad |f(0)| \leq B, \quad \quad \forall x \in \mathbb{X}.$$ Note that this assumption implies that $f$ and $\partial_x f$ are Lipschitz continuous on $\mathbb{X}$ with the conservative knowledge of the constants being $ L_f(\mathbb{X})$ and $\partial L_f(\mathbb{X})$, respectively. \[assmp:3\] The scalar $\omega$ is the unknown input gain whose conservative bounds are known a-priori, i.e., $\omega \in [\underline{\omega},\bar{\omega}]$, $0 < \underline{\omega} < \bar{\omega} < \infty$. \[assmp:initial\_condition\] The initial condition $x_0$ satisfies $\|x_0\|_\infty \leq \rho_0 < \infty$, where the arbitrarily large positive scalar $\rho_0$ is assumed to be known. Assumption \[assmp:kernel\] merits a discussion on its conservatism. We argue that this assumption does not introduce any undue conservatism. Indeed, by Mercer’s theorem [@williams2006gaussian Thm. 4.2], the compact integral operator induced by $K_f$, admits countable eigenfunctions $\phi_i$ and absolutely summable eigenvalues $\lambda_i$ such that $K_f(x,x') = \sum_{i=1}^\infty \lambda_i \phi_i(x)\phi_i(x')$, $\forall x,x' \in \mathbb{X} \subset \mathbb{R}$, $\mathbb{X}$ compact. Thus, using the fact that the normal distribution has an infinite support, it is easily established that each sample $f$ from $\mathcal{GP}(0,K_f(x,x'))$ satisfies $f \in \mathcal{F}$, where $$\mathcal{F} = \{f:\mathbb{X} \rightarrow \mathbb{R}~:~ f \in \text{span}\{\phi_i(x)\}.$$ The set of functions to which $f$ belongs is solely a function of the kernel $K_f$. Any prior knowledge of $f$ can be incorporated by designing the kernel appropriately. For example, continuous functions can be learned arbitrarily well when $K_f$ is universal in the sense of [@steinwart2001influence]. Furthermore, the set $\mathcal{F}$ is larger than the Reproducing Kernel Hilbert Space (RKHS) associated with $K_f$. For example, the set of sample paths $\mathcal{F}$ for the squared-exponential kernels contains continuous functions on $\mathbb{X}$. Whereas, its associated RKHS contains only analytic functions [@vaart2011information]. Finally, various kernels like the squared-exponential and Matérn kernels verify the assumption that they and their derivatives be Lipschitz continuous. The objective is to use Gaussian Process Regression (GPR) to learn the model uncertainty $f$, and use the learned dynamics to design a controller so that the regulated output $y(t)$ tracks a given bounded reference signal $r(t)$ with *uniform performance bounds* and *guaranteed robustness margins*. Preliminaries {#sec:prelim} ------------- Gaussian Process Regression {#subsec:GPR} --------------------------- Brief overview of GPR. A paragraph, max. $\mathcal{L}_1$-Adaptive Control {#subsec:L1} -------------------------------- Brief overview of $\mathcal{L}_1$ with the important architectural components, decoupling of estimation from control, adaptation dependent performance w.r.t. the reference system etc. A couple of paragraphs, max. In this subsection, we briefly review the existing standard control architecture for the uncertain system incorporation of learned dynamics. Consequently, in we will show how the GPR learned dynamics can be incorporated within the $\mathcal{L}_1$ architecture. The reader is directed to [@L1_book] for further details on the following material. An controller consists of four components: a state predictor, an adaption law, a low-pass filter and a control law. The state predictor predicts the state trajectories of the actual system, while the prediction error is used to update the uncertainty estimates based on the adaption law. In terms of the adaptation law, we consider the pieweise-constant adaptation law, introduced in [@L1_book Section 3.3], since it is easier to implement compared to the projection-based adaptation law, due to direct connection with the controller sampling rate. The control law tries to cancel the estimated uncertainty within the bandwidth of the low-pass filter These components are detailed as follows. **Low-Pass Filter**: The filter, denoted by $C(s)$, is a $m$ by $m$ transfer function matrix with $C(0) = \mI_m$. In addition, it is designed to ensure that for a given $\rho_0$, there exists a positive constant $\rho_r$ such that the following *$\mathcal{L}_1$-norm condition* $$\label{eqn:L1_norm_condition} \|G(s)\|_{\mathcal{L}_1} < \frac{\rho_r - \|H(s)C(s)k_g\|_{\mathcal{L}_1} \|r\|_{\mathcal{L}_\infty} - \rho_{in}}{L_{\rho_r}\rho_r + B_0}$$ holds, where $G(s) = H(s)(1-C(s))$, $H(s) = (s\mathbb{I} - A_m)^{-1}b$, $\rho_{in} = \|s(s\mathbb{I} - A_m)^{-1}\|_{\mathcal{L}_1} \rho_0$, $\rho_0$ is defined in Assumption \[assmp:initial\_condition\], and $k_g = (C_m A_m^{-1}B_m)^{-1}$ is the feed-forward gain matrix designed to track step reference commands with zero steady-state error. Furthermore, $$\label{eqn:L_rho_r} L_{\rho_r} \triangleq \frac{\bar{\rho}_r }{\rho_r}L_f(\mathbb{X}_{\bar{\rho}_r }), \quad \bar{\rho}_r \triangleq \rho_r + \bar{\gamma}_1,$$ where $\bar{\gamma}_1$ is an arbitrary positive scalar. Finally, $L_f(\cdot)$ and $B$ are defined in Assumption \[assmp:Lip\_bounds\], $r(t)$ is a bounded reference signal, and $\mathbb{X}_{(\cdot)}$ is defined in Section \[subsec:notation\]. Before proceeding further, we define the following constants of interest. We define $$\label{eqn:constants} \rho \triangleq \rho_r + \bar{\gamma}_1, \ \gamma_1 \triangleq \frac{\|H(s)C(s)H_y^{-1}(s)C_m\|_{\mathcal{L}_1}}{1 - \|G(s)\|_{\mathcal{L}_1} L_{\rho_r}} \bar{\gamma}_0 + \beta,\quad \rho_u \triangleq \rho_{ur} + \gamma_2, % \quad \theta_b = L_f(\mathbb{X}_{\bar{\rho}_r}), \quad \Delta = B + \epsilon,$$ where $H_y(s) \triangleq C_m(s\mI-A_m)^{-1}B_m$, $\bar{\gamma}_0$ and $\beta$ are arbitrary small positive scalars ensuring $\gamma_1 < \bar{\gamma}_1$, and $$\begin{aligned} \rho_{ur} &\triangleq \left\| C(s) \right\|_{\mathcal{L}_1}(\|k_g\|_\infty\|r\|_{\mathcal{L}_\infty} + L_{\rho_r} \rho_r + B_0), \label{eqn:rho_ur_defn}\\ \gamma_2 & \triangleq \left\| C(s) \right\|_{\mathcal{L}_1} L_{\rho_r}\gamma_1 + \left\| C(s)H_y^{-1}(s)C_m \right\|_{\mathcal{L}_1} \bar{\gamma}_0. \label{eqn:gamma_2_defn}\end{aligned}$$ **State Predictor**: The state predictor is defined as \[eqn:vanilla\_predictor\] $$\begin{aligned} \dot{\hat{x}}(t) = &A_m \hat{x}(t) + B_m(u(t) + \hat{\sigma}_1(t)) + B_{um} \hat{\sigma}_2(t), \ \hat{x}(0) = x_0\\ \hat{y}(t) = & C_m \hat{x}(t),\end{aligned}$$ where $\hat{x}(t)\mathbb{R}^n$ is the predicted system state $B_{um}\in \mathbb{R}^{n\times(n-m)}$ is a constant matrix such that $[B_m \ B_{um}]$ is invertible and $B_m^TB_{um} = 0$[^1], $\hat{\sigma}_1(t)\in \mathbb{R}^m$ and $\hat{\sigma}_2(t)\in \mathbb{R}^{n-m}$ are the adaptive estimates, which are computed based on the adaptation law as explained below. **Adaptation Law**: The adaptive estimates are defined as $$\label{eqn:vanilla_adaptation} \begin{split} \begin{bmatrix} \hat{\sigma}_1(t) \\ \hat{\sigma}_2(t) \end{bmatrix} & = \begin{bmatrix} \hat{\sigma}_1(iT_s) \\ \hat{\sigma}_2(iT_s) \end{bmatrix}, t\in [iT_s, (i+1)T_s), \\ \begin{bmatrix} \hat{\sigma}_1(iT_s) \\ \hat{\sigma}_2(iT_s) \end{bmatrix} & = - \begin{bmatrix} \mI_m & 0 \\ 0 & \mI_{n-m} \end{bmatrix} [B_m\ B_{um}]^{-1}\Phi^{-1}(T_s)\mu(iT_s),\ \textup{for } i=0,1,2,\dots, \end{split}$$ where $\Phi(T_s) \triangleq A_m^{-1}(e^{A_mT_s}-\mI_n)$, $\mu(iT_s)\triangleq e^{A_mT_s}\tilde{x}(iT_s)$, and $\Tilde{x}(t) \triangleq \hat{x}(t) - x(t)$ is the prediction error. **Control Law**: The control law is given as $$\label{eqn:vanilla_control_law} u(s) = C(s)(\hat{\sigma}_1(s) - k_g r(s)),$$ where $\hat{\sigma}_1(s)$ is the Laplace transform of $\hat{\sigma}_1(t)$. Note that since the actual system only contains matched uncertainty, $f(x(t))$, the control law tries to cancel only the [*matched*]{} uncertainty estimate, $\hat{\sigma}_1(t)$, within the bandwidth of the filter $C(s)$. In case when the actual system has both matched an unmatched uncertainty, the control law can be adjusted to cancel both of them. See [@L1_book Section 3.3] for the details. In summary, the $\mathcal{L}_1$ controller is defined via \[eqn:vanilla\_predictor,eqn:vanilla\_adaptation,eqn:vanilla\_control\_law\], subject to the $\mathcal{L}_1$-norm condition in . The analysis of the closed-loop system with the controller follows two steps. First, the stability of a *reference system* is established. The reference system is given by \[eqn:reference\_system\] $$\begin{aligned} \dot{x}_{\text{ref}}(t) = & A_m x_{\text{ref}}(t) + B_m ( u_{\text{ref}}(t) + f(x_{\text{ref}}(t))), \quad x_{\text{ref}}(0) = x_0,\\ u_{\text{ref}}(s) = &C (s)(k_gr(s) - \eta_{\text{ref}}(s)),\quad y_{\text{ref}}(t)= C_m x_{\text{ref}}(t),\end{aligned}$$ where $\eta_{\text{ref}}(s)$ is the Laplace transform of $\eta_{\text{ref}}(t) \triangleq f(x_{\text{ref}}(t))$. The reference system defines the *ideal achievable performance* where the effect of the uncertainties are cancelled only within the bandwidth of the filter $C(s)$. The stability of the reference system is guaranteed by the norm condition . The following lemma characterizes the behavior of the reference system. Since it is a simpler version of Lemma 3.3.2 in [@L1_book], the proof is omitted for brevity. For the closed-loop reference system in , subject to the $\lone$-norm condition in , if Assumption \[assmp:initial\_condition\] holds, then $$\label{eqn:xref_uref_bounds} x_\Ref \in \mX_{\rho_r}, \quad u_\Ref \in \mX_{\rho_{ur}},$$ where $\rho_r$ is from and $\rho_{ur}$ is defined in . When the bandwidth of the filter tends to infinity, one can easily see that the behavior of the reference system will converge to that of an idea system, defined as \[eqn:ideal\_system\] $$\begin{aligned} \dot{x}_{\text{id}}(t) = & A_m x_{\text{id}}(t) + B_m k_g r(s), \quad x_{\text{id}}(0) = x_0,\\ u_{\text{id}}(s) = & k_gr(s) - \eta_{\text{id}}(s),\quad y_{\text{id}}(t)= C_m x_{\text{id}}(t),\end{aligned}$$ where the uncertainty $f(x(t))$ is perfectly cancelled by $\eta_{\text{id}}(s)$ in the control law, which is the Laplace transform of $\eta_{\text{id}}(t) \triangleq f(x_{\text{id}}(t))$. Therefore, the filter determines the difference between the reference system and the ideal system . Define $$\begin{aligned} \alpha_1(t) \triangleq \norm{e^{A_mt}}_2,\ \nonumber \alpha_2(t), \triangleq \int_0^t \norm{e^{A_m(t-\tau)}\Phi^{-1}(T_s)e^{A_mT_s}}_2 d\tau,\nonumber, \alpha_3(t) \triangleq \int_0^t \norm{e^{A_m(t-\tau)}B_m}_2 d\tau.\end{aligned}$$ Let $$\bar{\alpha}_1(T_s) \triangleq \max_{t\in[0,T_s]}\alpha_1(t),\ \bar{\alpha}_2(T_s) \triangleq \max_{t\in[0,T_s]}\alpha_2(t),\ \bar{\alpha}_3(T_s) \triangleq \max_{t\in[0,T_s]}\alpha_3(t), \ \Delta = (L_\rho \rho + B_0) \sqrt{m}$$ and $$\gamma_0(T_s) \triangleq (\bar{\alpha}_1(T_s) + \bar{\alpha}_2(T_s)+1) \bar{\alpha}_3(T_s) \Delta.$$ Since $\bar{\alpha}_1(T_s)$, $\bar{\alpha}_1(T_s)$ and $\Delta$ are bounded, and $\lim_{T_s \rightarrow 0}\bar{\alpha}_3(T_s) = 0,$ we have $$\lim_{T_s \rightarrow 0}\gamma_0(T_s) = 0.$$ If we select $T_s$ to ensure that $$\label{eqn:Ts_requirement} \gamma_0 (T_s) < \bar{\gamma}_0,$$ then uniform performance bounds can be obtained for the signals of the closed-loop system with the controller defined via \[eqn:vanilla\_predictor,eqn:vanilla\_adaptation,eqn:vanilla\_control\_law\], both input and output, simultaneously, with respect to the reference system . This is formalized in the following theorem. The proof can be readily obtained following the proof of Theorem 3.3.1 in [@L1_book], and is thus omitted due to space limit. Let the adaptation rate be selected to satisfy . Given the closed-loop system with the controller defined via \[eqn:vanilla\_predictor,eqn:vanilla\_adaptation,eqn:vanilla\_control\_law\],subject to the $\lone$-norm condition , and the closed-loop reference system in , then we have $$\begin{aligned} x\in \mX_\rho, u\in \mX_{\rho_u},\label{eqn: x_and_u_bounds}\\ \Linfnorm{\tilde{x}} \leq \bar{\gamma}_0 \label{eqn: xtilde_bound}\\ \Linfnorm{x_{ref}-x} \leq {\gamma_1} \label{eqn:x_xref_error_bound} \\ \Linfnorm{u_{ref}-u} \leq \gamma_2 \label{eqn:u_uref_error_bound}\end{aligned}$$ $\mathcal{L}_1$-GP Controller {#sec:L1GP} ============================= In this section we present the $\mathcal{L}_1$-GP controller wherein the online GPR learned dynamics are incorporated within the $\mathcal{L}_1$ architecture. We begin with the learning problem. Learning the dynamics using GPR {#subsec:GP_learning} ------------------------------- The uncertain component in is $$\label{eqn:GP:uncertain_component} h(x,u) = f(x) + g(u),$$ where $g(u) = \omega u$. Now, by Assumption \[assmp:kernel\], $f \sim \mathcal{GP}(0,K_f(x,x'))$, i.e., the underlying function is a sample from the known associated GP. Furthermore, since $g(u) = \omega u$, the following holds true $$\label{eqn:GP:input_gain_GP} g \sim \mathcal{GP}(0,K_g(u,u')), \quad K_g(u,u') = uu'.$$ Note that this is equivalent to the statement that $g(u) = \omega u$ with $\omega$ being a sample from a normal distribution. Due to the independence of the functions $f$ and $g$, we can jointly learn the function $f+g$ by using an Additive Gaussian Process (AGP) regression as presented in in [@duvenaud2014automatic Chapter 2]. To be precise, we may write $$\label{eqn:GP:h_GP} f+g \sim \mathcal{GP}(0,K_h(x,x',u,u')), \quad K_h(x,x',u,u') = K_f(x,x') + K_g(u,u').$$ For the learning, we now setup the measurement model. Assuming we have $N \in \mathbb{N}$ measurements of the form $$y_i = h(x_i,u_i) + \zeta = \|b\|^{-2} b^\top (\dot{x}_i - A_m x_i) + \zeta, \quad \zeta \sim \mathcal{N}(0,\sigma_n^2), \quad i \in \{1,\cdots, N\},$$ we define the data set $$\label{eqn:GP:data} \mathcal{D}_N = \{\mathbf{Y},\mathbf{X},\mathbf{U}\},$$ where $\mathbf{Y},~\mathbf{U} \in \mathbb{R}^N$ and $\mathbf{X} \in \mathbb{R}^{N \times n}$ are defined as $$\mathbf{Y} = \begin{bmatrix} y_1 & \cdots & y_N \end{bmatrix}^\top, \quad \mathbf{X} = \begin{bmatrix} x_1 & \cdots & x_N \end{bmatrix}^\top, \quad \mathbf{U} = \begin{bmatrix} u_1 & \cdots & u_N \end{bmatrix}^\top.$$Note that we usually only have access to measurements of $x$ and $u$, and not $\dot{x}$. However, estimates of $\dot{x}$ may be numerically generated with the estimation errors incorporated into $\zeta$. As an example, one may use the Savitsky-Golay filter for this purpose [@schafer2011savitzky]. Now, using the a priori independence of $f \in \mathcal{GP}(0,K_f(x,x'))$ and $g \in \mathcal{GP}(0,K_g(u,u'))$, we get the following joint distributions at any test inputs $x \in \mathbb{R}^n$ and $u \in \mathbb{R}$ \[eqn:GP:joint\_dsitributions\] $$\begin{aligned} \begin{bmatrix} f(x) \\ \mathbf{Y} \end{bmatrix} \sim & \mathcal{N} \left( \begin{bmatrix} 0 \\ 0_N \end{bmatrix}, \begin{bmatrix} \mathbf{K_f^{\star \star}}(x) & \mathbf{K_f^{\star}}(x)^\top \\ \mathbf{K_f^{\star}}(x) & \mathbf{K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \end{bmatrix} \right), \\ \begin{bmatrix} g(u) \\ \mathbf{Y} \end{bmatrix} \sim &\mathcal{N} \left( \begin{bmatrix} 0 \\ 0_N \end{bmatrix}, \begin{bmatrix} \mathbf{K_g^{\star \star}}(u) & \mathbf{K_g^{\star}}(u)^\top \\ \mathbf{K_g^{\star}}(u) & \mathbf{K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \end{bmatrix} \right),\\ \begin{bmatrix} \nabla f(x) \\ \mathbf{Y} \end{bmatrix} \sim &\mathcal{N} \left( \begin{bmatrix} 0_n \\ 0_N \end{bmatrix}, \begin{bmatrix} \mathbf{\nabla^2 K_f^{\star \star}}(x) & \mathbf{\nabla K_f^{\star}}(x)^\top \\ \mathbf{ \nabla K_f^{\star}}(x) & \mathbf{K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \end{bmatrix} \right),\end{aligned}$$ where we have used the fact that GPs are closed under linear transformations [@williams2006gaussian Sec. 9.4]. Here $\mathbf{K_f^{\star \star}}(x),\mathbf{K_g^{\star \star}}(u) \in \mathbb{R}$, $\mathbf{K_f^{\star }}(x),\mathbf{K_g^{\star}}(u) \in \mathbb{R}^N$, $\mathbf{K_f},\mathbf{K_g} \in \mathbb{S}^N$, $\mathbf{\nabla^2 K_f^{\star \star}}(x) \in \mathbb{S}^n$, and $\mathbf{\nabla K_f^{\star }}(u) \in \mathbb{R}^{N \times n}$ are defined as $$\begin{aligned} &\mathbf{K_f^{\star \star}}(x) = K_f(x,x), \quad \mathbf{K_g^{\star \star}}(u) = K_g(u,u), \quad \mathbf{K_f^{\star}}(x) = K_f(X,x), \quad \mathbf{K_g^{\star}}(u) = K_g(U,u), \\ &\mathbf{K_f} = K_f(X,X), \quad \mathbf{K_g} = K_g(U,U), \quad \left[\mathbf{\nabla K_f^{\star \star}} \right]_{i,j} = \partial_{x_i x_j'}K_f(x,x), \quad \mathbf{\nabla K_f^{\star}} = \left( \nabla_x K_f(X,x) \right)^\top.\end{aligned}$$ Further discussion on the derivation of these joint distributions can be found in [@duvenaud2014automatic Chapter 2]. Now, we derive the conditional distributions by using [@bishop2006pattern Sec. 2.3.1] to get $$\label{eqn:GP:conditional_posteriors} f(x)|\mathbf{Y} \sim & \mathcal{N}(\mu_f(x),\sigma_f(x)^2), \quad \nabla f(x)|\mathbf{Y} \sim & \mathcal{N}(\partial \mu_f(x),\partial \sigma_f(x)^2), \quad \omega|\mathbf{Y} \sim & \mathcal{N}(\mu_\omega,\sigma_\omega^2),$$ where $$\begin{aligned} \mu_f(x) = & \mathbf{K_f^\star}(x)^\top \left( \mathbf{K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \right)^{-1} \mathbf{Y}, \quad \partial \mu_f(x) = \mathbf{\nabla K_f^\star}(x)^\top \left( \mathbf{K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \right)^{-1} \mathbf{Y}, \\ \mu_\omega = & \mathbf{U}^\top \left( \mathbf{K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \right)^{-1} \mathbf{Y},\quad \sigma_\omega^2 = 1 - \mathbf{U}^\top \left( \mathbf{ K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \right)^{-1} \mathbf{U},\\ \sigma_f(x)^2 = & \mathbf{K_f^{\star\star}}(x) - \mathbf{K_f^\star}(x)^\top \left( \mathbf{K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \right)^{-1}\mathbf{K_f^\star}(x),\\ \partial \sigma_f(x)^2 = & \mathbf{\nabla^2 K_f^{\star\star}}(x) - \mathbf{\nabla K_f^\star}(x)^\top \left( \mathbf{ K_f} + \mathbf{K_g} + \sigma_n^2 \mathbb{I}_N \right)^{-1}\mathbf{\nabla K_f^\star}(x). \end{aligned}$$ Note that we obtained the posterior distribution of $\omega$ by fist determining the posterior conditional $g(u)$. Then, the linearity of $g(u) = \omega u$ allows us to then extract the posterior distribution of $\omega$. We now proceed by obtaining high probability uniform prediction error bounds between each of the means and the true values. The following result is directly obtained by following the material in [@lederer2019uniform], with the exception of a few additional terms. The proof is provided in Appendix \[app:GP\_bound\_proof\]. \[thm:GP\_bounds\] . Given any $\delta \in (0,1)$, we have following bounds hold w.p. at least $1-\delta$ $$\|\nabla_x (f(x) - \mu_f(x))\|_1 \leq L_{\Tilde{f}}(\mathbb{X}_\kappa), \quad |f(0) - \mu_f(0)| \leq \tilde{B}, \quad |\omega - \mu_\omega| \leq L_{\tilde{\omega}}, \quad \forall x \in \mathbb{X}_\kappa$$ Note that for both upper bounds presented in this theorem, $\gamma(\tau)$ and $\partial \gamma(\tau)$ can be made arbitrarily small compared to $\sqrt{\beta(\tau)}\tilde{\sigma}(x)$ and $\sqrt{\beta(\tau)}\partial \tilde{\sigma}(x)$, respectively, since $\beta(\tau)$ grows only logarithmically with a vanishing $\tau$. We now set-up the on-line learning algorithm presented in Algorithm \[algo:learning\]. The algorithm has access to the discrete measurements of the form from the system . The main idea is that the learner computes the posterior distributions conditioned on data using . Then, using the bounds provided by Theorem \[thm:GP\_bounds\], the algorithm returns $\mu_f$ and $\mu_\omega$ if the bounds are improved by some heuristically designed factor. We begin by defining the *model parameters* $\mathcal{M}$ as $$\mathcal{M}=\{\mu_f,\mu_\omega,L_{\tilde{f}}(\mathbb{X}),L_{\tilde{\omega}},\tilde{B}\}.$$ Additionally, we define $\eta_{\text{tol}} \in (0,1)$ as the *tolerance* which measures the improvements of the learning based updates. \[algo:learning\] Initialize: $\mathcal{M} \leftarrow {0}$, $\mathcal{D} \leftarrow \emptyset$, $N \in \mathbb{N}$, $\mathbb{X}$, $L_f(\mathbb{X})$, B, $L_\omega = \bar{\omega} - \underline{\omega}$, $\eta_{\text{tol}}$\ Input: Discrete data stream $\mathcal{D}_i$\ $\mathcal{L}_1$ using GP learned dynamics {#subsec:GP_w_learning} ----------------------------------------- Numerical Experiments {#sec:sims} ===================== Comparison of performance with i) only $\mathcal{L}_1$, and ii) $\mathcal{L}_1$ with incorporated learned dynamics with various sizes of datasets (say, 3 different datasets). Conclusions and Future Work {#sec:sims} =========================== Online update of predictor, using variance as a guide. Consideration of spatio-temporal non-linearities. Extension to MIMO plants with non-linear reference dynamics for use in robotics. SSGP uniform bounds for incorporation within L1. Proof of Theorem \[thm:GP\_bounds\] {#app:GP_bound_proof} =================================== [^1]: In case $B_m$ is an invertible matrix, there is no need to introduce $B_{um}$ and $\hat{\sigma}_2(t)$. | Mid | [
0.610079575596817,
28.75,
18.375
] |
Commentary: Advice of Horace Greeley Greeley's "Go in Peace" Idea Greeley's "go in peace" idea proved confusing to contemporaries and, later, to historians. Some have argued that Greeley intended to strengthen southern unionism, but if that failed, to provide for peaceable separation, especially if it were accomplished in the proper manner. Peaceable separation was preferable to the horrors of civil war. Yet Greeley's motives were by no means as clear as the slogan "go in peace" implied. A number of historians, including David M. Potter, Kenneth M. Stampp, and Glyndon G. Van Deusen, contend that Greeley's proposal was motivated in large part by hostility to compromise. His plan was, in fact, an alternative to making concessions to the South. He surrounded his proposal for peaceable secession with so many constraints and obstacles that the possibility of legitimate peaceable secession was nullified. Secession must, for example, be based upon genuine popular approval after full discussion and deliberation. That, clearly, was not how he viewed events in the South, including South Carolina. As Potter explains, Greeley's position was: "First, the South may depart in peace. Second, she must observe certain forms in doing so. Third, she is not, in the present movement, observing these forms." Greeley's proposal, then, may have lacked sincerity, and Stampp bluntly labels it a "fraud." He was never a pacifist and was prepared to fight if secessionists did not meet his conditions. Greeley's anti-compromise idea was complicated and ambiguous, subject to different interpretations. Whatever Greeley's own views, some northerners sincerely adopted his "go in peace" idea. They were prepared to accept southern independence, which they considered likely to be temporary, rather than resort to war. (Click here to see an example of the conditional nature of Greeley's "go in peace" idea.) | Mid | [
0.6334056399132321,
36.5,
21.125
] |
Nelson McCausland Nelson McCausland told Radio Ulster a remark by Arlene Foster the previous day, in which she claimed Sinn Fein is trying to assert “cultural supremacy” over other traditions, “probably does reflect the view of the majority of people within the unionist community”. He told the BBC’s William Crawley he has spoken to members of the public, MLAs and councillors, and “there is a deep concern within the unionist community about the demands of Sinn Fein, and a fear and a concern that if these were to be acceded that it would totally change the nature of Northern Ireland”. Mr McCausland held his North Belfast seat from 2003 until losing it in the election this March. Sign up to our daily newsletter The i newsletter cut through the noise Sign up Thanks for signing up! Sorry, there seem to be some issues. Please try again later. Submitting... Alban Maginness, a former SDLP MLA for North Belfast, told the same show that “Orangeism and unionism [were] supreme for maybe 70 years here, and that was domination and that was cultural surpemacy and political supremacy and that’s what caused the Troubles”. | Mid | [
0.5906040268456371,
33,
22.875
] |
Introduction The Pentagon has been paying hundreds of millions of tax dollars a year to people and companies that don’t deserve it, but its financial management shortcomings are so severe that it’s made little progress in halting the errors or even measuring their magnitude, according to a report released by a Senate committee Thursday. Although the Defense Department reported making over $1.1 billion in overpayments in fiscal year 2011 to military personnel and retirees, civilian defense workers, contractors, and others, investigators from the Government Accountability Office said that figure is not credible due to missing invoices and other flawed paperwork, as well as errors in arithmetic. The Pentagon is required by law to ferret out programs susceptible to significant payment errors and then use statistical sampling to estimate the size of those errors, so that Congress can determine the size of the problem. But GAO found defense finance officials didn’t have procedures in place to collect and maintain the data they need to come up with a credible estimate. Even when the department could find and document mistaken payments, it frequently did not take cost-effective steps to recover the money, the GAO said. The U.S. Army Corps of Engineers, for example, has spent $256,000 since 2009 on an automated overpayment-detection program that has recovered just one improper payment of $20.79, GAO said. The Pentagon’s payment system is so weak that sometimes it doesn’t pay what’s owed. By its own estimate, for example, the Pentagon made $238.2 million in overpayments and $48.4 million in underpayments related to travel alone during fiscal 2011, for a total of $286.6 million in incorrect payments. But when pressed by GAO, defense finance officials were only able to identify $1.6 million, or less than 1 percent, of the program’s estimated overpayments as recoverable, explaining that they lacked supporting documentation for a significant portion of the total. The Defense Department “is at risk of foregoing the detection and recovery of potentially substantial funds owed to the government,” the GAO report said. Instead of conducting cost-effective audits to identify funds that can be recovered, GAO said the Pentagon relies on such methods as self-reporting by defense contractors and other recipients of the money, random sampling of payment records, and findings by the Defense Department inspector general or other auditors. Members of the Senate Homeland Security and Governmental Affairs Committee, which released the GAO report, expressed anger and frustration over the findings. Several accused the Pentagon of failing to comply with a 2010 law requiring federal agencies to identify, prevent and recover payments made in error. “The Department spends about a trillion dollars annually, but officials have no idea how much of that money it loses to waste and fraud. This is simply unacceptable,” said Committee Chairman Tom Carper, D-Del., who co-sponsored the law with Sen. Susan Collins, R-Maine, and others. Sen. Tom Coburn, R-Okla., ranking member of the committee, cited the GAO’s conclusions while promising to reintroduce legislation requiring the Defense Department to conduct an accurate audit of its books, as required by federal rules the Department has repeatedly flouted. “When our largest federal agency cannot produce a viable financial audit, it should be no surprise DOD cannot account for how much money it wastes on improper payments,” said Coburn, who vowed to use all the oversight tools at his disposal to expose and prevent defense finance abuses. “Improper payments should be low-hanging fruit when it comes to eliminating government waste — but that clearly hasn’t been the case here,” said Sen. Claire McCaskill, D-Mo., who leads the panel’s Financial and Contracting Oversight subcommittee. The problem is an old one at the Pentagon. Twenty years ago, GAO delivered a scathing report to the same Senate committee documenting a military payroll system that was so badly managed the Army had inadvertently paid $6 million to 2,269 troops who had already quit the service, were absent without leave or had deserted their units. In one case, a dead deserter was sent a paycheck. A separate probe at the time revealed that managers at a defense finance and accounting center had miscalculated the pay of more than 201,000 Air Force retirees in 1986, giving them an extra dollar or two each month. It took nine years for managers to correct the error, which ended up costing taxpayers $16 million. Officials said they decided not to try to recover the money because the some of the retirees were probably dead, and the effort to collect would be too expensive. DOD’s Fiscal Year 2011 Reported Outlays and Improper Payments. GAO analysis of DOD’s data on improper payment amounts. In the report released Thursday, GAO said the Pentagon acknowledged overpayments in military pay and military retirement pay as part of the $1.1 billion in erroneous payments in fiscal 2011. In addition, the Pentagon made overpayments of military health benefits, civilian pay, travel pay, commercial pay to vendors and contractors and Army Corps of Engineers travel and contractor fees. GAO faulted the Defense Department comptroller not only for these mistakes, but for doing a poor job of reporting on the issue to Congress. In one instance, the department claimed it would recover $67.6 million in improper military retirement payments while estimating that only $18.8 million in overpayments had actually occurred, GAO said. Defense officials also failed to do “risk assessments” to determine what kind of corrective action is needed to reduce mistakes, GAO said. They failed to identify the “root causes” for errors, such as whether manual or automated controls were insufficient or even working. In addition, GAO said the department did not comply with the 2010 law requiring “recovery audits” to evaluate the cost-effectiveness of procedures to recover money paid improperly to companies whose contracts have a total value exceeding $500 million in a fiscal year. According to GAO, defense officials said they were having a hard time tracing transactions and finding the original justifications for them, preventing them from conducting effective recovery audits. If the Defense Department does not implement strategies to comply with federal improper payment laws, GAO warned, it will remain “at risk of continuing to make improper payments and wasting taxpayer funds.” In a written statement released with the report, Robert F. Hale, the Defense Department’s comptroller, said the department is now developing risk assessments and corrective actions, reviewing its recovery efforts to ensure that they are cost effective, and working to ensure its reporting is complete and accurate. “I continue to believe this program [to deal with overpayments] is fundamentally sound and I remain fully committed to comply in all respects with current statutory requirements,” Hale said. Efforts to obtain further Pentagon comment Thursday were unsuccessful. | Mid | [
0.542735042735042,
31.75,
26.75
] |
WASHINGTON — The Ted Stevens guilty verdict can be viewed through a variety of prisms. These include its impact on the Democrats' drive for a filibuster-proof Senate (a 60-seat majority), the sullying mark of corruption on the Republican brand as a whole, and potential shifts to the GOP leadership pecking order. Let's tackle these in reverse order. First, it seems pretty unlikely Alaska voters will send a convicted Ted Stevens back to Capitol Hill. This means some true lions of the Senate are going to be clearing out this election cycle. Ted Stevens has been in the Senate for all or part of five decades; retiring Republicans Pete Domenici and John Warner have been in the upper chamber for all or part of four. Trent Lott resigned late last year, but should also be considered part of this "leaving lions" list. Add the possible departures of Elizabeth Dole (a Dole has been in the Senate for all or part of the past five decades) and Minority Leader Mitch McConnell — and you’ve got a somewhat unrecognizable upper chamber. The Senate Republicans are finding themselves in a similar spot to Senate Democrats in 1980. That’s when a handful of big names lost in a GOP landslide. Consider how devastating it must be for John McCain and a whole herd of GOP incumbents to see the words "Senator," "Republican,” “convicted," and "corruption" in the news just eight days before the election. If it weren't for bad luck, the Republicans would have no luck at all this year. These are some of those infamous "tea leaves" that many in the media force themselves to read. It's just another link in the negative political news chain that's weighing down the GOP ahead of Nov. 4. Perhaps some will see the sweeping out of a pork-loving Senator like Stevens as a good thing in the long term. But, don’t forgot, he’s the poster child for big government Republicanism — the very same phenomenon that caused the party to lose its "less government" conservative way over the past decade. | High | [
0.6650602409638551,
34.5,
17.375
] |
I just registered to this site to comment on this picture. I've seen it floating around the web in recent weeks and decided to hire a fabricator to make this into a live costume. I'm happy to have found the origin of this artwork and would like to say the Japanese themed star wars is amazing. Now that I know who made this artwork, if you don't mind, I'd like to make it into a costume. I'll even post some pics upon completion of the costume at the end of the year. I hope to have this made before the next star wars movie comes out in December. Frigging awesome. May I give an idea for you to do? I love starwars and have been seeing your samurai forging skills and I was thinking to myself "he should totally do a gordan freeman in the hazard suit with a blended samurai look!" Idk if youve played half life or notbnotbut I would assume youve at least heard of it. You should do one I was going to attempt it but figured you were 50x better at it. Lemme know what you think! Wonderful job bub! I envy your skill | Low | [
0.49440715883668906,
27.625,
28.25
] |
Medical Errors Pegged As Third Leading Cause of Death You’ve probably heard or read about nursing home abuse cases like that of Ms. Simmons, or stories about terrible mistakes at hospitals and clinics, or reports about medical devices that fail and cause serious injury or death. Most medical negligence claims involve significant injury or death. Click for full view. So why in the world would anyone propose a new bill in Washington, D.C., that would make it nearly impossible for you to pursue lawsuits and hold insurance companies and big corporations accountable for these mistakes? But that’s exactly what this new legislation would do, capping damages on medical malpractice and nursing home abuse lawsuits to $250,000. Other restrictions would protect for-profit nursing homes, insurance providers and even caregivers who intentionally abuse a patient. Supporters of these measures argue that they are necessary to deter greedy patients from exploiting doctors and health care facilities for personal gain. Many say that caps will keep health care costs down by reducing the amount of money paid out for medical malpractice lawsuits and insurance. But the experts tell a very different story. Nearly half a million people die from preventable medical errors each year, making it the third leading cause of death in the United States after heart disease and cancer. In addition, 10 to 20 times more people are seriously injured. Caps do nothing to make health care safer and instead protect the financial interests of big corporations and insurance companies rather than save you money. Lawyers are not filling the courts with frivolous medical malpractice and nursing home abuse lawsuits. Medical malpractice lawsuits are rare and make up only 0.2 to 2 percent of all civil cases each year, and that number continues to decrease. Instead of lowering health care costs, research shows that costs have actually increased by about 4 to 5 percent in states with damage caps. And something else you should know about these laws: They allow the federal government to override state laws that protect consumers and patients in favor of laws that protect corporate health care at the expense of patient safety. | High | [
0.677966101694915,
35,
16.625
] |
Fiber Laser, Nanosecond, 1030nm, 1000mW Cybel Quick View Zoom In - Spec's and Photo Description: Fiber laser from Cybel can be operated in either CW mode or nanosecond pulsed operation. Applications for the fiber laser include Frequency conversion, Seed laser and Range finding. Adjustable repetition rate and adjustable pulse width while maintaining a Gaussian beam profile with M2.<1.1. Efficient fiber laser was designed for OEM integration and is compact rugged design. In CW mode the laser output is up to 1.5 watts. In nanosecond pulse mode the laser peak power output is up to 2.5kW with a pulse energy up to 10µJ. Fiber laser output wavelength is 1030nm. view all products byMANUFACTURER MANUFACTURERS & SUPPLIERS Manufacturers and suppliers of Fiber Lasers, DSPSS Lasers, and Ultra-Fast Lasers.......reach customers searching specifically for your products by partnering with SolidStateLaserSource.com. Free of charge, we list your devices in the appropriate categories to make your products easy to find and purchase! We also offer selling partner or advertising partner memberships for companies who wish to increase their visibility and orders. | Mid | [
0.635359116022099,
28.75,
16.5
] |
691 F.2d 510 National Lifev.Hartford 81-5850 UNITED STATES COURT OF APPEALS Eleventh Circuit 10/21/82 1 S.D.Fla. AFFIRMED | Low | [
0.445972495088408,
28.375,
35.25
] |
(AP Photo/Michael R. Sisak, File). FILE - In this Jan. 22, 2018, file photo, Bill Cosby, in his first public performance since his last tour ended amid protests in May 2015, plays the drums at the LaRose Jazz Club in Philadelphia. Cosby is set to atten... (AP Photo/Don Thompson, File). FILE - In this April 12, 2016, file photo, Kelly Johnson, then referred to as "Kacey," front left, one of Bill Cosby's accusers, and attorney Gloria Allred, front right, attend a hearing at the State Capitol in Sacramento... (AP Photo/Nick Ut, File). FILE - In this March 29, 2016, file photo, model Janice Dickinson leaves a hearing about her defamation lawsuit against Bill Cosby in Los Angeles Superior Court. Cosby is set to attend a Monday, March 5, 2018, pretrial hearing... (Pedro Portal/ The Miami Herald via AP). this March 10, 2018 photo shows an early morning view of the main span of the a pedestrian bridge that is being positioned to connect the City of Sweetwater, Fla., to Florida International University near Miami.... The deadly collapse of a pedestrian bridge in Florida has cast a spotlight on a rapid construction technique widely used around the U.S.More >> The deadly collapse of a pedestrian bridge in Florida has cast a spotlight on a rapid construction technique widely used around the U.S.More >> (AP Photo/Elise Amendola, File). FILE - This Monday, June 19, 2017, file photo shows a user signing in to Facebook on an iPad, in North Andover, Mass. Facebook has a problem it just can’t kick: People keep exploiting it in ways that could sway election... The federal government is demanding that the company building a giant nuclear waste treatment plant in Washington state provide records proving that the steel used in the nearly $17 billion project is safe.More >> The federal government is demanding that the company building a giant nuclear waste treatment plant in Washington state provide records proving that the steel used in the nearly $17 billion project is safe.More >> Unsealed court documents from an on-going gender discrimination lawsuit against Microsoft claims that the tech giant confirmed only one of 118 complaints made by female employees was "founded.".More >> Unsealed court documents from an on-going gender discrimination lawsuit against Microsoft claims that the tech giant confirmed only one of 118 complaints made by female employees was "founded.".More >> (Nick Wagner/Austin American-Statesman via AP). FBI agents meet at the scene of an explosion in Austin, Texas, Sunday, March 18, 2018. At least a few people were injured in another explosion in Texas' capital late Sunday, after three package bombs deto... Police have warned residents near the site of the latest explosion in Austin to remain indoors and to call 911 if they need to leave home before 10 a.m.More >> Police have warned residents near the site of the latest explosion in Austin to remain indoors and to call 911 if they need to leave home before 10 a.m.More >> (Nick Wagner/Austin American-Statesman via AP). FBI agents work the scene of an explosion in Austin, Texas, Sunday, March 18, 2018. At least a few people were injured in another explosion in Texas' capital late Sunday, after three package bombs detonat... At least two people injured in another explosion in Texas' capital after three package bombs that detonated earlier this month in other parts of the city killed two people and injuring two others.More >> At least two people injured in another explosion in Texas' capital after three package bombs that detonated earlier this month in other parts of the city killed two people and injuring two others.More >> (AP Photo/Eric Gay). Federal investigators work near the site of Sunday's explosion, Monday, March 19, 2018, in Austin, Texas. Multiple people were injured in the explosion Sunday night, and police warned nearby residents to remain indoors overnight as... Austin police chief's plea for bomber to come forward may be an attempt to coax the perpetrator to give additional clues.More >> Austin police chief's plea for bomber to come forward may be an attempt to coax the perpetrator to give additional clues.More >> Authorities said Friday that the cables suspending a pedestrian bridge were being tightened after a "stress test" when the 950-ton concrete span collapsed over traffic, killing at least six people and injuring 10 Authorities said Friday that the cables suspending a pedestrian bridge were being tightened after a "stress test" when the 950-ton concrete span collapsed over traffic, killing at least six people and injuring 10 NORRISTOWN, Pa. (AP) - Bill Cosby made his first court appearance of the #MeToo era on Monday as defense lawyers tried without success to get his sexual assault case thrown out, then turned their attention to blocking some of the 80-year-old comedian's dozens of accusers from testifying at his looming retrial. Prosecutors are trying to persuade the judge to allow as many as 19 other women to take the stand, including model Janice Dickinson, as they attempt to show the comedian had a long history of drugging and attacking women. They're also trying to insulate Cosby's accuser, Andrea Constand, from what a prosecutor called "inevitable attacks" on her credibility. Allowing other women to take the stand will show jurors that Cosby "systematically engaged in a signature pattern of providing an intoxicant to his young female victim and then sexually assaulting her when she became incapacitated," Assistant District Attorney Adrienne D. Jappe told the judge. Cosby's lawyers will address the issue in court Tuesday. They've argued in writing that some of the women's allegations date to the 1960s and are impossible to defend against, given that some witnesses are dead, memories are faded and evidence has been lost. Judge Steven O'Neill said he would not rule on whether to allow the testimony by the end of the two-day hearing, calling it an "extraordinarily weighty issue" that he needs time to review. The judge allowed just one other accuser to take the stand at Cosby's first trial last year, barring any mention of about 60 others who have come forward to accuse Cosby in recent years. The only other hint that jurors got of Cosby's past came from deposition excerpts from 2005 and 2006 in which the star admitted giving quaaludes to women he wanted to have sex with. Cosby, who entered the courtroom on the arm of his spokesman, has said his encounter with Constand was consensual. A jury deadlocked on the case last year, setting the stage for a retrial. Earlier Monday, Cosby's retooled defense team, led by former Michael Jackson lawyer Tom Mesereau, had argued that telephone records, travel itineraries and other evidence show the alleged assault couldn't have happened when Constand says it did and thus falls outside the statute of limitations. The defense disputed Constand's testimony at last year's trial that he assaulted her at his suburban Philadelphia home in January 2004, when she was a Temple University women's basketball executive and he was a powerful Temple trustee. Constand didn't give a specific date, but said the incident had to have happened prior to Jan. 20, when her cousin moved into her Philadelphia apartment. Cosby's lawyers told O'Neill they'd found evidence that Cosby wasn't even in Pennsylvania during that time. Constand testified she would have called Cosby to be let into his home, but his lawyers said her phone records don't reflect such a call within her timeframe. The date is important because Cosby wasn't arrested until Dec. 30, 2015 - meaning any assault prior to Dec. 30, 2003, would have fallen outside the 12-year statute of limitations. O'Neill said he'd leave that for the jury to decide, rejecting a defense motion to dismiss the charges. Jury selection is slated to begin March 29. Even before Monday's arguments got underway, Cosby's lawyers were rapped by the judge for falsely accusing prosecutors of hiding or destroying evidence. District Attorney Kevin Steele asked O'Neill to throw Cosby's legal team off the case for claiming that prosecutors failed to reveal they'd interviewed a woman who cast doubt on Cosby's accuser. The defense withdrew the allegation days later after his former lawyer confirmed he knew that the prosecution interviewed the woman before Cosby's first trial. The DA argued Cosby's new lawyers acted recklessly and "are at best incompetent and otherwise unethical." O'Neill, who presided over Cosby's first trial, said he was reluctant to break up Cosby's legal team with his retrial several weeks away. But he added the defense lawyers were essentially "on notice." Monday's hearing came just 10 days after Cosby's 44-year-old daughter, Ensa, died of kidney disease. The judge expressed condolences to Cosby at the start of the hearing. The Associated Press does not typically identify people who say they are victims of sexual assault unless they grant permission, which Constand and Dickinson have done. ___ Follow Mike Sisak at twitter.com/mikesisak ___ For more coverage visit apnews.com/tag/CosbyonTrial Copyright 2018 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. | Low | [
0.501094091903719,
28.625,
28.5
] |
#!/usr/bin/env python # pyroute2 - ss2 # Copyright (C) 2018 Matthias Tafelmeier # ss2 is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # ss2 is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. import json import socket import re import os import argparse from socket import (AF_INET, AF_UNIX ) try: import psutil except ImportError: psutil = None from pyroute2 import DiagSocket from pyroute2.netlink.diag import (SS_ESTABLISHED, SS_SYN_SENT, SS_SYN_RECV, SS_FIN_WAIT1, SS_FIN_WAIT2, SS_TIME_WAIT, SS_CLOSE, SS_CLOSE_WAIT, SS_LAST_ACK, SS_LISTEN, SS_CLOSING, SS_ALL, SS_CONN) from pyroute2.netlink.diag import (UDIAG_SHOW_NAME, UDIAG_SHOW_VFS, UDIAG_SHOW_PEER) try: from collections.abc import Mapping from collections.abc import Callable except ImportError: from collections import Mapping from collections import Callable # UDIAG_SHOW_ICONS, # UDIAG_SHOW_RQLEN, # UDIAG_SHOW_MEMINFO class UserCtxtMap(Mapping): _data = {} _sk_inode_re = re.compile(r"socket:\[(?P<ino>\d+)\]") _proc_sk_fd_cast = "/proc/%d/fd/%d" _BUILD_RECURS_PATH = ["inode", "usr", "pid", "fd"] def _parse_inode(self, sconn): sk_path = self._proc_sk_fd_cast % (sconn.pid, sconn.fd) inode = None sk_inode_raw = os.readlink(sk_path) inode = self._sk_inode_re.search(sk_inode_raw).group('ino') if not inode: raise RuntimeError("Unexpected kernel sk inode outline") return inode def __recurs_enter(self, _sk_inode=None, _sk_fd=None, _usr=None, _pid=None, _ctxt=None, _recurs_path=[]): step = _recurs_path.pop(0) if self._BUILD_RECURS_PATH[0] == step: if _sk_inode not in self._data.keys(): self._data[_sk_inode] = {} elif self._BUILD_RECURS_PATH[1] == step: if _usr not in self._data[_sk_inode].keys(): self._data[_sk_inode][_usr] = {} elif self._BUILD_RECURS_PATH[2] == step: if _pid not in self._data[_sk_inode][_usr].keys(): self._data[_sk_inode][_usr].__setitem__(_pid, _ctxt) elif self._BUILD_RECURS_PATH[3] == step: self._data[_sk_inode][_usr][_pid]["fds"].append(_sk_fd) # end recursion return else: raise RuntimeError("Unexpected step in recursion") # descend self.__recurs_enter(_sk_inode=_sk_inode, _sk_fd=_sk_fd, _usr=_usr, _pid=_pid, _ctxt=_ctxt, _recurs_path=_recurs_path) def _enter_item(self, usr, flow, ctxt): if not flow.pid: # corner case of eg anonnymous AddressFamily.AF_UNIX # sockets return sk_inode = int(self._parse_inode(flow)) sk_fd = flow.fd recurs_path = list(self._BUILD_RECURS_PATH) self.__recurs_enter(_sk_inode=sk_inode, _sk_fd=sk_fd, _usr=usr, _pid=flow.pid, _ctxt=ctxt, _recurs_path=recurs_path) def _build(self): for flow in psutil.net_connections(kind="all"): proc = psutil.Process(flow.pid) usr = proc.username() ctxt = {"cmd": proc.exe(), "full_cmd": proc.cmdline(), "fds": []} self._enter_item(usr, flow, ctxt) def __init__(self): self._build() def __getitem__(self, key): return self._data[key] def __len__(self): return len(self._data.keys()) def __delitem__(self, key): raise RuntimeError("Not implemented") def __iter__(self): raise RuntimeError("Not implemented") class Protocol(Callable): class Resolver: @staticmethod def getHost(ip): try: data = socket.gethostbyaddr(ip) host = str(data[0]) return host except Exception: # gracefully return None def __init__(self, sk_states, fmt='json'): self._states = sk_states fmter = "_fmt_%s" % fmt self._fmt = getattr(self, fmter, None) def __call__(self, nl_diag_sk, args, usr_ctxt): raise RuntimeError('not implemented') def _fmt_json(self, refined_stats): return json.dumps(refined_stats, indent=4) class UNIX(Protocol): def __init__(self, sk_states=SS_CONN, _fmt='json'): super(UNIX, self).__init__(sk_states, fmt=_fmt) def __call__(self, nl_diag_sk, args, usr_ctxt): sstats = nl_diag_sk.get_sock_stats(states=self._states, family=AF_UNIX, show=(UDIAG_SHOW_NAME | UDIAG_SHOW_VFS | UDIAG_SHOW_PEER)) refined_stats = self._refine_diag_raw(sstats, usr_ctxt) printable = self._fmt(refined_stats) print(printable) def _refine_diag_raw(self, raw_stats, usr_ctxt): refined = {'UNIX': {'flows': []}} def vfs_cb(raw_val): out = {} out['inode'] = raw_val['udiag_vfs_ino'] out['dev'] = raw_val['udiag_vfs_dev'] return out k_idx = 0 val_idx = 1 cb_idx = 1 idiag_attr_refine_map = {'UNIX_DIAG_NAME': ('path_name', None), 'UNIX_DIAG_VFS': ('vfs', vfs_cb), 'UNIX_DIAG_PEER': ('peer_inode', None), 'UNIX_DIAG_SHUTDOWN': ('shutdown', None)} for raw_flow in raw_stats: vessel = {} vessel['inode'] = raw_flow['udiag_ino'] for attr in raw_flow['attrs']: attr_k = attr[k_idx] attr_val = attr[val_idx] k = idiag_attr_refine_map[attr_k][k_idx] cb = idiag_attr_refine_map[attr_k][cb_idx] if cb: attr_val = cb(attr_val) vessel[k] = attr_val refined['UNIX']['flows'].append(vessel) if usr_ctxt: for flow in refined['UNIX']['flows']: try: sk_inode = flow['inode'] flow['usr_ctxt'] = usr_ctxt[sk_inode] except KeyError: # might define sentinel val pass return refined class TCP(Protocol): INET_DIAG_MEMINFO = 1 INET_DIAG_INFO = 2 INET_DIAG_VEGASINFO = 3 INET_DIAG_CONG = 4 def __init__(self, sk_states=SS_CONN, _fmt='json'): super(TCP, self).__init__(sk_states, fmt=_fmt) IDIAG_EXT_FLAGS = [self.INET_DIAG_MEMINFO, self.INET_DIAG_INFO, self.INET_DIAG_VEGASINFO, self.INET_DIAG_CONG] self.ext_f = 0 for f in IDIAG_EXT_FLAGS: self.ext_f |= (1 << (f - 1)) def __call__(self, nl_diag_sk, args, usr_ctxt): sstats = nl_diag_sk.get_sock_stats(states=self._states, family=AF_INET, extensions=self.ext_f) refined_stats = self._refine_diag_raw(sstats, args.resolve, usr_ctxt) printable = self._fmt(refined_stats) print(printable) def _refine_diag_raw(self, raw_stats, do_resolve, usr_ctxt): refined = {'TCP': {'flows': []}} idiag_refine_map = {'src': 'idiag_src', 'dst': 'idiag_dst', 'src_port': 'idiag_sport', 'dst_port': 'idiag_dport', 'inode': 'idiag_inode', 'iface_idx': 'idiag_if', 'retrans': 'idiag_retrans'} for raw_flow in raw_stats: vessel = {} for k1, k2 in idiag_refine_map.items(): vessel[k1] = raw_flow[k2] for ext_bundle in raw_flow['attrs']: vessel = self._refine_extension(vessel, ext_bundle) refined['TCP']['flows'].append(vessel) if usr_ctxt: for flow in refined['TCP']['flows']: try: sk_inode = flow['inode'] flow['usr_ctxt'] = usr_ctxt[sk_inode] except KeyError: # might define sentinel val pass if do_resolve: for flow in refined['TCP']['flows']: src_host = Protocol.Resolver.getHost(flow['src']) if src_host: flow['src_host'] = src_host dst_host = Protocol.Resolver.getHost(flow['dst']) if dst_host: flow['dst_host'] = dst_host return refined def _refine_extension(self, vessel, raw_ext): k, content = raw_ext ext_refine_map = {'meminfo': {'r': 'idiag_rmem', 'w': 'idiag_wmem', 'f': 'idiag_fmem', 't': 'idiag_tmem'}} if k == 'INET_DIAG_MEMINFO': mem_k = 'meminfo' vessel[mem_k] = {} for k1, k2 in ext_refine_map[mem_k].items(): vessel[mem_k][k1] = content[k2] elif k == 'INET_DIAG_CONG': vessel['cong_algo'] = content elif k == 'INET_DIAG_INFO': vessel = self._refine_tcp_info(vessel, content) elif k == 'INET_DIAG_SHUTDOWN': pass return vessel # interim approach # tcpinfo call backs class InfoCbCore: # normalizer @staticmethod def rto_n_cb(key, value, **ctx): out = None if value != 3000000: out = value / 1000.0 return out @staticmethod def generic_1k_n_cb(key, value, **ctx): return value / 1000.0 # predicates @staticmethod def snd_thresh_p_cb(key, value, **ctx): if value < 0xFFFF: return value return None @staticmethod def rtt_p_cb(key, value, **ctx): tcp_info_raw = ctx['raw'] try: if tcp_info_raw['tcpv_enabled'] != 0 and \ tcp_info_raw['tcpv_rtt'] != 0x7fffffff: return tcp_info_raw['tcpv_rtt'] except KeyError: # ill practice, yet except quicker path pass return tcp_info_raw['tcpi_rtt'] / 1000.0 # converter @staticmethod def state_c_cb(key, value, **ctx): state_str_map = {SS_ESTABLISHED: "established", SS_SYN_SENT: "syn-sent", SS_SYN_RECV: "syn-recv", SS_FIN_WAIT1: "fin-wait-1", SS_FIN_WAIT2: "fin-wait-2", SS_TIME_WAIT: "time-wait", SS_CLOSE: "unconnected", SS_CLOSE_WAIT: "close-wait", SS_LAST_ACK: "last-ack", SS_LISTEN: "listening", SS_CLOSING: "closing"} return state_str_map[value] @staticmethod def opts_c_cb(key, value, **ctx): tcp_info_raw = ctx['raw'] # tcp_info opt flags TCPI_OPT_TIMESTAMPS = 1 TCPI_OPT_SACK = 2 TCPI_OPT_ECN = 8 out = [] opts = tcp_info_raw['tcpi_options'] if (opts & TCPI_OPT_TIMESTAMPS): out.append("ts") if (opts & TCPI_OPT_SACK): out.append("sack") if (opts & TCPI_OPT_ECN): out.append("ecn") return out def _refine_tcp_info(self, vessel, tcp_info_raw): ti = TCP.InfoCbCore info_refine_tabl = {'tcpi_state': ('state', ti.state_c_cb), 'tcpi_pmtu': ('pmtu', None), 'tcpi_retrans': ('retrans', None), 'tcpi_ato': ('ato', ti.generic_1k_n_cb), 'tcpi_rto': ('rto', ti.rto_n_cb), # TODO consider wscale baking 'tcpi_snd_wscale': ('snd_wscale', None), 'tcpi_rcv_wscale': ('rcv_wscale', None), # TODO bps baking 'tcpi_snd_mss': ('snd_mss', None), 'tcpi_snd_cwnd': ('snd_cwnd', None), 'tcpi_snd_ssthresh': ('snd_ssthresh', ti.snd_thresh_p_cb), # TODO consider rtt agglomeration - needs nesting 'tcpi_rtt': ('rtt', ti.rtt_p_cb), 'tcpi_rttvar': ('rttvar', ti.generic_1k_n_cb), 'tcpi_rcv_rtt': ('rcv_rtt', ti.generic_1k_n_cb), 'tcpi_rcv_space': ('rcv_space', None), 'tcpi_options': ('opts', ti.opts_c_cb), # unclear, NB not in use by iproute2 ss latest 'tcpi_last_data_sent': ('last_data_sent', None), 'tcpi_rcv_ssthresh': ('rcv_ssthresh', None), 'tcpi_rcv_ssthresh': ('rcv_ssthresh', None), 'tcpi_segs_in': ('segs_in', None), 'tcpi_segs_out': ('segs_out', None), 'tcpi_data_segs_in': ('data_segs_in', None), 'tcpi_data_segs_out': ('data_segs_out', None), 'tcpi_lost': ('lost', None), 'tcpi_notsent_bytes': ('notsent_bytes', None), 'tcpi_rcv_mss': ('rcv_mss', None), 'tcpi_pacing_rate': ('pacing_rate', None), 'tcpi_retransmits': ('retransmits', None), 'tcpi_min_rtt': ('min_rtt', None), 'tcpi_rwnd_limited': ('rwnd_limited', None), 'tcpi_max_pacing_rate': ('max_pacing_rate', None), 'tcpi_probes': ('probes', None), 'tcpi_reordering': ('reordering', None), 'tcpi_last_data_recv': ('last_data_recv', None), 'tcpi_bytes_received': ('bytes_received', None), 'tcpi_fackets': ('fackets', None), 'tcpi_last_ack_recv': ('last_ack_recv', None), 'tcpi_last_ack_sent': ('last_ack_sent', None), 'tcpi_unacked': ('unacked', None), 'tcpi_sacked': ('sacked', None), 'tcpi_bytes_acked': ('bytes_acked', None), 'tcpi_delivery_rate_app_limited': ('delivery_rate_app_limited', None), 'tcpi_delivery_rate': ('delivery_rate', None), 'tcpi_sndbuf_limited': ('sndbuf_limited', None), 'tcpi_ca_state': ('ca_state', None), 'tcpi_busy_time': ('busy_time', None), 'tcpi_total_retrans': ('total_retrans', None), 'tcpi_advmss': ('advmss', None), 'tcpi_backoff': (None, None), 'tcpv_enabled': (None, 'skip'), 'tcpv_rttcnt': (None, 'skip'), 'tcpv_rtt': (None, 'skip'), 'tcpv_minrtt': (None, 'skip'), # BBR 'bbr_bw_lo': ('bbr_bw_lo', None), 'bbr_bw_hi': ('bbr_bw_hi', None), 'bbr_min_rtt': ('bbr_min_rtt', None), 'bbr_pacing_gain': ('bbr_pacing_gain', None), 'bbr_cwnd_gain': ('bbr_cwnd_gain', None), # DCTCP 'dctcp_enabled': ('dctcp_enabled', None), 'dctcp_ce_state': ('dctcp_ce_state', None), 'dctcp_alpha': ('dctcp_alpha', None), 'dctcp_ab_ecn': ('dctcp_ab_ecn', None), 'dctcp_ab_tot': ('dctcp_ab_tot', None)} k_idx = 0 cb_idx = 1 info_k = 'tcp_info' vessel[info_k] = {} # BUG - pyroute2 diag - seems always last info instance from kernel if type(tcp_info_raw) != str: for k, v in tcp_info_raw.items(): refined_k = info_refine_tabl[k][k_idx] cb = info_refine_tabl[k][cb_idx] refined_v = v if cb and cb == 'skip': continue elif cb: ctx = {'raw': tcp_info_raw} refined_v = cb(k, v, **ctx) vessel[info_k][refined_k] = refined_v return vessel def prepare_args(): parser = argparse.ArgumentParser(description=""" ss2 - socket statistics depictor meant as a complete and convenient surrogate for iproute2/misc/ss2""") parser.add_argument('-x', '--unix', help='Display Unix domain sockets.', action='store_true') parser.add_argument('-t', '--tcp', help='Display TCP sockets.', action='store_true') parser.add_argument('-l', '--listen', help='Display listening sockets.', action='store_true') parser.add_argument('-a', '--all', help='Display all sockets.', action='store_true') parser.add_argument('-p', '--process', help='show socket holding context', action='store_true') parser.add_argument('-r', '--resolve', help='resolve host names in addition', action='store_true') args = parser.parse_args() return args def run(args=None): if psutil is None: raise RuntimeError('ss2 requires python-psutil >= 5.0 to run') if not args: args = prepare_args() _states = SS_CONN if args.listen: _states = (1 << SS_LISTEN) if args.all: _states = SS_ALL protocols = [] if args.tcp: protocols.append(TCP(sk_states=_states)) if args.unix: protocols.append(UNIX(sk_states=_states)) if not protocols: raise RuntimeError('not implemented - ss2 in fledging mode') _user_ctxt_map = None if args.process: _user_ctxt_map = UserCtxtMap() with DiagSocket() as ds: ds.bind() for p in protocols: p(ds, args, _user_ctxt_map) if __name__ == "__main__": run() | Mid | [
0.5498007968127491,
34.5,
28.25
] |
Hydroponics RSS Feed from 783 ABC Alice Springshttp://www.abc.net.au/local/topics/alicesprings/rural/hydroponics/rss.xml Hydroponics RSS Feed from 783 ABC Alice Springs2010, Australian Broadcasting Corporationen-au15Mon, 22 Nov 2010 15:11:00 +1100Hydroponic garden keeping remote workers healthyhttp://www.abc.net.au/rural/nt/content/201011/s3073158.htm?site=alicesprings&source=rss We all know the importance of eating two fruit and five vegetables every day. However, when you live in a remote community a few hundred kilometres from the nearest town, how can you get the fresh produce you require? Out by Arlparra, about 250 kilometres north-east of Alice Springs, local Ernie Polley has found an easy solution. He and wife Kerry Kasmira have set up their own hydroponic vegetable garden in a small tin shed out the back of their house. Mr Polley says the initial idea came from when they lived in, what he described as, an "even more remote" community than Arlparra. "We hit upon the idea of having a vegie garden and that became hard work. "I read an article about hydroponics and how it was so allegedly easy and it really is." He says he was surprised to find out how many different vegetables he could successfully grow using hydroponics. "We've got some basil...we got parsley, chives, silverbeet, tomato, beetroot. "We've successfully grown carrots, beans, peas, the salad vegies...and all your herbs, of course."Mon, 22 Nov 2010 15:11:00 +1100\xmlcontent\201011\3073158.xml783 ABC Alice SpringsalicespringsHealth:Diet and Nutrition:AllRural:All:AllRural:Agricultural Crops:HerbsRural:Agricultural Crops:VegetablesRural:Hydroponics:AllAustralia:NT:Alice Springs 0870Bumper season for vegetables, but demand still outstrips supplyhttp://www.abc.net.au/rural/nt/content/201008/s2981255.htm?site=alicesprings&source=rss Territorians can't seem to get enough of their leafy green vegetables. Despite great growing conditions in central Australia, demand for the region's lettuces are outstripping supply. Steve Douglas is the manager of Territory Lettuce, a hydroponic vegetable farm, close to Alice Springs. He says he just can't grow enough lettuces to meet the demand of wholesalers in town. "Darwin just love their lettuce." However, frosty weather conditions in Alice Springs has slowed the lettuces' growth, with it taking around eight weeks for a crop, from seeding to harvesting. The farm is currently operating at full capacity with workers cutting, cleaning and replanting lettuces on a daily basis. Mr Douglas believes the demand for hydroponic lettuce is great enough for the farm to be able to double its planting capacity. "We could put another whole system in and keep that full as well."Thu, 12 Aug 2010 15:46:00 +1000\xmlcontent\201008\2981255.xml783 ABC Alice SpringsalicespringsRural:All:AllRural:Agricultural Crops:VegetablesRural:Hydroponics:AllAustralia:NT:Alice Springs 0870Using fish poo to grow vegetableshttp://www.abc.net.au/rural/nt/content/201008/s2980113.htm?site=alicesprings&source=rss Imagine if you could have a vegetable patch and a fish farm side by side, without having to pay for the water and chemicals for both. That's the idea behind "aquaponics", a combination of both aquaculture and hydroponics in a more environmentally friendly and cost effective way. While it might sound like something you would hear about in a science fiction novel, the idea has been around since the 1960s. So how does it work? Steve Patman, an aquaponics enthusiast says its about recycling water from the fish tanks to use in the garden, then back into the fish tank again. "The water is laden with all the fish nutrient, through both their breath and their [waste]. "All that is then pumped into the grow beds, and these grow beds are just basically bathtubs that are filled with gravel. "We just plant straight into the gravel. "The nutrients from the water feed the plant, then the plants look after filtering the water, and that's great for the fish as well." Mr Patman says he has been able to grow a very good winter crop of tomatoes, silverbeet, parsley and cauliflower, thanks to aquaponics. He says aquaponics systems are not expensive, with a system for an average backyard costing about $2000.Wed, 11 Aug 2010 15:00:00 +1000\xmlcontent\201008\2980113.xml783 ABC Alice SpringsalicespringsRural:All:AllRural:Fishing, Aquaculture:AllRural:Hydroponics:AllAustralia:NT:Alice Springs 0870NT Country Hour podcast 22/04/2010http://www.abc.net.au/rural/nt/content/201004/s2880287.htm?site=alicesprings&source=rss Its an industry that is blooming to such an extent that Australia's largest retailer is moving in for a piece of the action. Where is the nursery and garden industry heading?Thu, 22 Apr 2010 15:23:00 +1000\xmlcontent\201004\2880287.xml783 ABC Alice SpringsalicespringsdarwinkatherineEnvironment:All:AllEnvironment:Environmental Management:AllRural:All:AllRural:Fertilisers:AllRural:Hydroponics:AllRural:Irrigation:AllRural:Rural Media:AllEnvironment:Environmental Impact:AllEnvironment:Water Management:AllAustralia:NT:Alice Springs 0870Australia:NT:Darwin 0800Australia:NT:Katherine 0850NT Country Hour 22/04/2010NT Country Hour 22/04/2010Lorna Perry | Mid | [
0.540037243947858,
36.25,
30.875
] |
Background {#Sec1} ========== Atypical hemolytic uremic syndrome (aHUS) is a disorder of the microvasculature with hemolytic anemia, thrombocytopenia and acute kidney injury. The pathogenesis of aHUS involves the uncontrolled activation of the alternative complement pathway \[[@CR1]--[@CR5]\]. Nowadays, aHUS is successfully treated with eculizumab \[[@CR3]--[@CR7]\]. Eculizumab is a humanized, chimeric IgG2/4 kappa antibody, which binds human complement C5 and blocks C5a generation and complement-mediated cell lysis via membrane-attack-complex \[[@CR6]\]. However, it is not known whether the administration of eculizumab in pregnant patients with end-stage renal disease due to aHUS may cause reduced membrane-attack-complex formation also in the fetal circulation. The objective of the present study was to alert clinicians to the effect of therapeutic antibodies in newborns. For this report we measured the deposition of complement C3 and C9 in the mother's blood, in index newborn's umbilical cord vein blood (obtained after delivery, i.e., 2 h after the last eculizumab infusion), and in blood 3 weeks after birth we measured deposition of complement C3 and C9 using the Palarasah-Nielsen-ELISA as previously described \[[@CR8], [@CR9]\]. The sensitive and specific Palarasah-Nielsen-ELISA determines the capacities of three complement pathways using wells pre-coated with immune complexes, lipopolysaccharides, or mannan, to activate classical, alternative, and lectin pathway, respectively \[[@CR9]\]. The deposition of C3 was measured using monoclonal anti-human C3, clone C3 F1--8, which identifies C3, C3b, iC3b and C3c; and deposition of C9 was measured using anti-human C9 (Bioporto A/S, Gentofte, Denmark), which reacts with the membrane-attack-complex \[[@CR9]\]. The advantages of the Palarasah-Nielsen-ELISA had been described \[[@CR9]\]. Briefly, CH50 and AH50 methods are not based on ELISA principle but based on the spectrophotometric measurements of the degree of cell lysis following addition of antibody-sensitized sheep erythrocytes and sheep erythrocytes in solution, respectively. The protocol for the CH50 and AH50 methods is laborious, difficult to standardize, and it is well established that the ELISA methodology is more sensitive compared to these older techniques. Further, and in contrast to the CH50 and AH50 methods, Palarasah-Nielsen-ELISA is able to distinguish complement capacity between C3- and C9 (membrane-attack-complex) -level. Case presentation {#Sec2} ================= Clinical findings {#Sec3} ----------------- A previously healthy 25-year-old woman presented to the hospital's emergency department with high blood pressure, hemolytic anemia, thrombocytopenia, and oliguric acute kidney injury. Her blood pressure was 158/101 mmHg. Laboratory data revealed elevated plasma creatinine level, 925 μmol/L (normal range, 45--90), plasma urea, 34.1 mmol/L (normal range 2.6--6.4), reduced hemoglobin, 5.5 mmol/L (normal range, 7.3--9.5), plasma lactate dehydrogenase, 714 U/L (normal range, 105--205), reduced plasma haptoglobine levels less than 0.08 g/L (normal range, 0.35--1.85), and reduced platelet count, 42 per nL (normal range, 165--400). A peripheral blood smear showed 6--12 schistocytes per high power field (normal, less than 5). Antinuclear antibodies, antineutrophil cytoplasmic antibodies, anti-glomerular basement membrane antibodies, anti-complement factor H antibodies, and Hanta virus antibodies were negative. A Disintegrin And Metalloproteinase with a ThromboSpondin type 1 motif, member 13 activity was normal, thus excluding thrombotic thrombocytopenic purpura. Urine analyses showed microscopic hematuria and urinary protein/creatinine ratio was 3.807 mg/g. Stool culture and multiplex polymerase chain reaction for verotoxin-producing *Escherichia coli* in stool were negative. A renal biopsy showed 7 glomeruli without fresh thrombotic material, but ischemic damage of glomeruli and tubuli. Vessels showed increased wall thickening without thrombotic material, which may indicate weak thrombotic microangiopathy, and immunofluorescence was negative. Genetic findings {#Sec4} ---------------- Genetic workup revealed no mutations located in the genes for complement factor H, complement factor I, and membrane cofactor protein. The patient had a homozygous deletion of exon 3--6 in the complement factor H related gene 1 (CFHR1), and a heterozygous deletion of exon 4--6 in the complement factor H related gene 3 (CFHR3). Treatment of atypical hemolytic uremic syndrome and chronic kidney disease {#Sec5} -------------------------------------------------------------------------- As the case dates back several years, daily plasmapheresis had been started (i.e., plasma exchange of 1.0 plasma volume every day), resulting in attenuation of the hemolytic anemia whereas renal function did not recover. Nowadays administration of eculizumab may be considered \[[@CR3]--[@CR7]\]. Hemodialysis treatment was continued until 20 months later when she received a crossmatch negative AB0-compatible, nonrelated living donor kidney transplant. The immunosuppressive regimen included basiliximab, tacrolimus, and mycophenolate mofetil, and immediate transplant function was unremarkable. However, rising plasma creatinine levels were observed after transplantation together with hemolytic anemia and thrombocytopenia, indicating a relapse of atypical hemolytic uremic syndrome. One biopsy obtained 1 week after transplantation showed 17 glomeruli without thrombotic material, there were no signs for rejection, g0-1v0i1-3 t0-1ah0ptc0. Another biopsy obtained 6 weeks after transplantation showed 7 glomeruli without thrombotic material. Vessels showed increased wall thickening without thrombotic material, and immunofluorescence was positive for C3 and C4d. There were no signs for rejection, g1v0i1t0ah1ptc0. Although tacrolimus was discontinued, whereas prednisolone, plasmapheresis, and eculizumab were started, we observed a progressive deterioration of transplant function and three months later hemodialysis treatment was resumed because of uremic symptoms. Treatment with eculizumab during pregnancy {#Sec6} ------------------------------------------ The patient performed home dialysis 6 days per week for 5 h using a biocompatible membrane. Ten months later, she got pregnant. At gestational age 11 + 1 a relapse of the hemolytic anemia and thrombocytopenia was observed (hemoglobin 4.8 mmol/L, haptoglobine levels less than 0.08 g/L, platelet count 83 per nL). Intravenous eculizumab (1200 mg every other week) was started and given throughout pregnancy. The pregnancy was followed closely with repeated ultrasound monitoring growth and fetal blood flows. Intrauterine growth retardation was diagnosed due to suspected fetal distress, a healthy male index baby was delivered by cesarean section in week 34 + 2. An eculizumab infusion was given 2 h before cesarean section. Hemodialysis and eculizumab treatment were continued in the mother and follow ups in both baby and mother after 12 months were uneventful. Complement C3 deposition is not affected by eculizumab {#Sec7} ------------------------------------------------------ In the mother's blood, in index newborn's umbilical cord vein blood (obtained after delivery, i.e., 2 h after the last eculizumab infusion), and in blood three weeks after birth we measured deposition of complement C3 and C9 using the Palarasah-Nielsen-ELISA as previously described \[[@CR8], [@CR9]\]. Fig. [1](#Fig1){ref-type="fig"} shows the capacities of three complement pathways as determined by complement C3 deposition in the Palarasah-Nielsen-ELISA. Complement C3 deposition was similar in umbilical cord blood from control newborns and index child. The control group consisted of five pre-term (born in gestational week 35--36) boys born to healthy mothers. The lectin pathway activity was abrogated in the index baby as well as the control newborns.Fig. 1Complement C3 deposition in index newborn and mother. Complement C3 deposition after activation of the classic, alternative, and lectin pathway in umbilical cord blood from preterm new born controls (box and whiskers plot), umbilical cord blood from the index newborn, in blood from the mother treated with eculizumab, and in blood from the index newborn at 3 weeks. Results are given for the deposition of complement C3 using the Palarasah-Nielsen-ELISA \[[@CR9]\] Eculizumab reduces membrane attack complex formation in the index newborn {#Sec8} ------------------------------------------------------------------------- Complement C9 deposition which occurs downstream of eculizumab inhibition is shown in Fig. [2](#Fig2){ref-type="fig"}. As expected in the mother's blood, complement C9 deposition induced by activation of the classical pathway was completely abolished (0% compared to 59 to 130% in healthy adults) \[[@CR9]\]. It should be noted that complement C9 deposition induced by activation of the classical pathway was almost completely abrogated in umbilical cord blood from the index newborn (2%), whereas newborn controls showed a median of 70%. The control group consisted of five pre-term (born in gestational week 35--36) boys born to healthy mothers. Complement C9 deposition normalized in the index child after 3 weeks. Furthermore, in vitro administration of 100 μg/mL complement factor C5 increased complement C9 deposition in index child from 2 to 38%. The in vitro effect of eculizumab on complement C9 deposition is depicted in Fig. [3](#Fig3){ref-type="fig"}. In-vitro administration of eculizumab to control umbilical cord blood dose-dependently reduced complement C9 deposition with apparent IC50s ranging from 6 to 10 μg/mL.Fig. 2Complement C9 deposition in index newborn and mother. Complement C9 deposition after activation of the classic pathway in umbilical cord blood from preterm new born controls (box and whiskers plot), umbilical cord blood from the index newborn, in blood from the mother treated with eculizumab, and in blood from the index newborn at 3 weeks. Results are given for the deposition of complement C9 using the Palarasah-Nielsen-ELISA \[[@CR9]\]Fig. 3Dose-dependent effect of eculizumab on complement C9 deposition induced by activation of the classical pathway in umbilical cord blood from controls in vitro Discussion and conclusions {#Sec9} ========================== Servais et al. indicated that the administration of eculizumab during pregnancy in three patients with atypical haemolytic uremic syndrome displayed no overt safety issues \[[@CR10]\]. However, it should be noted that all babies were born preterm \[[@CR10]\]. The index child presented in our study was also born preterm. End-stage renal disease and dialysis may well explain premature birth in our case. However, Segura-Cervantes et al. showed that women with premature preterm rupture of membranes as well as preterm labor had lower soluble C5b-9 complement complexes compared to women during term labor \[[@CR11]\]. Human IgG are known to cross the human placental barrier especially in the third trimester \[[@CR12]\]. Furthermore, in umbilical cord blood from three mothers with paroxysmal nocturnal hemoglobinuria or antiphospholipid syndrome eculizumab concentrations had been reported \[[@CR13]\]. Thus eculizumab may potentially cause terminal complement inhibition in the fetal circulation \[[@CR14]\]. This assumption is supported by our present observation. We showed that eculizumab specifically reduces complement C9 deposition, but not complement C3 deposition, in umbilical cord blood from a mother with end-stage renal disease. The impact of eculizumab was supported by the observation that complement C9 deposition could be rescued in-vitro by administration of complement C5. Furthermore, we confirmed that eculizumab may reduce formation of membrane-attack complex in umbilical cord blood from controls in-vitro. The in-vitro effect of antibodies which neutralize complement factors have recently been reported \[[@CR15]\]. The fact that eculizumab infusion was given very close to cesarean section may explain why the present results may differ from other cases in the literature. Genetic studies showed deletions in the gene for CFHR1 and CFHR3 in the mother. According to literature, deletions in CFHR1 and CFHR3 may be associated with atypical hemolytic uremic syndrome \[[@CR2]\]. Our case shows that the lectin pathway activity was abrogated in the index baby as well as the control newborns, which is often found in premature children, thus excluding major contamination with blood from the mother, who had normal lectin pathway activity. The present case shows that complement C9 deposition induced by activation of the classical pathway was almost completely abrogated in umbilical cord blood from the index newborn, whereas newborn controls showed a median of 70%. These findings may indicate that the observed effect in the index newborn is more likely due to eculizumab. That is also supported by our finding that complement C9 deposition normalized in the index child after 3 weeks consistent with eculizumab's half-life being of 12--14 days \[[@CR16]\]. Limitation of the study {#Sec10} ----------------------- The present study has several limitations: Additional spectrophotometric assays may be appropriate to confirm the complement findings. Furthermore, genetic findings were not based on next-gen sequencing. Another limitation is that eculizumab titers would have been helpful to strengthen the conclusions. Taken together we give evidence that eculizumab treatment of the index child's mother reduces the membrane-attack-complex formation in the newborn. This may cause reduced innate immunity which could render newborns more susceptible to infections. aHUS : Atypical hemolytic uremic syndrome CFHR1 : Complement factor H related gene 1 CFHR3 : Complement factor H related gene 3 **Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Subagini Nagarajah, Christian Nielsen and Kristian Assing contributed equally to this work. None. SN, MT, LLTA, LBL, and CB contributed to patient management. KA organized sampling. CN and YP performed the Palarasah-Nielsen-ELISA-assay. SN, MT, KA, and CB designed the study. SN and MT wrote the initial draft of the manuscript. SN, MT, CN, KA, YP, LLTA, and CB contributed to writing of the report. SN, MT, CN, KA, YP, LLTA, LBL, and CB approved the final version. Written consent to publication was obtained. Authors' information {#FPar1} ==================== None. The authors declare that there was no funding. All data generated or analyzed during this study are included in this published article. Approval was obtained from the regional committee (Data protection agency, Datatilsynet Region Syddanmark, number 17/43602). Written informed consent was obtained from the person to publish information. The authors declare that they have no competing interests. | Mid | [
0.627249357326478,
30.5,
18.125
] |
This is the first post in my GLOB, so I'm going to open with a simple picture that I took a few weeks ago.. I had noticed this skull on the drive way of a condo near our place. I ran home to get my camera and by the time I got back it was gone. It took me a about a week to find it again. It is the sun reflecting of the windows. Unfortunately there is a bush that is obscuring the left side of the face, but that kind of adds to the creepiness. This photo has not been altered. I just did a bit of color correction. This photo was taken with a different camera setting. I then used picnik (love picnik) and fiddled with the exposure and sharpened it a bit. | Low | [
0.325688073394495,
17.75,
36.75
] |
Adolescents separated from their father are more likely to suffer from stress and transient depression symptoms 4 to 9 months following their parents' parting. Family breakdown and the insecure financial situation that may result is more likely to cause worry, anxiety and depressive symptoms in adolescents who are separated from their father, says Professor Jennifer O'Loughlin of the University of Montreal. However, these symptoms can disappear in the nine-month period following the separation. O'Loughlin came to these conclusions after conducting a study that was recently published in the Canadian Journal of Psychiatry. During a five-year period, O'Loughlin and her team of researchers followed 1,160 French-speaking and English-speaking students who, at the beginning of the study initiated in 2002, were 12 and 13 years old and living with both parents. At each year of high school, they answered a questionnaire every three months measuring indicators of mental health, including depressive symptoms, worry, and stress about family relationships, and the family situation. Compared with adolescents living with both parents, adolescents separated from their fathers were more likely to report depressive symptoms four to six months post-separation, as well as worry or stress about their parents separating or divorcing, a new family, the family financial situation, and their relationship with their father. At seven to nine months post-separation, separation from their father continued to be associated with worry or stress, but not with depression or their relationship with their father; however, separation was associated with worry or stress about their relationship with their mother. "This relational change may be attributed to the fact that the mother must often play a new role in terms of greater monitoring and discipline, which can cause tension between her and her children," said O'Loughlin. Another hypothesis suggested by the professor is that the adolescents feel greater anxiety regarding the additional challenge their mother must face by taking on greater family responsibilities. In addition, the use of alcohol or cigarettes among the adolescents was not related to separation from their father in the short term, contrary to what a study conducted 30 years ago concluded. "It is possible that these substances are perceived negatively by the adolescents and that they avoid using them, especially if substance abuse by their father was the source of marital discord," said O'Loughlin. The authors of the study write that "separated parents and their adolescents can be reassured by the results of the study, which show that depression symptoms are usually transient following separation." Nevertheless, they call for greater vigilance among all those in direct contact with young adults whose parents have recently separated - families, teachers, trainers, friends, and family physicians ."They may need informal support or therapy to prevent further progression of depressive symptoms and the development of more serious mental health problems," O'Loughlin concluded. ### | High | [
0.676229508196721,
41.25,
19.75
] |
#!/bin/bash # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # This script helps to verify full life cycle of Gradle build and all # PostCommit tests against release branch on Jenkins. # # It reads configurations from script.config, setup environment and finally # create a test PR to run Jenkins jobs. # # NOTE: # 1. Please create a personal access token from your Github account first. # Instructions: https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line # 2. Please set RELEASE_BUILD_CONFIGS in script.config before running this # script. # 3. Please manually comment trigger phrases to created PR to start Gradle # release build and all PostCommit jobs. Phrases are listed in # JOB_TRIGGER_PHRASES below. . script.config set -e BEAM_REPO_URL=https://github.com/apache/beam.git RELEASE_BRANCH=release-${RELEASE_VER} WORKING_BRANCH=postcommit_validation_pr function clean_up(){ echo "" echo "==================== Final Cleanup ====================" rm -rf ${LOCAL_BEAM_DIR} echo "* Deleted workspace ${LOCAL_BEAM_DIR}" } trap clean_up EXIT echo "" echo "==================== 1 Checking Environment Variables =================" echo "* PLEASE update RELEASE_BUILD_CONFIGS in file script.config first *" echo "" echo "Verify release build against branch: ${RELEASE_BRANCH}." echo "Use workspace: ${LOCAL_BEAM_DIR}" echo "" echo "All environment and workflow configurations from RELEASE_BUILD_CONFIGS:" for i in "${RELEASE_BUILD_CONFIGS[@]}"; do echo "$i = ${!i}" done echo "[Confirmation Required] Are they all provided and correctly set? [y|N]" read confirmation if [[ $confirmation != "y" ]]; then echo "Please rerun this script and make sure you have the right configurations." exit fi echo "" echo "==================== 2 Checking Requirements =======================" echo "====================== 2.1 Checking git ========================" if [[ -z ${GITHUB_TOKEN} ]]; then echo "Error: A Github personal access token is required to perform git push " echo "under a newly cloned directory. Please manually create one from Github " echo "website with guide:" echo "https://help.github.com/en/articles/creating-a-personal-access-token-for-the-command-line" echo "Note: This token can be reused in other release scripts." exit else echo "====================== Cloning repo ======================" git clone ${BEAM_REPO_URL} ${LOCAL_BEAM_DIR} cd ${LOCAL_BEAM_DIR} # Set upstream repo url with access token included. USER_REPO_URL=https://${GITHUB_USERNAME}:${GITHUB_TOKEN}@github.com/${GITHUB_USERNAME}/beam.git git remote add ${GITHUB_USERNAME} ${USER_REPO_URL} # For hub access Github API. export GITHUB_TOKEN=${GITHUB_TOKEN} # For local git repo only. Required if global configs are not set. git config user.name "${GITHUB_USERNAME}" git config user.email "${GITHUB_USERNAME}@gmail.com" fi echo "====================== 2.2 Checking hub ========================" HUB_VERSION=2.12.0 HUB_ARTIFACTS_NAME=hub-linux-amd64-${HUB_VERSION} if [[ -z `which hub` ]]; then echo "There is no hub installed on your machine." if [[ "${INSTALL_HUB}" = true ]]; then echo "====================== Installing hub =======================" wget https://github.com/github/hub/releases/download/v${HUB_VERSION}/${HUB_ARTIFACTS_NAME}.tgz tar zvxvf ${HUB_ARTIFACTS_NAME}.tgz sudo ./${HUB_ARTIFACTS_NAME}/install echo "eval "$(hub alias -s)"" >> ~/.bashrc rm -rf ${HUB_ARTIFACTS_NAME}* else echo "Refused to install hub. Cannot proceed into next setp."; exit fi fi hub version echo "" echo "==================== 3 Run Gradle Release Build & PostCommit Tests on Jenkins ===================" echo "[Current Task] Run Gradle release build and all PostCommit Tests against Release Branch on Jenkins." echo "This task will create a PR against apache/beam." echo "After PR created, you need to comment phrases listed in description in the created PR:" if [[ ! -z `which hub` ]]; then git checkout -b ${WORKING_BRANCH} origin/${RELEASE_BRANCH} --quiet # The version change is needed for Dataflow python batch tests. # Without changing to dev version, the dataflow pipeline will fail because of non-existed worker containers. # Note that dataflow worker containers should be built after RC has been built. sed -i -e "s/${RELEASE_VER}/${RELEASE_VER}.dev/g" sdks/python/apache_beam/version.py sed -i -e "s/sdk_version=${RELEASE_VER}/sdk_version=${RELEASE_VER}.dev/g" gradle.properties git add sdks/python/apache_beam/version.py git add gradle.properties git commit -m "Changed version.py and gradle.properties to python dev version to create a test PR" --quiet git push -f ${GITHUB_USERNAME} --quiet hub pull-request -b apache:${RELEASE_BRANCH} -h ${GITHUB_USERNAME}:${WORKING_BRANCH} -F- <<<"[DO NOT MERGE] Run all PostCommit and PreCommit Tests against Release Branch You can run many tests automatically using release/src/main/scripts/mass_comment.py." echo "" echo "[NOTE]: Please make sure all test targets have been invoked." echo "Please check the test results. If there is any failure, follow the policy in release guide." fi | Mid | [
0.573684210526315,
27.25,
20.25
] |
Show HN: Memfinity.org – Social, spaced-repetition web service - kohlmeier http://memfinity.org ====== djtriptych Social learning will become a huge space. I love where you're positioned and if you continue to iterate on the UI, I think you could really have something. I'd be working on some killer use cases. You should certainly also allow visitors to see some cool content without having to sign in (and give you partial access to my Google account.) ------ rcy This is great, nice work, I have been looking for a lightweight browser based alternative to anki. After showing the back of a card when practicing, I find myself wanting wanting to see the front of the card at the same time. Either that, or be able to flip it back and forth. Any plans to add an ability to batch import from a .txt or csv file? ~~~ chunkiestbacon Check out my solution for the first one: [https://oboeta.com/signup](https://oboeta.com/signup) ------ bravura Is there a way to discuss cards or suggest revisions? For example: "t-distributed stochastic neighbor embedding (t-SNE) is an algorithm to convert high dimensional 2d or 3d space, where similar points and clusters are preserved. Useful for displaying scatterplots." is not correct and should be rewritten. ~~~ kohlmeier Not currently, though that would be quite cool. For now, I clarified my version of that card, which unfortunately had a typo. Thanks.. ------ jurassic I'm really excited for this! I always wanted to use Anki but found card creation to be a huge bottleneck. Bringing that to the browser, with citations, is extremely exciting. ------ Numberwang Practicing specific tags would be a good function to add. ~~~ kohlmeier I'm a bit ambivalent about that feature. I do think many folks will want/expect it, and it would be useful at times. At the same time, the primary use of the app I personally wanted is to add things I wanted to remember permanently, and let the algorithms handle all of the work of scheduling optimal practice. Gonna think about this. | Mid | [
0.5789473684210521,
26.125,
19
] |
Bone marrow and peripheral blood C-kit ligand concentrations in patients with thrombocytosis and thrombocytopenia. C-kit ligand (stem cell factor, SCF) is a hematopoietic growth factor with diverse effects. It has stimulatory effects on megakaryocytopoiesis acting in synergism with interleukin-3 (IL-3), thrombopoietin (TPO) and granulocyte-macrophage colony stimulating factor (GM-CSF). The relationship between SCF and megakaryocytopoiesis, especially the correlations between blood and bone marrow SCF levels have been not clearly established in the literature. We therefore, investigated peripheral and bone marrow SCF levels in patients with thrombocytosis and thrombocytopenia. Subjects were divided into three groups: those with (i) thrombocytopenia, (ii) thrombocytosis and (iii) healthy adults as controls. When the three groups were compared, the mean peripheral blood SCF level of the thrombocytosis group (2149±197) was significantly higher than the thrombocytopenia (1586±178) and normal control groups (1371±68; p<0.05) and the bone marrow SCF level was higher (2694±267) than the thrombocytopenia group (1700±182; p<0.05). In the correlation analysis, considering all the groups together the bone marrow and peripheral blood SCF concentrations were positively and significantly correlated (p<0.01; r=0.93). Correlations between platelet number and both bone marrow SCF concentration (p<0.01; r=0.51) and peripheral blood concentrations (p<0.01; r=0.40) were also shown. Our results indicate that SCF is operative in the pathological megakaryopoiesis of clonal origin and reactive thrombocytosis both in the local bone marrow microenvironment and the peripheral circulating blood. We feel that further studies on the platelet-SCF relationship and SCF levels in different disease states are required. | High | [
0.717579250720461,
31.125,
12.25
] |
Q: Why does std::thread::native_handle return a value of type 'long long unsigned int' instead of void* (a.k.a. HANDLE)? I need to suspend a thread on windows via Windows SDK on msys. I tried something like std::thread thread(somefunction, someparameters); HANDLE handle=thread.native_handle(); SuspendThread(handle); But gcc told me the return value of native_handle() is 'long long unsigned int' but not void*. So I tried HANDLE handle=reinterpret_cast<HANDLE>(thread.native_handle()); But it does not work because when I called GetLastError() I received the error code 6 which means the handle is invalid. What should I do? A: The returned "handle" is the thread id not the HANDLE as returned by CreateThread. You need to use OpenThread to get a handle from the id. | Mid | [
0.6091370558375631,
30,
19.25
] |
Search This Blog The Downfall of My Diet I would have to say hands down that cheese is one of my weaknesses. I can't get enough of it and the more fat in it the better! So when Ile De France asked if I would like some free cheese of course I jumped at the chance and this also gave me the opportunity to enter their picture contest for making a fantastic cheese plate! I would have to say that cheese plates are a huge part of my entertaining repertoire because they are easy to make and require very little intervention from me for people to enjoy. I can just set out a cheese plate and everyone can mix and match what they like. I have 4 must haves I require when making a cheese plate. Must have 3 - 5 cheeses, any more than that its too confusing and any less and its not really showcasing any cheese variety. Mine has Goat Cheese (chevre), Brie, and Roquefort. Must have a fruit, honey and a fruit spread in order to gain the sweet notes out of the cheese and to complement the tang of the cheese. In this case, I have apples, grapes, raw white honey, and fig jam. All of these play well with the cheeses I have selected. Must have both bread and crackers. I think the combination of the two will please everyone and it gives you more flexibility with how you can make combinations. And Last, I don't like to label the cheese on purpose. I like people to try a cheese before they know what it is. Some one might completely write off chevre because it is goat cheese, but if you don't tell them they might try it and find out that they love it. I think Cheese plates are all about experimenting and education. There are so many great cheeses out there and if you don't try them you will never know what you are missing out on. So this weekend why not make a cheese plate for a Sunday afternoon lunch or appetizer for Sunday dinner. Try a new cheese that you have never heard of before. Be sure make friends with your Cheese Monger, most will let you sample any cheese before you buy. And let's face it who doesn't like free cheese! I know I do! Katie, I LOVE your cheese plate. It looks so elegant and tasty. I agree with you about not letting people know what kind of cheese they are tasting, it keeps biases out of the experience. Good show my friend. You couldn't have posted this at a more perfect time! I'm hosting my future in-laws at our new place this weekend and we're having cheese and wine before dinner tonight! I'm thinking brie, camembert, cheddar and swiss with butter crackers and triscuits. I'm also gonna pick up some grapes and apples along with some honey and jam. Am I missing anything? I like this post. I have been on a diet lately and I find it easyish to forego cheese on sandwiches for lunch and such. But if I'm faced with a more flavorful cheese like feta, brie, havarti (you name it), I have a tough time ignoring it :) Post a Comment Thanks for leaving a comment on my site. I read and appreciate every comment that I get. I try to respond but sometimes life gets in the way. So please keep the comments coming and thanks for visiting! | Mid | [
0.6304909560723511,
30.5,
17.875
] |
Social, political, cultural, random and rambling opinions, musings, diatribes and rants from the mind of a disturbed individual with too much free time and unlimited internet access. Friday, July 27, 2012 Chick Fil Gay (lol) The way I see it, as a privately-owned company, Chick-fil-a can support whatever causes it wants to and its leadership can make whatever statements they choose in the media. I fully support the right of Dan Cathy to express his opinion whenever and wherever he wants to. This is, after all, America and, as of this afternoon, the first amendment still exists. My only beef (get it?) with this whole gay marriage "scandal" that's put Cathy's company in the news and brought down a deluge of both anger and support from both sides of the gay rights issue is that his comments are just stupid and betray the hypocritical one-sided bias that the anti-gay marriage crowd displays every time they open their chicken pot pie holes and decide to spew another round of their self-righteous ignorance. Cathy said "I pray God's mercy on our generation that has such a prideful, arrogant attitude to think that we have the audacity to define what marriage is about." What annoys the holy fuck out of me about that statement is, what exactly are Cathy and others who believe as he does trying to do? Are they not also audaciously defining what marriage is about based on their own selective interpretation of the bible? That's all the religious right does on the gay marriage issue - arrogantly define what marriage is about. It's been said over and over again on scores of progressive blogs and websites that conservative Christians love to pick and choose which parts of the bible they choose to strictly adhere to and which parts are just "symbolism" and "open to interpretation", or better yet, "reflect an outdated era in society and should be updated to fit our modern times." These are arguments that I've heard to explain why conservative Christians don't still beat their wives for talking out of turn or beat their children for disobeying them. It's why we don't still see evangelicals stoning women for entering church with their heads uncovered or forcing them into isolation for a week after their periods. It's why modern evangelicals can wear mixed fabrics, eat shellfish, enjoy figs and make casual, non-sexual physical contact with a women while she's menstruating without also being quarantined until sufficient time has passed for them to be "clean" once again. See, the bible is chock full of rules and beliefs that are completely ignored by the overwhelming majority of the exact same conservative Christians who refuse to budge on the acceptance of gay marriage due to the strict letter of the bible. Never mind the fact that the same book of the bible - Leviticus - that denounces homosexuality is also the same book that prescribes all of the other aforementioned remedies for those odd and arbitrary offenses, because all that other stuff is just outdated, symbolic or just silly stuff and therefore doesn't have to be obeyed to the letter anymore... oh, but that gay stuff, there's no budging on that! The above is a perfect example of my point. Chick-fil-a, like all other hypocritical conservative Christian organizations and people, has an unwavering adherence to their interpretation of biblical principles when it comes to the issue of homosexuality, but when it comes to the other decrees taken from the exact same book of the bible, mere paragraphs before and after the ones pertaining to homosexuality? Meh, we'll decide what that really means and we'll follow it as much as we think is necessary given the changing times. Why is it that a restaurant chain that is so rooted in "biblical principles" that it closes on Sunday - the day of rest - at the cost of untold millions in lost profits, that it fervently supports the defense of "traditional marriage" - to the point of alienating even more customers and costing them even more profits, but they don't refuse to serve the biblically-banned "swine flesh" on their menu? IT'S FROM THE SAME BOOK OF THE BIBLE! It's not like you have to jump all over the bible and piecemeal quotes out of context together to make these points, they're written plain as day right along the same verses about homosexuality. Incidentally, there's not a single mention anywhere in the bible about gay marriage. Don't believe me? Look it up. Don't worry, I'll wait... That's another great hypocrisy in the evangelical homophobic agenda, they have to make ridiculous leaps and connect all kinds of invisible dots in order to extract preachings against gay marriage from the bible. While, at the same time, completely disregarding clear and direct commandments against things like eating pork and shellfish or allowing a woman to speak her mind without permission without beating her severely for it. They even ignore other rules about marriage from Leviticus that conflict with their arbitrary vivisection of scripture. For example, Leviticus also states that if a woman's husband dies, his brother has first dibs on claiming her as his wife and she is forbidden to marry outside of her late husbands family so long as their is a man in the family willing to take her as his bride. Another little Levitican tidbit - if a woman is a virgin and she is raped, then her father must pay a penalty for her purity being despoiled and she must marry her rapist. How's that hypocrisy taste? Not too good? Try a little Polynesian sauce on it, it really perks up the flavor. So, like I said, if Dan Cathy and the company he heads want to base their company policies on the same "pick and choose to get what you want" mentality that drives their menu options, like good cafeteria Christians do, that's their right. I support Cathy's right to run his company based on an inconsistent, arbitrary and ignorant personal ideology just as much as I support the right of the American people to either support him or oppose him and make their chicken sandwich buying decisions accordingly. Really, I just wish these assholes would practice a little consistency, that's all. If you're going to be unwavering in your adherence to one chapter and verse in a particular book of the bible, then hold that same strict adherence to ALL the chapters and verses in that book. If you're going to oppose homosexuality, then stop eating pork, stop mixing your fabrics, put down that lobster or crab, start beating your wives when they talk out of turn, start killing people who violate church doctrines and start picking out a wedding venue for your daughters and the guys who bang them while they're passed out at a frat party. Seriously, are you trying to spend eternity burning in hell? What gives you the arrogant audacity to define what God's word is all about? | Low | [
0.471458773784355,
27.875,
31.25
] |
UEI CO71A Carbon Monoxide Detector $195.99 Product Description Monitors, records, and alerts you to the presence of dangerous carbon monoxide in ambient air. The visual display constantly indicates precise quantities while the audible and visual alarms respond to various threshold levels. | Mid | [
0.647342995169082,
33.5,
18.25
] |
Liselotte Blumer Liselotte Blumer (born 1957) is a retired female badminton player from Switzerland. Career In 1980 Blumer was a surprise winner of the women's singles gold medal at the European Badminton Championships. The powerfully built Blumer won the Swiss national women's singles title sixteen times, fifteen of them consecutively between 1973 and 1987, and the Swiss Open women's singles title six times. Her other international titles included the French Open women's doubles, the Polish Open women's singles (1981, 1982), and the Malta International women's singles and doubles (1984). References Category:Swiss female badminton players Category:1957 births Category:Living people | Mid | [
0.6121951219512191,
31.375,
19.875
] |
He sat down with Rolling Stone magazine before his new movie The Lone Ranger gallops into theaters on July 3. As you can see, Johnny snapped his cover shot dressed as his character Tonto. But is this movie going to be one of Johnny's last?!? Maybe his heartbreaking break-up from Vanessa Paradis has Johnny reevaluating what's most important in life. The 50-year-old actor talked about his tattoos, not wearing underwear and what the future may hold for him. No worries! Star magazine wants to make sure you understand that some celebrities face fitness challenges too! Beyoncé might be killin' it with her hard-worked bod, but she's in the minority of this Spring edition of the Best and Worst Beach Bodies!! Think you can guess who the others are??? IN OTHER NEWS… Jennifer Aniston, though she's reportedly pushed the wedding to the summer, doesn't want a prenup according to OK! magazine. Umm… we're all for true love and trusting in your partner, but this is 2013. Let's be smart here. Who KNOWS what could happen down the line, and we wanna make sure both parties are protected in a legal compromise. LOLz! Always a favorite among grocery store line-waiters, The National Enquirer has plastered the "best and worst" beach bodies of early 2013 all over their pages, ready for tab-lovers to soak it up like the sun itself! While everyone loves seeing beautiful actresses and their Hollywood good looks on the front of magazines, the models want them back, and Naomi Campbell is leading the charge! The supermodel was recently asked about the issue, and didn’t hold back, saying: "Of course, we want the magazine covers back. Of course, we do. There are less covers for the girls out there who are the new, young, trendy girls. She’s got more to compete with and there are only a certain amount of covers they’re going to give a model a year. Before, you had models twelve months a year." And now, with the exception of Kate Moss and the occasional up and coming model, it's the complete opposite. | Low | [
0.5295404814004371,
30.25,
26.875
] |
2017 ASI Election Results Results for the 2017-2018 ASI Elections were tallied Thursday, March 30 at noon by representatives from the League of Women Voters. There was a total of 3,228 votes cast totaling to a 14.18% participation rate. ASI Election results are as follows: PRESIDENT Blake Zante VICE PRESIDENT OF FINANCE Cam Patterson VICE PRESIDENT OF EXTERNAL AFFAIRS Demi B. Wack JORDAN COLLEGE/AGRICULTURAL SCIENCES & TECHNOLOGY Amanda N. Smith ARTS & HUMANITIES Evangelia Pappas CRAIG SCHOOL OF BUSINESS Mario Vargas KREMEN SCHOOL OF EDUCATION & HUMAN DEVELOPMENT Chase R. Viramontes LYLES COLLEGE OF ENGINEERING Elias J. Karam HEALTH & HUMAN SERVICES Chelsea Haflich SCIENCE & MATHEMATICS Kevin Ngo SOCIAL SCIENCES Edgar Bolanos SENATORS AT-LARGE (Top 10) Alexandra Chavez Amber K. Malhi Brandon Sepulveda Casandra Ramirez- Sanchez Cody T. Sedano Edgar Castro Josh T. Dowell Primavera L. Martinez Sebastian K. Wenthe Travis Childress The new senators and executives will be officially installed on June 1, 2017. Results for referendums on the ballot are as follows: Fee Referendum Vote: “Bold New U” No (1,846) Student Representation Referendum Vote: Senate Positions Yes ( 2,434) (Copy of ASI Communications Assistant, Gina De Young) ### As the recognized student body government organization at California State University, Fresno, Associated Students, Inc. (ASI) provides a means for effective student participation in the governance of the University, fosters awareness of student opinions on campus issues, assists in the protection of student rights, and provides programs and services to meet the needs of the students and campus community. | Mid | [
0.5475728155339801,
35.25,
29.125
] |
Barbecue grills often are fueled by liquid propane (LP) gas supplied from a portable tank. The tank is refillable and when empty, is removed from its cart mounting and either refilled or replaced. When mounted on the cart, the tank is normally secured against movement by screws, bolts, straps, and other means. The securing means maintain the tank in an upright position for use and are released for transporting the tank to a refilling or replacement facility. The tank is normally mounted on the bottom shelf or a bottom strut of the cart frame. One of the reasons for this mounting is that a full LP gas tank is relatively heavy and a bottom mount requires less lifting of the tank. Another reason is to maintain some distance between the tank and the grill itself, the spacing serving as a heat shield. This positioning makes access to the tank difficult as the user must bend down or squat down to secure the tank, access the on-off valve and regulator, and inspect the tank for leaks. It is to these difficulties that the present disclosure is addressed. | Mid | [
0.554493307839388,
36.25,
29.125
] |
+ 10 = 4, -3*i - z + 48 = 0. Calculate the remainder when i is divided by d. 3 Let k(u) = u**3 + 14*u**2 + 4*u + 25. Calculate the remainder when k(-12) is divided by 38. 37 Suppose -z = 2*u - 101, -3*u - 513 = -5*z - 5*u. What is the remainder when z is divided by 52? 51 Let u(q) = -q**2 + 22*q + 4. Let t(g) = g + 3. Let s be t(17). Calculate the remainder when u(s) is divided by 12. 8 Let x(j) = -6*j + 16. Let l be x(-14). Suppose l - 22 = 2*q. What is the remainder when q is divided by 8? 7 Let g = 212 - 178. What is the remainder when g is divided by 18? 16 Let r = -48 - -54. Suppose -r*s + 81 = -105. Calculate the remainder when 59 is divided by s. 28 Calculate the remainder when 127 is divided by 336/((-128)/(-16)) + (-1 - -3). 39 Let j = 676 + -648. Calculate the remainder when 185 is divided by j. 17 Suppose 5*u + 253 = 2*q, -4*u + 351 = 3*q - 2*u. What is the remainder when q is divided by 15? 14 Suppose 3*s + 16 = 7*s. Suppose -s*p + 37 = -19. Let h = 17 - p. What is the remainder when h is divided by 2? 1 Calculate the remainder when ((-300)/35)/((-18)/84) is divided by 38. 2 Calculate the remainder when 1145 is divided by (46/12)/((-67)/(-402)). 18 Let j be (5/5*-6)/(-2). Calculate the remainder when 2 + (-15)/j + 73 is divided by 36. 34 Suppose 5*k - 138 = 5*a + 197, a = -2*k + 143. Suppose -f + k = -0*f + s, 2*s - 10 = 0. Let c = -49 - -71. Calculate the remainder when f is divided by c. 21 Let o(f) = f**3 + 6*f**2 + 4*f - 3. Let q be o(-5). Suppose 0 = 3*a + 6, -a = q*k + a - 118. Calculate the remainder when k is divided by 3 - (2/1 + -15). 13 Let i = -87 - -56. Let v = -8 - i. Calculate the remainder when v is divided by 12. 11 Let u = 210 + -178. Calculate the remainder when 87 is divided by u. 23 Let s(t) = 4*t**2 - 32*t + 1. Suppose 2 - 26 = -3*b. Let v = -3 - -5. What is the remainder when v is divided by s(b)? 0 Suppose 4*t + 16 = 0, -5*v + 3*t + 1633 + 459 = 0. Calculate the remainder when v is divided by 105. 101 Suppose -221*q + 246*q = 1250. Let r = 16 - -8. Suppose -28 = -4*l + r. What is the remainder when q is divided by l? 11 What is the remainder when 10 is divided by (-28)/(-12)*3/1? 3 Let b = -39 - -110. Let a = b - 46. What is the remainder when 98 is divided by a? 23 Let t be 22*((-9)/(-2) - 4). Suppose t*c - 207 = -53. What is the remainder when 39 is divided by c? 11 Let r = 13 + -9. Let t be 21/(-28) + 11/4. Suppose l + t*l - 21 = 0. Calculate the remainder when l is divided by r. 3 Suppose 5*i - 50 = -3*r, 0*r = 2*r. Suppose 37 = x - 12. Calculate the remainder when x is divided by i. 9 Let n = -14 + 21. Suppose 4*l - 276 = -4*j, 0 = 5*j + n + 3. Suppose 0 = o - l + 7. What is the remainder when o is divided by 33? 31 Let m(o) = -55*o**3 + o**2 - o - 2. Let w be m(-1). Let i = w + -16. Calculate the remainder when i is divided by (14/(-3))/(1/(-3)). 11 Suppose -3*d - 5 = -8*d. Let l be (d + 0)*(-65)/(-5). Calculate the remainder when l/(2 + (-36)/20) is divided by 22. 21 Let a = -20 + 48. Suppose 2*i = 5*t - 2*i - a, -5*i = -2*t + 1. What is the remainder when t is divided by 3? 2 Calculate the remainder when (-59 - -23)/((-9)/60) is divided by 63. 51 Suppose -16 = -4*k + 4*r, 3*k - 3*r - 2*r = 14. Suppose -2*c = -1 + k, -c = 3*s - 398. Calculate the remainder when s is divided by 45. 43 Suppose 0 = 156*d - 159*d + 744. Suppose 5*r + d = -2*u + 1529, 0 = 4*r + 2*u - 1024. What is the remainder when r is divided by 52? 49 Let v(b) = 2*b**2 + 9*b + 44. Calculate the remainder when 69 is divided by v(-4). 29 Let d be (6/18)/((-2)/6). Let b be 0 + (-2)/d - 4. Calculate the remainder when 113 is divided by b/((-4)/38) + 4. 21 Suppose 7*x - 2*x = 10. Suppose 104 = x*a - 28. Suppose -6*o + a = -0*o. Calculate the remainder when 30 is divided by o. 8 Suppose 5*p - 5*t + 9 = 2*p, 15 = 3*p + 3*t. Suppose 3*m + 297 = 8*m - p*a, -5*a = m - 81. Let c = 102 - m. Calculate the remainder when c is divided by 11. 8 Let h be ((-81)/6)/((-3)/(-4)). What is the remainder when (5 - 6)*h/2 is divided by 4? 1 Suppose 7 = -11*a + 106. Calculate the remainder when 128 is divided by a. 2 Let m(j) = -13*j + 83. Suppose -2*n + 3*w = 6, 0 = -4*w + 9 + 7. Calculate the remainder when m(-3) is divided by 188/6 + n/(-9). 29 Let a = 8 - 3. Let b(c) = c**3 + c**2 - 16*c - 6. Calculate the remainder when b(a) is divided by 14. 8 Let t(d) = d**3 + 19*d**2 + 6. Let l be t(-19). What is the remainder when 29 is divided by l/27 - ((-426)/27 - 0)? 13 Calculate the remainder when 54/(-12)*(-474)/(-18)*-2 is divided by 16. 13 Suppose -i - 5*r = 7, 2 = -r - 0. Suppose 0 = -i*a + 139 - 115. Let m(j) = -8*j - 3. Calculate the remainder when m(-4) is divided by a. 5 Let v(p) = p**3 + 31*p**2 + p + 86. What is the remainder when 105 is divided by v(-31)? 50 Let d(o) = 2*o**3 + o**2 - 7*o + 2. Let p be d(-4). Let l = p - -94. Calculate the remainder when 34 is divided by l. 10 Let x(h) = 65*h**2 + 2*h + 2. What is the remainder when x(-1) is divided by 27? 11 Suppose -4*o + 58 = q, -2*q + 6*q = 5*o - 62. What is the remainder when 107 is divided by o? 9 Let g = 6 - -4. Suppose -g*x + 150 = -5*x. Calculate the remainder when x is divided by 8. 6 Let j = 40 - 37. Suppose 2*q - 109 = j*d, -q = q + 4*d - 74. What is the remainder when q is divided by 17? 13 Suppose -4*c - 120 = -8*c. Let b(i) = 127*i - 2333. Calculate the remainder when b(20) is divided by c. 27 Let z(r) = -r**3 + 18*r**2 - 14*r + 18. Suppose 0 = c + 2*c - 42. Calculate the remainder when z(17) is divided by c. 13 Suppose -5*d = -4*b + b - 12, -3*d + 2*b = -7. Suppose -4*n = -c + 9, 90 = 4*c + d*n - n. Calculate the remainder when 60 is divided by c. 18 Suppose 9 - 6 = t. Suppose 15 = t*i - 21. Calculate the remainder when 34 is divided by i. 10 Let d(l) = 2*l**3 - 9*l**2 + 2*l + 10. Let c be d(6). Let x = c + -55. Calculate the remainder when x is divided by 26. 23 Suppose b + 12 = 3*m, 0 = 2*b - 4*b - 6. Suppose -3*y + 1 = -2, -m*y = 3*t - 267. What is the remainder when t is divided by 9? 7 Let a(s) = s**3 + 10*s**2 - 24*s + 18. Let i be a(-12). Suppose -22*u + 356 = -i*u. Calculate the remainder when u is divided by 15. 14 Let y(a) = -a + 274. What is the remainder when y(5) is divided by 16? 13 Suppose -35*a + 33*a + 18 = 0. Suppose -w + a = 2*s, 28 = 4*w + 5*s - 23. Calculate the remainder when 75 is divided by w. 18 Let x = 256 + -248. Calculate the remainder when 366 is divided by x. 6 Let p be (-219)/(-21) - (-6)/(-14). Let m be 28/(-2)*5/p. Let w = 21 + m. What is the remainder when 55 is divided by w? 13 Let x(b) be the third derivative of b**5/60 - b**4/6 - 16*b**3/3 - 5*b**2. What is the remainder when x(11) is divided by 8? 5 Let z(l) = l**3 + 4*l**2 - 5. Let g be z(-4). Suppose -m - 2*k = -16 - 5, 2*k + 45 = 5*m. Let d = m + g. Calculate the remainder when 9 is divided by d. 3 Let m(f) = f**3 + 5*f**2 - 11*f + 7. Suppose 32*x - 34*x = 12. Suppose l = 4*l - 30. What is the remainder when m(x) is divided by l? 7 Let d = 29 - -16. What is the remainder when 162/d + (-3)/5 is divided by 2? 1 Suppose 0 = 5*n + 4*n + 72. Calculate the remainder when -4 - (0 + n) - -73 - 3 is divided by 38. 36 Suppose 9*g - 71 - 19 = 0. Suppose -12 = -g*m + 6*m. What is the remainder when m is divided by 3? 0 Suppose -4*m = 40 - 56. What is the remainder when m is divided by 2? 0 Let q(v) = 3*v**2 - 4*v - 7. What is the remainder when q(-3) is divided by 21? 11 Calculate the remainder when 9/(-18)*20*(-3)/1 is divided by 16. 14 Suppose 0 = -23*j + 24*j - 84. What is the remainder when 249 is divided by j? 81 Let k = 7 - -50. Calculate the remainder when 187 is divided by k. 16 What is the remainder when (-8129)/(-11) - (-3 + 9) is divided by 8? 5 Let p(w) = w**3 + 17*w**2 - 58*w + 50. Let s be p(-20). Let u = 85 + -57. Suppose u = f - s. What is the remainder when f is divided by 10? 8 Let f(c) = 3*c**2 - 4*c. Suppose 7*p - 12 = 16. What is the remainder when f(3) is divided by (2/(-4))/(p/(-48))? 3 Let t(x) = -82*x - 482. Calculate the remainder when t(-13) is divided by 98. 94 Let i(b) = -b + 4. Let l = 5 - 1. Let q be i(l). Suppose 8 = u - q*u + 3*f, 3*u = 2*f + 79. Calculate the remainder when u is divided by 5. 3 Suppose 0 = 2*x - 5*x + 9. Let n(f) = 2*f**3 - 5*f**2 + 4*f + 1. What is the | Low | [
0.5050505050505051,
25,
24.5
] |
Molecular imaging of endogenous and exogenous chromophores using ground state recovery pump-probe optical coherence tomography. We present a novel molecular imaging technique which combines the 3-D tomographic imaging capability of optical coherence tomography with the molecular sensitivity of pump-probe spectroscopy. This technique, based on transient absorption, is sensitive to any molecular chromophore. It is particularly promising for the many important biomarkers, such as hemoglobin, which are poor fluorophores and therefore difficult to image with current optical techniques without chemical labeling. Previous implementations of pump-probe optical coherence tomography have suffered from inefficient pump-probe schemes which hurt the sensitivity and applicability of the technique. Here we optimize the efficiency of the pump-probe approach by avoiding the steady-state kinetics and spontaneous processes exploited in the past in favor of measuring the transient absorption of fully allowed electronic transitions on very short time scales before a steady-state is achieved. In this article, we detail the optimization and characterization of the prototype system, comparing experimental results for the system sensitivity to theoretical predictions. We demonstrate in situ imaging of tissue samples with two different chromophores; the transfectable protein dsRed and the protein hemoglobin. We also demonstrate, with a simple sample vessel and a mixture of human whole blood and rhodamine 6G, the potential to use ground state recovery time to separate the contributions of multiple chromophores to the ground state recovery signal. | High | [
0.678851174934725,
32.5,
15.375
] |
Effects of low sodium concentrations on the development of post-implantation rat embryos in culture and on their sensitivity to anticonvulsants. Anticonvulsant drug treatment during pregnancy is associated with an increased incidence of developmental disorders. The finding that anticonvulsant treatment can induce hyponatraemia prompted us to study the role of this parameter in the induction of malformations. Rats were treated orally with the anticonvulsants phenytoin, phenobarbitone, carbamazepine and valproic acid, and their sera were used as media in cultures of post-implantation (day 10) rat embryos. Sodium concentrations in the media were adjusted by mixing sera with 25% tissue-culture medium with or without sodium chloride. We found that low sodium concentrations caused retardation of development and enhanced the sensitivity of embryos to the retardant effects of anticonvulsants. These results show that apart from the direct effects of drugs and their metabolites secondary factors may be important in the aetiology of maldevelopment. | High | [
0.657210401891252,
34.75,
18.125
] |
{ "id": "dawn-tee", "name": "Dawn Tee", "category": "Tops", "games": { "nl": { "orderable": true, "fashionThemes": [ "Iconic" ], "sellPrice": { "currency": "bells", "value": 95 }, "sources": [ "Able Sisters (summer)" ], "buyPrices": [ { "currency": "bells", "value": 380 } ] } } } | Mid | [
0.583132530120481,
30.25,
21.625
] |
Click the image to join in for some Good Times at the 40th Edition of the Gold Coast Marathon 2018 Thursday, September 8, 2011 How Much Should A Race Cost? A conversation with a client this morning. Client: I heard you're into running. Me: Yeah, I do a little running now and then. Client: That's good, I used to run quite a while back. Me: Cool, not running anymore? Client: No la, races these days are too expensive. Which brings me to my question for today. Are races really on the expensive side these days? I don't know, I'm in two minds about this. Yes, I do think races are priced pretty steep these days but then, as a person who comes from an advertising background and have been involved in organizing events, the cost to organize a race with an average number of participants ranging from anywhere between 5,000 to 20,000 people doesn't come cheap. The logistics involved can be pretty phenomenal at times. Besides, in my humble opinion, no one forced me to pay the exorbitant race fees anyway. I did that on my own free will cos I relish the opportunity to run with countless other runners. Training runs on your own are one thing, but running with a host of people is a total different feel altogether. It's where you put all that training to the test. So yeah, though I do think that races these days are on the high side, I will still pay to take part. Of course, next year, I'll only take part in select few races because of the cost involved. The conversation with my client went on to the point of him saying that all the stuff are sponsored anyway so why do we have to pay so much. Back in the old days, races were only RM6-RM10. The key word here is 'back in the old days'. Heck, these days, Chinese tea alone cost RM0.60 compared to when it cost only RM0.20 'back in the old days'. You really shouldn't compare the cost of races from back then with now. Everything has gone up in price, so it's only natural that the cost of races would go up as well. Like I said, I'm in two minds about this. With the cost of taking part in a race being on the steep side, it might just turn away lots of would be runners from taking up the sport, which would be a sad thing cos we need more healthy people in this country. Maybe the bigger corporations that sponsor these runs could look in subsidizing these races a little more. I know they already do, but go a little further to encourage more people to take up the sport without being turned off by the cost. So, what do you think? Are races just too expensively priced these days? What would you think a reasonable cost for a race should be? Even here in the Philippines, it's become very expensive to join races. When I started running in 2003, it only cost 22RM to run a 10k, now it's already 105RM. I'll just run in my neighborhood, at least it's free! | Mid | [
0.558232931726907,
34.75,
27.5
] |
The effects of concomitant administration of theophylline and toborinone on the pharmacokinetics of both compounds in poor and extensive metabolizers via CYP2D6. This study investigated the effects of the concomitant administration of theophylline and toborinone on the pharmacokinetics of both compounds in poor and extensive metabolizers via CYP2D6. In period 1, a single dose of 3.5 mg/kg theophylline was administered orally. In period 2, a single dose of 1.0 microg/kg/min toborinone was infused over 6 hours. In period 3, 3.5 mg/kg theophylline was coadministered with 1.0 microg/kg/min toborinone. Serial blood and pooled urine samples were collected before and after toborinone administration for the quantification of toborinone and its metabolites in plasma and urine. Serial blood samples were collected before and after theophylline administration for the quantification of theophylline and its metabolites in plasma. No significant differences were observed in toborinone pharmacokinetics between poor and extensive metabolizers via CYP2D6. Toborinone coadministration with theophylline did not result in a substantive effect on the disposition of theophylline and vice versa. | Mid | [
0.642696629213483,
35.75,
19.875
] |
Buttle UK Buttle UK, formerly known as The Frank Buttle Trust, is a UK charity that provides financial grants to children in need. Founded by Frank Buttle in 1937 but not operational until after his death in 1953, the charity has helped many thousands of people throughout the United Kingdom. In 2015–2016, it made 10,068 grants totalling just over £3.9 million. The people the charity helps are often in particularly difficult circumstances and may be experiencing significant deprivation. They may be estranged from their family, seriously ill, or experiencing a range of other social problems. Areas of support Buttle UK offers a wide variety of support to vulnerable children and young people. Small grants In 2009–2010 Buttle UK made 8,887 awards to nearly 20,000 disadvantaged individual children and young people across the UK to help them obtain basic necessities. The BBC Children in Need Emergency Essentials programme Buttle UK distributes grants on behalf of BBC Children in Need and welcomes applications from referring agencies throughout the United Kingdom on behalf of children and young people aged 18 or under who are in need. Grants are generally for such items as clothing, beds, bedding, washing machines, cookers and other basic essentials. In 2007, online grant applications were launched on the charity’s website for Child Support and BBC Children in Need grants, streamlining the process and greatly reducing the response time. School fees The organization provides funding for school fees for at-risk children. Students and trainees Buttle UK has now closed this programme, but supports Estranged Young People ages 16–20. Awards financial support to young people (aged 16–20), with severe social problems, particularly those who are estranged from their parents, to attend further education and training. By funding course costs, equipment, field trips or basic day-to-day living costs, Buttle UK relieves the financial pressures and worries that often force these vulnerable young people to abandon their studies early. In 2009–2010 Buttle UK enabled 172 young people to access courses as varied as architecture, music technology, business and tree surgery. Access to the Future Buttle UK has now closed this programme, but supports Estranged Young People ages 16–20. Offers bespoke packages of support for hard to reach young people (aged 18–25) to aid their return to education, employment or training. Working with local partner organisations, our grants are targeted at removing the barriers to learning and work for vulnerable young people as well as funding a range of courses, activities and learning that would otherwise be unavailable. The support may vary from something like the cost of security guard training and licence, to driving lessons, or buying suitable cloths for an interview. Buttle UK aims to provide a complete package designed specifically for each young person, that will help them access a better future. Estranged Young People Buttle UK ran the Students and Trainees and Access to the Future Programmes successfully however analysis showed that these grants would be more impactful when focused on a certain group of young people. Therefore a programme for estranged young people ages 16–20 offers more focused support enabling young people who have no support from their families to re-engage with education, training and employment. Quality Mark for Care Leavers Buttle UK has now closed the Quality Mark for Care Leavers programme. It was launched in 2006 to address the specific challenges that this group of people face in higher education. The Quality Mark represents a statement of commitment for higher education (HE) institutions to sign up to which requires them to meet certain criteria demonstrating their commitment to support this group of students. It stemmed from Buttle UK’s grant giving activities. In the process of its grants programme for students and trainees it recognised that Care Leavers have a unique set of difficulties in aspiring to and progressing well through higher education. Buttle UK therefore commissioned a five-year action research study "By Degrees: Going to University from Care", in which 129 Care Leavers participated. The commitment seeks to facilitate an increase in the number of Care Leavers entering HE, help HE institutions to identify how best to support Care Leavers, raise awareness of the needs of this group of students, enable Care Leavers to make the most of their time in HE and to complete their courses successfully, as well as contribute to a national framework to assist local authorities to fulfil their obligations to Care Leavers. The four broad Quality Mark for Care Leaver criteria are: 1. to raise aspirations and achievements, 2. to have appropriate admissions procedures, 3. to provide entry and ongoing support, and 4. to monitor the implementation of the Commitment. If all HE institutions work towards implementing the scheme then large steps will be made towards making the aspirations of young people leaving care achievable. Research – a Strategic Approach to Children’s Problems Buttle UK commissions research projects. They have found this to be an effective way of obtaining knowledge to be able to target specific issues. Crisis Points Buttle UK worked with nkm (Mayhew Harpers Associates Ltd) to analyse 10 years worth of data from their grant giving database, representing 125,000 grant applications made from 10,000 referral agencies in the UK, to commission their Crisis Points report. The groundbreaking research revealed the many families and children currently at crisis point in the UK. Moreover, it also highlights those who are potentially falling under the radar, living in unreported poverty. Your Family Your Voice : Growing up with relatives or friends Buttle UK and Bristol University have received funding from the Big Lottery Fund to research kinship care. The first findings of the project were published in 2011 and were based on the 2001 census. The findings showed that more than 90% of kinship care arrangements in each region of the UK were informal agreements between parents and relatives. Therefore, carers were not entitled to financial support from social services. Poverty was a recurrent feature with 44% of kinship families were living in the poorest areas of the country. The second phase of the project involves interviewing children growing up in kinship care, and their carers and is expected to be disseminated at the end of 2012. Dyslexia Action Research Project This two-year action research project was funded jointly by The Frank Buttle Trust and the British Dyslexia Association (BDA). The project raised the level of awareness of the needs of children with dyslexia in the state education system and published a report "'I'm glad that I don't take No for an answer': Parent-Professional Relationships and Dyslexia Friendly Schools". Parenting on a Low Income: Stress, Support and Children’s Well-being Buttle UK commissioned the NSPCC and the University of York to undertake this research project, which was funded by the Big Lottery Fund. The final report was called "Living with hardship 24/7: the diverse experiences of families in poverty in England". Influencing Policy Buttle UK is a founder member of the charity End Child Poverty. It seeks to influence government on public policy that affects children and young people, and works collaboratively with a number of other children’s charities to effect change for children. Change of name In March 2011, The Frank Buttle Trust changed its name to Buttle UK. Finances Buttle UK's income for the year ending March 2011 was £3.33m compared with £3.53m in the previous year. References External links Buttle UK Category:1953 establishments in the United Kingdom Category:Charities based in London Category:Children's charities based in the United Kingdom Category:Organizations established in 1953 | Mid | [
0.6451612903225801,
35,
19.25
] |
Posts Tagged ‘lawyers’ The Researching Legal Careers guide points to resources about general career planning and covers career specialties such as elder law or tax law. You can also learn about business etiquette and how to interview successfully. Look for it under the library’s Guides tab. Read more... Calista Flockhart plays Ally McBeal, a 28-year-old Harvard Law School grad whose former college friend hires her at his prestigious law firm. Things get complicated when she finds out that her first love and his wife also work at the firm. The Emmy-winning program introduced its audience to dancing babies, unisex bathrooms and the word […] Read more... In Runaway Jury a widow sues a gun manufacturer after her husband is killed in a shooting accident. When the case is set to go to trial, the gun manufacturer tries to manipulate the jury selection so that it can guarantee the results of the trial. It is not not however the only one trying […] Read more... Philadelphia Prosecutor Rusty Sabich (Harrison Ford) is having an affair with his coworker until she is murdered. When he is asked to lead the investigation, he finds that the evidence points to him and he is put on trial for the murder. Check out Presumed Innocent from the law library and watch this film version of […] Read more... A powerful New York law firm calls in its “fixer,” attorney Michael Clayton (George Clooney), when their top litigator turns whistleblower in a multi-billion dollar case. Check out Michael Clayton from the law library and watch as Clayton must choose between ethics and loyalty. Read more... In Suspect a judge commits suicide and his secretary is murdered. Public Defender Kathleen Riley (Cher) is assigned to defend the homeless deaf mute who is accused of murdering the secretary. Riley begins her search for the real killer and gets help from lobbyist juror, Eddie Sanger (Dennis Quaid), and the closer they get to the truth […] Read more... In The Client, based on the John Grisham novel, an 11-year-old boy (Brad Renfro) witnesses the confession and suicide of a mob lawyer. With the mob after him and a federal attorney (Tommy Lee Jones) who wants him to tell everything he knows, he must find a way to protect himself. He hires attorney Reggie Love (Susan Sarandon), who puts […] Read more... Steven Matthews has a great post on Slaw (Canada’s online legal magazine) on how law firms fail at social media. His observations (along with his recommendations) should be required reading for anyone thinking of using social media to advance their legal practice. Read more... Pro Bono Net is a national nonprofit organization dedicated to increasing access to justice through innovative uses of technology and increased volunteer lawyer participation. Their website holds a wealth of information, from free webinars and guides to job and pro bono opportunities. Read more... Fatou Bensouda was recently sworn in as the new Prosecutor for the International Criminal Court in the Hague replacing Luis Moreno-Ocampo. Bensouda is a native of The Gambia and formerly served as its Solicitor General and Minister of Justice. She graduated from law school in Nigeria and holds a Masters of Laws from the International […] Read more... | Mid | [
0.620087336244541,
35.5,
21.75
] |
Q: Use preprocessor to print defaulted functions in c++ classes Is there a way to have the C++ Prepossessor print the code for all auto generated functions such as copy and move constructors, along with copy and move assignment operators via a command line option to perhaps g++ or clang? A: No, prepossessor is working on your source code, treating it as just a text, before c++ compilation starts and it does not perform C++ syntax analysis, it is unaware of any c++ language constructs. The output of preprocessor, which is another text, is used as input for actual c++ compilation Having said that, I want also to mention very interesting article that I read just today - Can Qt's moc be replaced by C++ reflection, which along other things also touches a bit question about reflection in C++ language and links to Call for Compile-Time Reflection Proposals. So it looks like we just need to wait a bit and what you are asking for will become possible soon :) | Mid | [
0.644705882352941,
34.25,
18.875
] |
# coding=utf-8 # Copyright 2020 The Tensor2Tensor Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Encoder for audio data.""" import os from subprocess import call import tempfile import numpy as np from scipy.io import wavfile class AudioEncoder(object): """Encoder class for saving and loading waveforms.""" def __init__(self, num_reserved_ids=0, sample_rate=16000): assert num_reserved_ids == 0 self._sample_rate = sample_rate @property def num_reserved_ids(self): return 0 def encode(self, s): """Transform a string with a filename into a list of float32. Args: s: path to the file with a waveform. Returns: samples: list of int16s """ def convert_to_wav(in_path, out_path, extra_args=None): if not os.path.exists(out_path): # TODO(dliebling) On Linux, check if libsox-fmt-mp3 is installed. args = ["sox", "--rate", "16k", "--bits", "16", "--channel", "1"] if extra_args: args += extra_args call(args + [in_path, out_path]) # Make sure that the data is a single channel, 16bit, 16kHz wave. # TODO(chorowski): the directory may not be writable, this should fallback # to a temp path, and provide instructions for installing sox. if s.endswith(".mp3"): out_filepath = s[:-4] + ".wav" convert_to_wav(s, out_filepath, ["--guard"]) s = out_filepath elif not s.endswith(".wav"): out_filepath = s + ".wav" convert_to_wav(s, out_filepath) s = out_filepath rate, data = wavfile.read(s) assert rate == self._sample_rate assert len(data.shape) == 1 if data.dtype not in [np.float32, np.float64]: data = data.astype(np.float32) / np.iinfo(data.dtype).max return data.tolist() def decode(self, ids): """Transform a sequence of float32 into a waveform. Args: ids: list of integers to be converted. Returns: Path to the temporary file where the waveform was saved. Raises: ValueError: if the ids are not of the appropriate size. """ _, tmp_file_path = tempfile.mkstemp() wavfile.write(tmp_file_path, self._sample_rate, np.asarray(ids)) return tmp_file_path def decode_list(self, ids): """Transform a sequence of int ids into a wavform file. Args: ids: list of integers to be converted. Returns: Singleton list: path to the temporary file where the wavfile was saved. """ return [self.decode(ids)] @property def vocab_size(self): return 256 | Low | [
0.517525773195876,
31.375,
29.25
] |
5. Essentials The change in setting – from tranquility, contemplation to representation – reflects, by and large, the distinctive features of the First, Second and the Third Quadrant of the quadralectic world view. These stages lead to the Fourth Quadrant, which is characterized here as the essentials. This qualification has a touch of superiority, but does not pretend to be better or ‘higher’ than the previous ones. It just wants to get things clear and straight, go into depth by touching the indispensable requirements in the relation between a way of thinking and a way of building. Those elements can be divided in the two main characteristics: space and time. The former entity (space) is an abstract unity where a division takes place, the latter (time) is an invisible image, pointing to the existence of movement (fig. 781). Every communication is ‘ruled’ by these supra-human, natural variables. Their distribution in a four-fold (quadralectic) setting is proposed as follows: Fig. 781 – A quadralectic interpretation of the main constituents of any conceivable communication. The characterization of the visibilities is derived from the duality visible – invisible. The assignment of the foursome Space/Time – Division – Time/ Space – Movement to the quadrants is a very general indication of the nature of these positions. The essentials in architecture can only be expressed if the value system of the internal relations is understood. We have to know what visibility means (and how it comes about) before any sensible word about architecture can be said. A deliberate choice in the division-environment is the primary action before any judgment of a building or urban design can be given. What is beauty? Alternatively, why are certain designs better than others? The answer to this type of fundamental questions should be found in the knowledge of a measuring system, which provides the parameters to come to a result (an opinion, a judgment). This chapter will try to convey some aspects of the abstract entities, which regulate a communication and in particular with regards to the world of architecture. The inquiry starts with the nature of Space, the great Enigma in architecture (and science in general). The journey continues to the more visible items of volume, surface and outlay (plan). The understanding of the multiplicity of architecture-as-a-whole is the final step. All approaches will be described briefly, but the main aim of this overview is the notion that any valuation of architecture is a matter of going through these motions in a conscious way. Quadralectic architecture aims at the understanding of a value system, which underlies any creative activity. The old-fashioned division in an objective or subjective approach – which is still at the roots of classical science – has to be abolished. The human mind has since Descartes’’ hypotheses in the early seventeenth century made an enormous progress. Einstein’s theory of relativity opened the doors to a wider view, adding the speed of light as a philosophical contingency. Observation found a new boundary (limit) in the invisible invisibility, which can now be measured in a proper division environment. A new cadre had to be found to accommodate the revealing insight that broke the opposition between objectivity and subjectivity. It is believed that the quadralectic philosophy can provide this width of thinking – or is, at least, a pointer in the right direction. The oppositional survival strategy is replaced by an equilibrium of four stages, providing a deeper and richer insight in the nature of our communication with the world. The journey (of life) is no longer a matter of ‘to be or not to be’, but can be calculated in terms of a concordance between ourselves and the universe in a four-fold setting. The essential understanding of architecture – and life in general – depends on the position in a divided reality. This ‘choice’ is given to every individual in a point of recognition (POR), where and when our being in the world gets its identity and historical meaning. The correspondence of these points of recognition with those of other people gives in a common understanding and realization of a mutual history. A remarkable correspondence was found between the interpretation of the (quadralectic) quadrants and the way in which Roget’s Thesaurus of English Words and Phrases (1962/1966) was organized (fig. 782). This similarity might be a case of pure coincidence, but it seems more likely that the ordering followed lines of thinking, which are similar to the four-fold way of thinking. The classification was (first?) applied in the above mentioned Penguin Edition of the early sixties and not yet in the American Signet Books edition of 1958. Peter Mark Roget (1779 – 1869), who was a medical doctor, started to collect lists of words as a hobby and grouped them when they were related. Synonyms like illegal and unlawful and antonyms, like peaceful and warlike, were joined together. The first publication of Roget’s list of words took place in 1852 and was called a thesaurus, or treasury of words. The various classes in Roget’s Thesaurus in the Penguin Edition of 1962/66 might be applied in quadralectic architecture to find its essentials. Fig. 782 – The organization of Roget’s Thesaurus of English Words and Phrases (1962/1966) contains distinct ‘quadralectic’ elements. The organization of the great quantity of words probably followed similar lines of thinking as those, which led to the interpretation of the (quadralectic) quadrants. The six classes can be brought back to a four division by joining the classes 4, 5 and 6 in the Fourth Quadrant (because they are all dealing with ‘feelings’ as subjective mind actions, in one way or another): This overview is most remarkable, because it offers in this form a perfect guideline to place the essentials of architecture in perspective. Any building of a certain stature can be ‘measured’ along the lines of these four classes and its significance rated accordingly. Another example of the quadripartite approach was found in the work of the anthropologist Mark MOSKO (1985). He studied the inter-human relations in the Bush Mekeo, a Papuan tribe living in southeastern part of New Guinea. Some anthropological questions with regards to the position of the observer (with his or her own cultural background) and the application of certain concepts in the description of another culture (structuralism) are treated in the introduction. Anthropological studies often focused on binary or dualistic forms, putting opposing entities in a frame of reference (nature/culture, sacred/ profane, male/female, etc.). Mosko acknowledged this fact, but he extended his frame of reference: ‘Categories distinguished and mutually defined as belonging to the same set systematically come in fours. Each fourfold category group is initially composed of a single binary opposition, which is itself bisected by its own inverse or reverse’ (p. 3). Mosko illustrated the structure of bisected dualities, which systematically underlie the category distinctions of Bush Mekeo culture, in their notion of the relation between village and bush, inside and outside, resource and waste, etc. There is a transfer of things between village and bush, where the latter is seen – curiously enough – as the ‘inside’ (resources) and the former as ‘outside’ (waste producing). However, a notion of a reversal or inversion of each accompanies this opposition in the mind of the Mekeo people. The outside village has its own inside place (i.e. an inverted outside), and the inside bush has its own outside (i.e. a reverted inside). A Bush Mekeo village normally has a central, elongated open space (abdomen), and the dwellings are arranged in parallel rows (fig. 783). This rectangle empty space is the ‘inside’ of the outside (the inverted outside). The spatial categories in this seemingly ‘primitive’ human society provide a model for an approach to ‘space’ in modern architecture. City development has to deal – at least in a quadralectic approach – with four types of space consciousness. The invisible invisibility of space is present as a notable aspect in the other quadrants. Fig. 783 – The sphere of ordinary transfers in and around a Bush Mekeo village, Papua New Guinea as described by Mark MOSKO (1985). The bush is in the view of the villagers seen as the ‘inside’, while the village is the ‘outside’. Goods (food) are brought in from the inside of the (remote) bush to the outside (peripheral) of the village (a). After consumption the waste is collected on the inside of the village abdomen (b) and returned to the (adjacent) bush (c). A city can be regarded – just like the Bush Mekeo did – as an ‘outside’ place, ‘feeding’ on the countryside (bush) and bringing its waste back to nature. A city square becomes in that vision a new ‘inside’, a place of ‘production’ (of community feeling?). These options of reversal should be kept open, even if the contemporary city sees itself often as the ‘inside’ place. The city is, in a modern view, the place of ‘production’, and the countryside is just the ‘outside’ to dump the waste. The city square (or park) is in this urban centrism a reminiscence to nature, the outside place. In particular the dog owners understand and utilize this modern function of an open space within the city limits, as can be observed in any city park today. The essentials of architecture are illustrated here in a very compact form. The message is the understanding of four types of relations to a building (or a group of buildings, like a village or a city). Every architectonic entity has its boundaries – and subsequent value – in a ‘double’ interaction, taking place between the participants of the communication. A particular awareness of space was found in the conception of a village plan by the Batak people, living in the Toba region of Sumatra (Indonesia). A drawing in a bius pustaha (accordeon book of bark with a wooden cover) shows the concentric conception of space with the bindu matoga motif, consisting of two squares turned over forty-five degrees (fig. 784). The same motif, now called a mandoedoe, is used as a warding off of a bad omen and obtain happiness and bliss. The magic figure is drawn on the ground in yellow, white and black flour close to the entrance of the house, where the mandoedoe is to be held by the wizard (SCHNITGER, 1939). The (basic) four posts to support a house are orientated with respect to the cardinal directions or facing a mountain. The substructure of the house consists of wooden pillars resting on flat stones as protection against damp. The number of pillars varies from six to eight lengthways. The front of the house is made of two transverse rows of pillars to support the entrance through a trap-door. The peak of the house is made half circle. The interior of a Toba Batak house is divided into four main parts called ‘jabu’, which have the same principles of ordering as the erection of the house-posts and holds a similar ideology (NIESSEN, 1985; p. 256-258). The four-fold way of thinking (in anthropology) was also noted by van FRAASSEN (1987) in his study of the social organization on the island of Ternate in the Indonesian archipelago. This island played a key-role in the priority question of the evolution theory by Charles Darwin and Alfred Wallace. The latter wrote down his ideas about the ‘survival of the fittest’ in Ternate when he was ill of malaria. His letter to Darwin was sent on March 9, 1858, when Darwin prepared his ‘Origin of Species’ (published one-and-a-half year later in November 1859). In how far Darwin used Wallace ‘law’ of natural selection and altered his manuscripts is still a matter of debate (BROOKS, 1984). Van Fraassen noted a possible geographical origin of the four-fold division on the island Ternate (fig. 785). The so-called soa’s (social units, the equivalent of the Maleian kampung) consisted of the villages Soa Sio, Sangaji, Héku and Cim. The etymological and semantic aspects are less clear. The term bobula raha (the four parts) or ampatpihak (four sides, four categories, and four groups) was used, but not very often. Van Fraassen (I, p. 381) noticed the absence of a general term to indicate the four parts of the society, and concluded that the (four) division was at present less important. Fig. 785 – This map indicates the territorial four-division of the island of Ternate at the beginning of the seventeenth century (given by van FRAASSEN, 1987). The four division is more prominent in texts where the four parts are related to the geographical areas and used in the same sequence. Sao Sio and Sangaji are mentioned together, just as Héku and Cim. The latter words mean ‘upper’ and ‘lower’ and are related to the directions of the winds. Inversions are possible (in relation to the island of Halmahera or in the history of Ternate itself) and the direction can have a metaphorical sense. The two-division (as a conceptual setting) also becomes visible (within the four-division) as a dominance of one of the members of the pairs. Sao Sio is slightly dominant over Sangaji and Héku over Cim. The following scheme expressed the distribution of political power: Sao Sio + Héku ————————– Sangaji + Cim This complementary opposition can be followed in the history of the island group of the Molucs (Maluku, Indonesian archipelago). There were four political powers concentrated on the following islands: 1. Ternate; 2. Tidore; 3. Jailolo and 4. Bacan. They teamed up in much the same way as the smaller power units on the island of Ternate did (only in a different sequence). Ternate and Tidore were rivals (and as such ‘locked together’) just as the political partners Jailolo and Bacan. The scheme would be as follows: Ternate + Bacan ——————————- Tidore + Jailolo The sense of opposition is even more visible in the architecture of the forts (Benteng) Toloko and Kaya Merah, who depict in their plan in a straight-forwards way the male and female genitals (left and fig. 786). . Fort Toloko (Photo: Garuda Magazine). The former fort (male orientated, left) was built by the Portuguese naval general Aphonso de Albequerque in 1512 and the latter (female orientated, right) by the English in 1518. Fort Oranje, built by the Dutch in 1607, is the largest fort on the island Ternate, following a rectangular design. Fig. 786 – The forts (Benteng) Toloko and Kayu Merah (Red Wood) are situated respectively north and south of the city of Ternate on the island of the same name. A sexual symbolism is obvious. Fig. 787 – The design for a House of Pleasure in Montmartre (Paris) by the French architect Ledoux, created in 1787. This type of symbolic architecture was repeated by the French architect Ledoux, who gave his House of Pleasure in Paris (fig. 787) and later his Oikèma in Chaux (fig. 788) a phallic layout (VIDLER, 1990; LISS, 2006). These explicit efforts, with admittedly little followers, are the extreme specimens of dualistic architecture. The first (colonial) examples, by the Portuguese and British, were built in the middle of the Third Quadrant – at the Pivotal Point (1500) – of the European cultural history. The later (French) examples did not materialize, but remained in the planning stage at the end of the Third Quadrant (1800). Fig. 788 – The phallic plan of the Oikèma or a temple dedicated to libertine pleasures was designed by Ledoux for the city of Chaux. To the left: the basement plan, to the right: the ground floor. The connection of architecture and sexuality is also evident in the buildings of the Dogon, an African ethnic group living in Mali, south of the River Niger. The sculptural art of the Dogon was often hidden in houses, sanctuaries or kept by the Hogon (spiritual leader) in order to protect their symbolic meaning. The animist religion of the Dogon people is focused on ancestral spirits (like Nommo, a twin pair from the God Amma born in the second creation) and the star Sirius seems to play an important role (although recent investigations put question marks by the possibility of a Dogon astrology based on scientific facts, which cannot be seen with the naked eye). The differentiation in gender is a far better documented element in the Dogon society. This division (in male and female) is a typical sign of lower division thinking – as it is known from the people on the eastern side of the Mediterranean. A typical house is modeled on the human female form, with a round kitchen (respiratory organ) as head of the house. The living room (body) has two storage rooms on either side. The house has no windows and is therefore dark (and cool) inside. The obsession with sexuality (as manifested in fig. 789) is the result of psychological forces, which belong – in a quadralectic interpretation – in the Third Quadrant of a communication. It is remarkable, that the Dogon fled for the Muslims to the remote places along the escarpment of the Dogon plateau, for the very reason of their oppositional thinking. The beauty of the Dogon architecture seems to have a link to this form of division thinking. A preliminary conclusion can be that architecture needs this component of antagonism. It could also explain why many of the great modern architects have their roots in geographical areas were oppositional thinking is dominant. Fig. 789 – The Mosque of Sangha in the Dogon valley displays male sexual symbols, reflecting the emphasis on a division in gender in the patriarchal Dogon culture. Architecture in mud, like it was described by DETHIER (1981), has resulted in explicit architecture in other arid areas as well (fig. 790). Fig. 790 – A traditional house for the celibataires (celibates) of the Bozo tribe, in the Mopti region of Mali, has several tetradic features. A four-fold notion of the personality was described by Jean-Paul LEBEUF (1978) in the Likouba tribe in the northeast of the Republic of Congo Brazzaville. These people see the personality (nzoto) as consisting of four elements, the masotu, the molimo, the elimo and the elilingi, coming together in life (bomoy). The masotu is the perishable body, connected with the earth. The molimo is the invisible temperament, providing the (creative) thoughts, which influence the movements in life. When the molimo leaves the body, man dies. It is connected with water. The elimo is the invisible and immaterial soul, which stimulate action in much the same way as the molimo. It is connected with air. Molimo and elimo are closely connected as spiritual entities, giving a consciousness and an intellectual power. Finally, elilingi is the shadow of the body, which appears at birth and finishes after death, and is a reflection of the molimo and elimo of the individual. A connection with fire – creating the shadow – is made. A quadralectic interpretation of the tetradic personality of the Likouba would be: I. First Quadrant – elilingi (fire); II. Second Quadrant – elimo (air); III. Third Quadrant – masotu (earth) and IV. Fourth Quadrant – molimo (water). Their division also includes the human body, which has four principal parts: the head (motu), the upper extremities (maboko), the trunk (moy) and the lower extremities (makolo). Leboeuf does not mention any relation with architecture, but he noticed a close correspondence of the material and spiritual aspects of the human personality. The four elements and their representations on a human level provide for the Likouba the immortal framework of their existence and shape their character, their aptitudes, attitudes and sentiments. It is a clarifying matter that these so-called ‘primitive’ people, living as fishermen and hunters in the tropical forests of the Congo, stage a self-knowledge, which is highly advanced in terms of division thinking. It proves that a cultural content can be present in a group of people, even if the material proof of their enlightenment is lacking. This conclusion was one of the major findings of the (field)work done by Claude Levi-Strauss (1908 – 2009). His notion, that the Western civilization was neither superior nor unique, is still valuable. Levi-Strauss emphasized the mutual influences in a communication between representatives of different cultures: the observer and the observed are part of a reciprocal interaction, which partakes in a continuous process. Maybe the continent of Africa, with its relative lack of ancient architecture, should be seen in a different light. ‘Invisible’ values (like story-telling and myths) are rated higher than or are equal with the material world. This approach to the people of Africa – drifting away from the basic principle of many western scientific researchers, with their interest in the empirical traces of a culture – could change our view to the continent. The essential element in quadralectic architecture, all over the world, is the abstract experience of a four-fold division, surpassing the opposition of the two-fold and the indeterminable character of the three-fold. The ‘neutrality’ of the quadralectic approach also rules out any feelings of superiority. The only measure in a communication is the degree of understanding between the contributing partners. The word ‘architecture’ is allowed, at this stage, to go beyond its most common meaning as ‘the process of building with an aesthetic appeal’. The essential understanding of architecture includes all the forms of organization in a particular division environment. The emphasis of building (with physical materials) can shift to the organization of the building process (with conceptual elements) and even further to a type of ‘philosophical architecture’ (in which the cognitive elements question their own identity). One has to realize that ‘architecture’, in a quadralectic setting, moves through all the different quadrants and its meaning is related to the prominent type of visibility in a particular subdivision. Architecture in the First Quadrant (I) is in the realm of the invisible invisibility with its unknown properties. One could minimize the relevance of this type of architecture, because there is no substantial proof of its existence, but that would be wrong. In fact, it is very important to account for this ‘empty space’ in a communication, since it gives the other types of visibility room to maneuver. Architecture of the Second Quadrant (II) emerges in a field of general ideas about habitation. Every man aims at a place of its own and the characterizing of such a place can have many faces. The specter runs from a realization of an identity in a psychological setting to the physical presence in a personal living place. This type of architecture runs from the partly visible to the greatest highlight of creative possibilities (when the CF-graph reaches its lowest value of CF = 6). Architecture of the Third Quadrant (III) is the common-classical architecture as it is known in daily life, referring to the realization of a building from its earliest conception to its material finalization The descriptions in hand-books mainly deal with this type of historical architecture. Style periods are distinguished – often in an oppositional environment – by pointing to certain obvious architectonic features. Architecture of the Fourth Quadrant (IV) is the quadralectic architecture proper. The quadralectic model – with the CF-graph as its visible representation – is accepted as a guide into the interpretation of structures in the widest sense. Most important is a relative feeling of the essential. | Mid | [
0.630681818181818,
27.75,
16.25
] |
Jenny Nordberg’s The Underground Girls of Kabul, published Sept. 16, is the result of five years of research into why it’s not uncommon for girls in Afghanistan to be brought up as boys. Nordberg, an investigative reporter, discovered the practice in 2009, and detailed it in a story for The New York Times. The Underground Girls of Kabul explores the reasons for, and the consequences of, this longstanding practice, which has affected many Afghan girls and women. It also offers a glimpse into the situation for women there, which remains precarious. What happens to such a person, Nordberg wondered, when they relocate to a society that values women more, and there is no longer a need to hide? She recently connected with another young Afghan woman, now living in the U.S., who once passed as a boy in her home country. Exclusively for TIME, this is the story of Faheema. ** Liberating. That’s how it felt, walking out the door for the first time as a boy. I was 12. I was no longer Faheema, who needed to be proper and watch her every move, but Faheem, who had guts and could go where he wanted. That was my right as a bacha posh—from Dari, it translates to, “dressed up as a boy.” It’s what they call girls who live their lives disguised as boys in Afghanistan. And I suppose those who eventually become boys on the inside, too. My family had returned to Kabul after the Taliban, and in 2002, society was so much more conservative there than in Pakistan, where we had lived as refugees. Girls were looked down upon, and being one was made very difficult. With short hair and in pants, I found that no one would look at me on the street, or harass me. I did not have to wear the scarf. I could look people in the eyes. I could speak to other boys, and adult men too. I did not have to make myself smaller by hunching over. I could walk fast. Or run, if I felt like it. In fact, I had been brought up as a boy—I just didn’t look like one at first. At home, I was the one who got things done. We were carpet weavers, and I ran the family enterprise from our house. Seven other, younger, children took orders from me. My parents often told me they wished I had been born a boy. They have said it for as long as I can remember; my father in particular. It would have made more sense, he said, since I was a harder worker than any of my brothers. Even while living as a girl, I tried to do everything Afghan society and culture said I couldn’t do. I became strong. I took responsibility. I educated myself and my siblings. I helped my father with his guests and all the technical work at the house. But I still felt inadequate. Most bacha posh in Afghanistan are made that way by their parents. But my story is different. One day I made the decision for myself to switch. I gave them what they asked for. It worked. The attitude, the lowered voice; how I moved with more confidence. I could disappear in a crowd. The more divided a society, the easier it is to change the outside. Others bought it. It shocked me that I could trick those harassing eyes just by how I looked. Being a boy allowed me to function as a more of a whole person in society. It was practical. I could protect my sisters, and escort them to class in winter. It pleased my parents, too. At least they did not protest. I spent nine years as a boy. I continued trying to please my parents like that until a few years ago, when I came to a small town in America to go to college. My turning point was when I started thinking about being a woman. Why should I need to hide? Could I not have the same pride, and the same abilities, as a girl? Why did only my male self have that strength? I had been so proud to be a boy, in that I had figured it out and outsmarted everyone. That I had won. But I began feeling more and more angry. I was like, “How long will I have to do this?” To be honest, I had always thought of being a bacha posh as my own choice; that I was doing something also for myself, and of my own free will. But that was not entirely true, I realize now. My parents’ wish for me to be a boy forced me to become one. I took it too literally. So a few years ago, I wanted to try and accept myself as a girl. I knew it was inevitable at some point anyway. By then, I was 18 but I still had no breasts, and my periods were irregular. When my mother had sought out a doctor in Kabul, he said that my psyche may be turning into that of a man’s. It scared her. She worried I may never be able to turn back. It was hard. I began letting my hair grow out. Now it’s almost all the way down to my waist. I also went to see a psychologist at my university. We talked about what is male and what is female in me. I don’t know what normal is, but I am not as angry anymore. The differences between men and women exist here too, but there is no need for me to pretend to be a man in order to go outside, or to count as a full person. In some ways America is a conservative society too, and it’s so important for many people to be either male or female. I have both in me now and that’s how I’ll always be. I think often about what it means. Being a man gives you so many privileges, you don’t see the small things. You own the world and everything is yours. As a boy, I was very busy thinking of everything I needed and wanted. That’s what you do. You just don’t take much of it in. You focus on yourself. A lot is expected of you as a man, so you have to. As a woman, you see more. You notice what’s around you. To me, that is the essence of it. You relate to others. As a woman, I have a soft core that melts with everything. As a woman, I can feel what others feel. I see what they see. And I cry with them. I think of that as the female in me. I allow that now. I’m in my twenties now, and I don’t expect to live long. A woman’s average life in Afghanistan is 44 years, so I’m halfway done. I would like to stay here and become an anthropologist, but my American visa expires in a few months, and then I have to return home. My father still only accepts me as a boy, not as a girl. We talk on Skype: He is a macho colonel in Afghanistan who calls me every day. Like my close friends, he is still allowed to call me by my boy name. But I know now that both my family and much of my society was wrong in saying that only boys can do certain things. They are the ones who don’t allow girls to do anything. I have complicated feelings about the freedom I have here in the West. It’s borrowed. It’s not really mine. Deep down you know it’s going to be taken away at any moment. Just like that of a bacha posh. As told to and edited by Jenny Nordberg. Contact us at [email protected]. | Mid | [
0.587677725118483,
31,
21.75
] |
The production and processing of plastics is characterized by constantly increasing demands on the quality of individual products. To meet these quality requirements, it is necessary to keep precisely to predetermined specifications particularly in regard to the concentration of individual constituents in plastic mixtures. In the context of the invention, plastic mixtures may be inter alia copolymers, polymer blends or filler- or fiber-reinforced plastics. Another field of application of the invention is the determination of additive concentrations. The additives in question are, for example, mold release agents or plasticizers. The concentrations of mold release agents are typically less than 1%. Nevertheless, their exact percentage content by weight in the plastic mixture often has to be determined. There are already various known spectroscopic processes for determining the composition of plastic mixtures. In one frequently used process, a solution of the plastic is initially prepared and a transmission spectrum of the solution is then run. On the basis of absorption bands which are characteristic of the components to be determined, the concentrations of those components can be determined after calibration using the Lambert-Beer law or more recent evaluation methods. However, this process is attended by the disadvantage that there are no known solvents for certain modern high-performance plastics, for example polyphenylene sulfide, at temperatures below 150.degree. C. In addition, U.S. Pat. No. 4,717,827 describes a process in which the transmission measurement of the plastic to be analyzed is carried out on a molten sample. However, this process is suitable above all for the on-line production control of a product, but less suitable for the measurement of a number of different samples in a short time because cleaning and refilling of the measuring cell are relatively complicated. The most advantageous method of analyzing a number of different samples is to determine the transmission spectra of pressed films of the plastic mixtures to be analyzed. Pressed films are obtained in known manner by the compression-molding of a plastic mixture at a compression mold temperature above the melting temperature of the mixture to be analyzed. Their thickness is typically between 1 and 500 m. In addition, test specimens produced by injection molding with typical wall thicknesses of 0.1 to 100 mm may also be used for spectroscopic measurements. Since both pressed films or injection moldings and also the analysis beam of commercially available spectrometers often show an anisotropies or inhomogeneities, the spectra of these plastic films are subject to considerable variations which have a correspondingly adverse effect on the accuracy of the concentration measurement of individual components in the plastic mixture. The problem addressed by the present invention was to provide a process for the analysis of plastic mixtures with increased accuracy by transmission spectroscopy using solid samples. | High | [
0.703703703703703,
33.25,
14
] |
Knesset Member Yaacov Katz (National Union) on Tuesday said that if he was appointed defense minister, no infiltrator would be killed. "We must arrest all Bedouins, put them in pounds and shoot a bullet to the head of those who head the convoys. Three or four Bedouins will be shot down, and the convoys will stop. Put all the Bedouins in a pound and the smuggling will stop as well," asserted Katz. Fighting Back Refugees: We're not Israel's diseases Roi Kais Some 1,300 people rally in Tel Aviv against plan to set up detention facility for infiltrators near Israel-Egypt border. 'Netanyahu, don't be a racist,' they chant. Schoolchildren join march, say 'refugees have rights too' Refugees: We're not Israel's diseases "This phenomenon can be stopped in one day, without killing a single infiltrator. I blame the prime minister, who has yet to instruct the defense minister to stop the demographic terror. "According to findings, a million and a half infiltrators are on their way to Israel. The Jewish residents of Tel Aviv are turning into refugees. We in Judea and Samaria are not prepared to absorb all of Tel Aviv's refugees," the National Union MK added. MK Katz noted that "had the prime minister realized the catastrophe, he would have declared a state of emergency and appointed a defense minister that can solve the problem." He also dismissed all of the solutions implemented so far, and detailed his own plan of action: "Declare a state of emergency, so that the Supreme Court and the leftist battalions cannot get in the way," he said. 'Pushed by Bedouins' The conference was attended by MKs, local government officials and residents of Tel-Aviv's southern neighborhoods, where the struggle against infiltrators has been gaining steam. Eilat Mayor Meir Yitzhak Halevi noted that more than 6,000 refugees were being "pushed into Israel by Bedouins from Sinai," and called bleeding hearts to "come to their senses. "These are not refugees from Darfur. These are illegal infiltrators. Having the border wide open in a sovereign state is fundamentally wrong," he lamented. Halevi noted that Egypt has between a million and a million and a half work infiltrators, who consider Israel a goal to conquer. "No one is promising me that this is not a terrorist scheme," he said. Kadima MK Yulia Shamalov-Berkovich noted that "we have the tendency to be holier than the pope, but we keep forgetting that this is a very small country, and the only one for the Jewish people. "Even if it seems like xenophobia and that we aren't humane by deporting foreigners – its time to put things on the table. I suggest that they write us and say 'we support the infiltrators, and we've allocated a room for them in our yard'," she said while referring to the infiltrators' supporters. MK Nissim Zeev (Shas) said that the Muslims are threatening to concur the world. "That is the Islamic policy, and we must stop this epidemic with all force. "I wish it was under the jurisdiction of the Interior Ministry. Then we would have already taken care of it," he added. MK Michael Ben-Ari (National Union) said the conference was organized in large part because of the neighborhood residents. "The goal of this lobby is to expose one of the facets of the illegal infiltration phenomenon. They speak of the rights of the infiltrators while scrapping the rights of the residents of this country. That’s why this lobby aims to protect the victims of the illegal infiltration," he concluded. | Low | [
0.501118568232662,
28,
27.875
] |
The overall goal of this project is to define the molecular events involved in the transformation of low-grade lymphomas to more aggressive forms. As a first approach, we have studied a series of recurrent follicular lymphomas to assess the status of their immunoglobulin and bcl-2 restriction patterns over time. We have found that immunoglobulin gene restriction pattern changes occur frequently (30% of the cases studied). Changes in the bcl-2 gene restriction fragments were not observed. We believe the changes in the immunoglobulin genes may be due to the normal physiologic "mutator" function which gives rise to somatic mutation in B cell development. If a cell containing a mutation at a restriction enzyme site undergoes a second genetic event that imparts a growth advantage, the first mutation will become detectable as a restriction pattern change. In this way, changes in the immunoglobulin genes may serve as a marker for subsequent growth promoting mutations. We have begun to investigate potential second events which could impart a selective growth advantage to a lymphoma cell or result in aggressive transformation of a lymphoma. To this end, we are currently analyzing the potential role of several oncogenes and anti-oncogenes. We have found retinoblastoma gene abnormalities at the DNA and RNA levels in several different types of lymphomas and are currently attempting to extend these findings by studying expression at the protein level in a collaborative effort with Dr. William Benedict. We have also surveyed a large series of lymphomas with probes for several other genetic loci associated with disease progression and/or high grade histology (BCL-3, BCL-4), but have found only sporadic involvement of these other loci. | High | [
0.6876640419947501,
32.75,
14.875
] |
A time-based potential step analysis of electrochemical impedance incorporating a constant phase element: a study of commercially pure titanium in phosphate buffered saline. The measurement of electrochemical impedance is a valuable tool to assess the electrochemical environment that exists at the surface of metallic biomaterials. This article describes the development and validation of a new technique, potential step impedance analysis (PSIA), to assess the electrochemical impedance of materials whose interface with solution can be modeled as a simplified Randles circuit that is modified with a constant phase element. PSIA is based upon applying a step change in voltage to a working electrode and analyzing the subsequent current transient response in a combined time and frequency domain technique. The solution resistance, polarization resistance, and interfacial capacitance are found directly in the time domain. The experimental current transient is numerically transformed to the frequency domain to determine the constant phase exponent, alpha. This combined time and frequency approach was tested using current transients generated from computer simulations, from resistor-capacitor breadboard circuits, and from commercially pure titanium samples immersed in phosphate buffered saline and polarized at -800 mV or +1000 mV versus Ag/AgCl. It was shown that PSIA calculates equivalent admittance and impedance behavior over this range of potentials when compared to standard electrochemical impedance spectroscopy. This current transient approach characterizes the frequency response of the system without the need for expensive frequency response analyzers or software. | High | [
0.6601671309192201,
29.625,
15.25
] |
Q: Hard to select TextInput entry in Android I don't know why its so hard to select the input in Android, I keep pressing the TextInput but the Select all, paste or copy option is not showing up whether the input is empty or not. In iOS it is working perfectly! <View style={styles.reporterView}> <Text style={styles.reporterText}>Reporter:</Text> <TextInput value={this.state.reporter} onChangeText={(reporter) => this.setState({reporter})} style={styles.reporterField} placeholder={'Name'} autoCorrect={false} textAlign={'left'} maxLength={20} scrollEnabled={false} /> </View> reporterView: { flex: 1, flexDirection: 'row', marginBottom: 25, paddingTop: 15, paddingLeft: '22%' }, reporterField: { flex: 1, fontSize: 22, height: 55, marginLeft: 30, marginTop: Platform.OS == 'android' ? -12 : 0 } A: This looks like a common issue with React-Native and Android. I found a solution from the support pages that seems to work quite well. removeClippedSubviews is set as true by default. so you have to set this to false These are possible paths to find this and set to false; ..\node_modules\react-navigation-material-bottom-tabs\node_modules\react-navigation-tabs\src\views\ResourceSavingScene.js ..\node_modules\react-navigation-tabs\src\views\ResourceSavingScene.js ..\node_modules\react-navigation-drawer\dist\view\ResourceSavingScene.js ..\node_modules\react-navigation-material-bottom-tabs\node_modules\react-navigation-tabs\dist\views\ResourceSavingScene.js ..\node_modules\react-native-paper\src\components\BottomNavigation.js -- here is the code to use removeClippedSubviews={ // On iOS, set removeClippedSubviews to true only when not focused // This is an workaround for a bug where the clipped view never re-appears Platform.OS === 'ios' ? navigationState.index !== index : true //<-- set this to false } Reference: https://github.com/facebook/react-native/issues/25038 Below are other solutions on Git and stackoverflow. Let me know if any of these help. Stackoverflow: Enable paste and selection within TextInput - React Native Git: https://github.com/facebook/react-native/issues/20887 | High | [
0.6585034013605441,
30.25,
15.6875
] |
Q: RegisterDeviceNotification Getting WM_DEVICECHANGE I want to catch message of WM_DEVICECHANGE.But, there is a problem which i can not understand.I want to see when usb or cd inserted.Maybe my notification filter is wrong. I m using radstudio and the language of its c,also its commandline application.I think everything is obvious in code.What am i doing wrong,i created window for only getting messages.Also i did not understand how it message going to WndProc from message loop. #pragma hdrstop #pragma argsused #include <stdio.h> #include <tchar.h> #include <windows.h> #include <dbt.h> LRESULT CALLBACK WndProc(HWND hWnd, UINT uiMsg, WPARAM wParam, LPARAM lParam) { switch (uiMsg) { case WM_DEVICECHANGE: { MessageBox(0,"a","b",1); } } } int _tmain(int argc, _TCHAR* argv[]) { BOOL bRet; HANDLE a; HWND lua; HANDLE hInstance; MSG msg; WNDCLASSEX wndClass; HANDLE hVolNotify; DEV_BROADCAST_DEVICEINTERFACE dbh; DEV_BROADCAST_VOLUME NotificationFilter; lua = CreateWindow("lua", NULL, WS_MINIMIZE, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL); wndClass.lpfnWndProc = WndProc; ZeroMemory(&NotificationFilter, sizeof (NotificationFilter)); NotificationFilter.dbcv_size = sizeof (NotificationFilter); NotificationFilter.dbcv_devicetype = DBT_DEVTYP_VOLUME; a = RegisterDeviceNotification(lua,&NotificationFilter,DEVICE_NOTIFY_WINDOW_HANDLE); while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0) { MessageBox(0,"o","b",1); if (bRet == -1) { } else { TranslateMessage(&msg); DispatchMessage(&msg); } } } A: What am i doing wrong,i created window for only getting messages. You are asking CreateWindow() to create a window of class "lua" but you have not actually registered the "lua" class via RegisterClass/Ex() before calling CreateWindow(), and you are not checking to see if CreateWindow() returns a NULL window handle on failure. Also i did not understand how it message going to WndProc from message loop. That is handled by DispatchMessage(). You need to assign wndClass.lpfnWndProc and register it with RegisterClass() before calling CreateWindow(). Afterwards, when DispatchMessage() sees a message that targets the window created by CreateWindow(), it knows that WndProc() has been associated with that window and will call it directly, passing it the message. Try this instead: #pragma hdrstop #pragma argsused #include <stdio.h> #include <tchar.h> #include <windows.h> #include <dbt.h> LRESULT CALLBACK WndProc(HWND hWnd, UINT uiMsg, WPARAM wParam, LPARAM lParam) { if (uiMsg == WM_DEVICECHANGE) { MessageBox(NULL, TEXT("WM_DEVICECHANGE"), TEXT("WndProc"), MB_OK); return 0; } return DefWindowProc(hWnd, uiMsg, wParam, lParam); } int _tmain(int argc, _TCHAR* argv[]) { HINSTANCE hInstance = reinterpret_cast<HINSTANCE>(GetModuleHandle(NULL)); WNDCLASS wndClass = {0}; wndClass.lpfnWndProc = &WndProc; wndClass.lpszClassName = TEXT("lua"); wndClass.hInstance = hInstance; if (RegisterClass(&wndClass)) { HWND lua = CreateWindow(wndClass.lpszClassName, NULL, 0, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, NULL, NULL, hInstance, NULL); if (lua != NULL) { DEV_BROADCAST_VOLUME NotificationFilter = {0}; NotificationFilter.dbcv_size = sizeof(NotificationFilter); NotificationFilter.dbcv_devicetype = DBT_DEVTYP_VOLUME; HDEVNOTIFY hVolNotify = RegisterDeviceNotification(lua, &NotificationFilter, DEVICE_NOTIFY_WINDOW_HANDLE); if (hVolNotify != NULL) { MSG msg; while( GetMessage(&msg, NULL, 0, 0) > 0 ) { TranslateMessage(&msg); DispatchMessage(&msg); } UnregisterDeviceNotification(hVolNotify); } DestroyWindow(lua); } UnregisterClass(wndClass.lpszClassName, hInstance); } return 0; } For added measure, you can use CreateWindowEx() instead of CreateWindow() to create a message-only window instead, if desired: HWND lua = CreateWindowEx(0, wndClass.lpszClassName, NULL, 0, 0, 0, 0, 0, HWND_MESSAGE, NULL, hInstance, NULL); | Mid | [
0.549763033175355,
29,
23.75
] |
File photo of Oregon Court of Appeals judge Chris Garrett. Courtesy of the Office of the Governor Oregon Gov. Kate Brown has appointed appeals court judge Chris Garrett to the state's supreme court. THANKS TO OUR SPONSOR: Become a Sponsor Garrett is taking over for Justice Rives Kistler, who’s retiring at the end of the year. This marks Brown's fifth appointment to the state's highest court. “Judge Garrett is a talented, thoughtful, and even-keeled jurist who is passionate about Oregon’s courts and the rule of law,” Brown said in a statement Monday. “He brings to our high court the experience of a respected civil litigator, an effective state legislator, and a productive appellate judge. His brilliant mind and collegial style will be tremendous assets to the court and the people of Oregon.” Garrett wrote the appeals court opinion upholding the state’s Bureau of Labor and Industries fine against a Gresham bakery that refused to sell a wedding cake to a lesbian couple. He also served as a Democrat in the Oregon House from 2009-2013, representing Lake Oswego and southwest Portland. Garrett spent a decade practicing law at Perkins Coie in Portland. His appointment takes effect on January 1. He'll be up for election in November 2020. THANKS TO OUR SPONSOR: Become a Sponsor | High | [
0.674388674388674,
32.75,
15.8125
] |
Beach volleyball at the 2010 Central American and Caribbean Games The Beach volleyball competition at the 2010 Central American and Caribbean Games was held in Cabo Rojo, Puerto Rico. The tournament was scheduled to be held from 23–27 July at the Cabo Rojo Beach Volleyball Field in Porta del Sol. Medal summary Men's Tournament Preliminary Round Group A Group B Group C Group D Playoffs First Round Second Round Repechage Semifinals Finals Bronze Medal Match Gold Medal Match Women's Tournament Preliminary Round Group A Group B Group C Group D Playoffs Quarterfinals Semifinals Finals Bronze Medal Match Gold Medal Match References External links Official Website Category:Events at the 2010 Central American and Caribbean Games Category:Beach volleyball at the Central American and Caribbean Games Central American and Caribbean Games | Mid | [
0.6365979381443291,
30.875,
17.625
] |
Yeh Wei-tze Yeh Wei-tze (born 20 February 1973) is a Taiwanese professional golfer. Yeh was born in Taipei. He turned professional in 1994 and played on the Asian Tour during his early career. In 2000 he won the European Tour co-sanctioned Benson and Hedges Malaysian Open to earn a two-year exemption on that tour. The win also helped him to second place on the Davidoff Asian PGA Order of Merit that season. In 2003 Yeh joined the Japan Golf Tour. He has won twice on that tour, the 2003 ANA Open and the 2006 Sega Sammy Cup. Professional wins (5) European Tour wins (1) ^Co-sanctioned by the Asian Tour Japan Golf Tour wins (2) Taiwan Tour wins (2) 2005 Taifong Open 2007 Taifong Open Team appearances Amateur Eisenhower Trophy (representing Taiwan): 1990, 1992 External links Category:Taiwanese male golfers Category:European Tour golfers Category:Japan Golf Tour golfers Category:Asian Games medalists in golf Category:Asian Games silver medalists for Chinese Taipei Category:Golfers at the 1994 Asian Games Category:Medalists at the 1994 Asian Games Category:Sportspeople from Taipei Category:1973 births Category:Living people | Mid | [
0.605405405405405,
28,
18.25
] |
Q: Visual Studio 2012 source control I currently use team foundation server as a source control, however, its database located on my computer, and I want it to be somewhere on the internet, on a small freeware cloud perhaps. Is there any free source control systems, which has build-in VS2012 support and free repository storage? What are they? I have googled a bit, but it is a little hard to detect is the system supports internet-hosted repositories and VS2012 support. I need opinions of real users. A: One option is Microsoft's Team Foundation Service, which is hosted TFS and free for small teams. | High | [
0.6906474820143881,
36,
16.125
] |
Five people held for questioning over the Bastille Day truck attack in Nice are due to appear in court. Four men and one woman, aged between 22 and 40, will attend a court in Paris on Thursday over their alleged links to Mohamed Lahouaiej-Bouhlel, who drove a truck into a crowd on the promenade in Nice, killing 84 people. Among the group are a 40-year-old who Bouhlel had known for a long time and a 38-year-old Albanian man detained along with his girlfriend and suspected of providing the Tunisian attacker with an automatic pistol. A 22-year-old man who received a text from Bouhlel shortly before the rampage will also appear in court, along with a man who had been in contact with Bouhlel over weapons. A source close to the case said police had found a Kalashnikov rifle and a bag of ammunition in the basement of one of the men held in connection with the attack. About 100 investigators are poring over masses of data linked to the probe, and photos found on Bouhlel’s cellphone indicate he was studying several locations where crowds gathered. One photo concerns a fireworks display on August 15, another a race on January 10 along the Promenade des Anglais where the attack took place, and another showed the opening times of the fan zone during the Euro football tournament. Jean-Pascal Padovani, the lawyer for the 22-year-old suspect, has denied “any implication in a terrorist act” by his client. Like Bouhlel, none of those detained were known to French intelligence prior to the attack. Their court appearance comes as the French government tries to reassure its citizens following the country’s third major attack in 18 months. On Thursday, the senate is set to pass a bill extending the state of emergency, which gives police extra powers to carry out searches and place people under house arrest, for six months. It is the fourth time the security measures have been extended since Islamic State attacked Paris in November last year, killing 130 people. On Wednesday, French MPs voted to allow the authorities to search luggage and vehicles without prior approval from a prosecutor, and to permit police to seize data from computers and mobile phones. The legislation makes it easier for authorities to shut down places of worship suspected of encouraging extremism. Following the attack, Isis said Bouhlel was one of its “soldiers”, but while he had showed a recent interest in jihadi activity, there was no evidence that the driver acted on behalf of Isis, according to investigators. | High | [
0.6728476821192051,
31.75,
15.4375
] |
Q: How should an application store its credentials Context When developing desktop applications, you will occasionally have to store credentials somewhere to be able to authenticate your application. An example of this is a Facebook app ID + secret, another one is MySQL credentials. Storing these plain text in the applications source code doesn't provide any true security since it isn't too much hassle to reverse engineer a program. Gathering the credentials from a server won't do the trick either since hackers easily can perform requests themselves. Neither will encryption of the stored credentials make any difference since the application will need access to the decryption key to be able to use the credentials at all. Question How can one store application specific credentials securely? Preferably cross-OS. Note: The relevant language is Java, however, I believe (think) that this is a language agnostic question. A: Never hardcode passwords or crypto keys in your program. The general rule of thumb is: the only credentials you should store on a user's machine are credentials associated with that user, e.g., credentials that enable that user to log into his/her account. You should not store your developer credentials on the user's machine. That's not safe. You have to assume that anything stored on the user's machine is known by the user, or can easily be learned by the user. (This is the right assumption: it is not hard to reverse-engineer an application binary to learn any keys or secrets that may be embedded in it.) Once you understand this general principle, everything becomes easy. Basically, then you need to design the rest of your system and your authentication protocol so that client software can authenticate itself using only those credentials that are safe to store on the client. Example 1. Suppose you have a Facebook app ID and key, associated with your app (i.e., associated with your developer account). Do you embed the app ID and key in the desktop software you ship to users? No! Absolutely not. You definitely don't do that, because that would allow of any of your users to learn your app ID and key and submit their own requests, possibly damaging your reputation. Instead, you find another way. For instance, maybe you set up your own server that has the app ID and key and is responsible for making the requests to the Facebook platform (subject to appropriate limitations and rate-limiting). Then, your client connects to your server. Maybe you authenticate each client by having each user set up his/her own user account on your server, storing the account credentials on the client, and having the client authenticate itself using these credentials. You can make this totally invisible to the user, by having the client app generate a new user account on first execution (generating its own login credentials, storing them locally, and sending them to the server). The client app can use these stored credentials to connect in the future (say, over SSL) and automatically log in every subsequent time the app is executed. Notice how the only thing stored on a user's machine are credentials that allow to log into that user's account -- but nothing that would allow logging into other people's accounts, and nothing that would expose developer app keys. Example 2. Suppose you write an app that needs to access the user's data in their Google account. Do you prompt them for their Google username and password and store it in the app-local storage? You could: that would be OK, because the user's credentials are stored on the user's machine. The user has no incentive to try to hack their own machine, because they already know their own credentials. Even better yet: use OAuth to authorize your app. This way your app stores an OAuth token in its app-local storage, which allows your app to access the user's Google account. It also avoids the need to store the user's Google password (which is particularly sensitive) in the app's local storage, so it reduces the risk of compromise. Example 3. Suppose you're writing an app that has a MySQL database backend that is shared among all users. Do you take the MySQL database and embed it into the app binary? No! Any of your users could extract the password and then gain direct access to your MySQL database. Instead, you set up a service that provides the necessary functionality. The client app connects to the service, authenticates itself, and sends the request to the service. The service can then execute this request on the MySQL database. The MySQL password stays safely stored on the server's machine, and is never accessible on any user's machine. The server can impose any restrictions or access control that you desire. This requires your client app to be able to authenticate to the service. One way to do that is to have the client app create a new account on the service on first run, generate a random authentication credential, and automatically log in to the service every subsequent time. You could use SSL with a random password, or even better yet, use something like SSL with a unique client certificate for each client. The other rule is: you don't hardcode credentials into the program. If you are storing credentials on the user's machine, store them in some private location: maybe a configuration file or in a directory, preferably one that is only readable by this particular app or this particular user (not a world-readable file). A: It's a classical security problem with no perfect solution, just imperfect ones, and it boils down to the more general problem of protecting software against tampering and reverse-engineering. Use an external authentication method which the user must actively provide to reach the credentials: a manually entered password (whose hash digest, for example, is used to decrypt the credentials), a secure authentication dongle containing a certificate and matching private key which must be entered into a USB port, a fingerprint reader providing the correct fingerprint etc. Ideally, the result of this will not be a simple yes/no answer to your program, as this can be overridden/patched/spoofed, but a real value (a cryptographic key) required to decrypt your credentials (or whatever else you're trying to protect), derived directly from the authenticator. A multi-source approach in which the decryption key is calculated on the fly from various sources (as to which sources, this really depends on your system) could be even better. Heavily (automatically and massively) obfuscate your program to thwart reverse-engineering. True enough, static analysis tools have become state-of-the-art, but there are [proprietary, expensive] obfuscation tools out there (obfuscating compilers, packers etc.) that make reverse-engineering very time-consuming, challenging and laborious, enough to send the attackers to look for easier targets. Adding protection mechanisms against debugging and tamper resistance methods can further strengthen the security of your program. True enough, Java as a bytecode language is especially vulnerable in this regard, as decompiling it (as compared to decompiling/disassembling native code) is rather straightforward. | Mid | [
0.6106194690265481,
25.875,
16.5
] |
Q: CSS style priority Before this question, I already search 'CSS priority' and I found following The priority of inline style like <div class="blue" style="color:red;"> is higher than class blue which define color:blue; And this rule is still work between tag and inline, i.e., inline has higher priority than tag. Now I encounter a obstacle that I don't know. Here, simple example of my case. First there are <div>s <div></div> <div></div> <div></div> and they have CSS div { backgorund:url('path') no-repeat; } div:nth-child(2n+1) { background-position: 'position A'; } Now first and third <div> has same background image at position A. Then I tried change third <div>'s background like <div></div> <div></div> <div style="background-position: 'position B';"></div> But the inline style of third <div> is not applied, still the :nth-child's style occupy it. (third <div> still has background-position with 'position A') I want to change third <div>'s background image that only has different background position. How can I do this? (I know the priority of inline style is always higher than any others. But in this case, it does not work) for specification here my code #courseList li { background:url('path') no-repeat; } #courseList > li:nth-child(2n+1) { background-position:-5px -438px; } #courseList > li:nth-child(2n+2) { background-position:-187px -438px; } <ul> <li></li> <li></li> <li></li> <li></li> <li></li> <li> <div style='background-position:-551px -438px;'></div> </li> </ul> A: EDITED to your scenario (used background color for better illustration, you can simply replace them for position): #courseList>li:nth-child(2n+1) {
background: red;
}
#courseList>li:nth-child(2n+2) {
background: green;
} <ul id="courseList">
<li></li>
<li></li>
<li></li>
<li></li>
<li></li>
<li style='background: blue;'>
<div></div>
</li>
</ul> Here's a generic example: div {
background: red;
height: 50px;
width: 50px;
}
div:nth-child(2n+1) {
background: blue;
} <div></div>
<div></div>
<div style="background: green;"></div> | Mid | [
0.555309734513274,
31.375,
25.125
] |
Hydrocarbon water-pollution related to chronic kidney disease in Tierra Blanca, a perfect storm. Chronic kidney disease (CKD) affects the kidneys, and in severe cases is considered as end-stage renal disease which can only be treated by dialysis and transplantation. Tierra Blanca city has a higher CKD rate compared to other Mexican cities, but its principal cause has not been found yet. Main factors related to CKD are carbonated beverage consumption, diabetes, obesity, hypertension, heat stress, dehydration, and intoxication by pesticides, heavy metals, and/or hydrocarbons. The aim of this work was to evaluate hydrocarbon pollution in Tierra Blanca domestic fresh-water related to CKD and to integrate this information with other main factors in order to suggest precautionary actions taking account of key actors. We found hydrocarbons in the water wells of the city and the presence of other risk factors, which creates a perfect storm for CKD. Additionally, key actors were identified in order to follow precautionary principles related to CKD cases in Tierra Blanca. | Mid | [
0.6327944572748261,
34.25,
19.875
] |
Q: Texture doesn't draw I want to draw a image to the screen, but what I get is the black square but without the texture on it. The image path is correct and loaded, because the rect have the correct size. I have a separate class for loading the texture named Texture and a class for drawing the texture called Sprite. Here is the code: // Class Texture public void loadFromResources(final Context context, int id) { GLES20.glGenTextures(1, mTextureID, 0); final BitmapFactory.Options options = new BitmapFactory.Options(); options.inScaled = false; // No pre-scaling // Temporary create a bitmap Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), id, options); // Bind texture to texturename GLES20.glActiveTexture(GLES20.GL_TEXTURE0); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureID[0]); // Load the bitmap into the bound texture. GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0); mSize = new Size(bmp.getWidth(), bmp.getHeight()); // We are done using the bitmap so we should recycle it. bmp.recycle(); } // Sprite class public void setTexture(Texture texture) { mTexture = texture; Log.d("Sprite", mTexture.getSize().getWidth() + " " + mTexture.getSize().getHeight()); vertices = new float[] { 0.0f, 0.0f, texture.getSize().getWidth(), 0.0f, texture.getSize().getWidth(), texture.getSize().getHeight(), 0.0f, texture.getSize().getHeight() }; // The vertex buffer. ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4); bb.order(ByteOrder.nativeOrder()); vertexBuffer = bb.asFloatBuffer(); vertexBuffer.put(vertices); vertexBuffer.position(0); // Create our UV coordinates. uvs = new float[] { 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f }; // The texture buffer bb = ByteBuffer.allocateDirect(uvs.length * 4); bb.order(ByteOrder.nativeOrder()); uvBuffer = bb.asFloatBuffer(); uvBuffer.put(uvs); uvBuffer.position(0); } public void draw(float[] projectionMatrix) { float[] scratch = new float[16]; float[] move = new float[16]; Matrix.setIdentityM(move, 0); Matrix.translateM(move, 0, 10, 10, 0); Matrix.multiplyMM(scratch, 0, projectionMatrix, 0, move, 0); GLES20.glUseProgram(mProgram); // get handle to vertex shader's vPosition member int mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition"); // Enable generic vertex attribute array GLES20.glEnableVertexAttribArray(mPositionHandle); // Prepare the triangle coordinate data GLES20.glVertexAttribPointer(mPositionHandle, 2, GLES20.GL_FLOAT, false, 2 * 4, vertexBuffer); // Get handle to texture coordinates location int mTexCoordLoc = GLES20.glGetAttribLocation(mProgram, "a_texCoord"); MyGLRenderer.checkGlError("glGetAttribLocation"); // Enable generic vertex attribute array GLES20.glEnableVertexAttribArray(mTexCoordLoc); MyGLRenderer.checkGlError("glEnableVertexAttribArray"); // Prepare the texturecoordinates GLES20.glVertexAttribPointer(mTexCoordLoc, 2, GLES20.GL_FLOAT, false, 0, uvBuffer); MyGLRenderer.checkGlError("glVertexAttribPointer"); // Get handle to shape's transformation matrix int mtrxhandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix"); MyGLRenderer.checkGlError("glGetUniformLocation"); // Apply the projection and view transformation GLES20.glUniformMatrix4fv(mtrxhandle, 1, false, scratch, 0); MyGLRenderer.checkGlError("glUniformMatrix4fv"); mTexture.useTexture(); // Get handle to textures locations int mSamplerLoc = GLES20.glGetUniformLocation(mProgram, "s_texture" ); MyGLRenderer.checkGlError("glGetUniformLocation"); // Set the sampler texture unit to 0, where we have saved the texture. GLES20.glUniform1i( mSamplerLoc, 0); MyGLRenderer.checkGlError("glUniform1i"); // Draw the triangle GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer); // Disable vertex array GLES20.glDisableVertexAttribArray(mPositionHandle); GLES20.glDisableVertexAttribArray(mTexCoordLoc); } // MyGLRenderer class GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR); GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR); mTexture = new Texture(); mTexture.loadFromResources(context, R.drawable.hex); mSprite = new Sprite(context); mSprite.setTexture(mTexture); EDIT: Here are the shaders: // vert uniform mat4 uMVPMatrix; attribute vec4 vPosition; attribute vec2 a_texCoord; varying vec2 v_texCoord; void main() { gl_Position = uMVPMatrix * vPosition; v_texCoord = a_texCoord; } // frag precision mediump float; varying vec2 v_texCoord; uniform sampler2D s_texture; void main() { gl_FragColor = texture2D( s_texture, v_texCoord ); } And here is the full class Texture: public class Texture { private final int[] mTextureID = new int[1]; private Size mSize = new Size(0,0); public Texture() { } public Size getSize() { return mSize; } public void loadFromResources(final Context context, int id) { GLES20.glGenTextures(1, mTextureID, 0); final BitmapFactory.Options options = new BitmapFactory.Options(); options.inScaled = false; // No pre-scaling // Temporary create a bitmap Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), id, options); // Bind texture to texturename GLES20.glActiveTexture(GLES20.GL_TEXTURE0); GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureID[0]); // Load the bitmap into the bound texture. GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0); mSize = new Size(bmp.getWidth(), bmp.getHeight()); // We are done using the bitmap so we should recycle it. bmp.recycle(); } public void useTexture() { GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureID[0]); } } A: I found the problem: before the next lines were posted in the MyGLRenderer before constructing the Texture class, but now I have add them between glBindTexture() and texImage2D in Texture class: GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR); GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE); GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE); | Low | [
0.47787610619469006,
27,
29.5
] |
The Democratic Foundations of Policy Diffusion How Health, Family, and Employment Laws Spread Across Countries Katerina Linos Presents the new and controversial theory that laws spread not through networks of technocrats, but through domestic democracy, in public and politicized ways Analyzes new data from multiple methodological perspectives: public opinion experiments, case studies based on election campaigns and legislative debates, cross-national statistical analysis of policy adoption and implementation across rich democracies Bridges international and domestic law and politics; shows how international law and international norms influence fields previously understood as entirely domestic, including health, family and employment policy The Democratic Foundations of Policy Diffusion How Health, Family, and Employment Laws Spread Across Countries Katerina Linos Description Why do law reforms spread around the world in waves? Leading theories argue that international networks of technocratic elites develop orthodox solutions that they singlehandedly transplant across countries. But, in modern democracies, elites alone cannot press for legislative reforms without winning the support of politicians, voters, and interest groups. As Katerina Linos shows in The Democratic Foundations of Policy Diffusion, international models can help politicians generate domestic enthusiasm for far-reaching proposals. By pointing to models from abroad, policitians can persuade voters that their ideas are not radical, ill-thought out experiments, but mainstream, tried-and-true solutions. The more familiar voters are with a certain country or an international organization, the more willing they are to support policies adopted in that country or recommended by that organization. Aware of voters' tendency, politicians strategically choose these policies to maximize electoral gains. Through the ingenious use of experimental and cross-national evidence, Linos documents voters' response to international models and demonstrates that governments follow international organization templates and imitate the policy choices of countries heavily covered in national media and familiar to voters. Empirically rich and theoretically sophisticated, The Democratic Foundations of Policy Diffusion provides the fullest account to date of this increasingly pervasive phenomenon. The Democratic Foundations of Policy Diffusion How Health, Family, and Employment Laws Spread Across Countries Katerina Linos Author Information Katerina Linos is Assistant Professor of Law at The University of California-Berkeley Law School. The Democratic Foundations of Policy Diffusion How Health, Family, and Employment Laws Spread Across Countries Katerina Linos Reviews and Awards The Best Books of 2013 on Western Europe by Foreign Affairs 2014 Chadwick Alger Book Prize APSA's Qualitative and Multi-Method Research section's 2014 Giovanni Sartori Book Award "Katerina Linos's account of policy diffusion is the first to take voters seriously. She perceptively compares diffusion through technocracy with diffusion through democracy, and in the process demonstrates the power of citizens to use media and other information to join the domestic debate over social policy. Finally a sophisticated and convincing account of policy convergence as though local politics matters!" --Beth Simmons, Clarence Dillon Professor of International Affairs, Harvard University "Katerina Linos is both political scientist and legal scholar par excellence. She combines state-of-the-art empirical methods with a subtle understanding of international and comparative law. The result is a book that delivers a powerful message built upon rigorous and innovative empirical research. These pages are chocked full of important insights about the relationship between democratic politics and the global legal order. The Democratic Foundations of Policy Diffusion could not come at a better time-when so many countries are reconfiguring their relationships to international organizations and grappling with the maintenance of effective and humane social policies for their own people." --Ryan Goodman, Anne and Joel Ehrenkranz Professor of Law, New York University "When do one nation's reforms-of health care, anti-discrimination, and other domestic programs-influence policies in another? In this path breaking work, Katerina Linos uses opinion polls, case studies, and rigorous statistical analysis to show policies moving across 18 Western democracies, even when domestic leaders claim indifference or opposition to foreign models. Anyone interested in domestic or international politics would benefit from this powerful research to examine how ideologies, economic conditions, and local politics affect domestic choices." --Martha Minow, Dean and Jeremiah Smith, Jr. Professor, Harvard Law School "Linos brings impressive mixed-method analysis to bear on the phenomenon of cross-national policy diffusion... This work has important consequences for the understanding of the influence of international organizations -- policies need not be binding to be persuasive... Essential." --CHOICE "For scholars, the book poses as many questions as it answers. For policymakers, it suggests novel ways to build support for policy innovations." --Foreign Affairs "Linos's book can be read against the grain as valuably as with it--surely the mark of a lasting contribution to scholarship." --American Journal of International Law | High | [
0.723226703755215,
32.5,
12.4375
] |
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <html> <head> <meta http-equiv="Content-Type" content="text/html;charset=iso-8859-1"> <title>Main Page</title> <link href="dox.css" rel="stylesheet" type="text/css"> </head> <body bgcolor="#cfcfcf"> <!-- Generated by Doxygen 1.5.5 --> <div class="navigation" id="top"> <div class="tabs"> <ul> <li><a href="main.html"><span>Main Page</span></a></li> <li><a href="pages.html"><span>Related Pages</span></a></li> <li class="current"><a href="files.html"><span>Files</span></a></li> </ul> </div> <h1>storage.c</h1><div class="fragment"><pre class="fragment"><a name="l00001"></a>00001 <span class="comment">/*</span> <a name="l00002"></a>00002 <span class="comment"> * $Id: storage.c,v 1.8 2003/12/02 08:25:00 troth Exp $</span> <a name="l00003"></a>00003 <span class="comment"> *</span> <a name="l00004"></a>00004 <span class="comment"> ****************************************************************************</span> <a name="l00005"></a>00005 <span class="comment"> *</span> <a name="l00006"></a>00006 <span class="comment"> * simulavr - A simulator for the Atmel AVR family of microcontrollers.</span> <a name="l00007"></a>00007 <span class="comment"> * Copyright (C) 2001, 2002, 2003 Theodore A. Roth</span> <a name="l00008"></a>00008 <span class="comment"> *</span> <a name="l00009"></a>00009 <span class="comment"> * This program is free software; you can redistribute it and/or modify</span> <a name="l00010"></a>00010 <span class="comment"> * it under the terms of the GNU General Public License as published by</span> <a name="l00011"></a>00011 <span class="comment"> * the Free Software Foundation; either version 2 of the License, or</span> <a name="l00012"></a>00012 <span class="comment"> * (at your option) any later version.</span> <a name="l00013"></a>00013 <span class="comment"> *</span> <a name="l00014"></a>00014 <span class="comment"> * This program is distributed in the hope that it will be useful,</span> <a name="l00015"></a>00015 <span class="comment"> * but WITHOUT ANY WARRANTY; without even the implied warranty of</span> <a name="l00016"></a>00016 <span class="comment"> * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the</span> <a name="l00017"></a>00017 <span class="comment"> * GNU General Public License for more details.</span> <a name="l00018"></a>00018 <span class="comment"> *</span> <a name="l00019"></a>00019 <span class="comment"> * You should have received a copy of the GNU General Public License</span> <a name="l00020"></a>00020 <span class="comment"> * along with this program; if not, write to the Free Software</span> <a name="l00021"></a>00021 <span class="comment"> * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA</span> <a name="l00022"></a>00022 <span class="comment"> *</span> <a name="l00023"></a>00023 <span class="comment"> ****************************************************************************</span> <a name="l00024"></a>00024 <span class="comment"> */</span> <a name="l00025"></a>00025 <a name="l00026"></a>00026 <span class="preprocessor">#include <config.h></span> <a name="l00027"></a>00027 <a name="l00028"></a>00028 <span class="preprocessor">#include <stdio.h></span> <a name="l00029"></a>00029 <span class="preprocessor">#include <stdlib.h></span> <a name="l00030"></a>00030 <a name="l00031"></a>00031 <span class="preprocessor">#include "avrerror.h"</span> <a name="l00032"></a>00032 <span class="preprocessor">#include "avrmalloc.h"</span> <a name="l00033"></a>00033 <span class="preprocessor">#include "avrclass.h"</span> <a name="l00034"></a>00034 <span class="preprocessor">#include "storage.h"</span> <a name="l00035"></a>00035 <a name="l00036"></a>00036 <span class="comment">/***************************************************************************\</span> <a name="l00037"></a>00037 <span class="comment"> *</span> <a name="l00038"></a>00038 <span class="comment"> * Storage(AvrClass) Methods</span> <a name="l00039"></a>00039 <span class="comment"> *</span> <a name="l00040"></a>00040 <span class="comment">\***************************************************************************/</span> <a name="l00041"></a>00041 <a name="l00042"></a>00042 Storage * <a name="l00043"></a>00043 storage_new (<span class="keywordtype">int</span> base, <span class="keywordtype">int</span> size) <a name="l00044"></a>00044 { <a name="l00045"></a>00045 Storage *stor; <a name="l00046"></a>00046 <a name="l00047"></a>00047 stor = <a class="code" href="avrmalloc_8c.html#a543f348351cdcaebdd8947d1a591578" title="Macro for allocating memory.">avr_new</a> (Storage, 1); <a name="l00048"></a>00048 storage_construct (stor, base, size); <a name="l00049"></a>00049 <a class="code" href="avrclass_8c.html#82d397ff00a7f1c1447832dbff1856e1" title="Overload the default destroy method.">class_overload_destroy</a> ((AvrClass *)stor, storage_destroy); <a name="l00050"></a>00050 <a name="l00051"></a>00051 <span class="keywordflow">return</span> stor; <a name="l00052"></a>00052 } <a name="l00053"></a>00053 <a name="l00054"></a>00054 <span class="keywordtype">void</span> <a name="l00055"></a>00055 storage_construct (Storage *stor, <span class="keywordtype">int</span> base, <span class="keywordtype">int</span> size) <a name="l00056"></a>00056 { <a name="l00057"></a>00057 <span class="keywordflow">if</span> (stor == NULL) <a name="l00058"></a>00058 <a class="code" href="avrerror_8c.html#4f6ec50114a7d63093baecafe47d7f1a" title="Print an error message to stderr and terminate program.">avr_error</a> (<span class="stringliteral">"passed null ptr"</span>); <a name="l00059"></a>00059 <a name="l00060"></a>00060 <a class="code" href="avrclass_8c.html#ffeb66dd49a62ad1b7606cde0e3b039e" title="Initializes the AvrClass data structure.">class_construct</a> ((AvrClass *)stor); <a name="l00061"></a>00061 <a name="l00062"></a>00062 stor->base = base; <span class="comment">/* address */</span> <a name="l00063"></a>00063 stor->size = size; <span class="comment">/* bytes */</span> <a name="l00064"></a>00064 <a name="l00065"></a>00065 stor->data = <a class="code" href="avrmalloc_8c.html#ac6d810b48b67b90412badbd4b71f4e3" title="Macro for allocating memory and initializing it to zero.">avr_new0</a> (uint8_t, size); <a name="l00066"></a>00066 } <a name="l00067"></a>00067 <a name="l00068"></a>00068 <span class="comment">/*</span> <a name="l00069"></a>00069 <span class="comment"> * Not to be called directly, except by a derived class.</span> <a name="l00070"></a>00070 <span class="comment"> * Called via class_unref.</span> <a name="l00071"></a>00071 <span class="comment"> */</span> <a name="l00072"></a>00072 <span class="keywordtype">void</span> <a name="l00073"></a>00073 storage_destroy (<span class="keywordtype">void</span> *stor) <a name="l00074"></a>00074 { <a name="l00075"></a>00075 <span class="keywordflow">if</span> (stor == NULL) <a name="l00076"></a>00076 <span class="keywordflow">return</span>; <a name="l00077"></a>00077 <a name="l00078"></a>00078 <a class="code" href="avrmalloc_8c.html#082a9d6d40f5e8bad64441ad950ec12c" title="Free malloc&#39;d memory.">avr_free</a> (((Storage *)stor)->data); <a name="l00079"></a>00079 <a class="code" href="avrclass_8c.html#86e290a528dd1ed0bf5057056b5731e5" title="Releases resources allocated by class&#39;s &lt;klass&gt;_new() function.">class_destroy</a> (stor); <a name="l00080"></a>00080 } <a name="l00081"></a>00081 <a name="l00082"></a>00082 <span class="keyword">extern</span> <span class="keyword">inline</span> uint8_t <a name="l00083"></a>00083 storage_readb (Storage *stor, <span class="keywordtype">int</span> addr); <a name="l00084"></a>00084 <a name="l00085"></a>00085 <span class="keyword">extern</span> <span class="keyword">inline</span> uint16_t <a name="l00086"></a>00086 storage_readw (Storage *stor, <span class="keywordtype">int</span> addr); <a name="l00087"></a>00087 <a name="l00088"></a>00088 <span class="keywordtype">void</span> <a name="l00089"></a>00089 storage_writeb (Storage *stor, <span class="keywordtype">int</span> addr, uint8_t val) <a name="l00090"></a>00090 { <a name="l00091"></a>00091 <span class="keywordtype">int</span> _addr = addr - stor->base; <a name="l00092"></a>00092 <a name="l00093"></a>00093 <span class="keywordflow">if</span> (stor == NULL) <a name="l00094"></a>00094 <a class="code" href="avrerror_8c.html#4f6ec50114a7d63093baecafe47d7f1a" title="Print an error message to stderr and terminate program.">avr_error</a> (<span class="stringliteral">"passed null ptr"</span>); <a name="l00095"></a>00095 <a name="l00096"></a>00096 <span class="keywordflow">if</span> ((_addr < 0) || (_addr >= stor->size)) <a name="l00097"></a>00097 <a class="code" href="avrerror_8c.html#4f6ec50114a7d63093baecafe47d7f1a" title="Print an error message to stderr and terminate program.">avr_error</a> (<span class="stringliteral">"address out of bounds: 0x%x"</span>, addr); <a name="l00098"></a>00098 <a name="l00099"></a>00099 stor->data[_addr] = val; <a name="l00100"></a>00100 } <a name="l00101"></a>00101 <a name="l00102"></a>00102 <span class="keywordtype">void</span> <a name="l00103"></a>00103 storage_writew (Storage *stor, <span class="keywordtype">int</span> addr, uint16_t val) <a name="l00104"></a>00104 { <a name="l00105"></a>00105 <span class="keywordtype">int</span> _addr = addr - stor->base; <a name="l00106"></a>00106 <a name="l00107"></a>00107 <span class="keywordflow">if</span> (stor == NULL) <a name="l00108"></a>00108 <a class="code" href="avrerror_8c.html#4f6ec50114a7d63093baecafe47d7f1a" title="Print an error message to stderr and terminate program.">avr_error</a> (<span class="stringliteral">"passed null ptr"</span>); <a name="l00109"></a>00109 <a name="l00110"></a>00110 <span class="keywordflow">if</span> ((_addr < 0) || (_addr >= stor->size)) <a name="l00111"></a>00111 <a class="code" href="avrerror_8c.html#4f6ec50114a7d63093baecafe47d7f1a" title="Print an error message to stderr and terminate program.">avr_error</a> (<span class="stringliteral">"address out of bounds: 0x%x"</span>, addr); <a name="l00112"></a>00112 <a name="l00113"></a>00113 stor->data[_addr] = (uint8_t) (val >> 8 & 0xff); <a name="l00114"></a>00114 stor->data[_addr + 1] = (uint8_t) (val & 0xff); <a name="l00115"></a>00115 } <a name="l00116"></a>00116 <a name="l00117"></a>00117 <span class="keywordtype">int</span> <a name="l00118"></a>00118 storage_get_size (Storage *stor) <a name="l00119"></a>00119 { <a name="l00120"></a>00120 <span class="keywordflow">return</span> stor->size; <a name="l00121"></a>00121 } <a name="l00122"></a>00122 <a name="l00123"></a>00123 <span class="keywordtype">int</span> <a name="l00124"></a>00124 storage_get_base (Storage *stor) <a name="l00125"></a>00125 { <a name="l00126"></a>00126 <span class="keywordflow">return</span> stor->base; <a name="l00127"></a>00127 } </pre></div></div> <hr width="80%"> <p><center>Automatically generated by Doxygen 1.5.5 on 7 Nov 2008.</center></p> </body> </html> | Low | [
0.483695652173913,
22.25,
23.75
] |
Product Description The instrument is a pen type digital multimeter. It has stable performance, high precision, low power consumption, novel structure. It can measure AC/DC voltage, AC/DC current, resistance, diode and connectivity; with non-contract voltage detection function, timely remind users to pay attention to the operation safety and allow users to use more safely and assuredly. Features: Manual and automatic range for selection. All in one design, easy to use. Display:2000 counts/Data hold/MAX hold. Measure AC/DC voltage, AC/DC current, resistance, diode and connectivity. Non-contacted voltage detector function. Display light and work light, fit for most of work environment, even in the darkness. Contain probes and clips, fit for testing most of objects. Related Best Sellers Perfect for lazy summer days at the park or beach, the Picnic Time NFL Canasta Grande size picnic basket features a versatile, flat-lidded design with a sturdy willow base in an attractive, double-strand weave design. The lid is made of a dark staine... Take your lunch or picnic on the go with this NFL team logoed Pranzo insulated lunch tote, featuring isolated sections so you can separate your hot and cold food and drink items. The exterior is made of rip stop polyester with a nylon interior. The P... Take your lunch or picnic on the go with this NFL team logoed Pranzo insulated lunch tote, featuring isolated sections so you can separate your hot and cold food and drink items. The exterior is made of rip stop polyester with a nylon interior. The P... Take your lunch or picnic on the go with this NFL team logoed Pranzo insulated lunch tote, featuring isolated sections so you can separate your hot and cold food and drink items. The exterior is made of rip stop polyester with a nylon interior. The P... Take your lunch or picnic on the go with this NFL team logoed Pranzo insulated lunch tote, featuring isolated sections so you can separate your hot and cold food and drink items. The exterior is made of rip stop polyester with a nylon interior. The P... Take your lunch or picnic on the go with this NFL team logoed Pranzo insulated lunch tote, featuring isolated sections so you can separate your hot and cold food and drink items. The exterior is made of rip stop polyester with a nylon interior. The P... Sign Up For Our Newsletter Sign up to be the first to know about our Exclusive Sales, Special Offers, & Member's Only Discounts! | Mid | [
0.5714285714285711,
32.5,
24.375
] |
Could this be the most terrifying marketing prank yet? From the TV show that came up with the infamous ‘corpse in an elevator’ prank, comes the Curse of Chucky. Terryfying: A man falls victim to the Curse of Chucky prank (Picture: YouTube) It shows unsuspecting members of the public waiting at a bus stop in Brazil before a prankster dressed as the horror movie character jumps out from behind a poster. The actor, who is armed with a fake knife, chases his terrified victims, including young children, down the street. The clip has racked up over 1million views online (Picture: YouTube) Filmed with the permission of Universal Studios, according to Business Insider, the prank was designed to promote the recently-released (direct-to-video) Curse of Chucky film. What do you think of the prank? Leave your comments below. MORE: Customers shocked by telekinesis prank | Low | [
0.514285714285714,
33.75,
31.875
] |
Q: JSON values 1 or 0 - int or boolean Does JSON treat these all the same? Or are they a mix of Integers and booleans? var data = { "zero" : 0, "one" : 1, "false" : 0, "true" : 1, "0" : false, "1" : true } A: The values true and false are actual boolean values, the rest are integers. See http://json.org/ for more. A: JSON is a format for transferring data. It has no notion of equality. JSON parsers treat booleans and numbers as distinct types. A: I prefer using 0/1 instead of true/false, because 0/1 consume only 1 byte while true/false consume 4/5 bytes. | Low | [
0.522099447513812,
23.625,
21.625
] |
--- abstract: 'The Conditional Probability Interpretation of Quantum Mechanics replaces the abstract notion of time used in standard Quantum Mechanics by the time that can be read off from a physical clock. The use of physical clocks leads to apparent non-unitary and decoherence. Here we show that a close approximation to standard Quantum Mechanics can be recovered from conditional Quantum Mechanics for semi-classical clocks, and we use these clocks to compute the minimum decoherence predicted by the Conditional Probability Interpretation.' author: - 'Vincent Corbin and Neil J. Cornish' bibliography: - 'decoherencebib.bib' title: 'Semi-classical limit and minimum decoherence in the Conditional Probability Interpretation of Quantum Mechanics' --- Introduction ============ In Quantum Mechanics, each measurable quantity is associated with a quantum operator, and therefore is subject to quantum fluctuations and to an uncertainty relation with its canonical conjugate. All except one, time. Time in Quantum Mechanics (and position in Quantum Field Theory) has a special role. There is no time operator, no fluctuation, and the time-energy uncertainty is of a different nature than the position-momentum uncertainty [@Messiah]. In Quantum Mechanics, time is classical. One can measure time precisely without affecting the system. One can measure time repeatedly without any consequences whatsoever. In the view of the quantization of Gravity, where spacetime becomes a quantum dynamical variable, this is not acceptable. Many theories have been developed to try and fix this “problem of time”. One in particular, the Conditional Probability Interpretation, is of special interest to us. It was first developed by Page and Wootters [@wooters], and was recently refined by Dolby [@dolby]. There, time as we know it in Quantum Mechanics, becomes a parameter - some kind of internal time that one cannot measure. Instead one chooses a quantum variable, which will be used as a “clock”. Then the probability of measuring a certain value for a variable at time $t$, is replaced by the probability of measuring this value when we have measured the “clock” variable at a given value. This interpretation is quite natural in every day experiments. Indeed, one never directly measures time, but instead reads it through a clock. What we really measure is the number of swings a pendulum makes, how many particles decay or other similar physical processes. Conventional Quantum Mechanics would then only arise when taking the limit in which the “clock” behaves classically. The Conditional Probability Interpretation predicts effects absent from the standard Quantum theory. In particular, it predicts a non-unitarity with respect to the variable chosen for “time”, and the presence of decoherence in the system under study which leads to a loss of information [@relational_solution]. It turns out that using any physical clock will lead to this phenomenon. An estimate of the minimum decoherence has been made by Gambini, Porto, and Pullin [@fundamental_decoherence], but, to the best of our knowledge, it has never been calculated directly from the Conditional Probability Interpretation, without resorting to results from standard Quantum Mechanics. Our goal is to provide a direct calculation of the minimum decoherence. A brief background of the Conditional Probability Interpretation is given in Section 2. Section 3 describes in details how standard Quantum Mechanics arises from the Conditional Probability Interpretation, in which limits and for what kind of clock. We also talk about the semi-classical regime of a simple “free particle clock”, which represents the simplest possible physical clock. In Section 4, we calculate the minimum decoherence one can achieve using the “free particle clock”, and compare our results with previous estimates. Conclusions are presented in Section 5. Conditional Probability Interpretation ====================================== In the Conditional Probability Interpretation (CPI) of quantum mechanics, there is no such thing as a direct measurement of time. The notion of measuring time is expressed through the use of a physical clock. A clock is simply a physical system, and its variables (position, momentum...) are what we measure, and use as references, or “time”. We usually choose the clock to be the least correlated with the system under study, so that a measurement of the clock variables will not greatly affect a measurement in the system of interest. Being a physical system, a clock can be fully described by a Hamiltonian, and since, from now on, we will assume the clock is fully uncorrelated with the system (which practically can never be exactly achieved), we can write $$\label{Hamiltonian } H_{\rm tot}=H_{\rm Clock}+H_{\rm System}.$$ The action then becomes $$\label{Action } S_{\rm tot}=\int \textrm{d}n \textrm{ } \left( L_{\rm Clock}(n)+L_{\rm System}(n)\right).$$ In the above equation, $n$ is a parameter, not a variable, and as such it can not be measured. In a certain way, it could be seen as some abstract internal time. This parameter ensures the unitarity of the quantum theory emerging from the CPI. It may not be unitary with respect to the time measured by a physical clock, but it will stay unitary with respect to this “internal time”. This is important since it indicates that the CPI does not in fact question one of the fundamental pillar of Quantum Mechanics. Also $n$ defines simultaneity. Two events are said to be simultaneous if they happen for the same value $n$. In the CPI, the probability of measuring a system in a state $|o\rangle$ at a time $t$, becomes the probability of measuring the system in a state $|o\rangle$ *when* measuring the clock in a state $|t\rangle$, and for a closed system is expressed as [@relational_solution] $$\label{single time probability } \mathcal{P}(o\in \Delta o | t\in \Delta t)=\frac{\int_{n}\langle\psi|P_{t}(n)P_{o}(n)P_{t}(n)|\psi\rangle}{\int_{n}\langle\psi|P_{t}(n)|\psi\rangle},$$ where $|\psi\rangle$ is the initial state (at some $n_{o}$) of the total system (*clock-system of interest*). The projectors are defined by $$\begin{aligned} P_{o}(n)=\int_{\Delta o}\int_{i}|o,i,n\rangle\langle o,i,n| \nonumber \\ P_{t}(n)=\int_{\Delta t}\int_{j}|t,j,n\rangle\langle t,j,n| \nonumber.\end{aligned}$$ We have assumed in the previous set of equations that the eigenvalues of the operators $\widehat{O}$ and $\widehat{T}$ have continuous spectra, which usually will be the case. However for now on, we will drop the integral over the interval $\Delta o$ and $\Delta t$, in an attempt to simplify the notation. That is to say we consider the spectrum to be discrete. The future calculations will not be greatly affected by this approximation. Since the intervals in questions are very small, the results will only differ by factors of $\Delta o$ and $\Delta t$. Those factors will be absorbed in the normalization of the probability. So as long as we keep in mind that $\int\textrm{d}o\mathcal{P}(o|t)=1$, we can forget that the spectra are continuous. Also, in order to keep the equation more concise we assume the eigenvalues of $\widehat{O}$ and $\widehat{T}$ form a complete set. We can now rewrite the projectors as $$\begin{aligned} P_{o}(n)=|o,n\rangle\langle o,n| \nonumber \\ P_{t}(n)=|t,n\rangle\langle t,n| .\end{aligned}$$ Since the clock and the system under study are assumed to be fully uncorrelated, the operators associated with the clock commute with the operators associated with the system. Therefore we can split a vector into a *clock vector* and a *system vector*, $$\label{vector separation} |\psi\rangle=|\psi_{c}\rangle\otimes|\psi_{s}\rangle .$$ In the same fashion, $$\label{projection operator fusion} P_{t}(n)P_{o}(n)P_{t}(n)=P_{t,o}(n)=(|t,n\rangle\otimes|o,n\rangle)(\langle t,n|\otimes\langle o,n|).$$ The parameter $n$ replaces $t$ as the parameter of the action, so as in “Standard Quantum Mechanics” (SQM) we can define a unitary operator $\widehat{U}(n_{f}-n_{i})=e^{-\frac{i}{\hbar}H(n_{f}-n_{i})}$ which will evolve operators from $n_{i}$ to $n_{f}$: $$\label{evolution operator} P_{t,o}(n)=\widehat{U}^{\dag}(n)P_{t,o}\widehat{U}(n).$$ With this framework now defined, we can make more meaningful calculations, for example the probability corresponding to a two-time measurement, which is simply the square of the propagator in SQM. This particular calculation was used by Kuchar to argue against the CPI [@kuchar]. However in Kuchar’s approach of the CPI, the parameter $n$ was missing, forbidding the system to evolve. The correct expression for the probability of measuring $o$ at $t$ when we have $o'$ at $t'$, is given in Ref. [@relational_solution]: $$\label{two time probability } \mathcal{P}(o|to't')=\frac{\int_{nn'}\langle\psi_{c}|P_{t'}(n')P_{t}(n)P_{t'}(n')|\psi_{c}\rangle\langle\psi_{s}|P_{o'}(n')P_{o}(n)P_{o'}(n')|\psi_{s}\rangle}{\int_{nn'}\langle\psi_{c}|P_{t'}(n')P_{t}(n)P_{t'}(n')|\psi_{c}\rangle\langle\psi_{s}|P_{o'}(n')|\psi_{s}\rangle}.$$ Using (\[projection operator fusion\]), (\[evolution operator\]) and defining $U_{s}(n)=e^{-\frac{i}{\hbar}H_{s}(n)}$, $U_{c}(n)=e^{-\frac{i}{\hbar}H_{c}(n)}$, we find $$\label{num_o} \langle\psi_{s}|P_{o'}(n')P_{o}(n)P_{o'}(n')|\psi_{s}\rangle = |\psi_{s}(o',n')|^{2}|\langle o'|U_{s}(n'-n)|o\rangle|^{2},$$ where $\psi_{s}(o',n')=\langle o'|U_{s}(n')|\psi_{s}\rangle$. Similarly, $$\label{num t } \langle\psi_{c}|P_{t'}(n')P_{t}(n)P_{t'}(n')|\psi_{c}\rangle = |\psi_{c}(t',n')|^{2}|\langle t'|U_{c}(n'-n)|t\rangle|^{2},$$ and $$\label{ den o} \langle\psi_{s}|P_{o'}(n')|\psi_{s}\rangle = |\psi_{s}(o',n')|^{2}.$$ We can now write the final expression for the conditional probability, $$\label{conditional probability } \mathcal{P}(o|to't')=\frac{\int_{nn'}|\psi_{s}(o',n')\psi_{c}(t',n')\langle t'|\widehat{U}_{c}^{\dag}(n-n')|t\rangle\langle o'|\widehat{U}_{s}^{\dag}(n-n')|o\rangle|^{2}}{\int_{nn'}|\psi_{s}(o',n')\psi_{c}(t',n')\langle t'|\widehat{U}_{c}^{\dag}(n-n')|t\rangle|^{2}},$$ and one can easily check $$\label{normalization } \int \textrm{d}o \mathcal{P}(o|to't')=1$$ In SQM, the probability of measuring a variable $o$ at time $t$ when we measured $o'$ at time $t'$ is given by the propagator $K(ot,o't')=\langle o'|\widehat{U}_{s}^{\dag}(t-t')|o\rangle$. In the CPI the propagator is replaced by the conditional probability. At first sight it is not obvious that the two descriptions are equivalent. However, the success of SQM demands that the CPI must give a propagator that recovers the SQM propagator in the limit of today’s experimental accuracy. Semi-Classical Clock ==================== To recover SQM from the CPI, one has to choose a clock that behaves almost classically. Two things will dictate such behavior. First, the clock itself. Some clocks will allow a more classical regime than others. Second is the clock’s initial state (at a given $n_{o}$). Indeed, this can be easily understood from the uncertainty principle. When we measure a variable very precisely, the uncertainty in its canonical conjugate will be great. Since the evolution of a variable usually depends on its canonical conjugate (thought not always as we will see), the measurement after a certain “time” (internal time $n$) will be meaningless. For example, if we use a free particle as a clock, associating time with its position, and we start with a definite value for its initial position, $$\label{initial position state} |\psi_{s}\rangle=\int\textrm{d}x \, \delta(x-x_{0})|x\rangle,$$ the wavefunction in momentum space will obviously indicate that there is equal probability for the momentum to be taking any value (of course we are not treating the particle as being relativistic since Quantum Field Theory would need to be used in such cases). At the second measurement (that is at a greater $n$), we have equal probability of finding the particle at any position. That makes for a very poor clock, and definitely not a classical clock. In Quantum Mechanics, generally the recovery of classical results is not straightforward. For example, the free particle propagator is usually not the same as what is predicted by Classical Mechanics: $$\label{classical propagator} K(xt,x_{o}t_{o})=\delta\big(x-(x_{o}+\frac{p^{2}}{m})\big).$$ However it is possible to find a propagator in the Quantum regime that mimics the classical one closely by using coherent states. Instead of a delta function, we choose the state of the particle in momentum space, at time $t_{o}$, to be a Gaussian distribution of width $\sigma_{p}$ peaked around a value $p_{o}$, $$\label{gaussian momentum distribution} |\psi\rangle \propto \int \textrm{d}p \, e^{-\frac{i}{\hbar}\big(\frac{p-p_{o}}{2\sigma_{p}}\big)^{2}}|p\rangle.$$ In the position basis, we also get a Gaussian distribution of width $\sigma_{x}=\frac{\hbar}{2 \sigma_{p}}$, peaked around 0. We evolve the system through a time $t$ to find $$\label{gaussian momentum evolved} |\psi(t)\rangle \propto \int \textrm{d}p \, e^{-\frac{i}{\hbar}\frac{p^{2}}{2m}t}e^{-\frac{i}{\hbar}\big(\frac{p-p_{o}}{2\sigma_{p}}\big)^{2}}|p\rangle.$$ We finally express the wavefunction in the position basis by Fourier transforming, $$\label{evolved postion distribution} |\psi(t)\rangle \propto \int \textrm{d}x \, e^{-\frac{i}{\hbar}\big(\frac{x-\frac{p_{o}t}{m}}{2\sigma(t)}\big)^{2}}|x\rangle.$$ The Gaussian in the position domain is peaked around $p_{0} t/ m$, the classical distance traveled by the particle. Also, we know the non-classicality of the clock comes from the uncertainty principle between the variable we use as a measure of time, and its canonical momentum. Coherent states minimize the uncertainty: $\sigma_{x}\sigma_{p}=\frac{\hbar}{2}$. Therefore we expect the clock to be the most classical when using a coherent state. To illustrate the dependence of the classical regime to the type of clock and to the initial state of the clock, we will go through two examples, the parameterized Hamiltonian and the free particle. Parameterized Hamiltonian ------------------------- This particular clock has already partly been studied by Dolby, who used a different formalism of the CPI [@dolby]. The clock’s Hamiltonian is linear in its generalized momentum $k$. The system under study is described by a general Hamiltonian $H_{s}$: $$\label{parametrized hamiltonian} H=k+H_{s}.$$ Let’s solve the classical equation of motion for the clock to have an idea of what to expect. The action is defined as $$\label{parametrized action} S=\int\textrm{d}n\big[k \dot{t}-k+L_{s}(q,\dot{q})\big],$$ where $\dot{t}=\frac{\textrm{d}t}{\textrm{d}n}$. The equations of motion are simply $\dot{p}=0$ and $\dot{t}=1$. Then, up to a constant, $t=n$. We see that the variable $t$ we choose to measure is exactly equal to the internal time $n$, and its evolution is not dictated by its momentum. We can already guess that in the Quantum regime, the uncertainty principle won’t affect the evolution of the operator $\widehat{T}$ associated with the variable $t$. Then if we start with a state $|\psi_{t}\rangle=|t=0\rangle$, the delta function will not spread and will remain sharp, the clock therefore remains classical. From (\[conditional probability \]), we have $$\label{two time probability 2} \mathcal{P}(o|to't')=\frac{\int_{nn'}|\psi_{s}(o',n')\psi_{c}(t',n')\langle t'|\widehat{U}_{c}^{\dag}(n-n')|t\rangle\langle o'|\widehat{U}_{s}^{\dag}(n-n')|o\rangle|^{2}}{\int_{nn'}|\psi_{s}(o',n')\psi_{c}(t',n')\langle t'|\widehat{U}_{c}^{\dag}(n-n')|t\rangle|^{2}}.$$ Replacing the clock Hamiltonian by $H_{c}=k$, and using the completeness of the basis, we get $$\label{propagator part} \langle t'|e^{-\frac{i}{\hbar}k(n'-n)}|t\rangle=\int\textrm{d}k \langle t'|k\rangle e^{-\frac{i}{\hbar}k(n'-n)}\langle k|t'\rangle=\delta\big((n'-n)-(t'-t)\big).$$ Using (\[two time probability 2\]), we finally find $$\label{quantum classical propagator} P(o|t,o',t')=\big|\langle o'|U_{s}(t'-t)|o\rangle\big|^{2}=\big|K(ot,o't')\big|^{2},$$ as expected. We see that the propagator from SQM is recovered from the CPI in the case of the parameterized Hamiltonian for any initial state of the clock. However the recovery of the whole SQM without any dependence on the shape of the initial state is not possible. Indeed the one time probability in the CPI is given by (\[single time probability \]): $$\label{single time probability 2} \mathcal{P}(o|t)=\frac{\int\textrm{d}n\big|\langle\psi_{c}|U_{c}^{\dagger}(n)|t\rangle\big|^{2}\big|\langle\psi_{s}|U_{s}^{\dagger}(n)|o\rangle\big|^{2}}{\int\textrm{d}n\big|\langle\psi_{c}|U_{c}^{\dagger}(n)|t\rangle\big|^{2}}.$$ Or, in the same notation as Ref. [@fundamental_decoherence]: $$\label{single time probability 3} \mathcal{P}(o|t)=\int_{n}\big|\langle\psi_{s}|U_{s}^{\dagger}(n)|o\rangle\big|^{2}\mathcal{P}_{n}(t)=\int\textrm{d}n\big|\langle\psi_{s}|U_{s}^{\dagger}(n)|o\rangle\big|^{2}\frac{\big|\int\textrm{d}k e^{\frac{i}{\hbar}k(n-t)}\langle\psi_{c}|k\rangle\big|^{2}}{\int\textrm{d}n\big|\int\textrm{d}k e^{\frac{i}{\hbar}k(n-t)}\langle\psi_{c}|k\rangle\big|^{2}},$$ where $\mathcal{P}_{n}(t)$ is the probability that the “internal time” takes the value $n$ when we measure $t$. The SQM equivalence, $\mathcal{P}(o|t)=\big|\langle\psi_{s}|U_{s}^{\dagger}(t)|o\rangle\big|^{2}$, is only recovered for $\mathcal{P}_{n}(t)=\delta(n-t)$, which is achieved for $$\label{ideal clock limit } |\psi_{c}\rangle=\int dt \, \delta(t)|t\rangle.$$ This agrees with the result found by Dolby, who calls $\psi_{c}\rightarrow\delta(t)$ the “ideal clock limit”. It is certainly true for this specific type of clock, but it won’t necessarily be an ideal limit for every clock, as we shall see. In this light, SQM is a non-physical limit to the CPI. It implies the use of a non-physical clock. Free Particle ------------- Here we use the position of a free particle of mass $m$ as a measure of time. The Hamiltonian for the clock is $$\label{free particle hamiltonian} H_{c}=\frac{p^{2}}{2m}.$$ This particular clock was studied in a recent paper by Gambini [*et al.*]{} [@Conditional_probabilities_dirac]. There, the authors approach the problem in a slightly different way. They consider the Hamiltonian to be parametrized, their goal being to show that the CPI behaves well for a constraint system. They also mention that SQM can be recovered only to leading order. Here, we will not worry about constraints but rather focus on how closely the SQM can be recovered. We will then use this result in section 4 to calculate the minimum decoherence one could achieve with this simple clock. If we calculate the single time probability $\mathcal{P}(o|x)$ using Dolby’s “ideal clock limit”, we find $$\label{ideal free particle part} \langle\psi_{c}|U_{c}^{\dag}(n)|x\rangle=\sqrt{\frac{2m\hbar\pi}{i}}e^{\frac{it^{2}m}{2\hbar n}},$$ and then the probability becomes $$\label{ ideal free particle } \mathcal{P}(o|x)=\frac{1}{V_{n}}\int_{n}\big|\langle\psi_{s}|U_{s}^{\dag}(n)|o\rangle\big|^{2}$$ If we were to measure $x$ for the particle position, there would be an equal probability that it corresponds to any possible value for the internal time. Therefore the time measured does not give any indication on the internal time. This is [*not*]{} an ideal clock. In fact $\mathcal{P}_{n}(t)=\delta(t-n)$ (with $t=m x /p_{o}$) cannot be achieved for such a clock, since any wave packet in position space will spread due to the position-momentum uncertainty. However it is possible to get a probability distribution sufficiently peaked to approximate a delta function. In order to minimize the peak’s width in both position and momentum, we use a coherent state: $$\label{coherent state} |\psi_{c}\rangle=\int\textrm{d}p \frac{1}{(2\pi)^{1/4}\sqrt{\sigma_{p}}}e^{-\big(\frac{p}{2\sigma_{p}}\big)^{2}}|p+p_{o}\rangle=\int\textrm{d}x \frac{1}{(2\pi)^{1/4}\sqrt{\sigma_{x}}}e^{-\big(\frac{x}{2\sigma_{x}}\big)^{2}}|x\rangle,$$ where $\sigma_{x}\sigma_{p}=\frac{\hbar}{2}$, and where we centered the Gaussian in momentum space around the classical momentum $p_{o}$. Using these expressions, and performing some algebra, we find the probability distribution to be a Gaussian $$\label{free particle distribution} \mathcal{P}_{n}(x)\alpha e^{-\frac{1}{2}\big(\frac{x-\frac{p_{o}}{m}n}{\delta(n)}\big)^{2}},$$ of width $\delta(n)=\sqrt{\sigma_{x}^{2}+\frac{\sigma_{p}^{2}n^{2}}{m^{2}}}$. The width can be minimize with respect to $\sigma_{x}$, taking into account that the width of the Gaussians in position and momentum are related through the uncertainty principle by $\sigma_{x}\sigma_{p}=\frac{\hbar}{2}$. The minimum will occur at $\sigma_{x}^{2}=\frac{\hbar n}{2m}$ and will take the value $$\label{distribution minimum width} \delta_{min}(n)=\sqrt{\frac{\hbar n}{m}}.$$ The optimum initial state for the clock will depend on “when” (at which internal time) we are making the measurement. Of course this presents a problem since one cannot tell “when” the measurement on the clock is taken before taking it, and even then one will have only a peaked probability distribution as an indication of what the final internal time is. It is therefore not possible to “prepare” the clock in order to ensure a minimal spread for $\mathcal{P}_{n}(x)$. To conclude this example, we note that for a free particle clock, SQM is only recovered on scales larger than $\frac{\hbar n}{m}$, and even then the recovery will only be partial, since some small deviation from the SQM will still occur as we will see in the next section. Limitation in the accuracy of a clock and Decoherence ===================================================== The CPI implies a non-unitarity of the theory with respect to the “time” measured through the physical clock. This in turn implies that the system under study will not evolve as SQM predicts. Rather, a Lindblad type equation [@lindblad] describes its evolution [@relational_solution; @isidro], $$\label{Lindblad equation} \frac{\partial\tilde{\rho}_{s}(t)}{\partial t}=-\frac{i}{\hbar}\big[(1+\beta(t))\widehat{H}_{s},\tilde{\rho}_{s}\big]-\sigma(t)\big[\widehat{H}_{s},[\widehat{H}_{s},\tilde{\rho}_{s}]\big].$$ Here $\tilde{\rho}_{s}$ is the corrected density matrix of the system under study. It is corrected in the sense that it satisfies the usual equation for a single time probability, $$\label{tilde single time probability} \mathcal{P}(o|t)=\frac{Tr\big(P(o)\tilde{\rho}(t)\big)}{Tr\big(\tilde{\rho}(t)\big)}.$$ Instead of modifying this equation as we did earlier, we have defined a new density operator $\tilde{\rho}_{s}$. It is the density matrix in the CPI regime. The second term in (\[Lindblad equation\]) is the major point of departure from the standard Heisenberg evolution equation for Quantum operators. Due to this term, a system will loose information upon evolution. The system is said to decohere. The decoherence factor $\sigma(t)$ is closely related to the probability distribution $\mathcal{P}_{n}(t)$. However Gambini and Pullin showed it only depends on the spread $b(t)$ and the asymmetry of the distribution [@real_rods]. If we assume there is no asymmetry, which is true for the clocks we studied, then the decoherence factor is given by $$\label{decoherence factor} \sigma(t)=\frac{\partial}{\partial t}b(t)$$ In order to give an estimate for the fundamental decoherence, Gambini and Pullin used a limit on the accuracy of physical clocks found by Ng and van Dam. This limit was found using a simple clock composed of two mirrors and a photon bouncing between them. Using SQM and the uncertainty in the position of the mirror, they argued that the time it takes for the photon to travel from one mirror to the other can not be measured exactly. This in turn implies that there is a limit in the accuracy of spatial measurement, given by [@limitation] $$\label{Ng limitation} \delta x=\delta x(0)+\delta x(n)= \delta x(0) +\frac{1}{2}\frac{\hbar t}{m\delta x(0)}.$$ This was used to calculate a minimum decoherence (after minimization with respect to $\delta x(0)$). However we believe that the CPI is self-consistent, and the minimum spread in a clock accuracy (and consequently a minimum decoherence) can be found without relying on any SQM result. First we note that the clock used by Ng and van Dam is really measuring a distance, since they assume there is no uncertainty in the time measured for a photon to cover a given distance. Therefore their time $t$ is equivalent to our internal time $n$. Since the clock gives us the spatial separation between the two mirror, it is equivalent to a free particle in the CPI, the particle taking the role of the mirror. There is only one difference between the two pictures. In the free particle clock, the mirror’s momentum is peaked around a classical value $p_{o}$ instead of being classically stationary. But since being stationary correspond to the special case $p_{o}=0$, the precision of the measurement will be unchanged. Then we realize that the particle wavefunction in the position representation depends only on the probability distribution $\mathcal{P}_{n}(x)$, $$\label{position wrt distribution} |\psi(n)\rangle=e^{-\frac{ip_{o}^{2}n}{2m\hbar}}\int\textrm{d}x\sqrt{\mathcal{P}_{n}(x)}|x\rangle.$$ We already calculated $\mathcal{P}_{n}(x)$ for a free particle (\[free particle distribution\]), $$\label{free particle distribution 2} \mathcal{P}_{n}(x)\alpha e^{-\frac{1}{2}\big(\frac{x-\frac{p_{o}}{m}n}{\delta_{x}(n)}\big)^{2}},$$ where $\delta_{x}$ is $$\label{delta x} \delta_{x}=\big|\langle x^{2}\rangle-\langle x\rangle^{2} \big|^{1/2}=\sqrt{\delta^{2}_{x}(0)+\frac{\hbar^{2}n^{2}}{4\delta^{2}_{x}(0)m^{2}}}.$$ We immediately see that our result differs from Ng and van Dam’s. Instead of adding the uncertainty at $n_{\rm initial}$ with the one at $n_{\rm final}$, we sum their squares. This discrepancy is conceptually important. However upon minimization of $\delta_{x}$ with respect to $\delta_{x}(0)$, the two versions agree, $$\label{minimum spread} \delta x_{min}=\sqrt{\frac{\hbar n}{m}}.$$ One can find the decoherence’s strength associated, using $b(t)=(\delta x_{min})^{2}_{n=t}$, $$\label{minimum decoherence} \sigma(t)=\frac{\hbar}{m}.$$ The CPI enables us to recover an important result, in a very fundamental way. We started from a simple Hamiltonian, and established without any assumption and without using any results from SQM that there is indeed a limit to the clock accuracy, which in turn will induce a loss of information in any system studied, through decoherence. This is a very powerful method since we can now imagine doing a similar calculation with a more realistic clock (pendulum or particle decay). All we need is the Hamiltonian for such a clock. One last point worth noting is the lack of decoherence when using the (un-realistic) clock described by the Hamiltonian $H_{c}=p$. Indeed the spread $b(t)$ in the distribution $\mathcal{P}_{n}(t)$ does not depend on $t$. It is not surprising since this clock is the closest to a classical clock. This lack of decoherence is exact for the “ideal clock limit” (\[ideal clock limit \]), but is also observed for a general initial state as long as we consider sufficiently large time. By large we mean much larger than the spread of $\mathcal{P}_{n}(t)$. If we are interested in time scales similar to $\sqrt{b(t)}$, then we cannot approximate our distribution with delta functions: $$\label{delta function approximation} \mathcal{P}_{n}(t)\neq\delta(n-t)+b(t)\delta''(n-t).$$ One has to use the full expression for the probability distribution, $$\label{full probability distribution} \mathcal{P}_{n}(t)=\frac{\sqrt{2}e^{-\frac{(t-n)^{2}}{2\sigma_{t}^{2}}}}{\sqrt{\pi}\sigma_{t}{\rm erfc}(-\frac{t}{\sqrt{2}\sigma_{t}})}.$$ Here the initial state was a Gaussian of width $\sigma_{t}$ in time. To calculate the magnitude of the decoherence, we use a density matrix expressed in its eigenenergy basis: $$\label{rhotilde} \tilde{\rho}_{s}(t)=\int_{0}^{\infty} \textrm{d}n e^{-i\frac{\hat{H}_{s}n}{\hbar}} \rho_{s} e^{i\frac{\hat{H}_{s}n}{\hbar}}\mathcal{P}_{n}(t),$$ and $$\label{densityeigenenergy} \rho_{s}=\int\int\textrm{d}E\textrm{d}E' A_{EE'}|E\rangle\langle E'|.$$ This choice allows us to explicitly perform the integration over the parameter $n$, which greatly simplifies the calculation. We find $$\label{rhotilde2} \tilde{\rho}_{s}(t)=e^{-i\frac{\hat{H}_{s}t}{\hbar}} \tilde{\rho}_{o} e^{i\frac{\hat{H}_{s}t}{\hbar}},$$ with $$\label{rhoo} \tilde{\rho}_{o}(t)=\int\int\textrm{d}E\textrm{d}E' \frac{A_{EE'}|E\rangle\langle E'|}{erfc\big(\frac{-t}{2\sigma_{t}}\big)}erfc\Big(\frac{E-E'}{2\sigma_{E}}i-\frac{t}{2\sigma_{t}}\Big)e^{-\big(\frac{E-E'}{2\sigma_{E}}\big)^{2}}.$$ Consequently, the evolution of the density operator is given by $$\label{evolutionpluscorrection} \frac{\partial\tilde{\rho}_{s}(t)}{\partial t}=\frac{i}{\hbar}[\tilde{\rho}_{s}(t),\widehat{H}_{s}]+e^{-i\frac{\hat{H}_{s}t}{\hbar}} \frac{\partial\tilde{\rho}_{o}(t)}{\partial t} e^{i\frac{\hat{H}_{s}t}{\hbar}},$$ which is the Heisenberg equation of motion plus a correction. We are interested in the magnitude of this correction, and especially in the rate at which it dies off as $t$ becomes large compared to $\sigma_{t}$. We explicitly calculate $\frac{\partial\tilde{\rho}_{o}(t)}{\partial t}$: $$\label{rhooevolution} \frac{\partial\tilde{\rho}_{o}(t)}{\partial t}=\int\int\textrm{d}E\textrm{d}E' A_{EE'}|E\rangle\langle E'|f(t),$$ with $$\label{correctionmagnitude} f(t)=\frac{e^{-\big(\frac{t}{2\sigma_{t}}\big)^{2}-\big(\frac{E-E'}{2\sigma_{E}}\big)^{2}}}{\sqrt{\pi}\sigma_{t}erfc\big(\frac{-t}{2\sigma_{t}}\big)}\left[ \frac{ e^{-\big(\frac{t}{2\sigma_{t}}-\frac{E-E'}{2\sigma_{E}}i\big)^{2}}}{e^{-\big(\frac{t}{2\sigma_{t}}\big)^{2}}} - \frac{erfc\big(\frac{E-E'}{2\sigma_{E}}i-\frac{t}{2\sigma_{t}}\big)}{erfc\big(-\frac{t}{2\sigma_{t}}\big)} \right] .$$ We first notice that the magnitude of the correction to the evolution equation depends on the density matrix element we are looking at. In particular, the correction vanishes for the diagonal terms. In general, the correction factor will stay small for nearly diagonal elements ($\|E-E'\|<\sigma_{E}$), and will be constant throughout the rest of the matrix. Also $f(t)$ vanishes when $\sigma_{t}=0$ for any nonzero $t$. This was expected since we fully recover SQM in that limit. The correction function will vanish as well for values of $t$ much bigger than the spread $\sigma_{t}$ in the clock’s initial state ($t>3\sigma_{t}$). The system will therefore decohere for an amount of time that depends on the spread of the probability distribution $\mathcal{P}_{n}(t)$, which for this Hamiltonian is equivalent to how well localized is the clock’s initial state in the $t$ representation. For times larger than $t \sim 3\sigma_{t}$ the system will undergo a standard evolution. This type of decoherence will be present for any type of clock, but will die off in a similar fashion and can be neglected for large enough time scales. Conclusion ========== In this paper we have shown that a classical clock can be described by an Hamiltonian linear in momentum. Even though the initial state of the clock must be a delta function in the time variable space (Dolby’s “ideal clock limit”) in order for the clock to be fully classical, the dynamical features of SQM (two-time probability or propagator) is recovered no matter what initial state is used, which negates Kuchar’s objection to the CPI. This is also the case for the lack of decoherence at sufficiently large scale. However, even for the simplest physically realistic clock, a free particle, the classical limit can not be recovered, and the use of the “ideal clock limit” will actually move the clock away from its classical regime. We showed that even though this clock cannot behave classically, there exist a semi-classical regime in which the CPI discrepancies with SQM are kept to a minimum. It can be achieved by using an initial state that minimizes the uncertainty relation between the “time variable” and its canonical momentum. This state is said to be coherent, and its distribution in the variable or its associated momentum space is a Gaussian. We then used those coherent states to calculate the minimum decoherence one can achieve for a “free particle clock”, and we found our results to be in agreement with previous estimates [@limitation]. We also found out that one cannot be sure the decoherence one observes is minimal. Indeed the initial state needed to get closest to the classical regime depends on the internal time $n_{\rm final}$ at which the final measurement is taken. Since $n_{\rm final}$ cannot be exactly predicted, one cannot “prepare” the clock in advance in order to achieve the minimal decoherence. Similar calculation could be carried for more realistic clocks, such as a pendulum, or a particle decay, in order to fully understand the implications of the CPI in concrete experiments. This is particularly important in the view of the development of Quantum information theory, and Quantum computing. To do so, one would need to generalize the above calculation to include the use of “rods”, in addition to “clocks”, to measure distances and therefore be able to study systems traditionally described by Quantum Fields. The use of measuring “rods” in Quantum Field Theory has been discussed in Ref. [@rods]. The effect of decoherence in dramatically curved space would also be essential in the understanding of black holes and of the early stages of our Universe. | Mid | [
0.590123456790123,
29.875,
20.75
] |
EDM transmitter promotion (March 2017) Hi all. Just saw the below posted in an xlights forum so thought I'd spread the good word. I bought one last year and it works a treat, and I know other ACL members swear by these also. Get what you pay for, but at least this is a little sweetener! Gang, EDM is offering free shipping on their transmitters for a short period of time to members of Christmas light forums. Just wanted to pass it along, in case anyone was contemplating buying a transmitter. *********************************** Hi, We have managed to fight off increasing our prices for another year, despite the rising material and shipping costs. Our products had no price increase for at least five years now. Unfortunately, we will be forced to adjust our prices later this year to recover some costs. In light of this, we decided to open our yearly special early to enable people to get one of these top transmitter kits before any price increase. If you are a member of the Christmas Light group forums, we will be having a special with free standard shipping (insurance still optional), and a free rubber duck antenna included for the next 30-60 days or until current stock is depleted. Get your order in early! Contact us on [email protected] with your details about the group/s you belong to, as well as your order details. You are welcome to pass this offer on to your fellow group members who are not a member on this forum. I purchased one of these from another member who decided not to continue with lights. It was under a year old and started working intermittently. I had no receipt for the unit, other than my payment to the seller. EDM were happy to cover the unit under warranty, unfortunately the Australian repair agent has retired and I would have to pay postage there and back to the USA for repair. EDM contacted me again and asked the symptoms of the unit - from South Africa, they diagnosed a lazy 12MHZ Quartz crystal. They were quite happy for me to replace the unit myself, it would not void my warranty and sent me photographs and instructions on how to do it. 26 cents for the crystal a spot of solder about 5 mins my time and hey presto it was up and working. I would highly recommend these people and their units. The sound quality is superb and the free freight and the addition of the rubber duck antenna is also a great bonus @Noel Richards I bought the EDM-LCD-CS-EP model years ago and it still works great. 'LCD-CS' is the cheaper of the two transmitters, and '-EP' means it comes with an enclosure (metal) & DC power adapter (US pins though - you'll need a US adapter or find/use an equivalent regulated DC adapter with Australian pins). I'm not aware of any current deals, but you can read about their transmitters at http://edmdesign.com/features-1.html and order (without any promotion applied) at http://edmdesign.com/orders-1.html N.b. the transmitter is sold as a 'kit' but in this case it is a virtually completed product minus a single component, the DC barrel connection, which is provided loose and needs to be soldered on to complete the kit. Noel I bought the same as Ryan, cheapest version. Only need to solder do jack onto board which is simple to do, even I could do it! Great working unit, worth the investment. Email them to see if they are able to include antenna for free as per original offer. No harm in asking right? Noel I bought the same as Ryan, cheapest version. Only need to solder do jack onto board which is simple to do, even I could do it! Great working unit, worth the investment. Email them to see if they are able to include antenna for free as per original offer. No harm in asking right? | Low | [
0.502222222222222,
28.25,
28
] |
James's Journal No, she didn't call. Oh well, I can live without her, now that I have my Internet Connection in my room. Yes, that's right - this very post is coming from there, where I'm currently downloading Sun's JDK1.1 so that I can do my coursework from here, rather than going down to the overly full computing labs in COGS. Now all I need to do is FTP my files off the COGS server (slightly harder), and then I'm all set. Until then, it's chatting over ICQ and MSN (low bandwidth, thankfully), and generally pissing about (like this LJ post, for example). Since you last saw me, I've done nothing. Absolutely sweet Fanny Adams. I went to an AI lecture, but it was almost empty due to the number of idiots who'd left their programming to the last day. FOOLS! They'll have a hard time, I can assure them... The lecture finished 10 minutes early because she decided that some of the material would fit in better with the subject of the next one, but really it was just because there's no point teaching if there's nobody there to be taught... I'm popular again, suddenly. It's all "You have the Internet in your room??", and rather than explain that I do not in fact have several terrabytes of data stored in my room, I merely answer "Yes". To which they respond "Could you get me....". And so it begins - everybody wants MP3s and such. Yay, they now depend on me for their entertainment - I feel good. And it means people can send e-mails without having to walk through the rain to crowded computer labs. Ah, I rock... More soon :o) Becca Never mind angel you can do better, but sure she was lovely. Lets be honest now you can have the love of your life in your room all the time, oh sweet sweet internet connection. Look its even made you more popular! Bedtime Story there lived a nice but very poor student. Now this student lived way down in the south of the country and while he had plenty of brain he had very little money. It was whispered in the magic wood that this lack of money was due to a curse. Apparantly a wicked warlock called Al Cohol did steal this money while the student was still young and naive. Luckily the student did discover this curse and did try to remove it. Try as he might he could not remove it completely but he did seem to manage it better. Indeed, the student found that he was able to buy some food with the money that he manage to hide from Al Cohol. Although he did have to go to the village of Over Draft to seek help. He started daydreaming of a time when he could once again attain money, for he did wish to use some on an "axe" so that he could improve the dexterity of his fingers. However, more evil was afoot. It appears that a telling bone did appear in his hovel, however it was without magic and was silent. But one day this telling bone did come to life and it allowed the student to commune with many using a net. The student saw that this was good, no more would he need to travel to the land of COGS, to sneak into the lab so that he might update his journal. Indeed many of the tribe saw that this telling bone, when attached to the student's magic box, could bring many wonders into their lives. The student himself saw that he could use He Males to make more communication with friends. He even found an Icy queue which he thought was good. Alas while all these good things did happen the telling bone was also stealing his money. By what magic this was achieved is a mystery, but it transpired that during the next cycle of the moon a Man date did remove much money from the student. Oh woe was the student when he saw how much had been taken. For verily, the student once had a slave called Beattie who did his netting for no cost, which was certainly cheaper than using the Demon. But now this new telling bone uses a high tariff to rob the student, who is really very nice and lets others from his tribe to load down many wonders. And they all lived happily ever after, except the poor student who sank deeper into the mire until even the people of Over Draft could help him. | Mid | [
0.608076009501187,
32,
20.625
] |
#307 1920 11 Av Sw, Calgary $325,000 Perfectly located in the exciting “1920” development in Sunalta, this stylish condo featuring a sunny south exposure is ready for you to call home! The very functional layout features two bedrooms, an upgraded kitchen with stainless appliances and marble counters with breakfast bar, and adjoining living room with access to the sunny south-facing balcony. You’ll love conveniences such as in-suite laundry, underground parking, and a storage locker! Fabulous location with quick access to the Sunalta LRT station, the downtown core, and the river pathways. It’s nice to be just outside the hustle and bustle of downtown which provides such benefits as ample street parking for your guests, and yet still have everything you want at your fingertips! Very well-managed building with a low condo fee. You’ll love living here! CALL TODAY! Disclaimer: Information herein deemed reliable but not guaranteed by CREB®. Listing information last updated on May 25th, 2019 at 8:00am CDT. Sign up for email updates The trademarks REALTOR®, REALTORS® and the REALTOR® logo are controlled by The Canadian Real Estate Association (CREA) and identify real estate professionals who are members of CREA. Used under license. The trademarks MLS®, Multiple Listing Service® and the associated logos are owned by The Canadian Real Estate Association (CREA) and identify the quality of services provided by real estate professionals who are members of CREA. Used under license. | High | [
0.68880688806888,
35,
15.8125
] |
There are a number of plants you could use in a shady area that would help cover the soil such as Hedera helix, Vinca minor, Hostas, Ferns, or Lysimachia nummularia "Aurea". While you are waiting for the plants to become established and form a covering layer, you should mulch in between them with an organic mulch. YOu will also need to loosen the soil and work in ample organic matter such as compost prior to planting. Before planting you might also want to see if the drainage patterns up hill from there can be adjusted to avoid causing such rapid erosion-causing runoff along that area. If the water moves too quickly at high volume, it may wash away your plants. Also, if it is bare along the fence and only along the fence, you might want to investigate if someone used a long term weed killer in that area at one time. I mention this because it is sometimes done along fences and could prevent your new plants from growing. Your local professionally trained nursery staff should also be able to make suggestions based on a more detailed understanding of the growing conditions and your overall design goal. Good luck with your project! | Mid | [
0.6359649122807011,
36.25,
20.75
] |
--- author: - | S.V. Demidov, D.G. Levkov\ Institute for Nuclear Research of the Russian Academy of Sciences,\ 60th October Anniversary Prospect 7a, 117312, Moscow, Russia\ E-mail: , title: Soliton pair creation in classical wave scattering --- Introduction {#sec:intro} ============ Wonders related to classical dynamics of solitons in non–integrable models surprised theorists for decades [@Makhankov; @soliton_chaos; @Belova; @Weinberg]. Intriguing long–living bound states of solitons and antisolitons — oscillons — are found in a variety of models . Another interesting example is kink–antikink annihilation in $(1+1)$–dimensional $\phi^4$ theory which displays chaotic behavior [@resonance_JETP; @resonance0; @fractal]. Most of these phenomena are explained qualitatively by reducing the infinite number of degrees of freedom in field theory to a few collective coordinates [@Manton1; @Manton2; @Manton_kinks]. Then, mechanical motion along the collective coordinates shows whether soliton evolution is regular or chaotic. Recently [@kinks_particles; @Shnir] a question of kink–antikink pair production in classical wavepacket scattering was addressed[^1], cf. Refs. . The interest to this question stems from the fact that, within the semiclassical approach, wave packets describe multiparticle states in quantum theory. Studying kink–antikink creation from wave packets one learns a lot about the quantum counterpart process: production of nonperturbative kink states in multiparticle collisions. The prospect of Refs. [@kinks_particles; @Shnir] was to describe a class of multiparticle states leading to classical formation of kinks. Due to essential nonlinearity of classical field equations the process of kink–antikink creation cannot be described analytically[^2] and one has to rely on numerical methods. A difficulty, however, is related to the space of initial Cauchy data which is infinite–dimensional in field theory. Because of this difficulty the analysis of Refs. [@kinks_particles; @Shnir] was limited to a few–parametric families of initial data. In this paper we explore the entire space of classical solutions describing soliton–antisoliton pair creation from wave packets. To this end we sample stochastically over the sets of Cauchy data and obtain large ensemble of solutions, cf. Refs. [@Rebbi1; @Rebbi2]. We select solutions evolving between free wave packets and soliton–antisoliton pair and compute the energies $E$ and particle numbers $N$ of the respective initial states. In this way we obtain the region in $(E,N)$ plane corresponding to classical creation of solitons. We are particularly interested in solutions from this region with the smallest $N$. The model we consider is somewhat different from the standard $\phi^4$ theory used in Refs. [@kinks_particles; @Shnir]. We do study evolution of a scalar field in $(1+1)$ dimensions but choose nonstandard potential $V(\phi)$ shown in Fig. \[fig:V\]a, solid line. The reason for the unusual choice is chaos in kink–antikink scattering in $\phi^4$ theory: it would be a venture to try applying new method in a potentially nontrivial chaotic model. We will comment on generalizations of our technique in the Discussion section. With the above set of classical solutions we test the method of Ref. [@DL] where classically forbidden production of kinklike solitons in the same model was studied. Namely, we compare the boundary of the “classically allowed” region in $(E,N)$ plane with the same boundary obtained in Ref. [@DL] from the classically forbidden side. Coincidence of the two results justifies both calculations. The paper is organized as follows. We introduce the model in Sec. \[sec:model\] and explain the stochastic sampling technique in Sec. \[sec:method\]. In Sec. \[sec:results\] we present numerical results which confirm, in particular, results of Ref. [@DL]. We conclude and generalize in Sec. \[sec:discussion\]. The model {#sec:model} ========= The action of the model is $$\label{eq:11} S = \frac{1}{g^2} \int dt \, dx\, \left[(\partial_\mu \phi)^2/2 - V(\phi)\right]\;,$$ where $\phi(t,x)$ is the scalar field; semiclassical parameter $g$ does not enter the classical field equation $$\label{eq:1} \left(\partial_t^2 - \partial_x^2\right) \phi = - \partial V(\phi)/\partial \phi\;.$$ We assume that the potential $V(\phi)$ has a pair of degenerate minima $\phi_-$ and $\phi_+$. Then there exists a static solution of Eq. (\[eq:1\]) — topological kinklike soliton $\phi_S(x)$ shown in Fig. \[fig:V\]b. Antisoliton solution $\phi_A(x)$ is obtained from $\phi_S(x)$ by spatial reflection, $\phi_A(x) = \phi_S(-x)$. We consider classical evolutions of $\phi(t,x)$ between free wave packets in the vacuum $\phi_-$ and configurations containing soliton–antisoliton pair. Initial and final states of the process are shown schematically in Fig. \[fig:process\]. We restrict attention to $P$–symmetric solutions, $\phi(t,x)=\phi(t,-x)$. This is natural since soliton and antisoliton are symmetric with respect to each other. In what follows we solve Eq. (\[eq:1\]) numerically. To this end we introduce a uniform spatial lattice $\{x_{i}\}$, $i=- N_x,\dots N_x$ of extent $-L_x \le x_{i} \le L_x$. At lattice edges $x=\pm L_x$ we impose energy–conserving Neumann boundary conditions $\partial_x \phi =0$. We also introduce a uniform time step $\Delta t$. Typically, $L_x = 15$, $N_x = 400$, $\Delta t = 0.03$. Discretization of Eq. (\[eq:1\]) is standard second–order[^3]. We take advantage of the reflection symmetry $x\to -x$ and use only one half of the spatial lattice. We consider the potential $$\label{eq:2} V(\phi) = \frac{1}{2}(\phi+1)^2\left[1 - v\, W\left(\frac{\phi-1}{a}\right)\right]\;,$$ where dimensionless units are introduced; $W(x) = \mathrm{e}^{-x^2}( x + x^3 + x^5)$, $a=0.4$. The value of $v$ is chosen to equate the energy densities of the vacua, $v\approx 0.75$. The potential (\[eq:2\]) is depicted in Fig. \[fig:V\]a, solid line. We denote the masses of linear excitations in the vacua $\phi_-$ and $\phi_+$ by $m_-$ and $m_+$, respectively. Our choice of the potential is motivated in two ways. First, we have already mentioned that kink dynamics in the standard $\phi^4$ theory is chaotic [@resonance_JETP; @resonance0; @fractal]. The source of chaos hides in the spectrum of linear perturbations around the $\phi^4$ kink. The latter contains [*two*]{} localized modes: zero mode due to spatial translations and first excited mode representing kink periodic pulsations. Localized modes accumulate energy during kink evolution which is thus described by two collective coordinates. Mechanical model for these coordinates is chaotic [@fractal], just like the majority of two–dimensional mechanical models. We get rid of the chaos by choosing the potential (\[eq:2\]) where the spectrum of linear perturbations around the soliton contains only one localized mode. Due to this property soliton motion is described by one–dimensional mechanical system which cannot be chaotic. Let us compute the spectrum of the soliton in the model (\[eq:2\]). Consider small perturbations $\phi - \phi_S(x) = \delta\phi (x)\cdot\mathrm{e}^{\pm i\omega t}$ in the background of the soliton. Equation (\[eq:1\]) implies, $$\label{eq:3} \left[- \partial_x^2 + U(x)\right] \delta \phi(x) = \omega^2 \delta \phi(x)\;,$$ where $U(x) = V''(\phi_S(x))$ and nonlinear terms in $\delta\phi$ are neglected. Discretization turns the differential operator in Eq. (\[eq:3\]) into a symmetric $(2N_x+1)\times (2N_x+1)$ matrix; we compute the eigenvalues $\{\omega_k^{(S)}\}$ of this matrix by the standard method of singular value decomposition. Several lower eigenvalues are shown in the inset in Fig. \[fig:V\]b. One sees no localized modes between zero mode and continuum $\omega^{(S)} > m_-$. Another, unrelated to the soliton spectrum, mechanism of chaos was proposed recently in Ref. [@Dorey:2011yw]. This mechanism works under condition $m_+ < m_-$ which is not met in our model. The second reason for the choice (\[eq:2\]) is linearization of classical solutions at large negative times. Interaction terms should be negligible in the initial part of the classical evolution; otherwise initial wave packets cannot be associated with the perturbative Fock states. However, $(1+1)$–dimensional solutions linearize slowly due to wave dispersion. Brute force linearization would require large lattice which is a challenge for the numerical method. Our model is specifically designed to overcome this difficulty. At $a\ll 1$ the potential (\[eq:2\]) is quadratic everywhere except for the small region $\phi\approx 1$. Wave packets move freely in this potential if their tops are away from $\phi=1$, see Fig. \[fig:process\]. After collision the wave packets add up coherently and hit the interaction region $\phi\approx 1$. Below we find that all classical solutions of interest behave in the described way. We use $a=0.4$ which is small enough to provide, for the chosen lattice size, linearization at the level of 1%. It is worth noting that the problem with slow linearization is absent in multidimensional theories because amplitudes of spherical waves in $D>2$ decay at power laws with distance. In general $(1+1)$–dimensional model linearization can be achieved artificially by switching off the interaction terms of the potential at $|x| > L_{int}$. This corresponds to a physical setup where interaction takes place in a sample of length $2 L_{int}$. We compute the energy $E$ and particle number $N$ of the initial wave packets in the following way. Since the wave packets move freely in the vacuum $\phi_{-}$, $$\label{eq:4} \phi(t,x) \to \phi_- + \sqrt{\frac{2}{\pi}}\int_0^{\infty} \frac{dk}{\sqrt{2\omega_k}} \,\cos(kx)\left[a_k \mathrm{e}^{-i\omega_k t} + a_k^* \mathrm{e}^{i\omega_k t}\right] \qquad{as} \;\; t\to -\infty\;,$$ where we took into account the reflection symmetry and introduced the amplitudes $a_k$; ${\omega_k^2 = k^2 + m^2_-}$. Given the representation (\[eq:4\]), one calculates $E$ and $N$ by the standard formulas, $$\label{eq:5} E = \frac2{g^2}\int_0^\infty dk \,\omega_k |a_k|^2\;,\qquad \qquad N = \frac2{g^2}\int_0^\infty dk \,|a_k|^2\;.$$ Expression for $N$ can be thought of as a sum of mode occupation numbers $n_k=|a_k|^2$, where the latter are defined as ratios of mode energies $\omega_k |a_k|^2$ and energy quanta[^4] $\omega_k$. Note that the energy $E$ is conserved; it can be calculated at arbitrary moment of classical evolution as $$\label{eq:8} E=\frac1{2g^2}\int dx \left[(\partial_t \phi)^2 + (\partial_x\phi)^2 + 2V(\phi)\right]\;.$$ In the case of free evolution this expression coincides with the first of Eqs. (\[eq:5\]). Needless to say that Eqs. (\[eq:5\]) can be used only in the linear regime; this is the practical reason for continuing solutions back in time until Eq. (\[eq:4\]) holds. Below we check the linearity of classical solutions by comparing their exact and linear energies, Eqs. (\[eq:8\]) and (\[eq:5\]). We characterize classical solutions by points in $(E,N)$ plane. Expressions (\[eq:4\]), (\[eq:5\]) are naturally generalized to the lattice system.[^5] One solves numerically the eigenvalue problem (\[eq:3\]), where $U(x) = m_-^2$, and finds the spectrum $\{\delta\phi_k(x),\,\omega_k\}$ of linear excitations above the vacuum $\phi_-$. In this way one obtains lattice analogs of the standing waves $\cos(kx)$ and frequencies $\omega_k=\sqrt{k^2+m_-^2}$. Arbitrary linear evolution in the vacuum $\phi_-$ has the form $$\label{eq:6} \phi(t,x) = \phi_- + \sum_k \delta\phi_k(x) \left[a_k \mathrm{e}^{-i\omega_k t} + a_k^* \mathrm{e}^{i\omega_k t }\right]\;,$$ cf. Eq. (\[eq:4\]), where we used the eigenmode basis with normalization $$\label{eq:6_5} \sum_i \Delta x \,\delta\phi_k(x_i) \delta\phi_{k'}(x_i) = \delta_{k,k'} /\omega_k\;,\;\; \Delta x = x_{i+1}-x_{i}.$$ One extracts the amplitudes $a_k$ from the classical solution $\phi(t,x)$ at large negative $t$ by decomposing $\phi(t,x)$, $\partial_t\phi(t,x)$ in the basis of $\delta\phi_k(x)$ and comparing the coefficients of decomposition with Eq. (\[eq:6\]). Summing up the energies and occupation numbers of different modes, one obtains $$\label{eq:7} E = \frac{2}{g^2}\sum_k \omega_k |a_k|^2 \;, \qquad \qquad N = \frac{2}{g^2}\sum_k |a_k|^2\;,$$ where Eq. (\[eq:6\_5\]) is taken into account. The method {#sec:method} ========== Modification of the potential {#sec:modification} ----------------------------- It is difficult to select solutions containing soliton–antisoliton pairs in the infinite future. On the one hand, numerical methods do not allow us to extend $\phi(t,x)$ all the way to $t\to +\infty$. On the other hand, soliton and antisoliton attract; taken at rest, they accelerate towards each other and annihilate classically into a collection of waves. Thus, we never can be sure that $\phi(t,x)$ contains solitons at $t\to +\infty$, even if lumps similar to soliton–antisoliton pairs are present at finite times. We solve this difficulty by changing the value of $v$ in Eq. (\[eq:2\]) and thus adding small negative energy density $(-\delta \rho)$ to the vacuum $\phi_+$, see Fig. \[fig:V\]a, dashed line. This turns soliton–antisoliton pair into a bubble of true vacuum $\phi_+$ inside the false vacuum $\phi_-$ [@false; @false1; @false2]. Large bubbles expand at $\delta\rho>0$ since attraction between the solitons in this case is surmounted by the constant pressure $\delta\rho$ inside the bubble. Thus, at $\delta\rho > 0$ we simply look whether solution $\phi(t,x)$ contains large bubbles at finite $t$. In the end of calculation, however, we have to consider the limit $\delta\rho \to 0$. We remark that solutions containing soliton–antisoliton pairs at $t\to +\infty$ can be identified by other methods. Our way, besides being particularly simple, has the following advantage: at $\delta\rho >0$ there exists a critical bubble [@false] — unstable static solution $\phi_{cb}(x)$ lying on top of the potential barrier between the true and false vacua. Given the critical bubble, one easily constructs classical evolutions between the vacua. Indeed, in the critical bubble attraction between the soliton and antisoliton is equal to repulsion due to $\delta\rho$. Being perturbed, it either starts expanding or collapses forming a collection of waves in the vacuum $\phi_-$. Thus, adding small perturbation to the critical bubble and solving classical equations of motion forward and backward in time, one obtains the classical solutions of interest. Critical bubble at $\delta \rho=0.4$ is depicted in Fig. \[fig:sphaleron\]a. Let us obtain a particular solution describing creation of expanding bubble from wave packets at $\delta\rho>0$. We solve numerically Eq. (\[eq:3\]) with $U(x) = V''(\phi_{cb}(x))$ and find the spectrum of linear perturbations $\{\delta \phi_k^{(cb)}(x)\,, \omega_k^{(cb)}\}$ around the critical bubble. This spectrum is shown in the inset in Fig. \[fig:sphaleron\]a; it contains precisely one negative mode $\delta \phi_{neg}(x)$, $\omega_{neg}^2 < 0$ due to changes in the bubble size. The latter mode describes decay of the critical bubble, $$\label{eq:10} \phi(t,x) \approx \phi_{cb}(x) + B_{neg}\, \delta\phi_{neg}(x) \,\mathrm{sh}\left(|\omega_{neg}|(t-t_0)\right)\;,$$ where we fix $t_0=0$, $B_{neg}=10^{-2}$ in what follows. Using the configuration (\[eq:10\]) and its time derivative as Cauchy data at $t=0$, one solves numerically Eq. (\[eq:1\]) forward and backward in time and obtains $\phi(t,x)$, see[^6] Fig. \[fig:sphaleron\]b. The latter solution interpolates between free wave packets above the vacuum $\phi_-$ and expanding bubble. We compute the values of $(g^2E,g^2N)\approx (6.1,4.4)$ by Eqs. (\[eq:7\]) and mark the respective point “cb” in Fig. \[fig:sampling\]a. In Fig. \[fig:sphaleron\]b we check the linearity of evolution at $t\to -\infty$ by comparing the linear and exact energies of $\phi(t,x)$, Eqs. (\[eq:7\]) and (\[eq:8\]). As expected, the linear energy coincides with the exact one at large negative times and departs from it when wave packets collide. In what follows we estimate the precision of linearization as fractions of percent. Stochastic sampling technique {#sec:stoch-sampl-techn} ----------------------------- In this Section we sample over classical solutions with bubbles of true vacuum in the final state, see Refs. [@Rebbi1; @Rebbi2]. It is hard to pick up the initial Cauchy data for such solutions: most of the initial wave packets scatter trivially and do not produce expanding bubbles. Instead, we consider the data $\{\phi(0,x),\,\partial_t\phi(0,x)\}$ at $t=0$. We decompose these data in the basis of perturbations around the critical bubble, $$\begin{aligned} \label{eq:9} &\phi(0,x) = \phi_{cb}(x) + A_{neg}\,\delta \phi_{neg}(x) + \sum_k A_k\, \delta\phi^{(cb)}_k(x)\;,\\ \notag &\partial_t\phi(0,x) = B_{neg}\,|\omega_{neg}|\,\delta \phi_{neg}(x) + \sum_k B_k\, \omega_k^{(cb)}\, \delta\phi^{(cb)}_k(x)\;,\end{aligned}$$ where the negative mode is treated separately. Note that any functions $\phi(0,x)$, $\partial_t\phi(0,x)$ can be written in the form (\[eq:9\]). For each set of Cauchy data $\{A_{neg},\, A_k, $ $B_{neg},\, B_k\}$ we solve numerically Eq. (\[eq:1\]) and obtain classical solution $\phi(t,x)$. Due to instability of the critical bubble there is a good chance to obtain transition between the vacua $\phi_-$ and $\phi_+$. Next, we study the region in $(E,N)$ plane corresponding to classical formation of expanding bubbles from colliding wave packets. We are particularly interested in the lower boundary $N = N_{min}(E)$ of this region. Let us organize the artificial ensemble of solutions describing transitions between the vacua. The probability of finding each solution in our ensemble is proportional to $$\label{eq:12} p \propto \mathrm{e}^{-E\tau - N\vartheta}\;,$$ where $E$ and $N$ are the energy and initial particle number of the solution; $\tau$ and $\vartheta$ are fixed numbers. At large positive $\vartheta$ solutions with the smallest $N$ dominate in the ensemble and we obtain the boundary $N_{min}(E)$ with good precision. Value of $\tau$ controls the region of energies to be covered. We use Metropolis Monte Carlo algorithm to construct the ensemble (\[eq:12\]). In our approach solutions are characterized by the coefficients in Eq. (\[eq:9\]); condition $A_{neg}=0$ is used to fix the time translation invariance of Eq. (\[eq:1\]). The algorithm starts from the solution (\[eq:10\]) describing decay of the critical bubble; it has $B_{neg} = 10^{-2}$, $A_k = B_k = 0$. Denote the energy and particle number of this solution by $(E_0,N_0)$. We pick up a random coefficient from the set $\{A_k,\, B_k,\, B_{neg}\}$ and change it by a small step. The latter step is a Gauss–distributed random number with zero average and small dispersion[^7] $\sigma$. Substituting the modified set of coefficients into Eqs. (\[eq:9\]), we find $\phi(0,x)$ and $\partial_t\phi(0,x)$. Then, solving numerically the classical field equation, we obtain the entire solution $\phi(t,x)$. We compute the values of $(E, N)$ by Eqs. (\[eq:7\]). Solution is rejected if it does not interpolate between the vacua $\phi_-$ and $\phi_+$; otherwise we accept it with the probability $$\label{eq:13} p_{accept} = \min \left( 1,\; \mathrm{e}^{-\tau\Delta E - \vartheta \Delta N}\right)\;,$$ where $(\Delta E,\Delta N)$ are differences between the new values of $(E,N)$ and the values $(E_0,N_0)$ for the solution we started with. If the new solution is accepted, we write it down and use its parameters $\{A_k,\, B_k,\, B_{neg}\}$, $(E,N)\to (E_0,N_0)$ for the next cycle of iterations. After many cycles we obtain the ensemble (\[eq:12\]) of accepted solutions. A typical run of the Monte Carlo algorithm is shown in Fig. \[fig:sampling\]a where the accepted solutions are marked by dots in $(E,N)$ plane. The algorithm starts in the vicinity of the critical bubble, then moves to smaller $N$ and finally arrives to the boundary $N_{min}(E)$ where the majority of solutions is found. Numerical results {#sec:results} ================= We perform Monte Carlo runs at different values of $\tau$ and $\vartheta$ until the entire curve $N=N_{\min}(E)$ is covered with solutions. In total we obtained $2\cdot 10^7$ solutions, where the value of $\tau$ was ranging between $0$ and $10^4$; $\vartheta = 10^3,\, 10^4,\, 5\cdot10^4$. The boundary $N=N_{min}(E)$ is constructed by breaking the energy range into small intervals $\Delta E = 0.01$ and choosing solution with minimal $N$ inside each interval. This is the result we are looking for: $N_{min}(E)$ gives the minimum number of particles needed for classically allowed production of bubbles. It is plotted in Fig. \[fig:sampling\]b, solid line. As expected, $N_{min}(E)$ starts from $(E,N)=(E_{cb},N_{cb})$ and decreases monotonously with energy. At high energies $N_{min}(E)$ is approximately constant. Note that the particle number is parametrically large in the “classically allowed” region, ${N_{min} \sim 1/g^2}$. This means, in particular, that the probability of producing the bubble from few–particle initial states is exponentially suppressed. Given the boundary $N_{min}(E)$, we check results of Ref. [@DL] where classically forbidden transitions between $N$–particle states and states containing the bubble were considered. The probability of these processes is exponentially suppressed in the semiclassical parameter, $$\label{eq:14} {\cal P}_N(E) \sim \mathrm{e}^{-F_N(E)/g^2}\;,$$ where $F_N(E)$ is suppression exponent. One expects that this exponent vanishes in the “classical” region ${N>N_{min}(E)}$. We extract the boundary of the set $F_N(E)=0$ from the results of Ref. [@DL] and plot this boundary in Fig. \[fig:sampling\]b (dashed line). It coincides with $N_{min}(E)$ within 0.5% accuracy; the agreement justifies both calculations. Let us look at solutions with almost–minimal initial particle number, $N\approx N_{min}(E)$. Two such solutions are plotted[^8] in Fig. \[fig:solutions\], their parameters $(E,N)$ are shown by circles in Fig. \[fig:sampling\]b. At $t\to -\infty$ the solutions describe free wave packets moving in the vacuum $\phi_-$. After collision the wave packets emit waves and form the bubble. The most surprising part of the evolutions in Fig. \[fig:solutions\] is emission of waves during the bubble formation. One assumes that the role of these waves is simply to carry away the energy excess which is not required for the creation of bubble. Indeed, solutions at different $E$ look alike, cf. Figs. \[fig:solutions\]a and \[fig:solutions\]b; besides, $N_{min}(E)$ is independent of energy at high values of the latter. In Refs. [@Shifman; @Levkov; @Levkov1] it was assumed that there exists certain limiting energy $E_{l}$ which is best for bubble creation. Then, classical solutions with minimal particle number at $E > E_{l}$ are sums of two parts: non–trivial soft part describing bubble production at $E = E_{l}$ and trivial hard part — waves propagating adiabatically in the soft background. Hard waves carry away the energy excess $E - E_l$ without changing the initial particle number; this is achieved at small wave amplitudes and high frequencies, see Eqs. (\[eq:7\]). Numerical results do not permit us to judge whether the limiting energy exists. We can, however, confirm the conjectured structure of high–energy solutions. Consider the energies $\epsilon_k=\omega_k |a_k|^2/g^2$ of the modes at $t\to -\infty$, where the amplitudes $a_k$ and frequencies $\omega_k$ are defined in Eq. (\[eq:6\]). In Fig. \[fig:fk\] we plot these energies for the two solutions depicted in Fig. \[fig:solutions\]. Soft parts of the graphs are almost coincident. Long tail of excited high–frequency modes is seen, however, in the graph representing the high–energy solution \[fig:solutions\]b. The tail carries substantial energy while the corresponding modes propagate adiabatically and do not participate in nonlinear dynamics. This is precisely the behavior conjectured in Refs. [@Shifman; @Levkov; @Levkov1]. It is reasonable to assume that solutions at arbitrarily high energies have the same “hard+soft” structure. Then, $N_{min}(E)$ stays constant as $E\to +\infty$ and classical formation of bubbles is not possible at any energies unless the initial particle number is larger than $N_{min}(E=+\infty)$. Finally, we consider the limit ${\delta\rho \to 0}$. A typical solution at small $\delta \rho$ is depicted in Fig. \[fig:drho\]a. It describes creation of soliton and antisoliton which move away from each other at a constant speed. The boundaries $N_{min}(E)$ at $\delta\rho=0.04,\; 0.02$ are plotted in Fig. \[fig:drho\]b (dashed lines). They are almost indistinguishable; thus, the limit ${\delta\rho \to 0}$ exists. Extrapolating $N_{min}(E)$ to $\delta \rho = 0$ with linear functions, we obtain the region in $(E,N)$ plane for classically allowed production of soliton–antisoliton pairs (above the solid line in Fig. \[fig:drho\]b). All initial wave packets leading to classical creation of soliton pairs have the values of $(E,N)$ within this region. The region in Fig. \[fig:drho\]b is qualitatively similar to the regions at $\delta\rho>0$; in particular, $N_{min}(E)$ is constant at high energies. Discussion {#sec:discussion} ========== In this paper we studied multiparticle states leading to classically allowed production of soliton–antisoliton pairs in $(1+1)$–dimensional scalar field model. We characterized these states with two parameters — energy $E$ and particle number $N$; we have found the corresponding “classically allowed” region in $(E,N)$ plane. There were two main ingredients in our technique. First, we added constant pressure $\delta\rho$ pulling soliton and antisoliton apart. This modification led to appearance of the critical [r]{}[2cm]{} (25,65) (0,5)[{width="15mm"}]{} (2,16.5)[$\bar{M}$]{} (2,49)[$M$]{} (1,34)[$\boldsymbol{F}_{att}$]{} (-2,1.2)[$-g_m\boldsymbol{H}$]{} (0.3,65.5)[$g_m\boldsymbol{H}$]{} (16,61)[$\boldsymbol{H}$]{} bubble — unstable static solution lying on the boundary between the perturbative states and soliton–antisoliton pair. Second, we applied stochastic sampling over Cauchy data in the background of the critical bubble. We thus obtained large ensemble of classical solutions describing formation of soliton–antisoliton pairs. Calculating the values of $(E,N)$ for each solution we found the required “classically allowed” region. Our method is naturally generalized to higher–dimensional models. For example, consider formation of t’Hooft–Polyakov monopole–antimonopole pairs in four–dimensional gauge theories [@Hooft; @Polyakov]. Constant force dragging monopole and antimonopole apart is provided [@Shnir_Kiselev] by external magnetic field $\mathbf{H}$, see the figure. At $\mathbf{H}\ne 0$ there exists a direct analog of the critical bubble [@Manton]: unstable static solution where the attractive force $\boldsymbol{F}_{att}$ between the monopole and antimonopole is compensated by the external forces $\pm g_m\boldsymbol{H}$ ($g_m$ is a magnetic charge of the monopole). One performs Monte Carlo simulation in the background of this static solution and obtains many classical evolutions between free wave packets and monopole–antimonopole pairs. A particularly interesting application of our technique might be found in the study of kink–antikink production in $(1+1)$–dimensional $\phi^4$ theory. In this model the boundary $N_{min}(E)$ is lowered [@kinks_particles; @Shnir] due to chaos. We do not expect any difficulties related to nontrivial dynamics of solitons. However, classification of initial data for kink–antikink formation may require modification of our method. #### Acknowledgments. {#acknowledgments. .unnumbered} We are indebted to V.Y. Petrov and I.I. Tkachev for helpful discussions. This work was supported in part by grants NS-5525.2010.2, MK-7748.2010.2 (D.L.), RFBR-11-02-01528-a (S.D.), the Fellowship of the “Dynasty” Foundation (awarded by the Scientific board of ICPFM) (D.L.) and Russian state contracts 02.740.11.0244, P520 (S.D.), P2598 (S.D.). Numerical calculations have been performed on Computational cluster of the Theoretical division of INR RAS. [99]{} V. G. Makhankov, [*Dynamics of classical solitons (in non-integrable systems)*]{}, [*Phys. Rep.*]{} [**35**]{} (1978) 1. F. K. Abdullaev, [*Dynamical chaos of solitons and nonlinear periodic waves*]{}, [*Phys. Rep.*]{} [**179**]{} (1989) 1. T. I. Belova and A. E. Kudryavtsev, [*Solitons and their interactions in classical field theory*]{}, [*Phys. Usp.*]{} [**40**]{} (1997) 359. E. J. Weinberg and P. Yi, [*Magnetic monopole dynamics, supersymmetry, and duality*]{}, [*Phys. Rep.*]{} [**438**]{} (2007) 65, \[[[hep-th/0609055]{}](http://xxx.lanl.gov/abs/hep-th/0609055)\]. I. L. Bogolyubskii and V. G. Makhankov, [*Lifetime of pulsating solitons in certain classical models*]{}, [*JETP Lett.*]{} [**24**]{} (1976) 12. E. Farhi, N. Graham, V. Khemani, R. Markov, and R. Rosales, [*An oscillon in the $\protect{SU}(2)$ gauged iggs model*]{}, [*Phys. Rev. D*]{} [ **72**]{} (2005) 101701, \[[[ hep-th/0505273]{}](http://xxx.lanl.gov/abs/hep-th/0505273)\]. G. Fodor, P. Forgács, P. Grandclément, and I. Rácz, [*Oscillons and quasibreathers in the $\phi{}^{4}$ lein-ordon model*]{}, [*Phys. Rev. D*]{} [**74**]{} (2006) 124003, \[[[hep-th/0609023]{}](http://xxx.lanl.gov/abs/hep-th/0609023)\]. M. Gleiser and J. Thorarinson, [*Phase transition in $\protect{U}(1)$ configuration space: Oscillons as remnants of vortex-antivortex annihilation*]{}, [*Phys. Rev. D*]{} [**76**]{} (2007) 041701, \[[[hep-th/0701294]{}](http://xxx.lanl.gov/abs/hep-th/0701294)\]. N. Graham, [*An electroweak oscillon*]{}, [*Phys. Rev. Lett.*]{} [**98**]{} (2007) 101801, \[[[ hep-th/0610267]{}](http://xxx.lanl.gov/abs/hep-th/0610267)\]. A. E. Kudryavtsev, [*Solitonlike solutions for a iggs scalar field*]{}, [*JETP Lett.*]{} [**22**]{} (1975) 82. D. K. Campbell, J. F. Schonfeld, and C. A. Wingate, [*Resonance structure in kink-antikink interactions in $\phi^4$ theory*]{}, [*Physica D*]{} [**9**]{} (1983) 1. P. Anninos, S. Oliveira, and R. A. Matzner, [*Fractal structure in the scalar $\lambda{}(\varphi{}^{2}-1)^{2}$ theory*]{}, [*Phys. Rev. D*]{} [**44**]{} (1991) 1147. N. S. Manton, [*A remark on the scattering of monopoles*]{}, [*Phys. Lett. B*]{} [**110**]{} (1982) 54. N. S. Manton, [*Unstable manifolds and soliton dynamics*]{}, [*Phys. Rev. Lett.*]{} [**60**]{} (1988) 1916. N. S. Manton and H. Merabet, [*$\phi^4$ kinks — gradient flow and dynamics*]{}, [*Nonlinearity*]{} [**10**]{} (1997) 3, \[[[hep-th/9605038]{}](http://xxx.lanl.gov/abs/hep-th/9605038)\]. S. Dutta, D. A. Steer, and T. Vachaspati, [*Creating kinks from particles*]{}, [*Phys. Rev. Lett.*]{} [**101**]{} (2008) 121601, \[[[arXiv:0803.0670]{}](http://xxx.lanl.gov/abs/0803.0670)\]. T. Romańczukiewicz and Y. Shnir, [*Oscillon resonances and creation of kinks in particle collisions*]{}, [*Phys. Rev. Lett.*]{} [**105**]{} (2010) 081601, \[[[arXiv:1002.4484]{}](http://xxx.lanl.gov/abs/1002.4484)\]. T. Romańczukiewicz, [*Creation of kink and antikink pairs forced by radiation*]{}, [*[J. Phys. A:]{} Math. Gen.*]{} [**39**]{} (2006) 3479, \[[[hep-th/0501066]{}](http://xxx.lanl.gov/abs/hep-th/0501066)\]. K. Rajagopal and N. Turok, [*Classical high-energy scattering in the abelian iggs model*]{}, [*Nucl. Phys. B*]{} [**375**]{} (1992) 299. H. Goldberg, D. Nash, and M. T. Vaughn, [*Classical $\lambda{}\varphi{}^{4}$ theory in 3 + 1 dimensions*]{}, [*Phys. Rev. D*]{} [**46**]{} (1992) 2585. C. R. Hu, S. G. Matinyan, B. Müller, A. Trayanov, T. M. Gould, S. D. H. Hsu, and E. R. Poppitz, [*Wave packet dynamics in ang–ills theory*]{}, [*Phys. Rev. D*]{} [**52**]{} (1995) 2402, \[[[hep-ph/9502276]{}](http://xxx.lanl.gov/abs/hep-ph/9502276)\]. C. R. Hu, S. G. Matinyan, B. Müller, and D. Sweet, [*Wave packet collisions in ang–ills-iggs theory*]{}, [ *Phys. Rev. D*]{} [**53**]{} (1996) 3823, \[[[hep-ph/9509305]{}](http://xxx.lanl.gov/abs/hep-ph/9509305)\]. C. Rebbi and R. L. Singleton, Jr, [*[Computational study of baryon number violation in high-energy electroweak collisions]{}*]{}, [*Phys. Rev. D*]{} [ **54**]{} (1996) 1020, \[[[ hep-ph/9601260]{}](http://xxx.lanl.gov/abs/hep-ph/9601260)\]. C. Rebbi and R. L. Singleton, Jr, [*[Computational advances in the study of baryon number violation in high-energy electroweak collisions]{}*]{}, [[hep-ph/9606479]{}](http://xxx.lanl.gov/abs/hep-ph/9606479). S. V. Demidov and D. G. Levkov, [*[Soliton-antisoliton pair production in particle collisions]{}*]{}, [[ arXiv:1103.0013]{}](http://xxx.lanl.gov/abs/1103.0013). P. Dorey, K. Mersh, T. Romańczukiewicz, and Y. Shnir, [*[Kink–antikink collisions in the $\phi^6$ model]{}*]{}, [[arXiv:1101.5951]{}](http://xxx.lanl.gov/abs/1101.5951). M. Voloshin, I. Kobzarev, and L. Okun, [*Bubbles in metastable vacuum*]{}, [*Sov. J. Nucl. Phys.*]{} [**20**]{} (1975) 644. M. Stone, [*Semiclassical methods for unstable states*]{}, [*Phys. Lett. B*]{} [**67**]{} (1977) 186. S. Coleman, [*Fate of the false vacuum: Semiclassical theory*]{}, [*Phys. Rev. D*]{} [**15**]{} (1977) 2929. M. Maggiore and M. Shifman, [*Non-perturbative processes at high energies in weakly coupled theories: Multi-instantons set an early limit*]{}, [*Nucl. Phys. B*]{} [**371**]{} (1992) 177. D. G. Levkov and S. M. Sibiryakov, [*Induced tunneling in quantum field theory: Soliton creation in collisions of highly energetic particles*]{}, [ *Phys. Rev. D*]{} [**71**]{} (2005) 025001, \[[[hep-th/0410198]{}](http://xxx.lanl.gov/abs/hep-th/0410198)\]. D. G. Levkov and S. M. Sibiryakov, [*Real-time instantons and suppression of collision-induced tunneling*]{}, [*JETP Lett.*]{} [**81**]{} (2005) 53, \[[[hep-th/0412253]{}](http://xxx.lanl.gov/abs/hep-th/0412253)\]. G. ’t Hooft, [*Magnetic monopoles in unified gauge theories*]{}, [*Nucl. Phys. B*]{} [**79**]{} (1974) 276. A. M. Polyakov, [*Particle spectrum in quantum field theory*]{}, [*JETP Lett.*]{} [**20**]{} (1974) 194. V. G. Kiselev and Y. M. Shnir, [*Forced topological nontrivial field configurations*]{}, [*Phys. Rev. D*]{} [**57**]{} (1998) 5174, \[[[hep-th/9801001]{}](http://xxx.lanl.gov/abs/hep-th/9801001)\]. N. S. Manton, [*The force between ’t – monopoles*]{}, [*Nucl. Phys. B*]{} [**126**]{} (1977) 525. [^1]: Related yet different problem is creation of kink–antikink pairs from wave packets in the background of preexisting kink [@kink_to_3kinks]. [^2]: In particular, collective coordinates cannot be introduced since solitons are absent in the beginning of the process. [^3]: One changes $\partial_x^2\phi(x_i)$ to $(\phi_{i+1} + \phi_{i-1} - 2\phi_i)/\Delta x^2$, where $\phi_i = \phi(x_i)$. The time derivative $\partial_t^2 \phi$ is discretized in the same way. [^4]: In our units $\hbar=1$. [^5]: Discretization of Eq. (\[eq:8\]) is standard second–order. [^6]: Only the central parts of solutions are shown in this and other figures. [^7]: We used $\sigma = 10^{-2},\, 10^{-3}$; the final result was insensitive to this number. [^8]: Small ripples covering the solutions represent fluctuations due to stochastic sampling. | Mid | [
0.588528678304239,
29.5,
20.625
] |
title: PowerShell Credential Prompt id: ca8b77a9-d499-4095-b793-5d5f330d450e status: experimental description: Detects PowerShell calling a credential prompt references: - https://twitter.com/JohnLaTwC/status/850381440629981184 - https://t.co/ezOTGy1a1G tags: - attack.credential_access - attack.execution - attack.t1059.001 - attack.t1086 # an old one author: John Lambert (idea), Florian Roth (rule) date: 2017/04/09 logsource: product: windows service: powershell definition: 'Script block logging must be enabled' detection: selection: EventID: 4104 keyword: Message: - '*PromptForCredential*' condition: all of them falsepositives: - Unknown level: high | High | [
0.681957186544342,
27.875,
13
] |
Will call with further details (location) Would like for Greg to attend at least at 11am if he can't attend entire meeting and leave after they've gotten started. Jessica Ramirez | Low | [
0.41145833333333304,
19.75,
28.25
] |
Levelized Billing No More High Bills! The “LEVELIZED BILLING PROGRAM” offered by Humboldt Utilities will prevent customers from ever having another really high utility bill. This program allows residential and commercial customers to avoid large summer bills that come with hot weather air conditioning and the really large winter bills that result from frigid winter temperatures. The bills are a moving average of the past twelve monthly bills, payments will vary minimally from month to month, and there are no yearly true-ups. *”Moving average” means that each month’s bill is an average of the past twelve months; therefore it only varies a few dollars regardless of the weather. Customers who have already joined this program say it has been a “lifesaver” for them during this past summer’s unusually hot weather and in extremely cold winters. Contact Humboldt Utilities for additional information on how to participate in the Levelized Billing Program. | High | [
0.6874221668742211,
34.5,
15.6875
] |
Is Chinese Culture the Secret Ingredient for Economic Success? Chinese culture can contribute significantly to a country’s success, according to William Ratliff, research fellow and former curator of the Americas Collection at the Hoover Institution and research fellow at the Independent Institute. Dr. Ratliff argues that culture in general and Chinese culture in particular matter a great deal to a nation’s economic and political development. He spoke with Friends and Foes of Liberty about the role of culture, the resounding success of the Asian Tigers–Hong Kong, South Korea, Singapore and Taiwan–in the latter half of the 20th century, and the unfulfilled expectations of the Arab Spring. Hosted by Ying Ma, Friends and Foes of Liberty is a show featuring in-depth discussions with scholars, business executives and policymakers about freedom, geopolitics, international affairs and U.S. foreign policy. To listen to this episode, please click here, download the podcast on iTunes, or use the blogtalkradio player below. Related 2 Responses Ms. Ma, let me say first I am neither conservative or liberal but rather a centrist who looked at an issue in some depth before forming a view or opinion. That said, I am viewing your C-SPAN appearance and you made a statement about President Obama’s immigration enforcement. Sadly, you were incorrect. The facts as stated many times in various media is that the Obama administration has sent back more illegals than any other previous administration in our history. We are all entitled to our opinions but when making them especially in a public forum we should be sure they are based on fact and not feelings. | Mid | [
0.574948665297741,
35,
25.875
] |
Share The ABC of NFC Payment Our civilization owes a lot to the Star Wars franchise for introducing some of the most sophisticated technologies humanity has ever known. Game-changing innovations like hyperspace travel, carbonite freezing, cybernetics, and the light saber – just to name a few – have singlehandedly helped shape the future that we are all now living in. A long time ago, far, far back in 1997, an early-patented NFC technology was employed by Hasbro, Inc. to allow Star Wars action figure toys to “communicate” with one another. Today, NFC offers smartphone users the ability to complete faster, more secure transactions and conveniently access loyalty rewards simply by tapping or bringing their mobile devices within close proximity of an NFC-enabled point-of-sale reader. Unlike many of the aforementioned space-age achievements, employing NFC to make mobile payments isn’t necessarily a complicated process, although it may not be as intuitive as using the Force. So to help you gain a better understanding of the basics of near field communication and its application for making and accepting mobile payments, we’ve put together a short guide to help you navigate its protocol. The mobile wallet Near field communication is enabled by mobile device software or an NFC chip incorporated into the device, which transfers payment credential information or other stored data from the device to an NFC reader. This secure communication channel can facilitate payment transactions as well as other practical uses like reading payment cards, transit passes, event and travel tickets, room keys, and more. To enable NFC payments to be made from their devices, consumers load their personal credit or debit card information into a downloaded mobile app, effectively transitioning the physical card into digital form. These payment apps – or mobile wallets – include popular mobile payment solutions like Apple Pay and Android Pay, and now include banking institutions such as Wells Fargo (Wells Fargo Wallet) and large retail enterprises like Walmart (Walmart Pay). As one of the most secure ways to make a transaction, the mobile wallet’s NFC technology creates an effective barrier against fraud. NFC mobile payments are dynamically encrypted, meaning a cardholder’s payment data is converted to an indecipherable code with each use, preventing hacking and cases of fraud. In addition to the built in security features of a mobile wallet, the user’s mobile device includes its own security features. Features like fingerprint touch ID, six-digit passcode, and the device’s chip help protect its user far beyond the basic PIN and signature authentications of the traditional payment card. Adding just one more dimension to the versatility of the mobile wallet, many apps also incorporate user-enabled card control features that facilitate account management. Offering real-time insight into every step of the transaction, users can track card use and monitor pending payments, as well as set spending limits, receive and archive receipts, access card rewards, and even “turn off” their cards in the case of detected fraudulent use. Using your mobile wallet Secure, quick, and convenient, adding a mobile wallet to your mobile device is a relatively simple process. Follow these steps to set up your smartphone to complete NFC proximity payments. Download a mobile wallet onto your device. Though some may come included on your smartphone, others can be specific to your device (Android Pay or Apple Pay), or specific to your personal bank, credit union, or card issuer. There are many different payment apps available – so choose one (or more) with the usability and features that are right for your spending habits and needs. Add your card information to the mobile walletapp. The app you choose will guide you through a simple setup process, prompting you to securely include card information and a security method such as password, PIN, or fingerprint recognition. Start using your mobile wallet. You’re ready to go. However, using your mobile wallet to make payments, you’ll first need to be sure the point-of-sale terminal is enabled to accept NFC transactions. Many updated payment terminals that can process EMV chip cards are built to be compatible with NFC payment types. Now all you have to do to make a mobile payments is: Unlock your mobile device to access your mobile wallet apps. Select the app or particular card you’d like to use to make the payment. Tap, hold, or wave your device to the terminal reader until the payment is accepted. If you’re interested in a more secure, unified, and convenient way to pay, NFC really is the payment type you’re looking for. Contact Vantiv for card payment technology and get started implementing NFC in your business. If you’re interested in a more secure, unified, and convenient way to pay, NFC really is the payment type you’re looking for. Contact Vantiv for card payment technology and get started implementing NFC in your business. | Low | [
0.520089285714285,
29.125,
26.875
] |
storage_account_name="storage-account-name" access_key="storage-account-access-key" container_name="storage-account-container" key="tfstate-key" | Low | [
0.5118733509234821,
24.25,
23.125
] |
Reviews for Bird Box have not been kind. The postapocalyptic thriller starring Sandra Bullock was panned by critics and currently sits at 66 percent among audience members on Rotten Tomatoes. Chief among the film’s problems is its lack of originality (and more than passing resemblance to another 2018 movie, A Quiet Place). The sensory-deprivation horror flick is “filmed with illustrative approximations, in generic gestures and fragments,” according to The New Yorker’s Richard Brody. Amy Nicholson, writing for The Guardian, called it “forcibly screwed together, a movie marionetted by strings of data code.” I personally have seen more creative claustrophobic disaster scenarios played out by Sims characters. “Is it good?” asked Salon’s Melanie McFarland. “Not really, but it doesn’t need to be.” In the context of a movie review, this is an unexpected statement, but also spot-on: The circumstances under which Bird Box wormed its way into our zeitgeist explain why, in the age of direct-to-consumer streaming, quality may be more irrelevant than ever. Bird Box is a “Netflix original” adapted from Josh Malerman’s 2014 sci-fi novel of the same name and one of the many high-budget films that the company has funded in an expensive mission to prove that it can upend the traditional production cycle of a Hollywood studio. But even if the Bay Area–based corporation has overseen an impressive lineup of genuinely delightful projects—from rom-com revivers To All the Boys I’ve Loved Before and Set It Up to a legitimate Oscar contender like Roma—Bird Box’s results are quite different. Its SEO-friendly name, overcrowded cast, gimmicky imagery, and savvy release schedule all add up to pure meme bait. And meme the world has. The formulaic nature that hurt Bird Box’s critical reception is, as they say in Silicon Valley, a feature, not a bug. First, I should offer you a quick play-by-play of why we are currently discussing an otherwise unremarkable movie. Bird Box premiered at the top of Netflix’s homepage December 21, around the time that people were settling into their couch grooves for a sedentary holiday vacation. Even if the company’s 58.46 million U.S. subscribers were confused by Bird Box’s title, its recognizable star, autoplay previews, and prime above-the-fold real estate were enough to catch people’s attention. Aaron M. White recalled seeing its trailer advertised during Thanksgiving break, around the time the movie premiered at the American Film Institute Festival. “Eventually I learned enough about the movie through pop culture osmosis to have the idea that it involved Sandra Bullock, that it was like A Quiet Place but with blindfolds,” White, a 28-year-old civil litigation attorney in Chicago, told me via email. “I had this vague idea that it was a good enough movie to be on Netflix but not good enough for anyone to get excited about.” It wasn’t until White noticed images from the film inundating his feeds on Twitter and Instagram that he seriously considered watching it. In the Bird Box universe, looking at one of the world’s invisible monsters causes people to immediately die by suicide, a grim reality that requires Bullock’s character, Malorie, and her fellow survivors to spend most of the film blindfolded. For that reason, screen shots of the movie were almost immediately recognizable and—because humans look clumsy while navigating the world with their vision shielded—ripe for mimicry. Images of Bullock rowing a boat down a river while blindfolded became the first of many joke backdrops that eventually led to a full-blown raid of the film’s most intense moments to harvest fresh memes. People referred to tertiary Bird Box characters by their first names, the same way that Stranger Things fans toss around the name “Barb.” In an effort to promote the film earlier in December, Netflix presented a handful of well-known Twitch gamers with a Bird Box “challenge,” asking them to play their favorite game while blindfolded. (This is the type of sponsorship that is often offered to popular influencers on various social networks.) But as the search term “bird box” surged the week of Christmas, creators began integrating the concept into their videos unprompted. YouTubers applied the Bird Box challenge to their daily lives, pawing through Popeyes drive-throughs and stumbling on escalators in the name of clicky content. TikTok users fashioned their own signature, blindfolded dance. At least one Atlanta nightclub is hosting a Bird Box–themed party, complete with a “blindfolded shot for shot challenge.” As White watched these memes multiply in his feed, he was hit with a familiar sensation: the fear of missing out. “I figured that I just about got the gist of the joke, visual memes are largely self-explanatory,” he said. “But there was just so many of them, and they seemed so versatile, so I ultimately decided to watch the movie to make sure I wasn’t missing any nuance.” White’s journey to the play button is likely one of the reasons that, according to Netflix, more than 45 million accounts viewed Bird Box within the first week of its release, a statistic that it touted as its best-ever debut for an original film. (The company later qualified that those it counted as viewers had watched at least 70 percent of the movie’s total running time. Netflix declined to comment for this story.) Aside from disclosing the occasional disturbingly specific statistic, Netflix is notorious for withholding viewership data, and a January 8 Nielsen report found that the number of viewers who watched the whole movie was around 26 million. Based on the online buzz, it’s safe to say many of those millions of people were driven to watch Bird Box—a film that most people also agree is bad—just to better understand the collective conversation online. And in the case of Netflix subscriber Stafford Heppenstall, that didn’t mean completing it. “I only watched Bird Box for the memes,” Heppenstall, a 31-year-old operations manager, told me via direct message. “After the scene with the guy forcing the old women’s eyes open, I stopped watching the movie. I got enough context to know what the memes were and after I read a spoiler on Twitter (while I was watching the movie) I really didn’t think I needed to watch more.” Hollywood has learned that, when executed correctly, memes can be a far more effective marketing tool than any online ad or freeway billboard. Usually, a positive symbiotic relationship between a meme and a movie (or TV show) relies on two factors: general access and a positive critical reception of the show or movie from which it samples. Scandal drew crowds to Twitter because people already enjoyed watching it (and were also able to easily on ABC). Arthur, The Simpsons, and SpongeBob SquarePants have been mined for online jokes for years because they feature familiar characters from long-running, beloved, and—most importantly—streamable shows. The same logic goes for reality-TV shows like Keeping Up With the Kardashians and Vanderpump Rules, which thrive on the online culture dedicated to tracking their main characters’ every online action. A Star Is Born demonstrated the power of preemptive memes on a movie’s reception—in part because it ultimately stood up to the hype. Bird Box is the cynical inverse of your typical pop culture meme. Though it has all the necessary visual ingredients that help a movie spread online, viewers seem more attached to the memes it has generated than the movie itself. In other words, it’s like your average Drake single: best when picked over, remixed, and memed by more creative minds on the internet. Because social media has become a form of entertainment in and of itself, online chatter is enough to drive traffic to what is almost universally acknowledged as lesser content—just for the sake of context. “[This is] definitely a new level of influence,” Jack Daley, a 32-year-old medical sales rep who watched the movie after seeing a meme that compared it to A Quiet Place, told me via direct message. “The power of memes is crazy. Who would have thought that memes would have got me to do something? We live in a weird time.” Streaming services like Amazon, Hulu, and Netflix may have spent the last few years throwing money at projects designed to earn them respect in Hollywood. But ultimately Netflix will need to prove its staying power to investors via sheer strength of numbers. An Oscar might be a nice memento for CEO Reed Hastings’s corner office, but the company’s ability to turn a lacking film into a hit via strategic marketing is far more valuable. In addition to the unprecedented announcement of Bird Box’s first-week viewership, Netflix also recently bragged about its ability to turn young actors into Instagram influencers via its most recent earning’s call. It’s all part of the company’s efforts to position itself as a digital-first driver of culture rather than a decorated movie studio. Even if Netflix’s numbers are impressive, flipping mediocre movies into digestible memes can have its setbacks. Amid the height of the Bird Box fervor, one suspicious Twitter user hatched a theory that the company was using bots to spread memes about the movie online, citing a large amount of engagement from recently started Twitter accounts that had very few followers and tweets. The claim went viral, spurring a handful of articles explaining why such a mediocre movie had become so popular. Netflix denied these claims in a direct message with The Daily Dot, but the lack of clarity has nevertheless sown confusion and, in some cases, even discouraged subscribers from watching the movie. “I feel like I’m being conned into watching it by some unseen force that’s funneling Bird Box memes onto my timeline,” Nora Hastings (no relation), a 25-year-old graphic designer, told me via direct message. Despite feeling left out of the online conversation, she has yet to watch the movie. “I want to see Bird Box and understand the memes fully but I also really, really, really don’t want to give Netflix the satisfaction, despite the very obvious fact that they don’t know who I am or even care about what I watch.” Darren Linvill, a Clemson University professor who recently published a study on the presence of fake Russian accounts during the 2016 presidential election, ran a quick survey of the Bird Box Twitter hashtag using proprietary university software and found no evidence of bot activity. (Though he did find some odd activity in which 9,000 separate accounts tweeted a Bird Box meme with same typo: “Dr.Lapham” with no space.) Beyond identifying automated activity on social media, however, he says it’s hard to parse whether a PR company is running an astroturfing campaign—a communication strategy in which a corporation hires people to pose as concerned citizens and push its preferred message—or people are simply stealing each other’s tweets. Dr.Lapham when she found out Malorie names the kids Boy and Girl.#BirdBox pic.twitter.com/O1zwZ3qSPR — insta;ihatemaka (@ihatemakaa) December 27, 2018 “Bird Box is great for funny memes and that seems to have driven lots of attention starting when it first came out on the 21st,” Linvill told me via email. “To what degree this spike is organic and what degree it is created by a PR company would take a lot of effort to figure out.” Along with a general suspicion that Netflix appears to have manipulated the public, the meme has now taken on a second life as a challenge on YouTube, Twitch, and TikTok. (In typical viral-grab fashion, Good Morning America news anchors recently offered their own interpretation of the trend.) On Wednesday, Netflix tweeted a warning to its followers to be careful: “PLEASE DO NOT HURT YOURSELVES WITH THIS BIRD BOX CHALLENGE. We don’t know how this started, and we appreciate the love, but Boy and Girl have just one wish for 2019 and it is that you not end up in the hospital due to memes,” the company wrote. “It’s almost like they made it even more viral,” said Nina Amjadi, a managing director at the digital marketing firm North Kingdom. “They didn’t say stop doing that, they didn’t even really really shut it down. It was more like a, ‘Hey, be careful, but continue doing that, because it’s marketing our movie really well.’” Not all brands are comfortable having their movies and TV shows connected to an unwieldy news cycle; last year, for instance, Disney surreptitiously deleted a meme that joked that Pinocchio was dead inside. But even if it’s not clear whether Bird Box meme makers and challenge participants enjoyed its film, Netflix appears to be right at home in the world of memes and challenges. Amjadi says that kind of active participation is the new holy grail of digital meme marketing. “The question is really: How do you rate the success of the film Bird Box?” she said. “Is it the amount of people who saw it, or is it the amount of people who discussed it? After the holidays, people come back to work on January 2. How many people were saying: ‘Oh, did you see Bird Box?’ Did they say: ‘Have you done the Bird Box challenge?’ In any case, it’s mentioning a movie that Netflix is behind, and Sandra Bullock is getting that exposure.” For Netflix, the creation of a massive online movement is worth far more than a few positive reviews. This piece has been updated to reflect new Nielsen data. | Low | [
0.49691991786447604,
30.25,
30.625
] |
PennDOT begins four-year I-95 construction project A four-year, $211.7 million project designed to rebuild 1.5 miles of I-95 is set to begin Monday, June 9 and wrap up roughly four years from now in the summer of 2018. The project – the second largest construction contract in PennDOT history – is a next step in PennDOT’s plans to rebuild the I-95/Girard Avenue Interchange and three miles of I-95, from Race Street to just south of Allegheny Avenue. The work will both address critical repairs on the existing, aging infrastructure and widen the I-95 corridor to four travel lanes in each direction. The work beginning Monday will rebuild the northbound portion of I-95 between the Girard Avenue and Allegheny Avenue interchanges in Philadelphia. During construction, three travel lanes will remain open in each direction – though drivers will experience overnight lane closures as crews paint new traffic lines and set concrete barriers next week. Related projects This new work will merge into PennDOT’s exiting half-mile construction zone between I-676 and Columbia Ave. Two related projects are already underway. Since late 2011, crews have been rebuilding and improving local surface streets, replacing bridges and relocating major utility lines near the Girard Ave Interchange. That $91.2 million project is scheduled to be complete in the fall of 2015. PennDOT is also in the midst of a $39 million project to widen and rebuild 1,200 feet of I-95 just south of the Girard Ave Interchange and to replace the bridges over Shackamaxon Street, Marlborough Street and Columbia Ave. This work should be complete next summer. About the author Christine Fisher, Transportation reporter From 2012-2014 Christine covered transportation, writing about everything from pedestrian concerns to bicycle infrastructure, bridges, trail networks, public transit and more. Her favorite assignments sent her bushwhacking through Philadelphia’s yet-to-be-cleared bike trails, catching a glimpse of SEPTA’s inner workings or pounding the pavement to find out what pedestrians really think. Christine also covered community news for Eyes on the Street, where her work ranged from food sovereignty to public art and urban greening. She first joined PlanPhilly in fall 2011 as an intern through a partnership with Temple University’s Philadelphia Neighborhoods website. | Mid | [
0.6469248291571751,
35.5,
19.375
] |
Q: Crystal: how to increase the number of types an array can hold after instantiation I'm building a table of data. At first, the rows of this table are most easily filled by key / value so I use a hash a = Hash(Int32, Symbol).new a[1] = :one Once that's done, it's more convenient to work with an array (I need to sort the data for example). Easy enough: a.to_a # => [{1, :one}] But now I'm discovering that for my formatter to work properly (multi-page table, using latex) things just make more sense if I can store another data type in that array, for example, a string. But it's too late! The type of the array is fixed; it won't admit a string. a << "str" # => ERROR! The solution I've come up with so far doesn't seem very elegant: a = Hash(Int32, Symbol).new a[1] = :one arr = Array(String | Tuple(Int32, Symbol)).new a.each do |k,v| arr << {k,v} end arr << "str" # no problem now Is there a more "Crystal" / elegant way? A: Just use to_a with as it can be used to cast to a "bigger" type as the docs call it a = Hash(Int32, Symbol).new a[1] = :one arr = a.to_a.map { |x| x.as(String | Tuple(Int32, Symbol)) } arr << "str" # [{1, :one}, "str"] | Mid | [
0.572043010752688,
33.25,
24.875
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.