/*
 * Copyright 2017 PayPal
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.squbs.testkit.japi

import akka.actor.ActorSystem
import com.typesafe.config.Config
import org.squbs.lifecycle.GracefulStop
import org.squbs.unicomplex.{Unicomplex, UnicomplexBoot}
import org.squbs.testkit.{PortGetter, CustomTestKit => SCustomTestKit}

import scala.collection.JavaConverters._

abstract class CustomTestKit(val boot: UnicomplexBoot) extends PortGetter {

  val system: ActorSystem = boot.actorSystem

  SCustomTestKit.checkInit(system)

  def this() {
    this(SCustomTestKit.boot())
  }

  def this(actorSystemName: String) {
    this(SCustomTestKit.boot(Option(actorSystemName)))
  }

  def this(config: Config) {
    this(SCustomTestKit.boot(config = Option(config)))
  }

  def this(withClassPath: Boolean) {
    this(SCustomTestKit.boot(withClassPath = Option(withClassPath)))
  }

  def this(resources: java.util.List[String], withClassPath: Boolean) {
    this(SCustomTestKit.boot(resources = Option(resources.asScala.toList),
      withClassPath = Option(withClassPath)))
  }

  def this(actorSystemName: String, resources: java.util.List[String], withClassPath: Boolean) {
    this(SCustomTestKit.boot(Option(actorSystemName), resources = Option(resources.asScala.toList),
      withClassPath = Option(withClassPath)))
  }

  def this(config: Config, resources: java.util.List[String], withClassPath: Boolean) {
    this(SCustomTestKit.boot(config = Option(config), resources = Option(resources.asScala.toList),
      withClassPath = Option(withClassPath)))
  }

  def shutdown() = Unicomplex(system).uniActor ! GracefulStop
}
Comparative evaluation of two MMPI short forms with chemical abusing inpatients. Studied several novel analyses of the correspondence between two popular MMPI short forms and the full MMPI, including multivariate profile analyses and analyses of change scores. Pre- and posttreatment MMPIs were collected from 100 veterans in a treatment program for alcohol and drug problems. Scores for the Faschingbauer Abbreviated MMPI and the MMPI-168 were extracted from full MMPI protocols. Each short form was compared separately with the full form in several individual-case-oriented and group-data-oriented analyses. Significant differences between full and both short form profiles were found. It was concluded that neither short form is recommended for individual case descriptions, that significant problems arise in using these short forms for group descriptive or comparative research, and that these short forms show moderately acceptable correspondence with full MMPIs over time. It appears that short MMPIs continue to show serious deficits in reliability, i.e., in correspondence with full MMPIs, in spite of recent attempts to demonstrate validity somewhat independent of the full MMPI.
It establishes a European quality assurance reference framework. This is a toolbox with common European references. On a voluntary basis, national authorities can use the aspects they deem most useful to develop, improve, guide and assess the quality of their own VET systems. KEY POINTS The framework contains a four-phase cycle of planning, implementation, evaluation/assessment and review/revision of the VET systems. Each phase includes quality criteria and indicative descriptors to help national authorities set goals, devise standards and undertake reviews. Reference indicators, such as investment in training teachers, are designed to help evaluate and improve the quality of VET systems. National authorities are encouraged to play an active role in the framework and to further develop common principles, reference criteria, indicators and guidelines. In May 2014, EU governments noted the advances that had been made in quality assurance in education and training and agreed on the need to make further progress. BACKGROUND The recommendation should help to modernise education and training systems and ensure people do not leave without qualifications. It also aims to improve the interrelationship between education, training and employment. Report from the Commission to the European Parliament and the Council on the implementation of the recommendation of the European Parliament and of the Council of 18 June 2009 on the establishment of a European quality assurance reference framework for vocational education and training (COM(2014) 30 final of 28.1.2014).
Arterial revascularization with the right gastroepiploic artery and internal mammary arteries in 300 patients. From September 1989 to September 1992, the right gastroepiploic artery in combination with one or both internal mammary arteries was used as a graft in 300 patients who underwent coronary artery bypass grafting. The gastroepiploic artery was the primary choice in preference to the saphenous vein. The study comprised 263 men and 37 women, ranging in age from 31 to 77 years (median age 59 years). Thirty-nine patients (13%) underwent previous bypass procedures with autologous vein grafts. In 17 patients (5.7%) the gastroepiploic artery was used as a single graft. In 150 patients (50%) the gastroepiploic artery in conjunction with one internal mammary artery was used (in 6 patients combined with a vein graft). In 133 patients (44.3%) the gastroepiploic artery was used with both internal mammary arteries. Revascularization in nine patients (3%) was combined with another cardiac procedure: three aortic valve replacements, two mitral valve repairs, and four resections of a left ventricular aneurysm. Ten patients died in the hospital (3.3%; 70% confidence limits 2.3% to 4.8%); two of these patients had an infarction in the area revascularized by the gastroepiploic artery. At late follow-up, 0.5 to 39 months (mean 14 months) after the operation, we found no mortality. One patient with an occluded gastroepiploic artery graft underwent reoperation with the use of the right internal mammary artery. One patient underwent percutaneous transluminal coronary angioplasty of the right coronary artery after occlusion of the gastroepiploic artery. Elective recatheterization was done in 88 patients 1 to 25 months after operation (mean 10 months).
Graft patency in gastroepiploic artery grafts increased steadily from 77% in the first semester of the study to 95% in the fourth semester and then equaled the patency of the internal mammary artery grafts (97%), which was almost constant during the whole period. We conclude that patency of the gastroepiploic artery was initially related to a "learning curve" but eventually equaled that of the internal mammary artery grafts. Furthermore, the gastroepiploic artery may well be the graft of choice in conjunction with the internal mammary arteries.
Commentator responds to the molestation of a student. Fulton, NY – The alleged molestation of a Fulton, NY Junior High School football player by four of his classmates shocked Central New York. It also has commentator Merilee Witherall thinking about how we all can contribute to improving responsibility in our schools.
There’s a good reason Dell (DELL) wants to go private and back out of the traditional PC business: it thinks selling computers based on Microsoft’s (MSFT) Windows operating system is quickly becoming a dead end. Forbes points us to a recent proxy statement filed with the Securities and Exchange Commission where Dell outlines the risks it faces as a PC manufacturer and paints a very grim picture for the PC industry overall. Among other things, Dell cites “decreasing revenues in the market for desktop and notebook PCs and the significant uncertainties as to whether, or when, this decrease will end”; “the overall difficulty of predicting the market for PCs, as evidenced by the significant revisions in industry forecasts among industry experts and analysts over the past year”; “the ongoing downward pricing pressure and trend towards commoditization in the desktop and notebook personal computer market”; and “the increasing usage of alternative PC operating systems to Microsoft Windows.” One big reason that Dell wants to go private is that it reportedly plans to reinvent itself by developing a computer the size of a USB stick that’s capable of giving users access to every major operating system, from Windows to Mac OS X to Google’s (GOOG) Chrome OS. Given its own dim view of the PC industry, and of the market for Windows-based PCs in particular, it isn’t surprising that Dell seems willing to take such a big risk in overturning its traditional business model.
50th Expeditionary Signal Battalion

The 50th Expeditionary Signal Battalion is a United States Army unit which is part of the 35th Signal Brigade located at Fort Bragg, North Carolina. The Brigade's mission is to provide worldwide contingency, force projection, forced-entry signal support to the XVIII Airborne Corps for power-projection operations during war and operations other than war. In 2018, 50th Expeditionary Signal Battalion-Enhanced (50th ESB-E), 35th Theater Tactical Signal Brigade is serving as the ESB-E pilot unit. 50th ESB-E supports the XVIII Airborne Corps. Potentially this ESB-E will provide capabilities that are scalable, from small units (forcible-entry alongside paratrooper jumps), to larger, mature operations, as an expeditionary force keeps growing on the ground.
// Copyright 2009 the Sputnik authors. All rights reserved.
// This code is governed by the BSD license found in the LICENSE file.

/**
 * @name: S9.3.1_A6_T2;
 * @section: 9.3.1, 15.7.1;
 * @assertion: The MV of StrUnsignedDecimalLiteral::: Infinity is 10<sup><small>10000</small></sup>
 * (a value so large that it will round to <b><tt>+∞</tt></b>);
 * @description: Compare Number('Infi'+'nity') with Number.POSITIVE_INFINITY, 10e10000, 10E10000 and Number("10e10000");
 */

function dynaString(s1, s2){
  return String(s1)+String(s2);
}

// CHECK#1
if (Number(dynaString("Infi", "nity")) !== Number.POSITIVE_INFINITY) {
  $ERROR('#1: Number("Infi"+"nity") === Number.POSITIVE_INFINITY');
}

// CHECK#2
if (Number(dynaString("Infi", "nity")) !== 10e10000) {
  $ERROR('#2: Number("Infi"+"nity") === 10e10000');
}

// CHECK#3
if (Number(dynaString("Infi", "nity")) !== 10E10000) {
  $ERROR('#3: Number("Infi"+"nity") === 10E10000');
}

// CHECK#4
if (Number(dynaString("Infi", "nity")) !== Number("10e10000")) {
  $ERROR('#4: Number("Infi"+"nity") === Number("10e10000")');
}
Problems in caries diagnosis. This paper reviews some of the problems in the diagnosis of primary and secondary caries, particularly with respect to the questionable lesion that has neither penetrated to dentine nor cavitated. Because tactile diagnosis of caries with sharp probes can be unreliable as well as damaging, diagnosis should include careful visual inspection (preferably with magnification), radiographic examination, fibre optic transillumination (FOTI), and/or measurement of electrical resistance depending on the anatomical location. Probably the most important and difficult diagnostic decision for the clinician is whether the patient is at high, moderate or low risk of caries. Consideration of various factors in the patient's history, and clinical and laboratory examinations will assist in this classification of risk.
Chark caught 26 balls for 466 yards, accounting for 17.9 yards per reception. Chark also ran 12 times on jet sweeps, gaining 122 yards and 2 touchdowns in motion out of the backfield. The 6-foot-3 junior fits well into the system of new LSU offensive coordinator Matt Canada, who was known for running jet sweeps and trick plays at Pittsburgh. Chark is the first of two LSU junior wide receivers to announce his intention. The LSU football program is still waiting to hear a decision from junior wide receiver Malachi Dupre, who led the Tigers with 593 receiving yards and 41 catches in 2016. Bringing back Chark is huge for LSU’s skill position depth in 2017. LSU loses senior wideout Travin Dural and tight ends Colin Jeter and DeSean Smith to graduation, as well as running back Leonard Fournette to the NFL draft. Now that Fournette is gone, it seems like Chark will be taking over Fournette’s No. 7, a number that was previously worn by LSU greats Patrick Peterson and Tyrann Mathieu.
Friday, November 23, 2007 A few weeks ago, Will Dungee (a friend and a pastor at my church) and I went prayer walking in a part of Glenwood that had been hopping with drugs and recent violence (someone had been robbed and then shot and killed at a convenience store in that area). Our hope was to pray over the neighborhood and also to talk with folks there and ask them how we could pray for them and see where the Lord might take it from there. Eventually we happened upon “Rick”, who was hanging out with a couple of other guys and appeared to be concluding some sort of deal involving either bootleg CD’s, drugs, or both. I suggested we go talk to him, and Will agreed, and so I practically ran up to "Rick" (I didn’t know him at that point) while Will walked much more coolly behind me. We struck up a conversation, during which we learned that his mom was an evangelist, that he wouldn’t consider himself a Christian because of how he was living, that he had been shot at least once, and that he figured that each of us is supposed to be the best we can be at what we are doing – if you are a Christian, be the best one you can be; if you are a dealer, be the best one that you can be (seriously). We asked him how we might pray for him, and he said just to ask God to let him live another day (which honestly seemed pretty generic), and so we did that and went on our way. I wasn’t sure I would see him again, especially since my prayer walks in Soflo (South Florida Street) were not a regular part of my week. About a week or two later I was walking my dog Joe around the block and was passing by a different convenience store that is notorious for shady folks hanging around outside. As I passed by a car parked on the street, I look in and who do I see but "Rick" ! So I stopped and we did the whole “fist pound” thing (if you’re not from the ‘hood like me, you might not be as hip as me on that {sarcasm}). 
The first time we met, "Rick" had guessed that I didn’t live in Glenwood, and so on this meeting, I asked him what he was doing in my part of the ‘hood (neither of us took me seriously as I said that). Then I told him that it looked like God did answer prayers. He looked confused for a moment, and I reminded him that we had prayed that God would let him see another day, and well, here he was. He smiled and said, “Yeah, that’s right,” and I could tell that something flickered inside, the part of him that was created to know God and connect to Him. That was the extent of our conversation, and as I walked away, I couldn’t help but think that I had just taken part in a Kingdom moment. Will and I meeting "Rick" had transformed him in my eyes. I normally would have ignored him, not even noticed him on my walk with Joe, or just dismissed him as a punk dealer. But because I had a relationship with "Rick", I saw him that day, and so we talked again. I think it also transformed me in his eyes. He normally would have ignored me, once he saw I wasn’t interested in buying, not even noticing me on my walk, dismissing me as a rich white guy. But because he knew me on some level, he saw me, and so we talked again. Saturday, November 17, 2007 So I have held off as long as I could and have inaugurated my Christmas music listening tonight with Andrew Peterson's album Behold the Lamb of God. So to celebrate, I thought I might list my top five Christmas albums (#1 being most favorite, but #2's a favorite, too, just not as much favorite), and then invite my legions of loyal readers to share your lists or comment on mine. I'm always looking for new music, so show me the way! 5) Harry Connick, Jr When My Heart Finds Christmas - This album takes me back to some good ol' days when I was at Carolina, and I just continue to enjoy it year after year. 3) Ed Cash, Bebo Norman, Allen Levi Joy! 
- These three guys are really talented musicians and singers, and they really do have joy on this album as they perform many classic Christmas hymns. Plus, their original songs are super, and it's just a cool blend of three unique voices. 1) Andrew Peterson Behold the Lamb of God - This is an amazing original CD which tells the story of Christmas beginning in Exodus and moving through the Gospels. The songs stand alone, but are meant to be listened to as a whole body of work. All of the songs are originals, and in true Andrew Peterson form, the lyrics are profound. One of the highlights is "Matthew's Begats" which is simply the genealogy of Jesus sung to an upbeat tune. If you have the chance to see this concert in person, I would take it. Tuesday, November 13, 2007 The other day as I was having a quiet time, Eliza asked me to do something with her. Used to being interrupted, I said, "Not right now, I'm spending time with Jesus. When I'm finished talking with Jesus then we can play." Her response was, "I pray sometimes too, Mommy." I said, "Really, when do you do that?" Thinking she was talking about prayer time before bed or at the dinner table. But she answered, "Well, I just close my eyes and bow my head and pray to God alone in my room." I said, "What do you pray about?" "I just sit quiet and pray, Mommy." "I know, but what do you talk to God about when you pray?" "Nothing, Mommy, I just sit quiet and listen to God." Sunday, November 11, 2007 You know, it seems like the only time I ever hear Psalm 23 is at funerals. But is that all it's good for, to remind us of God in the midst of the valley of the shadow of death? It has become such a somber Psalm to me, repeated by rote. But as I have grown in margin, I am finding great comfort and margin from this short oldie-but-goodie. The Lord as our shepherd gives us hope that it's not all up to us. There is someone greater than ourselves looking out for our interests, pastoring us.
There is a roominess in knowing that with God as our shepherd, we shall not want. Even in the midst of trouble, of things not looking all right, of financial questions, God says that He is there for us with protection and presence and provision. God not only wants us to eat and to move, but also to rest, and as our shepherd, He loves us enough to make us lie down because He knows our need for rest more than we do. He leads us to green pastures and quiet waters, simple evidence of His goodness and love. And what has stood out to me over anything else in this psalm has been, “He restores my soul.” Our souls are our mind, will, and emotions, and this verse reminds us that God doesn’t just desire for us to have physical provision and rest, but He cares about our inner life as well. He wants to redeem our ways of thinking about Him, restoring our mind. He wants to heal the ways we feel about ourselves and our circumstances, restoring our emotions. He wants to transform our choices to reflect a trusting, love relationship with Him, restoring our wills. Backing all of this up is the Lord’s goodness and mercy, following us, urging us on towards our good home with the Lord. In the midst of our messes, in the midst of our forgetting, there is a quiet assurance that goodness and love will follow. And we are reminded of our future home with the Lord forever. Knowing that our future is secure gives us freedom and margin in the present to stop striving so hard. The Lord is my shepherd. I shall not want. He makes me lie down. He leads me. Surely goodness and love will follow me. This is not a somber psalm. It is a psalm of confidence that allows us to take a step back from our hurry and our efforts at self-provision and self-protection, allowing us to make room for margin. Wednesday, November 07, 2007 Make a Budget: One of the best defenses against marginless finances and one of the best ways to ensure that we can give generously to God’s work is simply to have a budget.
In fact, I don't know how people manage their money and have funds to give without having a budget. Having a plan for where your money is going before you even get it and knowing where you are spending your income allows you to establish margin from the get go. If giving/tithing is a priority for us, then rather than waiting to see if there is anything left at the end to give, we ought to make “Giving” a line item in our budget (perhaps THE line item) and adjust our "want-to" spending around it. Diane and I have determined a percentage that we want to give, and so we adjust our giving percentage-wise to the amount of income that comes in. We use two tools to help us budget. One is a simple excel sheet on which we list every conceivable area of spending. There are line items for personal spending money for me and Diane, money set aside each month towards Christmas presents, vacation, and car repairs, money for clothes for our kids and more. We do a "zero-balance budget", which means that we have a place to put every dollar that is coming in, even if that place is "extra funds." The other is an online program called mVelopes (www.mvelopes.com). The way that mVelopes works is that it allows you to put money from your bank account into virtual “envelopes” so that you know where every dollar is going. So when you use your debit card at Food Lion, for example, mVelopes downloads that transaction from your bank account. Then you drag and drop that into your envelope for “grocery store” and it subtracts that amount from what you budgeted for the month. This system enables you to budget for many different areas of life, it tracks every dollar that you spend, and it helps you know when to say when. For example, when you have used up all of your “eating out money”, your envelope is at $0 and you know that it’s time to pack your lunch for the rest of the month. 
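The envelope mechanics described above are simple enough to sketch in a few lines of code. This is only an illustration of the idea (the class name and dollar amounts are made up; it is not how mVelopes is actually implemented): every incoming dollar is assigned to a named envelope, and each purchase draws its envelope down toward zero.

```python
class Envelopes:
    """Toy envelope budgeting: assign every dollar to a named envelope
    up front, then subtract each purchase from its envelope."""

    def __init__(self, budget):
        # budget: mapping of envelope name -> dollars allocated this month
        self.remaining = dict(budget)

    def spend(self, envelope, amount):
        # Draw the envelope down; returns what is left in it.
        self.remaining[envelope] -= amount
        return self.remaining[envelope]

# Hypothetical monthly plan: when "eating out" hits $0, it's time
# to pack lunches for the rest of the month.
budget = Envelopes({"groceries": 400, "eating out": 75})
budget.spend("eating out", 50)
```

The point of the exercise is the constraint: the envelope balance, not the bank balance, tells you when to stop spending in a category.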
For us, mVelopes has been nothing short of amazing, and I would highly recommend the free, 30-day trial you can get online. The Emergency Fund: An emergency fund helps with margin because if you know that you have $1,000 set aside for nothing but Murphy’s Law, it makes things like a busted radiator ($600 for my 1995 Honda Civic, I found out last month) be nothing more than a blip on the radar. It’s covered. Financial advisor Dave Ramsey (http://www.daveramsey.com/) suggests that after paying all of your “have-to” bills (including minimum balances on credit cards), putting all extra money towards building a $1,000 emergency fund that you DO NOT TOUCH is a necessity for margin. Having that margin allows for a measure of peace in times that could easily feel like crisis. The Debt Snowball: Once that emergency fund is established, if you have non-house debt your extra money should go towards the principal of your lowest amount owed. Then when that debt is paid off, take its minimum payment and apply that, with any extra each month, to the principal of your next lowest debt, and so on. This is what Ramsey calls the “debt snowball.” Eliminating debt removes some of the “have-to” payments and allows us to give more and save more. It’s also wise to not incur further debt, so for example, don’t buy a more expensive car than you can afford to pay off immediately (or within a few months). Now, let’s say you have your $1,000, you are putting all your extra money into your debt, and then an emergency happens and you dip into your fund. Focus next on replenishing your emergency fund, then back to debt. Diane and I have been following this 3-step plan for a few years, and we have reduced our non-house debt by over $12,000, and have experienced a great deal of financial peace and have seen our ability and desire to give increase. While we don’t have tons of room for all the “extras” that we desire, we have found our way to having joy in the financial margins.
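The debt snowball is mechanical enough to simulate. Here is a rough sketch of the roll-forward logic described above (it deliberately ignores interest, and all debt names and balances are invented for illustration): pay the minimum on every debt, send everything extra to the smallest balance, and when a debt is retired, roll its payment into the next-smallest one.

```python
def debt_snowball(debts, extra):
    """Toy debt-snowball simulation (ignores interest).

    debts: list of (name, balance, minimum_payment) tuples.
    extra: additional dollars committed each month beyond the minimums.
    Returns (payoff_order, months_until_debt_free).
    """
    debts = sorted((list(d) for d in debts), key=lambda d: d[1])
    monthly = extra + sum(d[2] for d in debts)  # total committed every month
    order, month = [], 0
    while debts:
        month += 1
        budget = monthly
        # Pay the minimum on every debt except the smallest.
        for d in debts[1:]:
            pay = min(d[2], d[1])
            d[1] -= pay
            budget -= pay
        # Snowball everything left onto the smallest balance(s);
        # freed-up minimums automatically roll forward next month.
        while budget > 0 and debts:
            target = debts[0]
            pay = min(budget, target[1])
            target[1] -= pay
            budget -= pay
            if target[1] > 0:
                break  # budget exhausted this month
            order.append(target[0])
            debts.pop(0)
    return order, month

# Hypothetical example: three debts, $100/month extra.
order, months = debt_snowball(
    [("card", 500, 25), ("loan", 1000, 50), ("car", 3000, 150)], extra=100)
```

Because interest is left out, the month counts are optimistic, but the payoff ordering (smallest balance first) is the behavioral core of the method.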
Sunday, November 04, 2007 Having margin is not just relegated to having time to relax and room for relationship. Financial margin is having some space between our income and our outflow, which allows room in our finances for giving, for saving, and for emergencies that inevitably come. This is another area where margin is key for us to have peace, and it’s an area where so many in our culture and in the Church are way out of whack. It’s hard to have financial margin here in America. Advertisers are after us from the time we are pre-schoolers, telling us about one more toy or cereal that we can’t live without, and they don’t ease off as we get older. In fact the “toys” and “cereals” get more expensive, and the benefits that they promise seem more and more alluring, because they promise us beauty, status, happiness, sex, and fulfillment. And so we are encouraged to live right up to our financial limits, spending every dollar as soon as we get it, and even to go beyond our limits, charging things on credit. We make choices to have payments and bills for things that we may or may not need, and spend a lot of time worrying about how to pay those bills or spend more time working in order to afford what we bought. One of the costs of living this way is that our ability to participate financially in building God’s Kingdom is severely limited. Many times, in our heart of hearts we want to give more than we do, but there is just not room left when we add up what needs to go out plus the things that we want. Another cost is that our time and attention is taken up by a focus on money, giving us less and less of those things to give to resting with God, being in relationship with others, and more. Financial stress is one of the leading causes of stress in America and is one of the leading causes of divorce. 
Such a premium has been placed on money, and it has been elevated to such a place of delivering hope and happiness that when we don’t have as much as we think we should, it can consume us. But the Lord doesn’t want us consumed with money and worrying about that. Jesus said, “Why do you worry about clothes and food? Your Father knows you need those things. Seek God and His kingdom first, and everything else will fall into place.” Financial margin gives us room for that seeking. Next we will look at three simple ways to move towards financial margin.
Abstract: Claims: Description: BACKGROUND SUMMARY BRIEF DESCRIPTION OF THE DRAWINGS DETAILED DESCRIPTION Overview Example System Example Methods Patent applications by Daniel Joseph Filip, San Jose, CA US Patent applications by GOOGLE INC. Near real-time imagery of a given location may be provided to user upon request. Most popularly viewed geographic locations are determined, and a 360 degree image capture device is positioned at one or more of the determined locations. The image capture device may continually provide image information, which is processed, for example, to remove personal information and filter spam. Such image information may then be provided to users upon request. The image capture device continually captures multiple views of the given location, and the requesting user can select which perspective to view.A computer-implemented method comprising: receiving, by one or more computing devices, live images of a geographical location from at least one image capture device; processing, by the one or more computing devices, the received live images; receiving, by the one or more computing devices, a request for map data corresponding to the geographical location; providing, by the one or more computing devices, the requested map data; receiving, by the one or more computing devices, a request for live imagery corresponding to the requested map data; determining, by the one or more computing devices and based on the request for live imagery, a point of view associated with the requested live images; and providing, by the one or more computing devices, processed live images corresponding to the determined point of view and the requested map information.The method of claim 1, further comprising: determining, using the one or more computing devices, geographical locations for which imagery is most often requested by users; and positioning the image capture device at the determined geographical location.The method of claim 1, wherein the received images 
include a continuous 360 degree field of view around the image capture device.The method of claim 1, wherein processing the received live images comprising detecting personal information and blurring the detected personal information.The method of claim 4, wherein the personal information includes human faces.The method of claim 1, wherein processing the received live images comprises filtering spam data.A system comprising: at least one image capture device positioned at a geographical location; one or more processors in communication with the image capture device, the one or processors programmed to: receive live images of a geographical location from at least one image capture device; process the received live images; receive a request for map data corresponding to the geographical location; provide the requested map data; receive a request for live imagery corresponding to the requested map data; determine, based on the request for live imagery, a point of view associated with the requested live images; and provide processed live images corresponding to the determined point of view and the requested map information.The system of claim 7, wherein the one or more processors are further programmed to determine geographical locations for which imagery is most often requested by users, and wherein the at least one image capture device is positioned at the determined geographical location.The system of claim 7, wherein the received images include a continuous 360 degree field of view around the image capture device.The system of claim 7, wherein processing the received live images comprises detecting personal information and blurring the detected personal information.The system of claim 10, wherein the personal information includes human faces.The system of claim 7, wherein processing the received live images comprises filtering spam data.A non-transitory computer-readable medium storing information and instructions executable by a processor for performing a method 
of providing live imagery, the method comprising: receiving live images of a geographical location from at least one image capture device; processing the received live images; receiving a request for map data corresponding to the geographical location; providing the requested map data; receiving a request for live imagery corresponding to the requested map data; determining, based on the request for live imagery, a point of view associated with the requested live images; and providing processed live images corresponding to the determined point of view and the requested map information.The non-transitory computer-readable medium of claim 13, the method further comprising determining geographical locations for which imagery is most often requested by users, wherein the image capture device is positioned at the determined geographical location.The non-transitory computer-readable medium of claim 13, wherein the received images include a continuous 360 degree field of view around the image capture device.The non-transitory computer-readable medium of claim 13, wherein processing the received live images comprising detecting personal information and blurring the detected personal information.The non-transitory computer-readable medium of claim 16, wherein the personal information includes human faces.The non-transitory computer-readable medium of claim 13, wherein processing the received live images comprises filtering spam data.Upon request, map data for a given location and associated imagery may be provided to a user. Such associated imagery is typically captured by a vehicle-mounted camera as the vehicle drives through the given location, and then stored in a database. Because of the passage of time between image capture and providing the image to the user, the imagery may depict information that is irrelevant or out of date. 
For example, the imagery may depict construction that is no longer ongoing, or a business that is no longer operational.

Near real-time imagery of a given location may be provided to a user upon request. The most popularly viewed geographic locations may be determined, and a 360 degree image capture device may be positioned at such locations. The image capture device may continually provide image information, which is processed, for example, to remove personal information and filter spam. Such image information may then be provided to users upon request. Because the image capture device continually captures multiple views of the given location, the requesting user can select which perspective to view.

One aspect of the disclosure provides a computer-implemented method for providing live imagery to users upon request. In this method, one or more computing devices receive live images of a geographical location from at least one image capture device, and process the received live images. Further, the one or more computing devices receive a request for map data corresponding to the geographical location, and provide the requested map data. The one or more computing devices further receive a request for live imagery corresponding to the requested map data, and determine, based on the request for live imagery, a point of view associated with the requested live images. The one or more computing devices provide processed live images corresponding to the determined point of view and the requested map information. According to one example, the one or more computing devices further determine geographical locations for which imagery is most often requested by users, and the image capture device is positioned at the determined geographical location. The received images may include a continuous 360 degree field of view around the image capture device.
Processing the received live images may include detecting personal information, such as human faces and license plate numbers, and blurring the detected personal information. Alternatively or additionally, processing the received images may include filtering spam data.

Another aspect of the disclosure provides a system comprising at least one image capture device positioned at a geographical location, and one or more processors in communication with the image capture device. The one or more processors are programmed to receive live images of a geographical location from at least one image capture device, process the received live images, receive a request for map data corresponding to the geographical location, provide the requested map data, receive a request for live imagery corresponding to the requested map data, determine, based on the request for live imagery, a point of view associated with the requested live images, and provide processed live images corresponding to the determined point of view and the requested map information.

Yet another aspect of the disclosure provides a non-transitory computer-readable medium storing information and instructions executable by a processor. When executed, the instructions perform a method comprising receiving live images of a geographical location from at least one image capture device, processing the received live images, receiving a request for map data corresponding to the geographical location, and providing the requested map data. This method further includes receiving a request for live imagery corresponding to the requested map data, determining, based on the request for live imagery, a point of view associated with the requested live images, and providing processed live images corresponding to the determined point of view and the requested map information.

FIG. 1 is a functional diagram of a system in accordance with aspects of the disclosure. FIG. 2 is a pictorial diagram of the system of FIG. 1. FIG. 3 is an example screen shot in accordance with aspects of the disclosure. FIG. 4 is another example screen shot in accordance with aspects of the disclosure. FIG. 5 is another example screen shot in accordance with aspects of the disclosure. FIG. 6 is a flow diagram of an example method in accordance with aspects of the disclosure.

Upon request by a user, live imagery of a given location may be provided to the user over the Internet in association with map data for the given location. For example, an image capture device may be positioned at the given location and may continually provide imagery to one or more computing devices. The one or more computing devices process the imagery to, for example, remove personal information (e.g., faces and/or license plate numbers) and filter spam. A user may request map data for the given location, and may also request live imagery of the given location. In response to the request, the one or more processors provide the processed live images associated with the requested map data.

The image capture device may be, for example, a 360 degree video camera. In this regard, the image capture device may continually capture a 360 degree field of view around the image capture device. According to one example, in requesting the live imagery, the user may specify a viewpoint for the imagery. For example, the user may submit directional information with the request for imagery, and in response receive a segment of the captured imagery.

Positioning of the image capture device may be determined based on popularity. For example, the one or more computing devices may determine for which geographical locations the most requests for map data or imagery are received. Image capture devices may be positioned at the determined locations. Preferably, the image capture devices are positioned so as to prevent tampering.

The processing performed on the captured images may be automated.
For example, the one or more processors may automatically detect personal information, such as faces, license plates, or other information. In response to detecting such information, the one or more processors blur or otherwise obscure the information such that it is not provided to a user in response to a request. Moreover, the one or more processors may detect and filter spam. For example, it may be determined that images from an unauthorized image capture device or other unauthorized content are being received in addition to or in place of approved images. Accordingly, the unauthorized content and images may be filtered.

FIGS. 1 and 2 include an example system 100 in which the features described above may be implemented. It should not be considered as limiting the scope of the disclosure or usefulness of the features described herein. In this example, system 100 can include one or more computing devices 110, which may be connected to further computing devices 160 and 170 over a network 150.

Computing devices 110 can contain one or more processors 120, memory 130 and other components typically present in general purpose computing devices. The memory 130 can store information accessible by the one or more processors 120, including instructions 132 that can be executed by the one or more processors 120. Memory 130 can also include data 134 that can be retrieved, manipulated or stored by the processor. The memory can be of any non-transitory type capable of storing information accessible by the processor, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories.

The instructions 132 can be any set of instructions to be executed directly, such as machine code, or indirectly, such as scripts, by the one or more processors. In that regard, the terms "instructions," "application," "steps" and "programs" can be used interchangeably herein.
The instructions can be stored in object code format for direct processing by a processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The instructions 132 can be executed to perform operations such as detecting personal information in received images, modifying the received images to blur or obscure such information, or the like. The instructions 132 may also be executed to perform spam detection and filtering. Functions, methods and routines of the instructions are explained in more detail below.

Data 134 can be retrieved, stored or modified by the one or more processors 120 in accordance with the instructions 132. For instance, although the subject matter described herein is not limited by any particular data structure, the data can be stored in computer registers, in a relational database as a table having many different fields and records, or XML documents. The data can also be formatted in any computing device-readable format such as, but not limited to, binary values, ASCII or Unicode. Moreover, the data can comprise any information sufficient to identify the relevant information, such as numbers, descriptive text, proprietary codes, pointers, references to data stored in other memories such as at other network locations, or information that is used by a function to calculate the relevant data. According to one example, the data may include map information for geographical locations. Moreover, the data 134 may include information related to image capture device 190, such as an identifier and location information.

The one or more processors 120 can be any conventional processors, such as a commercially available CPU. Alternatively, the processors can be dedicated components such as an application specific integrated circuit ("ASIC") or other hardware-based processor.
One or more of computing devices 110 may include specialized hardware components to perform specific computing processes, such as decoding video, matching video frames with images, distorting videos, encoding distorted videos, etc., faster or more efficiently.

Although FIG. 1 functionally illustrates the processor, memory, and other elements of computing device 110 as being within the same block, the processor, computer, computing device, or memory can actually comprise multiple processors, computers, computing devices, or memories that may or may not be stored within the same physical housing. For instance, the memory can be a hard drive or other storage media located in housings different from that of the computing devices 110. As another example, various methods described below as involving a single component (e.g., processor 120) may involve a plurality of components (e.g., multiple computing devices distributed over a network of computing devices, computers, "racks," etc. as part of a parallel or distributed implementation). Further, the various functions performed by the embodiments may be executed by different computing devices at different times as load is shifted among computing devices. Similarly, various methods described below as involving different components (e.g., device 110 and device 160) may involve a single component (e.g., rather than device 160 performing a determination described below, device 160 may send the relevant data to device 110 for processing and receive the results of the determination for further processing or display). Accordingly, references to a processor, computer, computing device, or memory will be understood to include references to a collection of processors, computers, computing devices, or memories that may or may not operate in parallel. For example, the computing devices 110 may include server computing devices operating as a load-balanced server farm, distributed system, etc.
Yet further, although some functions described below are indicated as taking place on a single computing device having a single processor, various aspects of the subject matter described herein can be implemented by a plurality of computing devices, for example, communicating information over network 150.

Each of the computing devices 110 can be at different nodes of the network 150 and capable of directly and indirectly communicating with other nodes of network 150. Although only a few computing devices are depicted in FIGS. 1-2, it should be appreciated that a typical system can include a large number of connected computing devices, with each different computing device being at a different node of the network 150. The network 150 and intervening nodes described herein can be interconnected using various protocols and systems, such that the network can be part of the Internet, World Wide Web, specific intranets, wide area networks, or local networks. The network can utilize standard communications protocols, such as Ethernet, WiFi and HTTP, protocols that are proprietary to one or more companies, and various combinations of the foregoing. Although certain advantages are obtained when information is transmitted or received as noted above, other aspects of the subject matter described herein are not limited to any particular manner of transmission of information.

As an example, each of the computing devices 110 may include web servers capable of communicating with a storage system 140, image capture device 190, and computing devices 160, 170 via the network 150. For example, one or more of server computing devices 110 may receive live imagery from the image capture device 190 through the network 150, and may further transmit processed imagery to the client devices 160, 170 using the network 150.
As another example, one or more of server computing devices 110 may use network 150 to transmit and present information to a user, such as user 191, 192, on a display, such as displays 165 of computing devices 160, 170. In this regard, computing devices 160, 170 may be considered client computing devices and may perform all or some of the features described herein.

Each of the client computing devices 160, 170 may be configured similarly to the server computing devices 110, with one or more processors 162 and memory, including data 163 and instructions 164 as described above. Each client computing device 160, 170 may be a personal computing device intended for use by a user 191, 192 and have all of the components normally used in connection with a personal computing device such as a central processing unit (CPU), memory (e.g., RAM and internal hard drives) storing data and instructions, a display 165 (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device that is operable to display information), and user input device 166 (e.g., a mouse, keyboard, touch-screen or microphone). The client computing device may also include a camera 167 for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another.

Although the client computing devices 160, 170 may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the Internet. By way of example only, client computing device 160 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, or a netbook that is capable of obtaining information via the Internet. In another example, client computing device 170 may be a head-mounted computing system.
As an example, the user may input information using a small keyboard, a keypad, a microphone, visual signals with a camera, or a touch screen.

As with memory 130, storage system 140 can be of any type of computerized storage capable of storing information accessible by the server computing devices 110, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, write-capable, and read-only memories. In addition, storage system 140 may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system 140 may be connected to the computing devices via the network 150 as shown in FIG. 1 and/or may be directly connected to any of the computing devices 110, 160, 170.

Storage system 140 may store images and associated information such as image identifiers, orientation, location of the camera that captured the image, intrinsic camera settings (such as focal length, zoom, etc.), depth information, as well as references to other, target images. Storage system 140 may also include information used for processing live imagery received by the one or more servers 110 from the image capture device 190. For example, the storage system 140 may include data associated with previously identified spam, such that the data can be used to identify and filter spam from the live imagery.

The image capture device 190 may be a camera, such as a video camera, or any other device capable of capturing images of a particular geographical location. According to one example, the image capture device 190 is a 360 degree video camera, which continually captures a 360 degree field of view around itself. According to another example, multiple image capture devices may be deployed at one geographical location. The image capture device may be positioned at the geographical location in such a way as to prevent or mitigate potential tampering with it.
According to some examples, the image capture device 190 is positioned at geographical locations selected based on popularity. For example, the one or more computing devices 110 may determine geographical locations for which map data and/or imagery are most often requested by users, and image capture devices 190 may be placed at those determined locations.

Using the system described above, live imagery of geographical locations is provided to users upon request. The live imagery may be received at the one or more server computing devices from the image capture device and processed, for example, to remove personal information, such as faces and license plate numbers, and filter spam. The live imagery can include 360 degree panoramas. Users can request map data and imagery for the geographical location, and receive the processed live imagery in response. The users may also specify a particular point of view, and a corresponding portion of the 360 degree panorama is provided.

FIG. 3 illustrates an example screenshot 300 providing map information for a given geographical location corresponding to an address entered in search field 310. The map information includes, for example, a roadgraph 320. A place marker 322 may indicate a position on the roadgraph 320 corresponding to the entered location. View option buttons 325, 335, 345, 355 are also provided, wherein each button provides an option for a different representation of the geographical location. For example, the map button 325 may correspond to a roadgraph, such as the roadgraph 320. The street button 335 may correspond to still imagery of the geographical location taken from a perspective of someone standing at street level. The satellite button 345 may correspond to satellite imagery, showing a view of the geographical location from space.
The live button 355 may correspond to live imagery captured by an image capture device dedicated to obtaining imagery of the specified geographical location, such as the image capture device 190 (FIGS. 1-2).

FIG. 4 is an example screenshot 400 illustrating the live imagery associated with the specified geographical location and provided to the user. For example, the geographical location corresponding to address 415 is depicted by roadgraph 420, on which a position viewpoint indicator 462 and a directional viewpoint indicator 464 are placed. Live imagery appearing in viewing field 450 corresponds to the address 415 and the roadgraph 420. The live images may be viewed by the user, for example, by selecting live view button 455 among option buttons 425, 435, 455. The images provided in viewing field 450 may include a portion of the images actually captured and provided to the server computing devices. For example, while the image capture device positioned at the geographical location may obtain images with a continuous 360 degree field of view, only a segment of such field of view may be shown in viewing field 450. That segment corresponds to the position and direction of indicators 462, 464. According to other examples, the full images captured, such as the entire 360 degree field of view panorama, may be provided to the user in one or more viewing fields.

The position indicator 462 and directional indicator 464 may be manipulated by the user, for example, to receive images of a different viewpoint. FIG. 5 illustrates another example screenshot providing live imagery corresponding to a different view of the same geographical location as in FIG. 4. In particular, while position indicator 562 remains in the same position as position indicator 462 (FIG. 4), directional indicator 564 has been manipulated to point in a different direction, such as towards North.
Accordingly, the live imagery provided in viewing field 550 shows a different area of the geographical location. According to the example where a 360 degree image capture device positioned at the location is providing the images, the images shown in the viewing field 550 may be another portion of the 360 degree panorama. In this regard, the user may repeatedly request different portions of the 360 degree frame. According to some examples, because the imagery provided in the viewing field 550 is live, objects in the imagery may appear to be moving. Moreover, the imagery may be continually updated as new images are received.

The imagery provided in FIGS. 4-5 is processed by one or more computing devices prior to being provided to users. For example, personal information and spam may be removed. As an example of removing personal information, an automatic face detection and blurring operation may be performed on the imagery. Similarly, license plate numbers and other personal information may be detected and blurred or otherwise obscured. As an example of spam filtering, spam such as people putting their faces close up to the camera or people holding up signs with slogans may be present in the received images. Such spam may be detected, for example, using face detection or text detection algorithms. Detected spam may be blurred in the images or obscured by a black box or other object. Thus, while the imagery is described as being "live," it should be understood that the imagery may actually be subject to a small delay, such as a few seconds to a few minutes. According to another example, images including detected spam may not be sent to users. For example, the last available clean live imagery from the given geographical location, which does not include spam, can be provided.
In some instances, such last available clean images can be provided with a timestamp or note indicating the delay.

According to one example, crowd-sourcing techniques may be used as part of the spam detection and filtering process. For example, users may submit reports identifying spam included in the live imagery for a given location. In response to receiving a predetermined number of reports for the given location, the last available clean images may be provided to users in place of the more recent images that include spam.

FIG. 6 provides a flow diagram illustrating an example method 600. The following operations do not have to be performed in the precise order described below. Rather, various steps can be handled in a different order or simultaneously. Steps can also be omitted unless otherwise stated.

In block 610, one or more computing devices receive live images of a geographical location from at least one image capture device. The at least one image capture device may be, for example, a 360 degree camera that continually captures images in directions all around a vertical axis. Such image capture devices may be positioned at selected geographical locations throughout the world. According to one example, the geographical locations may be selected by determining the locations for which imagery and/or map data is most often requested by users.

In block 620, the received live images may be processed, for example, to remove personal information and filter spam. For example, the one or more server computing devices may automatically detect objects such as faces, license plates, etc. Once detected, the received imagery may be modified to obscure those objects. For example, the detected objects may be blurred, covered, or the like. Spam may affect the received images in various ways. For example, it may be determined that images from an unauthorized image capture device or other unauthorized content are being received in addition to or in place of approved images.
Such spam may be automatically filtered using any of a number of techniques.

In block 630, a request for map data corresponding to the geographical location is received by the one or more computing devices. For example, a user may enter an address, point of interest, or other relevant information in a search field of an interface. In response, the requested map data is provided to the user (block 640). For example, an address and/or a roadgraph or other depiction of the geographical location may be provided.

In block 650, a request for live imagery corresponding to the requested map data is received. For example, the user may select an option to view live imagery from among several other types of views. Further, the user may identify in the request a specific area of the geographical location to view. For example, the user may identify position and/or directional information associated with the requested imagery. Such information may be indicated by the user by manipulating icons, entering text, providing speech commands, navigating through a depiction of the location, or the like.

In block 660, a point of view associated with the requested live image is determined based on the request for live imagery. For example, the one or more computing devices may determine from information received from the user which specific area of the geographical location the user would like to see live.

In block 670, processed live images corresponding to the determined point of view and the requested map information are provided to the user. For example, a portion of the captured 360 degree panorama may be provided to the user, wherein the provided portion corresponds to the position and direction specified in the user's request. According to another example, the full 360 degree panorama may be provided in one or more viewing fields.
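The portion selection in block 670 — serving only the slice of the 360 degree panorama that matches the requested direction — can be sketched as follows. The equirectangular layout, the panorama width, and the 90 degree field of view are assumptions for illustration, not details taken from the disclosure.

```python
def panorama_segment(panorama_width, heading_deg, fov_deg=90):
    """Return the (start, end) pixel-column ranges of an equirectangular
    panorama covering `fov_deg` degrees centered on `heading_deg`.

    Columns are computed modulo the panorama width, so a view that
    straddles the 0/360 seam comes back as two ranges.
    """
    px_per_deg = panorama_width / 360.0
    start = (heading_deg - fov_deg / 2) % 360
    end = (heading_deg + fov_deg / 2) % 360
    c0, c1 = int(start * px_per_deg), int(end * px_per_deg)
    if c0 <= c1:
        return [(c0, c1)]
    return [(c0, panorama_width), (0, c1)]  # wraps around the seam

# A 3600-px-wide panorama: north (0 deg) straddles the seam,
# east (90 deg) is a single contiguous slice.
print(panorama_segment(3600, 0))   # [(3150, 3600), (0, 450)]
print(panorama_segment(3600, 90))  # [(450, 1350)]
```

Manipulating the directional indicator (as in FIGS. 4-5) would then simply change `heading_deg` and reselect the slice, with no change to what the capture device records.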
Because the imagery is continually captured, the imagery provided to the user may be continually updated.

The above-described features may be advantageous in that they provide users with the most up-to-date information regarding a specified location. For example, users can become informed about weather, traffic, construction, events, or other details associated with a geographic location. Such information may be more reliable than other sources of the same information, because the users can view it first hand, regardless of their current location. Using such information, users can make decisions about visiting the geographic location, or just become better educated about it.

As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. As an example, the preceding operations do not have to be performed in the precise order described above. Rather, various steps can be handled in a different order or simultaneously. Steps can also be omitted unless otherwise stated. In addition, the provision of the examples described herein, as well as clauses phrased as "such as," "including" and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, the examples are intended to illustrate only one of many possible embodiments. Further, the same reference numbers in different drawings can identify the same or similar elements.
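As a rough illustration of the processing step in block 620 (detect sensitive regions, then obscure them before serving), the sketch below stubs out detection and uses a simple mean filter as the blur; a real system would plug in face, license-plate, and text detectors here. All names and the toy frame are assumptions for illustration.

```python
def mean_blur_region(image, box):
    """Replace every pixel inside `box` (x0, y0, x1, y1, exclusive)
    with the mean intensity of that region. `image` is a list of rows."""
    x0, y0, x1, y1 = box
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    mean = sum(pixels) // len(pixels)
    for y in range(y0, y1):
        for x in range(x0, x1):
            image[y][x] = mean
    return image

def process_frame(image, detect):
    """Block 620 in miniature: obscure every region the detector reports."""
    for box in detect(image):
        mean_blur_region(image, box)
    return image

# Toy 4x4 grayscale frame; the stub "detector" reports one 2x2 region.
frame = [[10, 20, 30, 40],
         [50, 60, 70, 80],
         [90, 100, 110, 120],
         [130, 140, 150, 160]]
out = process_frame(frame, lambda img: [(0, 0, 2, 2)])
print(out[0][:2], out[1][:2])  # [35, 35] [35, 35]
```

Replacing the mean filter with a black box, or dropping frames entirely when spam is detected (falling back to the last clean frame, as the disclosure describes), fits the same `process_frame` shape.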
Q: vim: How do I syntax highlight a line so that one part of it is formatted as A and the other part is formatted as B?

I've been at this for hours today, researching online and reading the vim manual. I'm about at my wits' end. I want to format lines with timestamps so that they have green text, while bolding the timestamps themselves. For instance, if I have the following 4 lines:

1 [ 20:42:57 20190601 ] Apple car truck a whole bunch of other nonsense
2 ball baby zebra more nonsense
3 [ 20:43:12 20190601 ] dog blah blah blah
4 circle mouse rat up down left right b a b a select start

Then both lines containing the timestamps (lines 1 & 3) would have green text, and the timestamps themselves ([ 20:42:57 20190601 ] and [ 20:43:12 20190601 ]) would be bold.

My first thought was to just use a regex pattern to match all lines with a timestamp and color them green, and then use another regex pattern for just the timestamps themselves and make them bold, like so:

syntax match timestampline "\[ \([0-9]\{2}\:\)\{2}[0-9]\{2} [0-9]\{8} \].*$"
highlight timestampline ctermfg=green ctermbg=NONE
syntax match timestamponly "\[ \([0-9]\{2}\:\)\{2}[0-9]\{2} [0-9]\{8} \]"
highlight timestamponly cterm=bold

But that just results in the timestamps being bolded with no green text anywhere. Then I thought that maybe I need to tell each syntax rule where to stop or start matching, like so:

syntax match timestampline "\[ \([0-9]\{2}\:\)\{2}[0-9]\{2} [0-9]\{8} \]\{-}\zs.*$"
highlight timestampline ctermfg=green ctermbg=NONE
syntax match timestamponly "\[ \([0-9]\{2}\:\)\{2}[0-9]\{2} [0-9]\{8} \]\ze"
highlight timestamponly ctermfg=green ctermbg=NONE cterm=bold

But that just results in the timestamps being green and bold with everything else being unformatted. I don't understand what I'm doing wrong. Why are the second highlight statements completely negating the first ones? Shouldn't they just be formatting what they match and not affect things that they don't match?
A: From :help :syn-priority:

When several syntax items may match, these rules are used: When multiple Match or Region items start in the same position, the item defined last has priority.

So in your code timestamponly always wins and timestampline is never used. You can get the effect you want like this:

syntax match timestampline /.*$/ contained
highlight timestampline ctermfg=green
syntax match timestamponly /\[ \d\{2}:\d\{2}:\d\{2} \d\{8} \]/ nextgroup=timestampline
highlight timestamponly ctermfg=green cterm=bold
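One way to wire the accepted answer's rules into Vim permanently is a small filetype setup; the filetype name `mylog` and the `*.mylog` glob below are assumptions for illustration, not anything from the question:

```vim
" ~/.vim/ftdetect/mylog.vim — treat *.mylog files as the custom filetype
autocmd BufRead,BufNewFile *.mylog set filetype=mylog

" ~/.vim/syntax/mylog.vim — the answer's rules, loaded for that filetype
if exists("b:current_syntax")
  finish
endif
syntax match timestampline /.*$/ contained
highlight timestampline ctermfg=green
syntax match timestamponly /\[ \d\{2}:\d\{2}:\d\{2} \d\{8} \]/ nextgroup=timestampline
highlight timestamponly ctermfg=green cterm=bold
let b:current_syntax = "mylog"
```

For a one-off test in an open buffer, the same four `syntax`/`highlight` commands can simply be typed at the `:` prompt.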
PA: Kerry's 'Jewish Advisors' Are to Blame

Former minister in the PA government claims that John Kerry's "Jewish advisors" are to blame for the impasse in the peace talks.

Dalit Halevi and Elad Benari, 02/04/14 03:14

Sufian Abu Zaida (photo: Flash 90)

U.S. Secretary of State John Kerry's "Jewish advisors" are to blame for the impasse in the peace talks, a former minister in the Palestinian Authority (PA) government claimed on Tuesday. Speaking to the PA-based Safa news agency, Sufian Abu Zaida, a senior member of PA Chairman Mahmoud Abbas's Fatah movement, said, "All the members of the U.S. delegation to the negotiations are Jews, except for Kerry, and they seek to implement Israel's goals." It was unclear who Abu Zaida was referring to, but he may have meant Martin Indyk, the former U.S. Ambassador to Israel, who has served as President Barack Obama's envoy to the peace talks.

The comments come hours after Abbas breached the conditions of the peace talks by signing a request to join several UN agencies, using the fact that Israel has delayed the release of the fourth tranche of terrorists as an excuse for the move. Abbas obligated himself to refrain from such unilateral moves for international recognition during the course of the peace talks, restarted last July. The unilateral move by the PA effectively torpedoes the peace talks, which were set to continue until April 29, and which have been stalling of late amid Abbas's consistent refusal to recognize Israel as the Jewish state.

As usual, the PA blamed Israel for the impasse. Abbas's spokesman, Nabil Abu Rudeineh, accused Israel of causing Kerry's efforts to fail, saying that the failure was due to Israel's "evading the agreement to release the fourth group of veteran prisoners." Abu Rudeineh accused Israel of thwarting the Oslo Accords because of its "wild settlement policy", the latest manifestation of which was the publishing of tenders for 700 new housing units in Jerusalem.
"The international bodies, and particularly the United States, must exert pressure on the Israeli government and force it to comply with its obligations regarding the release of the fourth group of veteran prisoners," he said. On Tuesday evening, as Abbas signed the request to join 15 international organizations, Kerry cancelled a planned meeting with the PA Chairman. He later clarified, however, that it was too early to declare that the peace efforts have officially failed. "It is completely premature tonight to draw...any final judgement about today's events and where things are," Kerry said at a press conference in Brussels. "This is a moment to be really clear-eyed and sober about this process."
--- abstract: 'Hybrid analog and digital beamforming transceivers are instrumental in addressing the challenge of expensive hardware and high training overheads in the next generation millimeter-wave (mm-Wave) massive MIMO (multiple-input multiple-output) systems. However, the lack of fully digital beamforming in hybrid architectures and short coherence times at mm-Wave impose additional constraints on the channel estimation. Prior works on addressing these challenges have focused largely on narrowband channels wherein optimization-based or greedy algorithms were employed to derive hybrid beamformers. In this paper, we introduce a deep learning (DL) approach for joint channel estimation and hybrid beamforming for frequency-selective, wideband mm-Wave systems. In particular, we consider a massive MIMO Orthogonal Frequency Division Multiplexing (MIMO-OFDM) system and propose three different DL frameworks comprising convolutional neural networks (CNNs), which accept the received pilot signal as input and yield the hybrid beamformers at the output. Numerical experiments demonstrate that, compared to the current state-of-the-art optimization and DL methods, our approach provides higher spectral efficiency, lower computational cost, and higher tolerance against deviations in the received pilot data, corrupted channel matrix, and propagation environment.' author: - 'Ahmet M. Elbir and Kumar Vijay Mishra [^1] [^2]' bibliography: - 'IEEEabrv.bib' - 'references\_047\_journal.bib' title: 'Deep Learning Strategies For Joint Channel Estimation and Hybrid Beamforming in Multi-Carrier mm-Wave Massive MIMO Systems' --- Channel estimation, deep learning, hybrid beamforming, mm-Wave, wideband massive MIMO. Introduction {#sec:Introduciton} ============ Conventional cellular communication systems suffer from a spectrum shortage while the demand for wider bandwidth and higher data rates is continuously increasing [@mimoOverview].
In this context, the millimeter wave (mm-Wave) band is a preferred candidate for fifth-generation (5G) communications technology because it provides higher data rates and wider bandwidth [@mimoOverview; @mishra2019toward; @5GwhatWillItBe; @hodge2019reconfigurable; @ayyar2019robust]. Compared to sub-6 GHz transmissions envisaged in 5G, the mm-Wave signals encounter a more complex propagation environment that is characterized by higher scattering, severe penetration losses, lower diffraction, and higher path loss for fixed transmitter and receiver gains [@mimoHybridLeus1; @mimoHybridLeus2]. The mm-Wave systems leverage massive antenna arrays - usually in a multiple-input multiple-output (MIMO) configuration - to achieve array and multiplexing gain, and thereby compensate for the propagation losses at high frequencies [@mimoRHeath]. However, such a large array requires a dedicated radio-frequency (RF) chain for each antenna, resulting in an expensive system architecture and high power consumption. To address this, hybrid analog and baseband beamforming architectures have been introduced, wherein a small number of phase-only analog beamformers are employed to steer the beams. The down-converted signal is then processed by baseband beamformers, each of which is dedicated to a single RF chain [@mimoHybridLeus1; @mimoHybridLeus2; @mimoRHeath; @mimoScalingUp]. This combination of high-dimensional phase-only analog and low-dimensional baseband digital beamformers significantly reduces the number of RF chains while also maintaining sufficient beamforming gain [@mmwaveKeyElements; @mimoRHeath]. However, the lack of fully digital beamforming in hybrid architectures poses challenges in mm-Wave channel estimation [@channelEstLargeArrays; @channelEstLargeArrays2; @channelEstimation1; @channelEstimation1CS; @channelModelSparseBajwa; @channelModelSparseSayeed].
The instantaneous channel state information (CSI) is essential for massive MIMO communications because precoding at downlink or decoding at uplink transmission requires highly accurate CSI to achieve spatial diversity and multiplexing gain [@mimoHybridLeus1; @mimoHybridLeus2]. In practice, pilot signals are periodically transmitted and the received signals are processed to estimate the CSI [@channelEstLargeArrays2]. Further, mm-Wave environments such as indoor and vehicular communications are highly variable with short coherence times [@coherenceTimeRef], which necessitates the use of channel estimation algorithms that are robust to deviations in the channel data. Once the CSI is obtained, the hybrid analog and baseband beamformers are designed using either the instantaneous channel matrix or the channel covariance matrix (CCM). Beamforming based on the latter provides lower spectral efficiency [@widebandHBWithoutInsFeedback] because the CCM does not reflect the instantaneous profile of the channel. Hence, it is more common to utilize the channel matrix for hybrid beamforming [@mimoHybridLeus3; @hybridBFAltMin; @hybridBFLowRes; @sohrabiOFDM]. In recent years, several techniques have been proposed to design the hybrid precoders in mm-Wave MIMO systems. Initial works focused on narrow-band channels [@mimoHybridLeus1; @mimoHybridLeus2; @mimoHybridLeus3; @mimoRHeath; @hybridBFLowRes]. However, to effectively utilize the mm-Wave MIMO architectures with relatively larger bandwidth, there are recent and concerted efforts toward developing broadband hybrid beamforming techniques. The key challenge in hybrid beamforming for a broadband frequency-selective channel is designing a common analog beamformer that is shared across all subcarriers, while the digital (baseband) beamformer weights must be specific to each subcarrier.
This difference between hybrid beamforming design for frequency-selective channels and the flat-fading case is the primary motivation for considering hybrid beamforming for orthogonal frequency division multiplexing (OFDM) modulation. The optimal beamforming vector in a frequency-selective channel depends on the frequency, i.e., a subcarrier in OFDM, but the analog beamformer in any of the narrow-band hybrid structures cannot vary with frequency. Thus, a common analog beamformer must be designed in consideration of its impact on all subcarriers, making hybrid precoding more difficult than in the narrow-band case. Among prior works, [@widebandChannelEst1; @widebandChannelEst2] consider channel estimation for wideband mm-Wave massive MIMO systems. The hybrid beamforming design was investigated in [@alkhateeb2016frequencySelective; @sohrabiOFDM; @widebandHBWithoutInsFeedback; @widebandMLbased] where OFDM-based frequency-selective structures are designed. In particular, [@alkhateeb2016frequencySelective] proposes a Gram-Schmidt orthogonalization-based approach for hybrid beamforming (GS-HB) with the assumption of perfect CSI; GS-HB selects the precoders from a finite codebook obtained from the instantaneous channel data. Using the same assumption on CSI, [@sohrabiOFDM] proposed a phase extraction approach for hybrid precoder design. In [@zhu2016novel], a unified analog beamformer is designed based on the second-order spatial channel covariance matrix of a wideband channel. In [@zhang2016low], the Eckart-Young-Mirsky matrix approximation is employed to find the wideband beamforming matrices that have the minimum Euclidean distance from the optimal solutions. In [@lee2014matrix], the wideband beamformer design is cast as a search for a common basis matrix for the subspaces spanned by all subcarriers’ channel matrices and the higher order singular value decomposition (HOSVD) method is applied.
In [@chen2018hybrid], antenna selection is also introduced to wideband hybrid beamforming. It exploits the asymptotic orthogonality of array steering vectors and proposes two angular-information-based beamforming schemes to relax the assumption of full CSI at the transmitter such that knowledge of only angles of departure is required. Nearly all of the aforementioned methods strongly rely on perfect CSI knowledge. This is very impractical given the highly dynamic nature of the mm-Wave channel [@coherenceTimeRef]. To relax this dependence and obtain robust performance against the imperfections in the estimated channel matrix, we examine a deep learning (DL) approach. DL is capable of uncovering complex relationships in data/signals and, thus, can achieve better performance. This has been demonstrated in several successful applications of DL in wireless communications problems such as channel estimation [@mimoDLChannelEstimation; @deepCNN_ChannelEstimation], analog beam selection [@mimoDLHybrid; @hodge2019multi], and also hybrid beamforming [@mimoDLHybrid; @mimoDLChannelModelBeamformingFacebook; @mimoDeepPrecoderDesign; @elbirDL_COMML; @elbirQuantizedCNN2019; @elbirHybrid_multiuser]. In particular, DL-based techniques have been shown [@deepCNN_ChannelEstimation; @deepLearningCommOverAir; @elbirIETRSN2019; @elbirQuantizedCNN2019; @elbirDL_COMML] to be computationally efficient in searching for optimum beamformers and tolerant to imperfect channel inputs when compared with the conventional methods. However, these works investigated only narrow-band channels [@mimoDeepPrecoderDesign; @mimoDLChannelModelBeamformingFacebook; @elbirDL_COMML; @elbirQuantizedCNN2019]. The DL-based design of hybrid precoders for broadband mm-Wave massive MIMO systems, despite its high practical importance, remains unexamined so far. In this paper, we propose a DL-based joint channel estimation and hybrid beamformer design for wideband mm-Wave systems.
The proposed framework constructs a non-linear mapping between the received pilot signals and the hybrid beamformers. In particular, we employ convolutional neural networks (CNNs) in three different DL structures. In the first framework (F1), a single CNN maps the received pilot signals directly to the hybrid beamformers. In the second (F2) and third (F3) frameworks, we employ multiple CNNs to also estimate the channel separately. In F2, entire subcarrier data are fed to a single CNN for channel estimation. This is a less complex architecture but it does not allow flexibility of controlling each channel individually. Therefore, we tune the performance of F2 in F3, which has a dedicated CNN for each subcarrier. The proposed DL framework operates in two stages: offline training and online prediction. During training, several received pilot signals and channel realizations are generated, and hybrid beamforming problem is solved via the manifold optimization (MO) approach [@hybridBFAltMin; @manopt] to obtain the network labels. In the prediction stage when the CNNs operate in real-time, the channel matrix and the hybrid beamformers are estimated by simply feeding the CNNs with the received pilot data. The proposed approach is advantageous because it does not require the perfect channel data in the prediction stage yet it provides robust performance. Moreover, our CNN structure takes less computational time to produce hybrid beamformers when compared to the conventional approaches. The rest of the paper is organized as follows. In the following section, we introduce the system model for wideband mm-Wave channel. We formulate the joint channel estimation and beamforming problem in Section \[sec:probform\]. We then present our approaches toward both of these problems in Sections \[sec:ice\] and \[sec:bb\_hb\], respectively. We introduce our various DL frameworks in Section \[sec:HD\_Design\] and follow it with numerical simulations in Section \[sec:Sim\]. 
We conclude in Section \[sec:Conc\]. Throughout this paper, we denote the vectors and matrices by boldface lower and upper case symbols, respectively. In case of a vector $\mathbf{a}$, $[\mathbf{a}]_{i}$ represents its $i$th element. For a matrix $\mathbf{A}$, $[\mathbf{A}]_{:,i}$ and $[\mathbf{A}]_{i,j}$ denote the $i$th column and the $(i,j)$th entry, respectively. The $\mathbf{I}_N$ is the identity matrix of size $N\times N$; $\mathbb{E}\{\cdot\}$ denotes the statistical expectation; $\textrm{rank}(\cdot)$ denotes the rank of its matrix argument; $\|\cdot\|_\mathcal{F}$ is the Frobenius norm; $(\cdot)^{\dagger}$ denotes the Moore-Penrose pseudo-inverse; and $\angle\{\cdot\}$ denotes the angle of a complex scalar/vector. The notation expressing a convolutional layer with $N$ filters/channels of size $D\times D$, is given by $N$@$ D\times D$. System Model {#sec:SystemModel} ============ We consider hybrid precoder design for a frequency selective wideband mm-Wave massive MIMO-OFDM system with $M$ subcarriers (Fig. \[fig\_SystemArchitecture\]). The base station (BS) has $N_\mathrm{T}$ antennas and $N_\mathrm{RF}$ $(N_\mathrm{RF} \leq N_\mathrm{T})$ RF chains to transmit $N_\mathrm{S}$ data streams. In the downlink, the BS first precodes $N_\mathrm{S}$ data symbols $\mathbf{s}[m] = [s_1[m],s_2[m],\dots,s_{N_\mathrm{S}}[m]]^\textsf{T}\in \mathbb{C}^{N_\mathrm{S}}$ at each subcarrier by applying the subcarrier-dependent baseband precoders $\mathbf{F}_{\mathrm{BB}}[m] = [\mathbf{f}_{\mathrm{BB}_1}[m],\mathbf{f}_{\mathrm{BB}_2}[m],\dots,\mathbf{f}_{\mathrm{BB}_{N_\mathrm{S}}} [m]]\in \mathbb{C}^{N_{\mathrm{RF}}\times N_\mathrm{S}}$. Then, the signal is transformed to the time-domain via $M$-point inverse fast Fourier transforms (IFFTs). After adding the cyclic prefix, the transmitter employs a subcarrier-independent RF precoder $\mathbf{F}_{\mathrm{RF}}\in \mathbb{C}^{N_\mathrm{T}\times N_{\mathrm{RF}}}$ to form the transmitted signal. 
Given that $\mathbf{F}_{\mathrm{RF}}$ consists of analog phase shifters, we assume that the RF precoder has constant equal-norm elements, i.e., $|[\mathbf{F}_{\mathrm{RF}}]_{i,j}|^2 =1$. Additionally, we have the power constraint $\sum_{m=1}^{M}\|\mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}}[m] \|_\mathcal{F}^2= MN_\mathrm{S}$ that is enforced by the normalization of the baseband precoder $\{\mathbf{F}_{\mathrm{BB}}[m] \}_{m\in \mathcal{M}}$ where $\mathcal{M} = \{1,\dots,M\}$. Thus, the $N_\mathrm{T}\times 1$ transmit signal is $$\begin{aligned} \mathbf{x}[m] = \mathbf{F}_{\mathrm{RF}} \mathbf{F}_{\mathrm{BB}}[m] \mathbf{s}[m]. \end{aligned}$$ In mm-Wave transmission, the channel is represented by a geometric model with limited scattering [@mimoChannelModel1]. The channel matrix $\mathbf{H}[m]$ includes the contributions of $L$ clusters, each of which has the time delay $\tau_l$ and $N_\mathrm{sc}$ scattering paths/rays within the cluster. Hence, each ray in the $l$th cluster has a relative time delay $\tau_{{r}}$, angle-of-arrival (AOA) $\theta_l \in [-\pi,\pi]$, angle-of-departure (AOD) $\phi_l \in [-\pi,\pi]$, relative AOA (AOD) shift $\vartheta_{rl}$ ($\varphi_{rl}$) between the center of the cluster and each ray [@alkhateeb2016frequencySelective], and complex path gain $\alpha_{l,r}$ for $r = \{1,\dots, N_\mathrm{sc}\}$.
Let $p(\tau)$ denote a pulse shaping function for $T_\mathrm{s}$-spaced signaling evaluated at $\tau$ seconds [@channelModelSparseSayeed]; then the mm-Wave delay-$d$ MIMO channel matrix is $$\begin{aligned} \label{eq:delaydChannelModel} \mathbf{H}[d] = & \sqrt{\frac{ N_\mathrm{T} N_{\mathrm{R}} } {N_\mathrm{sc}L}}\sum_{l=1}^{L} \sum_{r=1}^{N_\mathrm{sc}}\alpha_{l,r} p(dT_\mathrm{s} - \tau_l - \tau_{{r}}) \nonumber \\ & \times \mathbf{a}_\mathrm{R}(\theta_{l} - \vartheta_{rl}) \mathbf{a}_\mathrm{T}^\textsf{H}(\phi_l - \varphi_{rl}), \end{aligned}$$ where $\mathbf{a}_\mathrm{R}(\theta)$ and $\mathbf{a}_\mathrm{T}(\phi)$ are the $N_\mathrm{R} \times 1$ and $N_\mathrm{T}\times 1$ steering vectors representing the array responses of the receive and transmit antenna arrays, respectively. Let $\lambda_m = \frac{c_0}{f_m}$ be the wavelength for the subcarrier $m$ with frequency $f_m$. Since the operating frequency is relatively higher than the bandwidth in mm-Wave systems and the subcarrier frequencies are close to each other (i.e., $f_{m_1} \approx f_{m_2}$, $m_1,m_2 \in\mathcal{M}$), we use a single operating wavelength $\lambda = \lambda_{1} = \dots = \lambda_{M} = \frac{c_0}{f_c}$, where $c_0$ is the speed of light and $f_c$ is the central carrier frequency [@sohrabiOFDM]. This approximation also allows for a single frequency-independent analog beamformer shared by all subcarriers. Then, for a uniform linear array (ULA), the array response of the transmit array is $$\begin{aligned} \mathbf{a}_\mathrm{T}(\phi) = \big[ 1, e^{j\frac{2\pi}{\lambda} \overline{d}_\mathrm{T}\sin(\phi)},\dots,e^{j\frac{2\pi}{\lambda} (N_\mathrm{T}-1)\overline{d}_\mathrm{T}\sin(\phi)} \big]^\textsf{T}, \end{aligned}$$ where $\overline{d}_\mathrm{T}=\overline{d}_\mathrm{R} = \lambda/2$ is the antenna spacing, and $\mathbf{a}_\mathrm{R}(\theta)$ is defined similarly to $\mathbf{a}_\mathrm{T}(\phi)$.
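As a quick numerical sanity check, the ULA response $\mathbf{a}_\mathrm{T}(\phi)$ above can be sketched as follows (a minimal illustration with hypothetical function names, assuming half-wavelength spacing so that $\frac{2\pi}{\lambda}\overline{d}_\mathrm{T} = \pi$):

```python
import numpy as np

def ula_steering(n_ant, phi, spacing_over_lambda=0.5):
    """ULA array response: [a(phi)]_n = exp(j*2*pi*(d/lambda)*n*sin(phi))."""
    n = np.arange(n_ant)
    return np.exp(1j * 2 * np.pi * spacing_over_lambda * n * np.sin(phi))

a_t = ula_steering(8, np.deg2rad(30.0))
# Entries are unit-modulus with the first element as phase reference;
# at broadside (phi = 0) the response is the all-ones vector.
```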
Using the delay-$d$ channel model in (\[eq:delaydChannelModel\]), the channel matrix at subcarrier $m$ is $$\begin{aligned} \mathbf{H}[m] = \sum_{d=0}^{D-1}\mathbf{H}[d]e^{-j\frac{2\pi m}{M} d}, \end{aligned}$$ where $D$ is the length of the cyclic prefix [@channelModelSparseBajwa]. With the aforementioned block-fading channel model [@mmWaveModel1], the received signal at subcarrier $m$ is $$\begin{aligned} \label{arrayOutput} \mathbf{y}[m] = \sqrt{\rho}\mathbf{H}[m] \mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]\mathbf{s}[m] + \mathbf{n}[m], \end{aligned}$$ where $\rho$ represents the average received power, $\mathbf{H}[m]\in \mathbb{C}^{N_\mathrm{R}\times N_\mathrm{T}}$ is the channel matrix, and $\mathbf{n}[m] \sim \mathcal{CN}(\mathbf{0},\sigma^2 \mathbf{I}_{N_\mathrm{R}})$ is the additive white Gaussian noise (AWGN) vector. The received signal is first processed by the analog combiner $\mathbf{W}_\mathrm{RF}$. Then, the cyclic prefix is removed from the processed signal and $N_\mathrm{RF}$ $M$-point FFTs are applied to yield the signal in the frequency domain. Finally, the receiver employs low-dimensional $N_\mathrm{RF}\times N_\mathrm{S}$ digital combiners $\{\mathbf{W}_\mathrm{BB}[m]\}_{m\in \mathcal{M}}$. The received and processed signal is obtained as $\widetilde{\mathbf{y}}[m] = \mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{y}[m]$, i.e., $$\begin{aligned} \label{sigModelReceived} \widetilde{\mathbf{y}}[m] = & \sqrt{\rho}\mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{H}[m] \mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]\mathbf{s}[m] \nonumber \\ &+ \mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{n}[m], \end{aligned}$$ where the analog combiner $\mathbf{W}_\mathrm{RF}\in \mathbb{C}^{N_\mathrm{R}\times N_\mathrm{RF}}$ has the constraint $\big[[\mathbf{W}_\mathrm{RF}]_{:,i}[\mathbf{W}_\mathrm{RF}]_{:,i}^\textsf{H}\big]_{i,i}=1$ similar to the RF precoder.
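Since $\mathbf{H}[m]$ above is an $M$-point DFT of the delay taps $\mathbf{H}[d]$ (zero-padded to $M$ taps), it can be computed numerically as follows (a minimal sketch with illustrative names, not the paper's implementation):

```python
import numpy as np

def subcarrier_channels(H_delay, M):
    """Map delay-tap channels H[d] (shape D x Nr x Nt) to subcarrier
    channels H[m] = sum_d H[d] * exp(-1j*2*pi*m*d/M), m = 0..M-1."""
    D = H_delay.shape[0]
    m = np.arange(M)[:, None]            # subcarrier indices
    d = np.arange(D)[None, :]            # delay-tap indices
    F = np.exp(-2j * np.pi * m * d / M)  # M x D DFT factors
    return np.tensordot(F, H_delay, axes=(1, 0))  # M x Nr x Nt

rng = np.random.default_rng(0)
H_delay = rng.standard_normal((4, 2, 3)) + 1j * rng.standard_normal((4, 2, 3))
H_freq = subcarrier_channels(H_delay, M=16)
# Equivalent to an M-point FFT (zero-padded) along the delay axis.
```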
Problem Formulation {#sec:probform} =================== In practice, the estimation process of the channel matrix is a challenging task, especially in case of a large number of antennas deployed in massive MIMO communications [@channelEstLargeArrays; @channelEstimation1]. Further, short coherence times of mm-Wave channel imply that the channel characteristics change rapidly [@coherenceTimeRef]. Literature indicates several mm-Wave channel estimation techniques [@mimoChannelModel2; @channelEstimation1CS; @channelEstimation1; @mimoAngleDomainFaiFai; @mimoHybridLeus2]. In our DL framework, the channel estimation is performed by a deep network which accepts the received pilot signals as input and yields the channel matrix estimate at the output layer [@deepCNN_ChannelEstimation]. During the pilot transmission process, the transmitter activates only one RF chain to transmit the pilot on a single beam; the receiver meanwhile turns on all RF chains [@mimoHybridLeus2]. Hence, unlike other DL-based beamformers [@elbirDL_COMML; @elbirQuantizedCNN2019; @mimoDLChannelModelBeamformingFacebook; @mimoDeepPrecoderDesign] that presume knowledge of the channel, our framework exploits DL for both channel matrix approximation as well as beamforming. Specifically, we focus on designing hybrid precoders $\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]$, $\mathbf{W}_\mathrm{RF},\mathbf{W}_\mathrm{BB}[m]$ by maximizing the overall spectral efficiency of the system under power spectral density constraint for each subcarrier. Let $R[m]$ be the overall spectral efficiency of the subcarrier $m$. 
Assuming that Gaussian symbols are transmitted through the mm-Wave channel [@mimoRHeath; @mimoHybridLeus1; @mimoHybridLeus2; @alkhateeb2016frequencySelective], $R[m]$ is $$\begin{aligned} &R[m] = \textrm{log}_2 \bigg| \mathbf{I}_{N_\mathrm{S}} +\frac{\rho}{N_\mathrm{S}}\boldsymbol{\Lambda}_\mathrm{n}^{-1}[m]\mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H} \mathbf{H}[m]\nonumber \\ &\;\;\;\;\;\; \times\mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]\mathbf{{F}}_\mathrm{BB}^\textsf{H}[m] \mathbf{{F}}_\mathrm{RF}^\textsf{H}\mathbf{H}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}\mathbf{W}_\mathrm{BB}[m] \bigg|, \end{aligned}$$ where $\boldsymbol{\Lambda}_\mathrm{n}[m] = \sigma_n^2 \mathbf{W}_\mathrm{BB}^\textsf{H}[m]\mathbf{W}_\mathrm{RF}^\textsf{H} \mathbf{W}_\mathrm{RF}\mathbf{W}_\mathrm{BB}[m]\in \mathbb{C}^{N_\mathrm{S} \times N_\mathrm{S}}$ corresponds to the noise term in (\[sigModelReceived\]). The hybrid beamformer design is equivalent to the following optimization problem: $$\begin{aligned} \label{HBdesignProblem} &\underset{\mathbf{{F}}_\mathrm{RF},\mathbf{{W}}_\mathrm{RF}, \{\mathbf{{F}}_\mathrm{BB}[m],\mathbf{{W}}_\mathrm{BB}[m]\}_{m\in \mathcal{M}}}{\operatorname*{maximize}} \frac{1}{M}\sum_{m =1}^{M} R[m] \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\; \mathbf{{F}}_\mathrm{RF} \in \mathcal{F}_\mathrm{RF}, \mathbf{{W}}_\mathrm{RF} \in \mathcal{W}_\mathrm{RF}, \nonumber \\ &\sum_{m=1}^{M}||\mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]||_{\mathcal{F}}^2 = M N_\mathrm{S}, \end{aligned}$$ where $\mathcal{F}_\mathrm{RF}$ and $\mathcal{W}_\mathrm{RF}$ are the feasible sets for the RF precoder and combiners, which obey the unit-norm constraint. The hybrid beamformer design problem in (\[HBdesignProblem\]) requires analog and digital beamformers which, in turn, are obtained by exploiting the structure of the channel matrix in the mm-Wave channel.
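A direct numerical evaluation of $R[m]$ for a single subcarrier can be sketched as below (a toy check under the stated signal model; function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def spectral_efficiency(H, Frf, Fbb, Wrf, Wbb, rho, sigma2):
    """R[m] = log2 det(I_Ns + (rho/Ns) * Ln^{-1} * Heff * Heff^H), with
    Heff = Wbb^H Wrf^H H Frf Fbb and Ln = sigma2 * Wbb^H Wrf^H Wrf Wbb."""
    n_s = Fbb.shape[1]
    W = Wrf @ Wbb                        # overall combiner, Nr x Ns
    Heff = W.conj().T @ H @ Frf @ Fbb    # Ns x Ns effective channel
    Ln = sigma2 * (W.conj().T @ W)       # Ns x Ns noise covariance
    A = np.eye(n_s) + (rho / n_s) * np.linalg.solve(Ln, Heff @ Heff.conj().T)
    return float(np.real(np.log2(np.linalg.det(A))))

# Toy check: fully digital identity beamformers over an identity channel
# with rho = 2 and unit noise power give log2 det(2 * I_2).
I2 = np.eye(2)
r = spectral_efficiency(I2, I2, I2, I2, I2, rho=2.0, sigma2=1.0)
```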
Our goal is to recover $\mathbf{F}_\mathrm{RF}$, $\mathbf{F}_\mathrm{BB}[m]$, $\mathbf{W}_\mathrm{RF}$, and $\mathbf{W}_\mathrm{BB}[m]$ for the given received pilot signal. In the following section, we describe the channel estimation and design methodology of hybrid beamformers before introducing learning-based approach. Channel Estimation {#sec:ice} ================== In our work, DL network estimates the channel from the received pilot signals in the preamble stage. Consider the downlink scenario when the transmitter employs a single RF chain $\overline{\mathbf{f}}_u[m]\in\mathbb{C}^{N_\mathrm{T}}$ to transmit pilot signals $\overline{{s}}_u[m]$ on a single beam where $u = 1,\dots,M_\mathrm{T}$. Then, the receiver activates $M_\mathrm{R}$ RF chains to apply $\overline{\mathbf{w}}_v$ for $v = 1,\dots, M_\mathrm{R}$ to process the received pilots [@deepCNN_ChannelEstimation; @mimoHybridLeus2]. Since the number of RF chains in the receiver is limited by $N_\mathrm{RF}$ (usually less than $M_\mathrm{R}$ in a single channel use), a total of $N_\mathrm{RF}$ combining vectors are employed. Hence, the total channel use in the channel acquisition process is $\lceil \frac{M_\mathrm{R}}{N_\mathrm{RF}}\rceil$. After processing through combiners, the received pilot signal becomes $$\begin{aligned} \label{receivedSignalPilot} \mathbf{\overline{Y}}[m] = \overline{\mathbf{W}}^\textsf{H}[m] \mathbf{H}[m] \overline{\mathbf{F}}[m]\overline{\mathbf{S}}[m] + \widetilde{\mathbf{N}}[m], \end{aligned}$$ where $\overline{\mathbf{F}}[m] = [\overline{\mathbf{f}}_1[m],\overline{\mathbf{f}}_2[m],\dots,\overline{\mathbf{f}}_{M_\mathrm{T}}[m]]$ and $\overline{\mathbf{W}}[m] = [\overline{\mathbf{w}}_1[m],\overline{\mathbf{w}}_2[m],\dots,\overline{\mathbf{w}}_{M_\mathrm{R}}[m]]$ are $N_\mathrm{T}\times M_\mathrm{T}$ and $N_\mathrm{R}\times M_\mathrm{R}$ beamformer matrices. 
The $\overline{\mathbf{S}}[m] = \mathrm{diag}\{ \overline{s}_1[m],\dots,\overline{s}_{M_\mathrm{T}}[m]\}$ denotes the pilot signals and $\widetilde{\mathbf{N}}[m]= \overline{\mathbf{W}}^\textsf{H} \overline{\mathbf{N}}[m]$ is the effective noise matrix, where $\overline{\mathbf{N}}[m] \sim \mathcal{N}(0, \sigma_{\overline{\mathbf{N}}}^2)$. The noise corruption of the pilot training data is measured by SNR$_{\overline{\mathbf{N}}}$. Without loss of generality, we assume that $\overline{\mathbf{F}}[m] = \overline{\mathbf{F}}$ and $\overline{\mathbf{W}}[m] = \overline{\mathbf{W}}$, $\forall m$, and $\overline{\mathbf{S}}[m] = \sqrt{P_\mathrm{T}}\mathbf{I}_{M_\mathrm{T}}$, where $P_\mathrm{T}$ is the transmit power. Then, the received signal (\[receivedSignalPilot\]) becomes $$\begin{aligned} \label{receivedSignalPilotMod} \mathbf{\overline{Y}}[m] = \overline{\mathbf{W}}^\textsf{H} \mathbf{H}[m] \overline{\mathbf{F}} + \widetilde{\mathbf{N}}[m]. \end{aligned}$$ The initial channel estimate (ICE) is then $$\begin{aligned} \label{Gm} \mathbf{G}[m] = \mathbf{T}_\mathrm{T} \overline{\mathbf{Y}}[m]\mathbf{T}_\mathrm{R}, \end{aligned}$$ where $$\begin{aligned} \mathbf{T}_\mathrm{T} = \begin{dcases} \overline{\mathbf{W}},& M_\mathrm{R} < N_\mathrm{R} \\ (\overline{\mathbf{W}}\overline{\mathbf{W}}^\textsf{H})^{-1}\overline{\mathbf{W}}, & M_\mathrm{R} \geq N_\mathrm{R}, \end{dcases} \end{aligned}$$ and $$\begin{aligned} \mathbf{T}_\mathrm{R} = \begin{dcases} \overline{\mathbf{F}}^\textsf{H},& M_\mathrm{T} < N_\mathrm{T} \\ \overline{\mathbf{F}}^\textsf{H}(\overline{\mathbf{F}}\overline{\mathbf{F}}^\textsf{H})^{-1}, & M_\mathrm{T} \geq N_\mathrm{T}. \end{dcases} \end{aligned}$$ We consider $\mathbf{G}[m]$ as an initial estimate because, later, we improve this approximation with a deep network that maps $\mathbf{G}[m]$ to $\mathbf{H}[m]$.
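The ICE step can be sketched numerically as follows (illustrative names). Here the pseudo-inverse branches are taken when $M_\mathrm{R} \geq N_\mathrm{R}$ and $M_\mathrm{T} \geq N_\mathrm{T}$, i.e., when $\overline{\mathbf{W}}\,\overline{\mathbf{W}}^\textsf{H}$ and $\overline{\mathbf{F}}\,\overline{\mathbf{F}}^\textsf{H}$ are invertible; this reading of the case conditions is an assumption of this sketch:

```python
import numpy as np

def initial_channel_estimate(Y, Wbar, Fbar):
    """ICE G[m] = T_T @ Y[m] @ T_R from the processed pilots Y (Mr x Mt),
    with Wbar (Nr x Mr) receive combiners and Fbar (Nt x Mt) pilot beams."""
    Nr, Mr = Wbar.shape
    Nt, Mt = Fbar.shape
    Tt = Wbar if Mr < Nr else np.linalg.solve(Wbar @ Wbar.conj().T, Wbar)
    Tr = (Fbar.conj().T if Mt < Nt
          else Fbar.conj().T @ np.linalg.inv(Fbar @ Fbar.conj().T))
    return Tt @ Y @ Tr

# Noiseless sanity check: with orthonormal pilot beams spanning both arrays
# (Mr = Nr, Mt = Nt), the ICE recovers the channel exactly.
rng = np.random.default_rng(1)
Nr, Nt = 4, 6
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
Wbar = np.linalg.qr(rng.standard_normal((Nr, Nr)))[0]
Fbar = np.linalg.qr(rng.standard_normal((Nt, Nt)))[0]
G = initial_channel_estimate(Wbar.conj().T @ H @ Fbar, Wbar, Fbar)
```

With noisy pilots, $\mathbf{G}[m]$ is only an approximation, which is precisely why the deep network is used to refine it.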
Hybrid Beamformer Design For Wideband mm-Wave MIMO Systems {#sec:bb_hb} ========================================================== The design problem in (\[HBdesignProblem\]) requires a joint optimization over several matrices. This approach is computationally complex and even intractable. Instead, a decoupled problem is preferred [@mimoRHeath; @sohrabiOFDM; @elbirQuantizedCNN2019; @hybridBFAltMin]. Here, the hybrid precoders $\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]$ are estimated first and then the hybrid combiners $\mathbf{W}_\mathrm{RF},\mathbf{W}_\mathrm{BB}[m]$ are found. Define the mutual information of the mm-Wave channel that can be achieved at the BS through Gaussian signalling as [@alkhateeb2016frequencySelective] $$\begin{aligned} & \mathcal{I}\{\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]\} = \textrm{log}_2 \bigg| \mathbf{I}_{N_\mathrm{S}} \nonumber \\ &\;\;\;\;\;\; +\frac{\rho}{N_\mathrm{S}}\mathbf{H}[m]\mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]\mathbf{{F}}_\mathrm{BB}^\textsf{H}[m] \mathbf{{F}}_\mathrm{RF}^\textsf{H}\mathbf{H}^\textsf{H}[m] \bigg|. \end{aligned}$$ The hybrid precoders are then obtained by maximizing the mutual information, i.e., $$\begin{aligned} \label{PrecoderDesignProblem} &\underset{\mathbf{{F}}_\mathrm{RF}, \{\mathbf{{F}}_\mathrm{BB}[m]\}_{m\in \mathcal{M}}}{\operatorname*{maximize}} \frac{1}{M}\sum_{m =1}^{M} \mathcal{I}\{\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]\} \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\; \mathbf{{F}}_\mathrm{RF} \in \mathcal{F}_\mathrm{RF}, \nonumber \\ &\sum_{m=1}^{M}||\mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]||_{\mathcal{F}}^2 = M N_\mathrm{S}. \end{aligned}$$ We note here that one could approximate the optimization problem in (\[PrecoderDesignProblem\]) by exploiting the similarity between the hybrid beamformer $\mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]$ and the optimal unconstrained beamformer $\mathbf{F}^{\mathrm{opt}}[m]$.
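The unconstrained beamformer $\mathbf{F}^{\mathrm{opt}}[m]$ mentioned above is obtained, as detailed next, from the $N_\mathrm{S}$ dominant right singular vectors of $\mathbf{H}[m]$; a minimal numerical sketch (illustrative names):

```python
import numpy as np

def optimal_precoder(H, n_s):
    """Unconstrained precoder: the n_s dominant right singular vectors of H."""
    # Rows of Vh are right singular vectors, ordered by descending singular value.
    _, _, Vh = np.linalg.svd(H, full_matrices=False)
    return Vh[:n_s].conj().T  # Nt x n_s

rng = np.random.default_rng(2)
H = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
Fopt = optimal_precoder(H, n_s=2)
# Columns are orthonormal and capture the two strongest channel directions:
# ||H @ Fopt||_F^2 equals the sum of the two largest squared singular values.
```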
The latter is obtained from the right singular matrix of the channel matrix $\mathbf{H}[m]$ [@hybridBFAltMin; @mimoRHeath]. Let the singular value decomposition of the channel matrix be $\mathbf{H}[m] = \mathbf{U}[m] \boldsymbol{\Sigma}[m] \mathbf{V}^H[m]$, where $\mathbf{U}[m]\in \mathbb{C}^{N_\mathrm{R}\times \mathrm{rank}(\mathbf{H}[m])}$ and $\mathbf{V}[m]\in \mathbb{C}^{N_\mathrm{T} \times \mathrm{rank}(\mathbf{H}[m])}$ are the left and the right singular value matrices of the channel matrix, respectively, and $\boldsymbol{\Sigma}[m]$ is $\mathrm{rank}(\mathbf{H}[m])\times \mathrm{rank}(\mathbf{H}[m])$ matrix composed of the singular values of $\mathbf{H}[m]$ in descending order. By decomposing $\boldsymbol{\Sigma}[m]$ and $\mathbf{V}[m]$ as $\boldsymbol{\Sigma}[m] = \mathrm{diag}\{ \widetilde{\boldsymbol{\Sigma}}[m],\overline{\boldsymbol{\Sigma}}[m] \},\hspace{5pt} \mathbf{V}[m] = [\widetilde{\mathbf{V}}[m],\overline{\mathbf{V}}[m]],$ where $\widetilde{\mathbf{V}}[m]\in \mathbb{C}^{N_\mathrm{T}\times N_\mathrm{S}}$, the unconstrained precoder is readily obtained as $\mathbf{F}^{\mathrm{opt}}[m] = \widetilde{\mathbf{V}}[m]$ [@mimoRHeath]. The hybrid precoder design problem for subcarrier $m$ then becomes the minimization of the Euclidean distance between $\mathbf{F}^{\mathrm{opt}}[m]$ and $\mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]$ as $$\begin{aligned} \label{PrecoderSingleCarrier} &\underset{\mathbf{F}_\mathrm{RF},\mathbf{F}_\mathrm{BB}[m]}{\operatorname*{minimize}} \big|\big| \mathbf{F}^{\mathrm{opt}}[m] - \mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m] \big|\big|_\mathcal{F}^2 \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\; \mathbf{{F}}_\mathrm{RF} \in \mathcal{F}_\mathrm{RF}, \nonumber \\ &\big|\big| \mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]\big|\big|_{\mathcal{F}}^2 = N_\mathrm{S}. 
\end{aligned}$$ Incorporating all subcarriers in the problem produces $$\begin{aligned} \label{PrecoderAllCarriers} &\underset{\mathbf{F}_\mathrm{RF},\{\mathbf{F}_\mathrm{BB}[m]\}_{m \in \mathcal{M}}}{\operatorname*{minimize}} \big|\big| \widetilde{\mathbf{F}}^{\mathrm{opt}} - \mathbf{F}_\mathrm{RF}\widetilde{\mathbf{F}}_\mathrm{BB} \big|\big|_\mathcal{F}^2 \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\; \mathbf{{F}}_\mathrm{RF} \in \mathcal{F}_\mathrm{RF}, \nonumber \\ &\sum_{m=1}^{M}\big|\big| \mathbf{{F}}_\mathrm{RF}\mathbf{{F}}_\mathrm{BB}[m]\big|\big|_{\mathcal{F}}^2 = MN_\mathrm{S}, \end{aligned}$$ where $$\begin{aligned} \widetilde{\mathbf{F}}^{\mathrm{opt}} = \begin{bmatrix} \mathbf{F}^{\mathrm{opt}}[1] & \mathbf{F}^{\mathrm{opt}}[2] & \cdots & \mathbf{F}^{\mathrm{opt}}[M] \end{bmatrix} \in \mathbb{C}^{N_\mathrm{T}\times MN_\mathrm{S}}, \end{aligned}$$ and $$\begin{aligned} \widetilde{\mathbf{F}}_\mathrm{BB} = \begin{bmatrix} \mathbf{{F}}_\mathrm{BB}[1] & \mathbf{{F}}_\mathrm{BB}[2] & \cdots & \mathbf{{F}}_\mathrm{BB}[M] \end{bmatrix} \in \mathbb{C}^{N_\mathrm{RF}\times MN_\mathrm{S}}, \end{aligned}$$ contain the beamformers for all subcarriers. Once the hybrid precoders are designed, the hybrid combiners $\mathbf{W}_\mathrm{RF},\mathbf{W}_\mathrm{BB}[m]$ are realized by minimizing the mean-square error (MSE), $\mathbb{E}\{\big|\big| \mathbf{s}[m] - \mathbf{W}_\mathrm{BB}^\textsf{H}[m] \mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{y}[m] \big|\big|_2^2\}$. The combiner-only optimization is $$\begin{aligned} \label{CombinerOnlyProblem} &\underset{\mathbf{W}_\mathrm{RF}, \mathbf{W}_\mathrm{BB}[m] }{\operatorname*{minimize}} \mathbb{E}\{\big|\big| \mathbf{s}[m] - \mathbf{W}_\mathrm{BB}^\textsf{H}[m] \mathbf{W}_\mathrm{RF}^\textsf{H}\mathbf{y}[m] \big|\big|_2^2\} \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\mathbf{W}_\mathrm{RF} \in{\mathcal{W}}_\mathrm{RF}.
\end{aligned}$$ A more efficient form of (\[CombinerOnlyProblem\]) is due to [@mimoRHeath], where a constant term $\mathrm{Trace}\{\mathbf{W}_{\mathrm{MMSE}}^\textsf{H}[m] \mathbb{E}\{\mathbf{y}[m]\mathbf{y}^\textsf{H}[m]\mathbf{W}_{\mathrm{MMSE}}[m] \}\} - \mathrm{Trace}\{\mathbf{s}[m]\mathbf{s}^\textsf{H}[m] \}$ is added to the cost function. Here, $\mathbf{W}_{\mathrm{MMSE}}[m]$ denotes the minimum MSE (MMSE) estimator defined as $\mathbf{W}_\mathrm{MMSE}[m]= (\mathbb{E}\{\mathbf{s}[m] \mathbf{y}^\textsf{H}[m] \} \mathbb{E}\{\mathbf{y}[m] \mathbf{y}^\textsf{H}[m] \}^{-1})^\textsf{H}$. Then, (\[CombinerOnlyProblem\]) reduces to the optimization problem $$\begin{aligned} \label{CombinerOnlyProblemEquivalent} &\underset{\mathbf{W}_\mathrm{RF}, \mathbf{W}_\mathrm{BB}[m]}{\operatorname*{minimize}} \big|\big| \boldsymbol{\Lambda}_\mathrm{y}^{1/2}[m] (\mathbf{W}_\mathrm{MMSE}[m] - \mathbf{W}_\mathrm{RF} \mathbf{W}_\mathrm{BB}[m] )\big|\big|_\mathcal{F}^2 \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\mathbf{W}_\mathrm{RF} \in{\mathcal{W}}_\mathrm{RF}, \end{aligned}$$ where $\boldsymbol{\Lambda}_\mathrm{y}[m] = \rho\mathbf{H}[m]\mathbf{F}_\mathrm{RF}\mathbf{F}_\mathrm{BB}[m]\mathbf{F}_\mathrm{BB}^\textsf{H}[m]\mathbf{F}_\mathrm{RF}^\textsf{H}\mathbf{H}^\textsf{H}[m] + \sigma_n^2\mathbf{I}_{N_\mathrm{R}}$ is the covariance of the array output in (\[arrayOutput\]). The unconstrained combiner in a compact form is then [@WoptCombiner], $$\begin{aligned} &\mathbf{W}_\mathrm{MMSE}^\textsf{H}[m] = \frac{1}{\rho}\bigg( \mathbf{F}^{\mathrm{opt}^\textsf{H}}[m]\mathbf{H}^\textsf{H}[m]\mathbf{H}[m]\mathbf{F}^{\mathrm{opt}}[m] \nonumber \\ & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ \frac{N_\mathrm{S}\sigma_n^2}{\rho}\mathbf{I}_{N_\mathrm{S}} \bigg)^{-1} \mathbf{F}^{\mathrm{opt}^\textsf{H}}[m]\mathbf{H}^\textsf{H}[m].
\end{aligned}$$ In (\[CombinerOnlyProblemEquivalent\]), the multiplicative term $\boldsymbol{\Lambda}_\mathrm{y}^{1/2}[m]$ does not depend on $\mathbf{W}_\mathrm{RF}$ or $\mathbf{W}_\mathrm{BB}[m]$; it therefore has no bearing on the solution and can be ignored. Define $$\begin{aligned} \widetilde{\mathbf{W}}_\mathrm{MMSE} &= \begin{bmatrix}{\mathbf{W}}_\mathrm{MMSE}[1]&{\mathbf{W}}_\mathrm{MMSE}[2]&\cdots&{\mathbf{W}}_\mathrm{MMSE}[M] \end{bmatrix} \nonumber\\ &\in \mathbb{C}^{N_\mathrm{R}\times MN_\mathrm{S}}, \end{aligned}$$ and $$\begin{aligned} \widetilde{\mathbf{W}}_\mathrm{BB} = \begin{bmatrix}{\mathbf{W}}_\mathrm{BB}[1] & {\mathbf{W}}_\mathrm{BB}[2] & \cdots &{\mathbf{W}}_\mathrm{BB}[M] \end{bmatrix}\in \mathbb{C}^{N_\mathrm{RF}\times MN_\mathrm{S}}. \end{aligned}$$ Then, the hybrid combiner design problem becomes $$\begin{aligned} \label{CombinerOnlyProblemAllSubcarriers} &\underset{\mathbf{W}_\mathrm{RF}, \{\mathbf{W}_\mathrm{BB}[m]\}_{m\in \mathcal{M}}}{\operatorname*{minimize}} \big|\big| \widetilde{\mathbf{W}}_\mathrm{MMSE} - \mathbf{W}_\mathrm{RF} \widetilde{\mathbf{W}}_\mathrm{BB}\big|\big|_\mathcal{F}^2 \nonumber \\ &\operatorname*{subject \hspace{3pt} to\hspace{3pt}}\;\;\;\;\;\mathbf{W}_\mathrm{RF} \in{\mathcal{W}}_\mathrm{RF} \nonumber \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \mathbf{W}_\mathrm{BB}[m] = (\mathbf{W}_\mathrm{RF}^\textsf{H} \boldsymbol{\Lambda}_\mathrm{y}[m] \mathbf{W}_\mathrm{RF})^{-1}\nonumber \\ &\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times (\mathbf{W}_\mathrm{RF}^\textsf{H}\boldsymbol{\Lambda}_\mathrm{y}[m]\mathbf{W}_\mathrm{MMSE}[m]). \end{aligned}$$ In [@manopt], the manifold optimization ("Manopt") algorithm is suggested to effectively solve the optimization problems in (\[PrecoderAllCarriers\]) and (\[CombinerOnlyProblemAllSubcarriers\]). Note that neither of these problems requires a codebook or a set of array responses of the transmit and receive arrays [@mimoRHeath].
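For concreteness, the two per-subcarrier ingredients of these problems, the unconstrained precoder $\mathbf{F}^{\mathrm{opt}}[m]$ and the MMSE combiner $\mathbf{W}_\mathrm{MMSE}[m]$, can be computed with a few lines of numpy. This is only a minimal sketch under assumed toy dimensions, transmit power `rho`, and noise variance `sigma_n2`, not the paper's actual configuration:

```python
import numpy as np

# Assumed toy dimensions and parameters (illustrative only).
rng = np.random.default_rng(0)
N_T, N_R, N_S = 8, 4, 2
rho, sigma_n2 = 1.0, 0.01

# Random stand-in for the subcarrier channel H[m].
H = (rng.standard_normal((N_R, N_T))
     + 1j * rng.standard_normal((N_R, N_T))) / np.sqrt(2)

# H = U diag(S) V^H, with singular values returned in descending order;
# F_opt[m] collects the N_S dominant right singular vectors.
U, S, Vh = np.linalg.svd(H, full_matrices=False)
F_opt = Vh.conj().T[:, :N_S]

# W_MMSE^H[m] = (1/rho) (F^H H^H H F + (N_S sigma_n^2 / rho) I)^{-1} F^H H^H
A = H @ F_opt  # effective N_R x N_S channel seen by the data streams
W_mmse_H = np.linalg.solve(
    A.conj().T @ A + (N_S * sigma_n2 / rho) * np.eye(N_S),
    A.conj().T) / rho
```

Because the columns of `F_opt` are right singular vectors, `H @ F_opt` has column norms equal to the $N_\mathrm{S}$ largest singular values; in the noiseless limit, `rho * W_mmse_H @ A` approaches the identity, consistent with the compact combiner expression above.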
In fact, the manifold optimization problems for (\[PrecoderAllCarriers\]) and (\[CombinerOnlyProblemAllSubcarriers\]) are initialized at a random point, i.e., beamformers with unit-norm entries and random phases. Learning-Based Joint Channel Estimation and Hybrid Beamformer Design {#sec:HD_Design} ==================================================================== ![Deep learning frameworks for hybrid beamforming in mm-Wave MIMO systems. F1 has a single CNN (MC-HBNet) which maps the received pilot signal data directly into hybrid beamformers. In F2 and F3, multiple CNNs are used for channel estimation and hybrid beamforming sequentially. For channel estimation, a single CNN (MC-CENet) is trained on the data of all subcarriers in F2, whereas a dedicated CNN (SC-CENet) is used for each subcarrier's data in F3. The final HBNet stage is identical in F2 and F3. []{data-label="fig_DLFrameworks"}](DLFrameworks.PNG){width="1.0\columnwidth"} We introduce three DL frameworks F1, F2, and F3 (Fig. \[fig\_DLFrameworks\]). In all of them, hybrid beamformers are the outputs. The ICE values $\mathbf{G}[m]$ obtained from the received pilot signal in the preamble stage form the inputs. The F1 architecture is the Multi-Carrier Hybrid Beamforming Network (MC-HBNet). It comprises a single CNN which accepts the ICEs jointly for all subcarriers. The input size is $MN_\mathrm{R} \times N_\mathrm{T}$. The ICEs introduce a performance loss if the channel estimates are inaccurate. To address this, F2 employs separate CNNs for channel estimation (Multi-Carrier Channel Estimation Network or MC-CENet) and hybrid beamforming (HBNet). The MC-CENet accepts the ICE of a single subcarrier as input; other subcarriers are fed sequentially, one at a time. So, the training data consists of a single ICE (with input of size $N_\mathrm{R}\times N_\mathrm{T}$) for each subcarrier. To make the setup even more flexible, at the cost of additional computation, F3 employs one CNN per subcarrier for estimating the channel.
For the $m$th subcarrier, each Single Carrier Channel Estimation Network (SC-CENet$[m]$, $m\in \mathcal{M}$) feeds into a single HBNet. Input Data ---------- We partition the input ICE data into three components to enrich the input features. In our previous works, similar approaches have provided good features for DL implementations [@elbirQuantizedCNN2019; @elbirDL_COMML; @elbirIETRSN2019; @deepCNN_ChannelEstimation]. In particular, we use the real and imaginary parts and the absolute value of each entry of the ICEs. The absolute value entry indicates to the DL network that the real and imaginary input feeds are connected. Define the input for MC-HBNet in F1 as $\mathbf{X}_{\mathrm{F1}} = [\mathbf{X}_{\mathrm{F1}}^\textsf{T}[1],\dots, \mathbf{X}_{\mathrm{F1}}^\textsf{T}[M] ]^\textsf{T}$. Then, for an $M_\mathrm{R}\times M_\mathrm{T}$ ICE, the $(i,j)$-th entry of the submatrices per subcarrier is $[[\mathbf{X}_{\mathrm{F1}}[m]]_{:,:,1}]_{i,j} = | [\mathbf{G}[m]]_{i,j}|$ for the first "channel" or input matrix of $\mathbf{X}_{\mathrm{F1}}[m]$. The second and the third channels are $[[\mathbf{X}_{\mathrm{F1}}[m]]_{:,:,2}]_{i,j} = \operatorname{Re}\{[\mathbf{G}[m]]_{i,j}\}$ and $[[\mathbf{X}_{\mathrm{F1}}[m]]_{:,:,3}]_{i,j} = \operatorname{Im}\{[\mathbf{G}[m]]_{i,j}\}$, respectively. Hence, the size of $\mathbf{X}_{\mathrm{F1}}$ is $M M_\mathrm{R}\times M_\mathrm{T}\times 3$. In F2, the input data comprises single subcarrier ICEs. The input for MC-CENet $\mathbf{X}_{\mathrm{F2}}$ is of size $M_\mathrm{R}\times M_\mathrm{T}\times 3$. The input data for each SC-CENet in F3 is the same as in F2.
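The three-"channel" arrangement described above is simply a depth-wise stacking of the absolute value, real, and imaginary parts. A numpy sketch with assumed toy dimensions (the ICE here is a random placeholder, not an actual estimate):

```python
import numpy as np

rng = np.random.default_rng(1)
M_R, M_T = 4, 8  # assumed ICE dimensions for illustration

# Random placeholder standing in for a single-subcarrier ICE G[m].
G_m = rng.standard_normal((M_R, M_T)) + 1j * rng.standard_normal((M_R, M_T))

# Channels 1..3: |G[m]|, Re{G[m]}, Im{G[m]}  ->  shape (M_R, M_T, 3).
X_m = np.stack([np.abs(G_m), G_m.real, G_m.imag], axis=-1)

# For F1, the per-subcarrier blocks are concatenated along the rows,
# giving an (M * M_R) x M_T x 3 input tensor (here the same block is
# repeated M times purely to show the shape).
M = 3
X_F1 = np.concatenate([X_m] * M, axis=0)
```

By construction, the first channel is redundant given the other two ($|g|^2 = \operatorname{Re}\{g\}^2 + \operatorname{Im}\{g\}^2$), which is precisely the coupling hint mentioned above.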
The input of HBNet has the same structure in both F2 and F3; it is denoted as $\mathbf{X}_{\mathbf{H}} = [\mathbf{X}_{\mathbf{H}}^\textsf{T}[1],\dots, \mathbf{X}_{\mathbf{H}}^\textsf{T}[M] ]^\textsf{T} $, which is of size $M N_\mathrm{R}\times N_\mathrm{T}\times 3$, where $[[\mathbf{X}_{\mathbf{H}}[m]]_{:,:,1}]_{i,j} = | [\mathbf{H}[m]]_{i,j}|$, $[[\mathbf{X}_{\mathbf{H}}[m]]_{:,:,2}]_{i,j} = \operatorname{Re}\{[\mathbf{H}[m]]_{i,j}\}$ and $[[\mathbf{X}_{\mathbf{H}}[m]]_{:,:,3}]_{i,j} = \operatorname{Im}\{[\mathbf{H}[m]]_{i,j}\}$. Labeling -------- The hybrid beamformers are the common output for all three frameworks (Fig. \[fig\_DLFrameworks\]). We represent the output as the vectorized form of the analog beamformers common to all subcarriers and the baseband beamformers corresponding to all subcarriers. The output is an $N_\mathrm{RF}\big(N_\mathrm{T} + N_\mathrm{R} + 2MN_\mathrm{S} \big) \times 1 $ real-valued vector $$\begin{aligned} \label{zSU} \hspace{10pt} \mathbf{z} = \begin{bmatrix} \mathbf{z}_\mathrm{RF}^\textsf{T} & \widetilde{\mathbf{z}}_\mathrm{BB}^\textsf{T} \end{bmatrix}^\textsf{T}, \end{aligned}$$ where $\mathbf{z}_\mathrm{RF} = [\mathrm{vec}\{\angle \mathbf{F}_\mathrm{RF} \}^\textsf{T},\mathrm{vec}\{\angle \mathbf{W}_\mathrm{RF} \}^\textsf{T}]^\textsf{T}$ is a real-valued $N_\mathrm{RF}(N_\mathrm{T} + N_\mathrm{R})\times 1$ vector which includes the phases of the analog beamformers.
The vector $\widetilde{\mathbf{z}}_\mathrm{BB}\in \mathbb{R}^{2M N_\mathrm{S} N_\mathrm{RF}}$ is composed of the baseband beamformers for all subcarriers as $ \widetilde{\mathbf{z}}_\mathrm{BB} = [\mathbf{z}_\mathrm{BB}^\textsf{T}[1],\mathbf{z}_\mathrm{BB}^\textsf{T}[2],\dots,\mathbf{z}_\mathrm{BB}^\textsf{T}[M]]^\textsf{T} $ where $$\begin{aligned} &\mathbf{z}_\mathrm{BB}[m] = [\mathrm{vec}\{\operatorname{Re}\{ \mathbf{F}_\mathrm{BB}[m]\} \}^\textsf{T}, \mathrm{vec}\{\operatorname{Im}\{ \mathbf{F}_\mathrm{BB}[m]\} \}^\textsf{T}, \nonumber \\ &\;\;\;\;\;\;\;\;\;\;\;\mathrm{vec}\{\operatorname{Re}\{ \mathbf{W}_\mathrm{BB}[m]\} \}^\textsf{T}, \mathrm{vec}\{\operatorname{Im}\{ \mathbf{W}_\mathrm{BB}[m]\} \}^\textsf{T}]^\textsf{T}. \end{aligned}$$ The output label of MC-CENet in F2 is the channel matrix. Given that MC-CENet is fed by the ICE $\mathbf{G}[m]$, the output label for MC-CENet is $$\begin{aligned} \label{zH} \mathbf{z}_{\mathbf{H}[m]} = [\mathrm{vec}\{\operatorname{Re}\{\mathbf{H}[m]\}\}^\textsf{T} , \mathrm{vec}\{\operatorname{Im}\{\mathbf{H}[m]\}\}^\textsf{T} ]^\textsf{T}, \end{aligned}$$ which is a real-valued vector of size $2N_\mathrm{R}N_\mathrm{T}$. SC-CENet$[m]$ in F3 has input and output structures similar to those of MC-CENet, but the ICEs are fed to each SC-CENet$[m]$ separately. Network Architectures and Training ---------------------------------- ![Deep network architectures used in DL frameworks F1, F2, and F3 for wideband mm-wave channel estimation and hybrid beamforming. []{data-label="fig_Networks"}](NetworkArchitectures_v02.png){width="1.0\columnwidth"} We design four deep network architectures (Fig. \[fig\_Networks\]). The MC-HBNet and HBNet have input size of $MN_\mathrm{R}\times N_\mathrm{T}\times 3$ whereas the input for MC-CENet and SC-CENet$[m]$ is $N_\mathrm{R}\times N_\mathrm{T}\times 3$. The number of filters and number of units for all layers are shown in Fig. \[fig\_Networks\].
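Following the per-subcarrier label definitions above, assembling $\mathbf{z}$ amounts to concatenating the analog-beamformer phases with the column-major vectorized real and imaginary parts of the baseband beamformers. The sketch below uses assumed toy dimensions and random beamformers purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
N_T, N_R, N_RF, N_S, M = 8, 4, 4, 2, 3  # assumed toy sizes

# Random stand-ins for the designed beamformers.
F_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_T, N_RF)))
W_RF = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_R, N_RF)))
F_BB = [rng.standard_normal((N_RF, N_S)) + 1j * rng.standard_normal((N_RF, N_S))
        for _ in range(M)]
W_BB = [rng.standard_normal((N_RF, N_S)) + 1j * rng.standard_normal((N_RF, N_S))
        for _ in range(M)]

def vec(A):
    # Column-major (MATLAB-style) vectorization.
    return np.asarray(A).flatten(order="F")

# z_RF: phases of the frequency-flat analog precoder and combiner.
z_RF = np.concatenate([vec(np.angle(F_RF)), vec(np.angle(W_RF))])

# z_BB[m]: real/imaginary parts of the per-subcarrier baseband beamformers.
z_BB = np.concatenate([
    np.concatenate([vec(F.real), vec(F.imag), vec(W.real), vec(W.imag)])
    for F, W in zip(F_BB, W_BB)
])

z = np.concatenate([z_RF, z_BB])
```

The resulting vector is entirely real-valued, which is what allows a standard regression output layer to produce it.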
There are dropout layers with a $50\%$ probability after each fully connected layer in each network. We use pooling layers after the first and second convolutional layers only in MC-HBNet and HBNet to reduce the dimension by two. The output layer of each network is a regression layer whose size depends on the application, as discussed earlier. The network parameters are fixed after a hyperparameter tuning process that yields the best performance for the considered scenario [@elbirDL_COMML; @elbirQuantizedCNN2019; @elbirIETRSN2019]. The proposed deep networks are realized and trained in MATLAB on a PC with a single GPU and a 768-core processor. We used the stochastic gradient descent algorithm with momentum 0.9 and updated the network parameters with a learning rate of $0.0005$ and a mini-batch size of $128$ samples. Then, we reduced the learning rate by a factor of $0.9$ every 30 epochs. We also applied a stopping criterion so that training ceases when the validation accuracy does not improve over three consecutive epochs. Algorithm \[alg:algorithmTraining\] summarizes the steps of training data generation. [**Output:** Training datasets for the networks in Fig. \[fig\_DLFrameworks\]: $\mathcal{D}_{\mathrm{MC-HBNet}}$, $\mathcal{D}_{\mathrm{MC-CENet}}$, $\mathcal{D}_{\mathrm{HBNet}}$ and $\mathcal{D}_{\mathrm{SC-CENet}}$.]{} \[alg:algorithmTraining\] Generate $\{\mathbf{H}^{(n)}[m]\}_{n=1}^N$ for $m \in \mathcal{M}$. Initialize with $t= \overline{t}=1$ while the dataset length is $T=NG$ for MC-HBNet, HBNet, SC-CENet, and $\overline{T} = MT$ for MC-CENet. **for** $1 \leq n \leq N$ **do** **for** $1 \leq g \leq G$ **do** $[\widetilde{\mathbf{H}}^{(n,g)}[m]]_{i,j} \sim \mathcal{CN}([\mathbf{H}^{(n)}[m]]_{i,j},\sigma_{\mathbf{H}}^2)$.
Generate received pilot signal from (\[receivedSignalPilotMod\]) as $$\begin{aligned} \overline{\mathbf{Y}}^{(n,g)}[m] = \overline{\mathbf{W}}^{\textsf{H}} \mathbf{H}^{(n,g)}[m] \overline{\mathbf{F}} + \widetilde{\mathbf{N}}^{(n,g)}[m]. \nonumber \end{aligned}$$ Construct ${\mathbf{G}}^{(n,g)}[m]$ from (\[Gm\]) by using $\overline{\mathbf{Y}}^{(n,g)}[m]$. Using $\mathbf{H}^{(n,g)}[m]$, find $\hat{\mathbf{F}}_{\mathrm{RF}}^{(n,g)}$ and $\hat{\mathbf{F}}_{\mathrm{BB}}^{(n,g)}[m]$ by solving (\[PrecoderAllCarriers\]). Find $\hat{\mathbf{W}}_{\mathrm{RF}}^{(n,g)}$ and $\hat{\mathbf{W}}_{\mathrm{BB}}^{(n,g)}[m]$ by solving (\[CombinerOnlyProblemAllSubcarriers\]). Input for MC-HBNet: $\mathbf{X}_{\mathrm{F1}}^{(t)} =$ $ [\mathbf{X}_{\mathrm{F1}}^{(t)^\textsf{T}}[1],\dots,\mathbf{X}_{\mathrm{F1}}^{(t)^\textsf{T}}[M] ]^\textsf{T}$ and, for $ m\in \mathcal{M}, \forall i,j$, $$\begin{aligned} &[[\mathbf{X}_{\mathrm{F1}}^{(t)}[m]]_{:,:,1}]_{i,j} = |[{\mathbf{G}}^{(n,g)}[m]]_{i,j}| \nonumber \\ &[[\mathbf{X}_{\mathrm{F1}}^{(t)}[m]]_{:,:,2}]_{i,j}=\operatorname{Re} \{[{\mathbf{G}}^{(n,g)}[m]]_{i,j}\} \nonumber \\ &[[\mathbf{X}_{\mathrm{F1}}^{(t)}[m]]_{:,:,3}]_{i,j} = \operatorname{Im}\{[{\mathbf{G}}^{(n,g)}[m]]_{i,j}\}, \nonumber \end{aligned}$$ Output for MC-HBNet: $\mathbf{z}_\mathrm{HB}^{(t)} = \mathbf{z}^{(t)}$ as in (\[zSU\]). **for** $1\leq m \leq M$ **do** Input for MC-CENet: $\mathbf{X}_\mathrm{F2}^{(\overline{t})} = \mathbf{X}_{\mathrm{F1}}^{(t)}[m]$. Output for MC-CENet: $\mathbf{z}_{\mathrm{MC}-\hspace{-3pt}\mathbf{H}}^{(\overline{t})}\hspace{-5pt} = \hspace{-3pt} \mathbf{z}_{\mathbf{H}[m]}^{(t)}$ as in (\[zH\]). $\overline{t} = \overline{t} + 1$. **end for** Input for HBNet: $\mathbf{X}_\mathbf{H}^{(t)} = [\mathbf{X}_{\mathbf{H}}^{(t)^\textsf{T}}[1],\dots,\mathbf{X}_{\mathbf{H}}^{(t)^\textsf{T}}[M] ]^\textsf{T}$. Output for HBNet: $\mathbf{z}_\mathrm{HB}^{(t)}$. Input for SC-CENet$[m]$: $\mathbf{X}_\mathrm{F3}^{({t})}[m] = \mathbf{X}_{\mathrm{F1}}^{(t)}[m] $. 
Output for SC-CENet$[m]$: $\mathbf{z}_{\mathrm{SC}-\mathbf{H}}^{({t})}[m] =\mathbf{z}_{\mathbf{H}[m]}^{({t})} $. $t = t+1$. **end for** $g$, **end for** $n$, $\mathcal{D}_{\mathrm{MC-HBNet}} = \big((\mathbf{X}_{\mathrm{F1}}^{(1)}, \mathbf{z}_\mathrm{HB}^{(1)}),\dots, (\mathbf{X}_{\mathrm{F1}}^{(T)}, \mathbf{z}_\mathrm{HB}^{(T)})\big).$ $\mathcal{D}_{\mathrm{MC-CENet}} = \big((\mathbf{X}_\mathrm{F2}^{(1)}, \mathbf{z}_{\mathrm{MC}-\mathbf{H}}^{(1)} ),\dots, (\mathbf{X}_\mathrm{F2}^{(\overline{T})}, \mathbf{z}_{\mathrm{MC}-\mathbf{H}}^{(\overline{T})} )\big).$ $\mathcal{D}_{\mathrm{HBNet}} = \big((\mathbf{X}_{\mathbf{H}}^{(1)}, \mathbf{z}_\mathrm{HB}^{(1)}),\dots, (\mathbf{X}_{\mathbf{H}}^{(T)}, \mathbf{z}_\mathrm{HB}^{(T)})\big).$ $\mathcal{D}_{\mathrm{SC\hspace{-1pt}-\hspace{-1pt}CENet}}\hspace{-2pt}[m] \hspace{-3pt}=\hspace{-3pt} \big(\hspace{-1pt}(\mathbf{X}_{\mathrm{F3}}^{(1)}[m], \mathbf{z}_{\mathrm{SC}\hspace{-1pt}-\hspace{-1pt}\mathbf{H}}^{(1)}),\dots,\hspace{-3pt} (\mathbf{X}_{\mathrm{F3}}^{(T)}[m], \mathbf{z}_{\mathrm{SC}\hspace{-1pt}-\hspace{-1pt}\mathbf{H}}^{(T)})\hspace{-1pt} \big).$ To train the proposed CNN structures, we realize $N=100$ different scenarios with $G=100$ noisy realizations each (see Algorithm \[alg:algorithmTraining\]). For each scenario, we generated a channel matrix and a received pilot signal, and added synthetic noise to both; the noise levels are defined by SNR$_{\mathbf{H}}$ and SNR$_{\overline{\mathbf{N}}}$, respectively. During training, we use multiple SNR$_{\mathbf{H}}$ and SNR$_{\overline{\mathbf{N}}}$ values to make the networks robust against corrupted input characteristics [@elbirDL_COMML; @elbirQuantizedCNN2019].
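The learning-rate schedule used in training (initial rate $0.0005$, reduced by a factor of $0.9$ every 30 epochs) corresponds to a simple piecewise-constant step function; a framework-agnostic sketch:

```python
def step_lr(epoch, base_lr=5e-4, drop=0.9, every=30):
    """Piecewise-constant learning-rate schedule: the rate is multiplied
    by `drop` after every block of `every` epochs (epochs 0-indexed)."""
    return base_lr * drop ** (epoch // every)
```

For example, epochs 0 through 29 use $5\times 10^{-4}$, epochs 30 through 59 use $4.5\times 10^{-4}$, and so on.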
In particular, we use SNR$_{\overline{\mathbf{N}}} = \{20, 30, 40\}$ dB and SNR$_{\mathbf{H}} =\{15,20,25\}$ dB, where we have SNR$_{\mathbf{H}} = 20\log_{10}(\frac{|[\mathbf{H}[m]]_{i,j}|^2}{\sigma_{\mathbf{H}}^2})$ and SNR$_{\overline{\mathbf{N}}} = 20\log_{10}(\frac{|[ \mathbf{H}[m] \overline{\mathbf{F}}[m]\overline{\mathbf{S}}[m]]_{i,j}|^2}{\sigma_{\overline{\mathbf{N}}}^2})$. In addition, SNR $=\{-10, 0, 10\}$ dB is selected in the training process. As a result, the sizes of the training data for MC-HBNet, MC-CENet, HBNet and SC-CENet$[m]$ are $MN_\mathrm{R}\times N_\mathrm{T}\times 3 \times 30000 $, $N_\mathrm{R}\times N_\mathrm{T}\times 3 \times 30000 M$, $MN_\mathrm{R}\times N_\mathrm{T}\times 3 \times 30000 $ and $N_\mathrm{R}\times N_\mathrm{T}\times 3 \times 30000 $, respectively. Further, $80\%$ and $20\%$ of all generated data are chosen for the training and validation datasets, respectively. For the prediction process, we conducted $J_T$ Monte Carlo experiments in which the test data are generated separately by adding noise to the received pilot signal at an SNR defined as SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. This operation corrupts the input data and tests the network against deviations in the input, which can resemble changes in the channel matrix due to short coherence times in mm-Wave scenarios [@coherenceTimeRef]. Numerical Simulations {#sec:Sim} ===================== We evaluated the performance of the proposed DL frameworks through several experiments. We compared our DL-based hybrid beamforming (hereafter, DLHB) with state-of-the-art hybrid precoding algorithms such as the Gram-Schmidt-orthogonalization-based method (GS-HB) [@alkhateeb2016frequencySelective], the phase-extraction-based method (PE-HB) [@sohrabiOFDM], and another recent DL-based multilayer perceptron (MLP) method [@mimoDeepPrecoderDesign]. As a benchmark, we implemented a fully digital beamformer obtained from the SVD of the channel matrix.
We also present the performance of the MO algorithm [@hybridBFAltMin] used for the labels of the hybrid beamforming networks. The MO algorithm constitutes a performance yardstick for DLHB, in the sense that the latter cannot perform better than the MO algorithm because the hybrid beamformers used as labels are obtained from MO itself. Finally, we implemented the spatial frequency CNN (SF-CNN) architecture [@deepCNN_ChannelEstimation] that has been proposed recently for wideband mm-Wave channel estimation. We compare the performance of our DL-based channel estimation with SF-CNN using the same parameters. We followed the training procedure outlined in Section \[sec:HD\_Design\] with $N_\mathrm{T}=128$ elements, $N_\mathrm{R}=16$ antennas, and $N_\mathrm{RF} = N_\mathrm{S} = 4$ RF chains. Throughout the experiments, unless stated otherwise, we use $M=16$ subcarriers at $f_c = 60$ GHz with $4$ GHz bandwidth, and $L=10$ clusters with $N_\mathrm{sc}=5$ scatterers each; all transmit and receive angles are selected uniformly at random from the interval $[-\pi,\pi]$. We selected $\overline{\mathbf{F}}[m]$ and $\overline{\mathbf{W}}[m]$ as the first $M_\mathrm{T}$ columns of an $N_\mathrm{T}\times N_\mathrm{T}$ discrete Fourier transform (DFT) matrix and the first $M_\mathrm{R}$ columns of an $N_\mathrm{R}\times N_\mathrm{R}$ DFT matrix, respectively [@deepCNN_ChannelEstimation], and set $M_\mathrm{T}=128$ and $M_\mathrm{R}=16$. In the prediction stage, the preamble data differ from those used in training: we construct $\mathbf{G}[m]$ from (\[receivedSignalPilot\]) and (\[Gm\]) with a completely different realization of the noise $\overline{\mathbf{N}}$ corresponding to SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. Spectral efficiency evaluation ------------------------------ Figure \[fig\_SNR\_Rate\] shows the spectral efficiency of various algorithms for varying test SNR, given SNR$_{\overline{\mathbf{N}}}=20$ dB.
The DLHB techniques, fed with only the received pilot data (i.e., $\mathbf{G}[m]$), outperform GS-HB [@alkhateeb2016frequencySelective] and PE-HB [@sohrabiOFDM], both of which utilize the perfect channel matrix to yield hybrid beamformers. Further, the GS-HB algorithm requires the set of array responses of the received paths, which is difficult to obtain in practice. The MO algorithm is used to obtain the labels of the deep networks for hybrid beamforming; hence the performance of the DL approaches is upper-bounded by that of the MO algorithm. However, note that perfect channel information is required even for the benchmark MO algorithm [@hybridBFAltMin]. The gap between the MO algorithm and the DL frameworks is explained by the corruptions in the DL input, which cause deviations from the label data (obtained via MO) at the output regression layer. Note that our DLHB methods improve upon other DL-based techniques such as MLP [@mimoDeepPrecoderDesign], which lacks the feature extraction stage provided by the convolutional layers in our networks. Among the DL frameworks, F2 and F3 exhibit superior performance to F1 because the channels estimated by MC-CENet and SC-CENet have higher accuracy. On the contrary, F1 uses ICEs directly as input and is, therefore, unable to achieve similar improvement. While F2 and F3 have similar hybrid beamforming performance, F3 is computationally more complex because of the presence of $M$ CNNs in the channel estimation stage. In order to compare the algorithms with the same input channel data, we use the channel matrix estimate obtained from MC-CENet for MO, GS-HB, PE-HB and MLP when SNR $=0$ dB. Figure \[fig\_CE\_SNR\_N\_Rate\] shows the spectral efficiency so obtained with respect to SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$, which determines the noise added to the received pilot data.
For SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}\geq 0$ dB, we note that the non-DL methods perform rather imperfectly, but their performance is at least similar to the true-channel-matrix case shown in Fig. \[fig\_SNR\_Rate\]. The DL-based techniques excel in comparison and exhibit higher tolerance against the corrupted channel data corresponding to SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. F2 and F3 quickly reach their maximum efficiency once SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ is increased to $-15$ dB. Again, F1 fares poorly because it is directly fed by the ICEs and lacks the channel estimation network. Error in channel estimation {#subsec:ch_est} --------------------------- \ \ Figure \[fig\_CE\_SNRonReceivedSignal\] shows the normalized MSE (NMSE) (Fig. \[fig\_CE\_SNRonReceivedSignal\](a)) in the channel estimates and the spectral efficiency (Fig. \[fig\_CE\_SNRonReceivedSignal\](b)) of the DL approaches with respect to SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ when SNR $=0$ dB. Here, the NMSE is $$\begin{aligned} \textrm{NMSE} = \frac{1}{M J_T } \sum_{m=1}^{M}\sum_{i=1}^{J_T} \frac{|| \mathbf{H}[m] - \hat{\mathbf{H}}_i[m] ||_\mathcal{F}}{|| \mathbf{H}[m] ||_\mathcal{F} } , \end{aligned}$$ where $J_T$ is the number of trials. We observe that all of the DL frameworks provide improvement as SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ increases but F3, in particular, surpasses all other methods. We remark that DLHB approaches outperform the recently proposed SF-CNN because the latter lacks fully connected layers and relies only on several convolutional layers (see Table 1 in [@deepCNN_ChannelEstimation]). While convolutional layers are good at extracting the additional features inherent in the input, the fully connected layers are more efficient in non-linearly mapping the input to the labeled data [@vggRef].
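The NMSE above is a per-subcarrier, per-trial average of Frobenius-norm ratios, which translates directly into a few lines of numpy (a sketch; list layouts are illustrative):

```python
import numpy as np

def nmse(H_true, H_est):
    """NMSE = (1 / (M * J_T)) * sum_m sum_i ||H[m] - H_hat_i[m]||_F / ||H[m]||_F.

    H_true: list of M true channel matrices (one per subcarrier).
    H_est:  list of J_T trials, each a list of M estimated matrices.
    """
    M, J_T = len(H_true), len(H_est)
    total = 0.0
    for i in range(J_T):
        for m in range(M):
            total += (np.linalg.norm(H_true[m] - H_est[i][m], "fro")
                      / np.linalg.norm(H_true[m], "fro"))
    return total / (M * J_T)
```

As quick sanity checks: perfect estimates give an NMSE of 0, and all-zero estimates give an NMSE of 1, since each ratio reduces to $\|\mathbf{H}[m]\|_\mathcal{F}/\|\mathbf{H}[m]\|_\mathcal{F}$.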
Further, SF-CNN [@deepCNN_ChannelEstimation] draws on a single SNR$_{\overline{\mathbf{N}}}$ in the training and works well only when SNR$_{\overline{\mathbf{N}}}=$ SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. This is impractical because it requires re-training whenever there is a change in SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. On the other hand, no such requirement is imposed on our DLHB method because we use multiple SNR$_{\overline{\mathbf{N}}}$s during the training stage. Again, F3 leverages multiple CNNs to outclass F2. While both have largely similar results in Fig. \[fig\_SNR\_Rate\], we observe from Fig. \[fig\_CE\_SNRonReceivedSignal\](b) that F3 attains higher spectral efficiency even at SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ as low as $-5$ dB when compared with F1, F2, and MLP. We conclude that, effectively, the channel estimation improvement in F3 also leads to capacity enhancement at very low SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. Next, Fig. \[fig\_CE\_SNRonReceivedSignal\](b) illustrates that F1 performs well only when SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ exceeds 15 dB. In summary, F2 yields the highest spectral efficiency with reasonable network complexity. We observe in Fig. \[fig\_CE\_SNRonReceivedSignal\](a) that the performance of the DL-based algorithms maxes out after SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$ reaches $5$ dB. This is because, being biased estimators, deep networks do not provide unlimited accuracy. This problem can be mitigated by increasing the number of units in various network layers. Unfortunately, it may lead to the network memorizing the training data and performing poorly when the test data differ from the training data. To balance this trade-off, we used noisy datasets during training so that the network attains reasonable tolerance to corrupted/imperfect inputs.
Although the spectral efficiency of the DLHB frameworks remains largely unchanged at high SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$, it is an improvement over MLP as can be ascertained from both Fig. \[fig\_SNR\_Rate\] and Fig. \[fig\_CE\_SNRonReceivedSignal\](b). \ \ Effect of noise contamination ----------------------------- We examined the performance of the DL approaches for corrupted pilot data when SNR $=0$ dB and SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}= 10$ dB. In this experiment, we added noise determined by SNR$_{\overline{\mathbf{S}}-\mathrm{TEST}}$ to the pilot signal matrix $\overline{\mathbf{S}}$ in (\[receivedSignalPilot\]). All networks are trained by selecting $\overline{\mathbf{S}} = \sqrt{P_T} \mathbf{I}_{M_\mathrm{T}}$. Figure \[fig\_CE\_PilotContamination\](a) shows that F3 has lower NMSE than both F2 and SF-CNN. Here, the performance of the algorithms maxes out after SNR$_{\overline{\mathbf{S}}-\mathrm{TEST}}$ is increased to $15$ dB; the channel estimation improvement is very incremental for all deep networks except the ICE, whose preamble noise is determined by SNR$_{\overline{\mathbf{N}}-\mathrm{TEST}}$. The degradation in accuracy of the DL methods can be explained as in Section \[subsec:ch\_est\]. Nevertheless, the hybrid beamforming performance of F2 and F3 is better than MLP even though the channel estimation improvement is modest. Moreover, F2 and F3 quickly reach their best performance once SNR$_{\overline{\mathbf{S}}-\mathrm{TEST}}$ exceeds $-15$ dB (Fig. \[fig\_CE\_PilotContamination\](b)). Effect of angle and cluster mismatch ------------------------------------ We imposed further challenges on our techniques by introducing a mismatch in the AOA/AOD angles relative to those used in the training data. In the prediction stage, we generated a different channel matrix by inserting angular mismatch in each of the path angles.
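This perturbation can be sketched as follows; `mismatch_angles` is an illustrative helper (not from the paper), with angles expressed in degrees:

```python
import numpy as np

def mismatch_angles(theta, sigma_deg, rng=None):
    """Perturb nominal AOA/AOD angles (degrees) with zero-mean Gaussian
    mismatch of standard deviation sigma_deg, as in the robustness test."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta, dtype=float)
    return theta + sigma_deg * rng.standard_normal(theta.shape)
```

Setting `sigma_deg = 0` recovers the nominal angles, while increasing it widens the deviation of the test channel from the training channels.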
Figure \[fig\_AngleMismatch\] illustrates the spectral efficiency achieved with respect to the standard deviation of the mismatch angle, $\sigma_\Theta$. Hence, for the AOA/AOD angles ${\theta}_l,{\phi}_l$ from the $l$th cluster, the mismatched angles are given by $\widetilde{\theta}_l \sim \mathcal{N}(\theta_l, \sigma_\Theta^2)$ and $\widetilde{\phi}_l \sim \mathcal{N}(\phi_l, \sigma_\Theta^2)$, respectively. For both $L=10$ (Fig. \[fig\_AngleMismatch\]a) and $L=3$ (Fig. \[fig\_AngleMismatch\]b) clusters, DLHB methods are able to tolerate up to $4^{\circ}$ of angular mismatch, a tolerance that other learning-based methods such as MLP cannot match. As this mismatch increases, it leads to significant deviations in the channel matrix data (arising from the multiplication of deviated steering vectors in (\[eq:delaydChannelModel\])). We also evaluated the effect of a mismatch in the number of clusters $L$ between training and prediction data. We trained the networks for $L=10$ and $L=5$ with different channel realizations. During testing, we generated a new channel matrix for a different number of clusters. Figures \[fig\_PathMismatch\](a) and (b) illustrate the spectral efficiency for $L=10$ and $L=5$, respectively. F2 and F3 reach their maximum performance when $L$ reaches the value used in training. The performance of F1 and MLP gets worse as $L$ increases. Note that in the prediction stage, the first 10 (5) cluster angles, as in Fig. \[fig\_PathMismatch\]a (b), are the same as those used for training; the remaining cluster angles are selected uniformly at random as mentioned earlier. As $L$ increases, the input data becomes "more familiar" to the deep network. The spectral efficiency does not degrade after the addition of randomly generated cluster paths because DLHB designs the hybrid beamformers according to the received paths that are already present in the training data.
As a result, deep networks provide robust performance even with additional received paths and a channel matrix different from the training stage. However, the loss of cluster paths in the training data would deteriorate the performance, because the input data becomes "unfamiliar" to the deep network and the hybrid beamformer designs suffer as a result. Computational complexity ------------------------ MC-HBNet MC-CENet SC-CENet$[m]$ HBNet SF-CNN MLP ---------- ---------- --------------- ------- -------- ------ 45.6 95.3 76.6 43.8 85.1 41.4 : Training Times for Networks (in Minutes) []{data-label="tableComp_Networks"} DLHB-F1 DLHB-F2 DLHB-F3 --------- --------- --------- 45.6 138.5 1270.3 : Training Times for DLHB Frameworks (in Minutes) []{data-label="tableComp_Frameworks"} MC-HBNet MC-CENet SC-CENet$[m]$ HBNet SF-CNN MLP ---------- ---------- --------------- -------- -------- -------- 0.0053 0.0056 0.0057 0.0059 0.0057 0.0056 : Run Times for Networks (in Seconds) []{data-label="tableComp_Networks2"} DLHB-F1 DLHB-F2 DLHB-F3 MO GS-HB PE-HB --------- --------- --------- ------- -------- -------- 0.0053 0.0113 0.0778 3.204 0.0132 0.0152 : Run Times for Algorithms (in Seconds) []{data-label="tableComp_Frameworks2"} We assessed the training times of all DLHB frameworks. We selected the same simulation settings as presented in Section \[sec:HD\_Design\]. For $M=16$, Tables \[tableComp\_Networks\] and \[tableComp\_Frameworks\] list training times for each network (Fig. \[fig\_Networks\]) and each DLHB framework (Fig. \[fig\_DLFrameworks\]), respectively. The simple structure and smaller input/output layer sizes of MC-HBNet, HBNet, and MLP imply that they have lower training times than the CENet architectures. Similarly, F1 is the fastest in training while F3 is the slowest. Note that we trained each SC-CENet separately, one after the other. The training time of F3 can be reduced if all SC-CENet networks are trained jointly in parallel.
Designing hybrid beamformers by solving (\[PrecoderAllCarriers\]) and (\[CombinerOnlyProblemAllSubcarriers\]) using the MO algorithm introduces computational overhead. While this process is tedious, the proposed DLHB incurs this complexity only during training. In the prediction stage, however, DLHB exhibits far smaller computational times than the other algorithms. For the sake of completeness, Tables \[tableComp\_Networks2\] and \[tableComp\_Frameworks2\] list the prediction-stage computational times of the networks and frameworks, respectively. All networks show similar run times because of the parallel processing of deep networks on GPUs. Among the DLHB frameworks, F1 is the fastest due to its structural simplicity. The MO algorithm takes the longest to run in solving its inherent optimization problem. While GS-HB and PE-HB are quicker than F3, they are fed with the true channel matrix and lack any channel estimation stage. F2 has slightly shorter execution times than GS-HB and PE-HB and provides more robust performance without requiring the CSI. Hence, we conclude that the proposed DL frameworks are computationally efficient and more tolerant to many different corruptions in the input data. Summary {#sec:Conc} ======= We introduced three DL frameworks for joint channel estimation and hybrid beamformer design in wideband mm-Wave massive MIMO systems. Unlike prior works, the proposed DL frameworks do not require knowledge of the perfect CSI to design the hybrid beamformers. We investigated the performance of the DLHB approaches through several numerical simulations and demonstrated that they provide higher spectral efficiency and more tolerance to corrupted channel data than the state-of-the-art. The robust performance results from training the deep networks for several different channel scenarios, which are also corrupted by synthetic noise. This aspect has been ignored in earlier works.
We showed that the trained networks provide robust hybrid beamforming even when the received paths deviate by up to 4 degrees from the training channel data. This allows for sufficiently long periods of deep network operation without re-training, a significant improvement that addresses the common problem of short coherence times in mm-Wave systems. Even in terms of channel estimation accuracy, our DLHB frameworks outperform other DL-based approaches such as SF-CNN. Our experiments show that the channel estimation performance of all DL methods saturates in high SNR$_{\overline{\mathbf{N}}}$ regimes. This is explained by the nature of deep networks, which are biased estimators.

Acknowledgements {#acknowledgements .unnumbered}
================

K. V. M. acknowledges Prof. Robert W. Heath Jr. of The University of Texas at Austin for helpful discussions and suggestions.

[^1]: A. M. E. is with the Department of Electrical and Electronics Engineering, Duzce University, Duzce, Turkey. E-mail: [email protected], [email protected].

[^2]: K. V. M. is with The University of Iowa, Iowa City, IA 52242 USA. E-mail: [email protected]. | High | [
0.692708333333333,
33.25,
14.75
] |
Charlotte, North Carolina erupted after an officer-involved shooting on Tuesday. Agitators filmed themselves throwing rocks off an I-85 overpass and striking unsuspecting motorists below.

Video floating around the Internet from riots in #Charlotte #NorthCarolina. Rioters tossing rocks off I-85 fwy overpass, striking vehicles. pic.twitter.com/TXqkRUsiDJ — The Detroit Scanner (@DetroitScanner) September 21, 2016

They oohed and aahed as the rocks could be heard striking several passing vehicles.

“We’re people trying to get home. We haven’t done anything wrong. I fear if we would have had to stop, they would have done something else,” one motorist said outside a hospital emergency room entrance.

WSOC’s Mark Barber reports the woman’s windshield was shattered by the agitators.

“We haven’t done anything wrong.” Driver whose windshield was shattered by Charlotte protesters who were throwing rocks off bridges. @wsoctv pic.twitter.com/f7H3azf1Gi — Mark Barber (@MBarberWSOC9) September 21, 2016

“Unfortunately these protesters, they don’t see the difference,” a victim said.

Other video shows agitators hitting passing cars as they chanted “no justice, no peace.”

Protesters throw rocks and hit a car and cut the stream. Pray for driver. This is your future white america #KeithLamontScott #Charlotte pic.twitter.com/noAKaoMZ8N — The Current Year (@TheeCurrentYear) September 21, 2016

Police were also targeted by the “protesters.”

Medic says that 7 CMPD officers have been taken to the hospital for injuries during the riots in Charlotte tonight. Rocks/bottles thrown. pic.twitter.com/aGmNfb8naf — Bill Melugin (@BillFOX46) September 21, 2016

Fox 46 reports rocks and bottles were thrown at police, resulting in at least 7 injuries.

As for the shooting incident, WCCB reports: Police say a person was killed in an officer-involved shooting in the University area. The shooting was reported Tuesday afternoon just before 4pm on Old Concord Road near Bonnie Lane and John Kirk Drive.
Police say officers with the Metro Division Crime Reduction Unit were searching for a suspect with an outstanding warrant on him at The Village at College Downs. Officers say they saw a subject inside a vehicle in the apartment complex. The subject allegedly exited the vehicle armed with a firearm. Officers say the subject got back into the vehicle and then the officers started to approach the subject. That’s when police say the subject got back out of the vehicle armed with a firearm and posed an imminent deadly threat to the officers, who subsequently fired their weapons, striking the subject. The officers immediately requested MEDIC and began performing CPR, according to a news release. The subject was pronounced dead on the scene. | Low | [
0.494929006085192,
30.5,
31.125
] |
A woman who forced her teenage daughter to marry a male relative who had raped and impregnated her as a child has been jailed for four and a half years in the UK’s first successful prosecution of its kind. The 45-year-old mother of four was sentenced Wednesday at Birmingham Crown Court after being found guilty of duping her 17-year-old daughter into travelling to Pakistan to force her into marrying her stepfather’s nephew in September 2016, reports the BBC. The man raped and impregnated the victim four years prior when she was 13 – which her mother interpreted as a “marriage contract”. Neither the victim nor her mother can be named for legal reasons. Jurors heard that the British teenager was duped into going on holiday to celebrate her 18th birthday and that almost immediately after blowing out her candles she was told that she would be marrying her rapist. The young woman, who was described as having special educational needs, was said to have “wept” during her marriage ceremony. British Teenager ‘Wept’ During Forced Marriage to Man Who Impregnated Her at 13 https://t.co/FtUolkMaYU — Breitbart London (@BreitbartLondon) May 16, 2018 Following the girl’s pregnancy at 13, the court heard her mother arranged an abortion. Traumatised by the rape and termination, the victim turned to alcohol and drugs. She was taken into emergency care and became vulnerable to Child Sexual Exploitation (CSE), was raped again, and had a second abortion. Her mother was found guilty on two counts of forced marriage and one charge of perjury on Tuesday. This is the second successful prosecution for forced marriage since it became an offence in June 2014. The first came in June 2015 when a 34-year-old man was convicted for blackmailing a devout Muslim woman into becoming his second wife – however, this recent case is the first time the law was applied to a parent forcing their child. 
This is despite the government’s Forced Marriage Unit (FMU) identifying more than 8,000 cases since 2010 and logging nearly 1,200 forced marriages in 2017 alone, with the organisation saying that number “may not reflect the full scale of the abuse”. Cases of Female Genital Mutilation (FGM), also commonly associated with some Islamic cultures, have resulted in not one single conviction in the UK. Girls Fearing Forced Marriage Told to Stick a Spoon in Underwear to Alert Airport Security https://t.co/Vx8bm5WlyC — Breitbart London (@BreitbartLondon) May 23, 2018 | Mid | [
0.575824175824175,
32.75,
24.125
] |
Welcome to our website! As we have the ability to list over one million items on our website (our selection changes all of the time), it is not feasible for a company our size to record and playback the descriptions on every item on our website. However, if you are an American with a disability we are here to help you. Please call our disability services phone line at 919-834-0395 during regular business hours and one of our kind and friendly personal shoppers will help you navigate through our website, help conduct advanced searches, help you choose the item you are looking for with the specifications you are seeking, read you the specifications of any item and consult with you about the products themselves. There is no charge for the help of this personal shopper for any American with a disability. Finally, your personal shopper will explain our Privacy Policy and Terms of Service, and help you place an order if you so desire. Graff Wilkinson Supply Co in Raleigh, NC is an authorized dealer of Graff Products. American ingenuity and European craftsmanship are the cornerstones of GRAFF's design commitment to create innovative, cutting-edge mixers (faucets) and plumbing accessories. Supported by over 80 years of plumbing and hardware manufacturing experience, GRAFF's luxury kitchen and bath offerings include a range of contemporary, transitional and traditional products. So if you are looking for Graff products in Raleigh, Durham, Carrboro, Wilson, Chapel Hill, Cary, Apex, Holly Springs, Morrisville, Wilson and Rocky Mount, or if you have any questions about Graff products, please feel free to call us at 919-834-0395 or simply stop by Wilkinson Supply Co at any time and we would be glad to help you. The award-winning Ametis Shower System creates a truly exceptional showering experience. Engineered with many high-tech features, the Ametis Shower System offers a soothing halo effect using LED chromotherapy lighting. 
The LED lighting is positioned within the shower ring to add a new dimension to the column, thanks to indirect lighting - still a seldom-used concept in bathroom design. Aqua-Sense is an innovative, highly technological shower collection for the most demanding tastes. Water, light and sound orchestrated in harmonic balance, allow for a deeper sense of wellness. With various handle options and numerous shower components, Aqua-Sense can deliver a physical and emotional experience. Drawing inspiration from a traditional water pump, G+Design Studio transformed an outdated product into an elegant and modern object for everyday use. In each model, Bali retains its unique ties to both the past and the present. Bollero offers refined looks and understated styles, an inclusive approach to design fulfilling the needs of every user. With intuitive operation and effortless style, Bollero appeals to consumers who are both steadfast about cooking and want to create a beautifully practical space. Traditional meets contemporary with the sophisticated Camden Collection. Designed by GRAFF's G+ Design Studio, the collection's style is transitional and highly unique, allowing it to fit into both traditional and contemporary settings. Blending Victorian and Edwardian aesthetic sensibilities with modern principles and technologies, each fixture exhibits a luxurious artistic quality. Canterbury represents the perfect choice for those who search for a traditional Victorian feel in their bathroom. Whether in the exposed shower version or in the traditional showerhead, this collection is always elegant and distinctive. Available with cross handles, porcelain handles, or metal handles, the whole collection has been developed in several precious and long-lasting finishes. Staying true to its namesake, the Conical collection is composed using fluid lines that broaden at the base and taper into a graceful neck.
With a matching bar faucet, this pair creates a magnetic combination and adds a fresh perspective to the kitchen space. A graceful form with enduring elegance, Corsica withstands trends exuding a timeless appeal. Within its classical design, Corsica offers a matching side spray to allow water to be used in different ways that suit the user. A tradition-inspired silhouette, Duxbury stands dignified and graceful. With a beveled neck and obtuse base, Duxbury stays true to itself while transforming the kitchen with a comfortable, casual ambiance. The side spray offers even more flexibility in the heart of the home, the kitchen. Architectural details combine with clean lines to form the Finezza collection. A perfect blend of grace and elegance, this full suite offers an exquisite array of choices from faucets to shower elements. As you descend down the profile of the Fontaine Collection, this contemporary design broadens in its form. Clearly its shape is meant to delineate a luxurious and clean style. The contour of the handles is a dominant element that embodies this distinct silhouette. The design of the contemporary collection was derived from the stylings of classic motorcycles, fusing an industrial aesthetic with details nostalgic of the all-American icons.Conceptualized by GRAFF's G+Design Studio, the faucet's noteworthy handle, recalling a car steering wheel, offers a unique eclecticism and adaptability to contemporary and technical environments. Each piece is crafted with a focus on engineering and ingenuity, resulting in a minimalist and even composition. Complete with a full line of matching shower components and accessories, Incanto can fulfill the functional and design needs of your next project. With distinctive, designer handles, the Infinity kitchen faucet displays a unique, modern style best suited for contemporary designs. For both cooking and clean-up, effortlessly utilize the side spray for even more functionality within the kitchen. 
Grace personified. That's the best way to describe the Lauren collection. The gentle curve of the showerhead and the slim perfectly-appointed handles can take on a stronger look when shown in the gold plated finish. The Lauren Collection knows that there is no need to shout to be heard. Arched to perfection, the sleek crescents of the Luna Collection carry over into the thermostatic shower system. A wall-mounted base with an overhead rainfall setting, the Luna shower may be used every day but will never feel routine. The M-Series thermostatic module lets you create, customize and transform your shower experience. Pushing the boundaries of design and opening up new possibilities in shower functionality, the M-Series minimalistic beauty comes to life in a new and personalized way. M.E. 25 embodies the best that minimalism has to offer. As one of the most versatile collections on the market, personalization of each bathroom is a simple job. ADA compliant, available in four finishes, M.E. 25 is perfectly suited to most bath environments. Never compromise on your decisions. Inspired by the city skyline, the Manhattan collection delivers a streamlined design coupled with exceptional functionality. Crisp lines and simple forms melt together for a sleek appeal - bringing a striking new life to a contemporary or transitional kitchen space. Traditional in nature, when shown in polished chrome the shower shines through a more contemporary light. The perfect complement to a classic bath space, Nantucket speaks to a time when life moved at a slower, more contemplative pace. Practical and stylish, Oscar is perfectly suited to environments with a contemporary taste, giving the kitchen a new allure. Its twist and lock sprayhead and adjustable stream allow a simple and effective use while the pull-out spray and handle have a rubber grip for easier handling. 
A simple divergence in the neck spurs thoughts of the past while traditional styling adds a vintage feel to the Pesaro Collection. Bringing about an air of timelessness, Pesaro represents the perfect complement to an ageless space. The Phase Collection's clean and simple shape gives tribute to the perfect union of sensuality and precision. A contemporary collection with slim lines, it is suited to every type of interior project. It is a timeless creation, a group of elements which fits perfectly in today's ever-changing society. Sometimes simplicity is deceiving. Qubic could not reflect this concept any better. Polished and elegant, it represents an architectural element with a defined contemporary design. Square and cube-shaped, as already announced by its name, Qubic confers strength and stability to the sink and the bathroom as a whole. “Lightness and strength” are the principles that inspired Angeletti Ruzza Design when they created this collection. Its minimal yet sensual design is defined by clean, simple lines that result in a strong visual impact. The series offers a wide range of options to satisfy both practical and aesthetical desires. This contemporary shower stands as the main protagonist in the bathroom and seduces with the purity of its shape. The minimal design consists of a geometrical composition of cubes, rectangles and right angles. Besides the traditional polished chrome and Steelnox® finishes, the version in matte black adds a concrete and substantial look to each item of the Solar collection. Clean lines and smooth surfaces identify this very unique collection. Designed by G+Design Studio, Structure delivers, with its right angles, an idea of artistic perfection. As geometrical as poetry, this shower is the purest expression of engineering brilliance disguised as simplicity. Modern and refined, Targa is outstanding with smooth and slightly convex handles. The arrangement uses a softly bent lever.
All elements of this collection are premium-design products, with a superior level in terms of manufacturing standards. Behind a natural profile, simple and free from excess, as essential as the element from which it takes its name, Terra offers modern cutting-edge solutions that are ecologically sound and look forward to technological progress. The cylindrical, smooth and bright shape, recreates a relaxing atmosphere of harmony in the bathroom, like a journey into nature, where the flow of the water and the fascinating forms, capture you and lull you gently. An art-deco touch makes the Topaz Collection as unique as its jewel shaped handles. The showerhead features an unmistakable hexagonal shape. No redundant detail, no excessive movements: the Topaz Collection is perfect in its modern and timeless finishes. With a zen-inspired appeal, Tranquility speaks a soft, gentle language. Contemporary in its finishes and forms, the Tranquility Collection brings a slight Asian influence to the bath environment. The soft slope of the handle, while recalling a traditional style, makes a more transitional statement. The handle resembles a bamboo shoot and its gentle curve is soothing to the touch. The Vintage Collection draws inspiration from the design of classic fire hose nozzles, pairing a modern spout with bold handles. Each element, from the rounded brim at the spout's top to the undulating handles complete with carefully designed cut outs, resembles the traditional forms of the fire house featured in the historic Chicago Fire Department logo. The elegant styling of the Vista Collection displays fine design and sophistication. As the curvature of the spout reaches its pinnacle, its quiet slope concludes with a modest flair. The simple structure of the two handled base conveys a vintage charm. A style as unique as the place its named after, Wellington stands proud as it blends a fascination of the past with a fury for the present. 
The uncommon shape is made for those who like to deviate from the norm with a tendency towards progression. | Mid | [
0.5795918367346931,
35.5,
25.75
] |
The cost of post 9/11 wars hit $5.9 trillion, 480,000 lives lost, study says

Jeremy Salt

The price for America’s longest wars has surpassed $5.9 trillion and claimed at least 480,000 lives, according to a new study released by the Watson Institute for International and Public Affairs at Brown University. The figures highlight the toll of U.S. war operations around the world since the Sept. 11, 2001, terrorist attacks, and the study projects the numbers could rise.

“It’s important for the American people to understand the true costs of war, both the moral and monetary costs,” said Sen. Jack Reed, the ranking Democrat on the Senate Armed Services Committee, who helped introduce the report Wednesday at a meeting on Capitol Hill. “Our nation continues to finance wars and military operations through borrowing, rather than asking people to contribute to the national defense directly, and the result is a serious fiscal drag that we’re not really accounting for or factoring into deliberations about fiscal policy or military policy.”

The study’s death estimates include nearly 7,000 U.S. servicemembers, nearly 8,000 U.S. contractors, more than 100,000 military and police members from other countries, more than 244,000 civilians and more than 100,000 opposition fighters.

The $5.9 trillion U.S. cost includes Pentagon spending through fiscal year 2019, such as direct and indirect spending as well as future war-related costs for post-9/11 war veterans. It represents U.S. spending in the war zones of Iraq, Syria, Afghanistan and other locations designated as “overseas contingency operations.” It also includes war-related spending by other agencies, such as the State Department and the Department of Homeland Security, costs of veterans care as well as debt used to pay for the wars.
“Veterans benefits and disability spending, and the cost of interest on borrowing to pay for the wars, will comprise an increasingly large share of the costs,” said Neta Crawford, a political science professor at the institute, who authored the study. The institute’s “Costs of War” project, with 35 scholars, legal experts, human rights practitioners and physicians, began tracking the costs of the post-9/11 wars in 2011 and continues to release updated reports. The group, which does its work through Brown University, said it uses research and public data to facilitate greater transparency of the actual toll of the wars. Even if the wars were to end by 2023, the United States is on track to spend an additional $808 billion, bringing the overall tally to at least $6.7 trillion, according to the study. That doesn’t include future interest payments on the spending. War appropriations for Iraq and Afghanistan are funded by deficit spending and borrowing, and not new taxes or war bonds, the study notes. This adds to interest costs, it concludes. Those interest payments could shift with the winds of the economy and other factors, with some pundits estimating those fees alone could total trillions. “The U.S. continues to fund the wars by borrowing, so this is a conservative estimate of the consequences of funding the war as if on a credit card, in which we are only paying interest even as we continue to spend,” Crawford said. Tracking an overall cost for the post-9/11 wars is challenging because different departments take part in the spending. In March 2018, the Defense Department estimated it had spent $1.5 trillion in war-related appropriations, but that only includes a portion of all war spending, the study argued. With no single number for the budgetary costs of the wars, it makes assessing costs, risks and benefits difficult, Crawford said. 
Because taxpayers tend to focus on direct military spending, it discounts the larger budgetary costs of the wars and underestimates its greater significance, she added. “In sum, high costs in war and war-related spending pose a national security concern because they are unsustainable,” Crawford said. “The public would be better served by increased transparency and by the development of a comprehensive strategy to end the wars and deal with other urgent national security priorities.” The study also tallied the number of soldiers and sailors injured in the wars. Since 2001, more than 53,700 U.S. servicemembers have been injured in Iraq and Afghanistan. Of those injuries, 62 percent were hurt in Iraq, while 38 percent were injured in Afghanistan. Though the fighting in Afghanistan and Iraq has been less intense than in recent years, the toll of civilians killed in Afghanistan in 2018 is on track to be one of the highest death tolls of the war, Crawford said in her study. Most of these war deaths in Afghanistan, Iraq and Syria have been caused by militants, but some of them are at the hands of the United States and its coalition partners, Crawford said. Yet, the tally remains incomplete, and there are efforts by the United Nations to track and identify perpetrators of those deaths and injuries, she noted. Other organizations, such as the Congressional Research Service and the news media, are also attempting to track these figures. “Indeed, we may never know the total direct death toll in these wars,” she said. In addition, this tally does not include “indirect deaths” — people harmed as a result of long-term damage left in the war zones, such as lost access to food and water. “This update just scratches the surface of the human consequences of 17 years of war,” Crawford said. “There are a number of areas — the number of civilians killed and injured, and the number of U.S. 
military and veteran suicides, for instance — where greater transparency would lead to greater accountability and could lead to better policy.” | Mid | [
0.649214659685863,
31,
16.75
] |
package foo;

import plop.C;

public class A {
    C c;
} | Low | [
0.49339207048458106,
28,
28.75
] |
Q: Is there a way to enable/disable (grey out) the Rotation Lock in Action Center programmatically on a Windows 10 device without rebooting?

I'm developing a feature on a Surface Book that can control the Rotation Lock of the device. This involves turning Rotation Lock on/off, as well as disabling it altogether. To clarify, my question here is not about turning Rotation Lock ON/OFF, which makes the icon turn blue or neutral. I'm talking about turning Rotation Lock enabled/disabled, which makes the icon turn grey or neutral. I've read through several Microsoft documents and online search results, but they all seem to focus on the on/off state of Rotation Lock, not the enable/disable state.

I'm aware of the UWP feature SetAutoRotationPreferences, but that appears to only lock orientations in Tablet mode (not Desktop mode), and doesn't affect the Rotation Lock icon state. I'm aware of the undocumented/unsupported Win32 API SetAutoRotation, which works, but only to turn Rotation Lock on/off, not enabled/disabled. I'm aware that the Rotation Lock icon can be manipulated programmatically using the Windows Registry key Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\AutoRotation with the ValueName SlateEnable set to 0, but it requires a reboot of the device in order to be applied. I'm aware that SHChangeNotify can be used to refresh the desktop, but I was unable to get it to work; I called SHChangeNotify(SHCNE_ASSOCCHANGED, SHCNF_FLUSH, IntPtr.Zero, IntPtr.Zero), in case that indicates what I am doing wrong.

Is there any Win32 API feature that can control the Rotation Lock's enable/disable state, or is there any API that can apply the Windows registry modification immediately? I'm stuck. Any help would be greatly appreciated.

A: After submitting the same question to the MSDN forums, the consensus appears to be that there is no API that exposes the capability to grey out the 'Rotation Lock' icon.
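For reference, the registry change described in the question can be captured as a .reg fragment. This is only a sketch based on the key and value names quoted above; as noted, the change takes effect only after a reboot.

```reg
Windows Registry Editor Version 5.00

; Grey out (disable) the Rotation Lock tile in Action Center.
; Takes effect only after the device is rebooted.
; Change the dword to 00000001 to re-enable the tile.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\AutoRotation]
"SlateEnable"=dword:00000000
```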
However, the undocumented SetAutoRotation API can toggle the 'Rotation Lock' icon on and off. | Mid | [
0.559440559440559,
30,
23.625
] |
<?php

/**
 * @group taxonomy
 */
class Tests_Term_getTermField extends WP_UnitTestCase {
    public $taxonomy = 'wptests_tax';

    function setUp() {
        parent::setUp();
        register_taxonomy( $this->taxonomy, 'post' );
    }

    /**
     * @ticket 34245
     */
    public function test_get_term_field_should_not_return_error_for_empty_taxonomy() {
        $term = self::factory()->term->create_and_get( array( 'taxonomy' => $this->taxonomy ) );

        $found = get_term_field( 'taxonomy', $term->term_id, '' );
        $this->assertNotWPError( $found );
        $this->assertSame( $this->taxonomy, $found );
    }

    /**
     * @ticket 34245
     */
    public function test_get_term_field_supplying_a_taxonomy() {
        $term = self::factory()->term->create_and_get( array( 'taxonomy' => $this->taxonomy ) );

        $found = get_term_field( 'taxonomy', $term->term_id, $term->taxonomy );
        $this->assertSame( $this->taxonomy, $found );
    }

    /**
     * @ticket 34245
     */
    public function test_get_term_field_supplying_no_taxonomy() {
        $term = self::factory()->term->create_and_get( array( 'taxonomy' => $this->taxonomy ) );

        $found = get_term_field( 'taxonomy', $term->term_id );
        $this->assertSame( $this->taxonomy, $found );
    }

    /**
     * @ticket 34245
     */
    public function test_get_term_field_should_accept_a_WP_Term_id_or_object() {
        $term = self::factory()->term->create_and_get( array( 'taxonomy' => $this->taxonomy ) );

        $this->assertInstanceOf( 'WP_Term', $term );
        $this->assertSame( $term->term_id, get_term_field( 'term_id', $term ) );
        $this->assertSame( $term->term_id, get_term_field( 'term_id', $term->data ) );
        $this->assertSame( $term->term_id, get_term_field( 'term_id', $term->term_id ) );
    }

    /**
     * @ticket 34245
     */
    public function test_get_term_field_invalid_taxonomy_should_return_WP_Error() {
        $term = self::factory()->term->create_and_get( array( 'taxonomy' => $this->taxonomy ) );

        $found = get_term_field( 'taxonomy', $term, 'foo-taxonomy' );
        $this->assertWPError( $found );
        $this->assertSame( 'invalid_taxonomy', $found->get_error_code() );
    }

    /**
     * @ticket 34245
     */
    public function test_get_term_field_invalid_term_should_return_WP_Error() {
        $found = get_term_field( 'taxonomy', 0, $this->taxonomy );
        $this->assertWPError( $found );
        $this->assertSame( 'invalid_term', $found->get_error_code() );

        $_found = get_term_field( 'taxonomy', 0 );
        $this->assertWPError( $_found );
        $this->assertSame( 'invalid_term', $_found->get_error_code() );
    }

    public function test_get_term_field_term_id() {
        $term = self::factory()->term->create_and_get( array( 'taxonomy' => $this->taxonomy ) );

        $this->assertSame( $term->term_id, get_term_field( 'term_id', $term ) );
        $this->assertSame( $term->term_id, get_term_field( 'term_id', $term->data ) );
        $this->assertSame( $term->term_id, get_term_field( 'term_id', $term->term_id ) );
    }

    public function test_get_term_field_name() {
        $name = rand_str( 15 );
        $term = self::factory()->term->create_and_get( array(
            'name'     => $name,
            'taxonomy' => $this->taxonomy,
        ) );

        $this->assertSame( $name, get_term_field( 'name', $term ) );
        $this->assertSame( $name, get_term_field( 'name', $term->data ) );
        $this->assertSame( $name, get_term_field( 'name', $term->term_id ) );
    }

    public function test_get_term_field_slug_when_slug_is_set() {
        $slug = rand_str( 15 );
        $term = self::factory()->term->create_and_get( array(
            'taxonomy' => $this->taxonomy,
            'slug'     => $slug,
        ) );

        $this->assertSame( $slug, get_term_field( 'slug', $term ) );
        $this->assertSame( $slug, get_term_field( 'slug', $term->data ) );
        $this->assertSame( $slug, get_term_field( 'slug', $term->term_id ) );
    }

    public function test_get_term_field_slug_when_slug_falls_back_from_name() {
        $name = rand_str( 15 );
        $term = self::factory()->term->create_and_get( array(
            'taxonomy' => $this->taxonomy,
            'name'     => $name,
        ) );

        $this->assertSame( $name, get_term_field( 'slug', $term ) );
        $this->assertSame( $name, get_term_field( 'slug', $term->data ) );
        $this->assertSame( $name, get_term_field( 'slug', $term->term_id ) );
    }

    public function test_get_term_field_slug_when_slug_and_name_are_not_set() {
        $term = self::factory()->term->create_and_get( array(
            'taxonomy' => $this->taxonomy,
        ) );

        $this->assertSame( $term->slug, get_term_field( 'slug', $term ) );
        $this->assertSame( $term->slug, get_term_field( 'slug', $term->data ) );
        $this->assertSame( $term->slug, get_term_field( 'slug', $term->term_id ) );
    }

    public function test_get_term_field_taxonomy() {
        $term = self::factory()->term->create_and_get( array( 'taxonomy' => $this->taxonomy ) );

        $this->assertSame( $this->taxonomy, get_term_field( 'taxonomy', $term ) );
        $this->assertSame( $this->taxonomy, get_term_field( 'taxonomy', $term->data ) );
        $this->assertSame( $this->taxonomy, get_term_field( 'taxonomy', $term->term_id ) );
    }

    public function test_get_term_field_description() {
        $desc = wpautop( rand_str() );

        $term = self::factory()->term->create_and_get( array(
            'taxonomy'    => $this->taxonomy,
            'description' => $desc,
        ) );

        $this->assertSame( $desc, get_term_field( 'description', $term ) );
        $this->assertSame( $desc, get_term_field( 'description', $term->data ) );
        $this->assertSame( $desc, get_term_field( 'description', $term->term_id ) );
    }

    public function test_get_term_field_parent() {
        $parent = self::factory()->term->create_and_get( array( 'taxonomy' => $this->taxonomy ) );
        $term   = self::factory()->term->create_and_get( array(
            'taxonomy' => $this->taxonomy,
            'parent'   => $parent->term_id,
        ) );

        $this->assertSame( $parent->term_id, get_term_field( 'parent', $term ) );
        $this->assertSame( $parent->term_id, get_term_field( 'parent', $term->data ) );
        $this->assertSame( $parent->term_id, get_term_field( 'parent', $term->term_id ) );
    }
}
| Mid | [
0.558044806517311,
34.25,
27.125
] |
The mass and the damping properties of the simulation are settings of the brush. Two additional controllers limit the influence and falloff of the simulation. Masked vertices act as pinned during the simulation, and the gravity is applied directly in the solver. This brush includes seven deformation modes with radial and plane falloff types.
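The mass, damping, pinning and gravity behavior described above can be sketched as a toy particle integrator. This is an illustrative sketch only, not Blender's actual cloth solver; the function name, the explicit-Euler step, and the damping model are assumptions for the example:

```python
# Illustrative sketch (NOT Blender's solver): one explicit-Euler step of a
# particle system with per-brush mass and damping, pinned (masked) vertices,
# and gravity applied directly in the solver.

def cloth_step(positions, velocities, pinned, mass=1.0, damping=0.1,
               gravity=-9.81, dt=0.016):
    """Advance each vertex one time step; pinned vertices do not move."""
    new_pos, new_vel = [], []
    for (x, y, z), (vx, vy, vz), is_pinned in zip(positions, velocities, pinned):
        if is_pinned:
            # Masked vertices act as pinned during the simulation.
            new_pos.append((x, y, z))
            new_vel.append((0.0, 0.0, 0.0))
            continue
        # Gravity is the only force here; damping bleeds off velocity.
        az = gravity / mass
        vx, vy, vz = (v * (1.0 - damping) for v in (vx, vy, vz + az * dt))
        new_pos.append((x + vx * dt, y + vy * dt, z + vz * dt))
        new_vel.append((vx, vy, vz))
    return new_pos, new_vel
```

Masked (pinned) vertices keep their positions while free vertices accelerate under gravity, mirroring how masking pins vertices in the brush simulation.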
— Story — Jeff Chartier, Owner; Steve Foster, Executive Chef; Becca Chartier, Roost Baker. A lifelong dream realized: The Roost was founded in 2014 in Sarona, Wisconsin with a single vision in mind: to serve beautiful, homemade, delicious food with the best available, locally-sourced ingredients. This remains the vision to this day. Owner Jeff and Executive Chef Steve have been lifelong friends and spent their high school and college years working in the food service industry. After graduation, each pursued careers in other fields, married, raised families and realized after 23 years that they still hadn’t gotten around to opening that restaurant they had always dreamed of opening back in the day. The timing appeared right in 2013 with the availability of the Katty Shack in Madge Township, just down the road from Jeff’s parents’ home. Jeff purchased the restaurant in January 2014 and The Roost of Sarona opened on April 1st, 2014 after an extensive redesign led by Jeff’s wife, Becca. The doors opened with a focus on presenting an unforgettable breakfast experience to their customers. That focus continues to this day with unique (and always made-from-scratch) offerings including Crab Cake Benedict, Beef Tenderloin Hash, Cranberry French Toast, and much more. Roost Roast Coffee might just be the star of the show, roasted every week in nearby Hayward, WI with an exclusive blend of South American Arabica beans and ground fresh for every pot. The Roost’s now famous Friday Night Spotted Cow Fish Fry was added in May 2014 and has grown to become one of the region’s most popular Friday night destinations. The initial concept of having an all-inclusive dinner with choice of appetizer, soup or salad, entrée choice and side choice remains the Friday hallmark with new and different culinary twists being featured regularly. In 2015, The Roost added what has become a favorite of locals, our endless Wings or Shrimp on Wednesdays and Thursdays.
Featuring jumbo chicken wings and White Tiger Shrimp with multiple preparation and sauce options, reservations are always a good idea if you want to ensure a seat at the table. These nights also include a truly gourmet special option featuring ingredients including lobster, lamb, clams, salmon, and much, much more all at affordable prices. Local partnerships, particularly with our neighbor Hunt Hill, also grew in 2015. Hunt Hill is a unique preserve just south of The Roost that offers the public a pristine old-growth forest and prairie educational experience. For Hunt Hill’s overnight campers and for some day-meetings for local organizations, The Roost provides their guests with custom meals throughout the day. In March, 2016, The Roost reached an agreement to assume food operations at Spooner Golf Club, the premier golf and dining event location in Northwestern Wisconsin. The Roost at Spooner Golf Club boasts a full bar and special event seating and dining year-round for over 100 guests. From May through September, The Roost at Spooner Golf Club carries on the long tradition of Friday Night Fish Fry with great food and unforgettable views overlooking the iconic fairways and greens of one of the few “must-play” golf courses in all of Northwestern Wisconsin. Due to popular demand, The Roost also vastly expanded its large-scale catering operations to accommodate needs of brides and grooms throughout the area along with local churches and businesses in search of truly special treatment at multiple venues. Private Catering has also become a specialty of the house. Vacation homes abound in and around the amazing lakes of Northwestern Wisconsin and there has been a steady increase in the desire for small-group, at-home catering. The Roost prides itself on being able to satisfy the needs of this market and you just might find both Jeff and Steve in your kitchen expertly preparing foods you only expect to find at Michelin-rated restaurants. 
Today, The Roost is a regional leader in providing amazing cuisine at attainable prices. For groups from four to four hundred, The Roost has the expertise and staff to make your breakfast, lunch, dinner or event spectacular in every way.
A tracked object is a moving object that is captured so as to appear continuously on the screen and is assumed to be the object of the cameraman's attention. Detection of tracked objects is therefore required for generating summarized video or extracting key frames that are used to recognize important objects in video. One example of a tracked object determination device is recited in Patent Literature 1 (Japanese Patent Laying-Open No. H08-191411). The method recited in Patent Literature 1 calculates the possibility that a tracked object exists in a video segment shot by a camera moving in a fixed direction, based on a distribution of moving regions, which are image regions having a vector different from the motion vector generated by the motion of the camera. In this method, "a set of moving regions constantly existing in the lump" is determined to be a tracked object by using the degree of lumping of the moving regions, obtained from the number of pixels in a moving region, the degree of concentration, the position of the center of gravity and the degree of dispersion, or the stationary degree, obtained from the rate of frames including moving regions in the video segment.

The structure for determining a tracked object includes, as shown in FIG. 23, a motion vector detection unit 300 which detects a motion vector for each frame of video; a panning section detection unit 301 which detects the start point and end point of a panning section based on the motion vector of each frame; a moving region information detection unit 302 which detects, as a moving region, a region having a low degree of correlation between the current frame and a motion-compensation predicted image (a predicted image shifted in parallel from a past video frame by the amount of the motion vector), and extracts distribution information of the region; and a scene determination unit 303 which determines, based on the distribution information of the moving regions detected in each panning section, that the section corresponds to a scene that tracks the object.

Patent Literature 1: Japanese Patent Laying-Open No. H08-191411.
Non-Patent Literature 1: Yousuke Torii, Seiichi Konya and Masashi Morimoto, "Extracting follow and close-up shots from moving images", MIRU2005, pp. 24-31.
Non-Patent Literature 2: Yoshio Iwai, Shihong Lao, Osamu Yamaguchi, Takatsugu Hirayama, "A Survey on Face Detection and Face Recognition", IPSJ SIG Technical Reports (CVIM-149), 2005, pp. 343-368.

The first problem of the related art method is that a moving object is not determined to be a tracked object when the camera cannot be moved at approximately the same speed as the moving object. A moving object cannot be determined to be a tracked object when, for example, the shot ends before the shifting rate of the camera becomes equal to that of the moving object because the shot is short, when the cameraman cannot predict the destination of the moving object because it moves at random, or when the camera shifting rate varies as the moving object moves, as in shooting with a telephoto camera. The reason is that whether a moving object is an object to be tracked or not is determined by the state of the distribution of moving regions, i.e., image regions having a motion vector different from the motion vector generated by the movement of the camera. Accordingly, when the distribution of moving regions fails to satisfy the property of "constantly existing in the lump" because the movement of the camera differs from the shifting rate of the moving object, the moving object, as a set of moving regions, cannot be determined to be a tracked object.

The second problem of the related art method is that a moving object cannot be judged to be a tracked object before the end point of a video segment captured in a fixed direction. The reason is that, because whether a moving object is an object to be tracked or not is determined based on the rate of frames in which the moving object can be stably tracked within the video segment, the moving object cannot be determined to be a tracked object until the length of the video segment is obtained.
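As a rough illustration of the underlying idea (not the patented method itself), moving regions can be found by estimating the camera's global motion and flagging blocks whose motion vectors deviate from it. The median-based camera estimate and the threshold below are assumptions made for this sketch:

```python
# Hypothetical sketch of the core idea: regions whose motion vector differs
# from the camera (global) motion vector are treated as "moving regions".
# The median estimate and threshold are illustrative, not from the patent.

def find_moving_regions(block_vectors, threshold=2.0):
    """block_vectors: {(row, col): (dx, dy)} per-block motion vectors.
    Returns (moving_blocks, camera_motion): the blocks whose vector deviates
    from the component-wise median (a robust camera-motion estimate) by more
    than `threshold` in L1 distance."""
    xs = sorted(dx for dx, _ in block_vectors.values())
    ys = sorted(dy for _, dy in block_vectors.values())
    cam = (xs[len(xs) // 2], ys[len(ys) // 2])
    moving = set()
    for block, (dx, dy) in block_vectors.items():
        if abs(dx - cam[0]) + abs(dy - cam[1]) > threshold:
            moving.add(block)
    return moving, cam
```

With a panning camera, most blocks share the pan vector, so the median recovers the camera motion and the object moving relative to the background stands out as the deviating block set.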
This invention relates to systems using terahertz radiation to detect particular types of articles. Computerized tomography (CT) imaging has been employed for non-destructive examination of various types of articles, such as contraband, which may be hidden inside luggage. However, CT systems emit X-rays, which may pose a health risk to the operators of such systems, as well as to passengers who may be standing near the system, and hence CT systems generally include some type of shield to protect the operators and passengers from ionizing radiation. Moreover, although CT systems are capable of analyzing the density of an article, along with other characteristics of its shape and volume, these systems do not have spectroscopic capabilities, and therefore cannot analyze the chemical compositions of articles. Furthermore, X-rays are not sensitive to the optical traits that result from an article's refractive index and absorption coefficient. These properties, if measurable, can yield unique, high-contrast images and reveal much about the reflective, absorptive and scattering properties of a material. Thus, there is a need for a non-destructive imaging system that also provides optical and spectroscopic probing capabilities.
Looking Ahead: An Interview with Michael A. Marletta On January 1, Michael A. Marletta took office as president and CEO of The Scripps Research Institute. Here, he speaks with Mika Ono of News&Views about topics including his background, priorities, and vision for the future. What led you to The Scripps Research Institute? Twenty years ago, Scripps offered me a position. I was at the University of Michigan at the time. I thought long and hard about it, and decided I still enjoyed the full spectrum of a complicated university with many thousands of undergraduates. Just over 10 years ago, I moved to UC Berkeley. At Berkeley, I served as chair of the chemistry department for five years and found I enjoyed leading a complex and driven diverse group of people. A few times over the years, Richard Lerner [former president] would say, “Look, if you are ready to make a move…” I visited Scripps a number of times, and I’ve always admired the place. So when this opportunity came along, I thought it was a long shot but I applied. What excites you about the job as president? I’m excited about the potential of learning how biology works and applying that knowledge to medical problems—and that’s really being excited about the mission of The Scripps Research Institute. Others at Scripps are excited about that, too, and that’s great. You’ve been here since July. What are some of your first impressions of the institute? The most encouraging thing I learned is that, in general, the faculty and staff have an intense devotion to this place. I walked into Beckman the other day and the security guard at the desk, Marcus Bilbee, and I struck up a conversation, and it was clear he cares a lot. When the faculty start to talk about what they have been able to discover here, it’s clear they have an attachment. That has been deeper and more intense than I expected. That is going to help us in the long run. No place is perfect. Scripps has its challenges, areas for improvement. 
But if you feel strongly about the place where you work, you are willing to help and be part of the solution. What are the biggest challenges you see? There are financial pressures. Scripps is a soft-money institution. One question I could ask in return is, “Why do faculty come to Scripps?” They could stay in a university and, even with no research support, collect nine months of salary for teaching. But for that, their days would be broken up with all kinds of university responsibilities. I did those for many years. Some of those are enjoyable, but sometimes they take you away from research when you would rather sit in a lab talking to students about a particular result. At Scripps, you can come in at the beginning of the day and if somebody finds something unexpected or a big experiment works, you could spend all day thinking about it, talking about it, writing about it… That never happens in a university environment. Faculty come here because they can do unencumbered research. For that, there’s the risk of raising money to fund the research you want to do. Faculty also come and stay because of the infrastructure here—the very best in equipment. So we need to generate resources to keep that infrastructure at the highest level. We need to generate resources to recruit the next generation of new faculty. We need to have resources to keep our faculty who will get offers from other places. While there are different issues in Florida, in La Jolla the financial pressures are significant. We have had long-time relationships with “big pharma” that are not going to be repeated in the current environment. Florida is still in the growth phase, still with money from the State of Florida, so there is empty space because we are still recruiting principal investigators. We’re on track to meet the Florida benchmarks.
All of this boils down to the fact that the biggest issue facing us is how to move forward in a situation where the federal government will not be the partner it has been in the past. That will put even more pressure on us to raise internal funds. We’re looking at a combination of philanthropy and a return on our investment in intellectual property [IP]. IP is going to ramp up. Not having a first-rights agreement as we have had in the past will make us look farther into the future for financial benefit from our IP, but we will own all of what we discover here and that should be a direct benefit to us. Could you talk a little more about philanthropy? Why should people give here versus elsewhere? People give because there is something about what we are doing that strikes a chord in them. Each of us can rattle off parents, siblings, aunts, uncles, cousins who suffered from some disease. It’s just inevitable. When disease strikes, we often like to do something about it. It’s one of the common aspects of private giving here. Donors hear about what we’re doing and want to support it. Of course, we have to tell them what we’re doing, and I’m spending some of my time doing that. Sometimes what strikes a chord is an individual they meet, say a faculty member working on a particular disease. When they make a contribution, they have the opportunity to see that person be successful, working on something they believe in or a disease they want to see wiped out. So it’s often deeply personal. That’s why philanthropy is all about relationships—listening to what potential donors find interesting and then showing them we have the potential to make a major discovery that they can be a part of. Isn’t basic research somewhat of a double-edged sword—you’re years away from medically applied research, although the fundamental discovery may ultimately have a larger impact? I actually don’t agree with that. 
Let’s use the recent example of Jeff Kelly’s tafamidis [now approved in Europe for the treatment of familial amyloid polyneuropathy]. At the heart of it, I’d say Jeff probably has two passions. One is to come up with a drug that helps treat disease. He’s just done that. But the other passion is for the science itself. So Jeff’s driving force was understanding how proteins fold, and when they misfold what happens—very basic, fundamental work, but also necessary to make a drug. Benlysta® is the only treatment for lupus, a very complicated disease. Richard Lerner’s antibodies are the technology that drug was based on. There was Humira® before that. Humira® will soon be the largest selling drug in the world. To me, Scripps represents the very best in fundamental research coupled with looking outward for the translational piece, which takes fundamental discovery and turns it into drugs. When I was at Michigan, in the medical school’s biological chemistry department, the clinicians would say, “You’re so far away from [the clinic].” It appeared more like that then, because you made a fundamental discovery, you published it, and that was more or less the end of it. But at Scripps, it’s not just about basic discovery, but also what can you can do with it. That’s different. I tell donors what our fundamental discoveries can do. I tell them we are about discovery—that’s what we do—but we don’t let it rest there, and we’ve got examples to show it. Here, basic research and potential applications go hand-in-hand. Your own work has bridged fundamental discovery and medical application. We started a company. My father said I finally must have done something important! We spent years trying to understand the remarkable finding that a molecule such as nitric oxide, this toxic molecule, is regulating blood pressure and is involved in learning, in memory. Everything in moderation; a glass of wine is good, 10 is probably bad. It’s the same with a molecule like nitric oxide. 
Biology has learned how to handle it. It’s extremely toxic, yet we’re making it and using it in some important physiological processes. It just turns out we don’t make very much of it. Over the years, we asked questions about how biology handles such a toxic molecule to carry out these important physiological processes. We then started to ask how biology tells the difference between nitric oxide, carbon monoxide, and oxygen. Biology has to look at all three and tell the difference from a chemical perspective, and it’s not so easy. In figuring that out, we realized that we could use our fundamental understanding to deliver nitric oxide or carbon monoxide or oxygen to particular tissues, and there are good, practical reasons for wanting to do all three. So we wrote some patents, and there’s a little company [Omniox] that’s operating right now in San Francisco. Hopefully, it will be successful. How did you get interested in science in the first place? I have a 16-year old. I watched him when he was a baby. Every kid is a scientist. They are all trying to figure out the world—whether they are lying on the floor and whacking at something or trying to figure out where the ball is going to go when it rolls across the floor. I found it interesting to watch him. I thought about myself and from my earliest memories, I always wanted to know how things work. But the catalyzing moment was October 4, 1957, when I was six years old and the Soviets launched Sputnik. I was six, so I was too young to be afraid. This was in upstate New York. It was pretty cold as I remember it, an October night, and I put on a heavy jacket and went out and stood on the front lawn of the little house we lived in and watched Sputnik fly over. Even though I didn’t understand there was engineering and science at the time, I became convinced that whatever that was I wanted to be a part of it. Christmas was right around the corner, so I asked for a telescope. 
Since I was six, I guess it would have been Santa who brought it to me. Then the next year, I asked for a microscope and I got that. And the next year I asked for a chemistry set, a Gilbert chemistry set, and I didn’t get that. My father was worried I was going to blow up the house, although there was nothing you can blow up with a Gilbert chemistry set. But by this time, it was maybe 1960 and you could still buy a lot of chemicals, which I did because I had a paper route. I built my own lab and I almost did blow up the house… I was always fascinated by the periodic table and the idea that everything on this planet was composed of those elements, and you could mix and match them to make things already in nature or make new things with properties nobody expected. I thought that was it. Then I took a biology course and realized that the master chemist is biology. Since then I’ve been walking between the two worlds. Is it too early to ask you your vision of Scripps? It’s a little early, but people have asked. As I mentioned, you have to have the best infrastructure possible. You’ve got really smart people who already have great ideas. You need to recognize talent, keep the best talent, and then basically get out of the way. That said, I think that it would be important for Scripps to engage in serious issues in human health. I would like us to work on some big problems, like the combination of obesity and metabolic diseases like diabetes. We already have people working in these areas, but there is some opportunity. As enzymologists—I would describe myself as an enzymologist—we study one enzyme in a test tube, one at a time. We understand a lot, but when you put that one enzyme with a thousand others all working together in us, it doesn’t quite work like it works in a test tube. So, in fact, we’re talking about metabolism, an old moniker. 
When you think about the spectrum of metabolic diseases, they include not only diabetes and aspects of obesity, but also cancer, which is now being reinterpreted as something called the Warburg effect—oxygen consumption by cancer cells. I would like us to be as good at metabolomics as we are at proteomics—where we are one of the best in the world due to our investment in talent and infrastructure. With infrastructure in metabolomics, not only can our faculty take advantage of these resources, we’ll also be able to tackle diseases that confront the Western world. If we don’t solve those problems, as a society we’re going to have an albatross around our neck. We need to understand the processes, and we need to do something about those diseases. So, I see investment in that kind of infrastructure and then doing what I do best, which is 1) taking advantage of it in my own research, and 2) getting out of the way. Are there any other messages you want to get out there to employees, to donors, to faculty? I mostly want people to know I’m excited. The more I learn about Scripps, the more excited I am. Also, I’m going to work hard to make sure that Scripps remains the kind of institution that it has been and moves forward with new discoveries, but I need everybody’s help—faculty and staff—everybody. Send comments to: [email protected] Michael A. Marletta took office as president and CEO of The Scripps Research Institute on January 1. (Photo by Dave Freeman, BioMedical Graphics.)
The invention is concerned with caprolactone-containing vinyl polymers and coating compositions containing the same. It is known that ε-caprolactone of the formula: ##STR1## will react with acids or alcohols by ring-opening between the --O-- and the adjacent keto group to form various kinds of adducts. Thus, with an acid RCOOH, the caprolactone opens and reacts as follows: ##STR2## where n is an integer. With an alcohol ROH, the caprolactone (n mols) splits in the same way to give: ##STR3## It will be evident from the above that the reaction of the acid or alcohol with the caprolactone can be used to introduce terminal --COOH and/or --OH groups in the product. The invention is based on the finding that by polymerizing together (1) a mixture of vinyl monomers, including at least one such monomer containing an --OH or --COOH group, and (2) an ε-caprolactone, the resulting polymer demonstrates certain particularly useful properties for coating purposes. For example, the resulting product reacts particularly well with conventional melamine-formaldehyde precondensates to give coatings or films which show outstanding exposure resistance and are particularly useful as metal coil coatings or the like. Accordingly, the principal object of the invention is to provide certain acrylic copolymers which are uniquely useful, particularly with amino-formaldehyde precondensates, for coating purposes. Other objects will also be hereinafter apparent.
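Since each ring-opened caprolactone unit is inserted into the chain without loss of a byproduct, the adduct's molar mass is simply the acid's (or alcohol's) molar mass plus n times the mass of one C6H10O2 unit (about 114.1 g/mol). A small sketch of that arithmetic, using acetic acid as a hypothetical R group chosen for illustration:

```python
# Back-of-the-envelope sketch: ring-opening addition of n caprolactone units
# to an acid RCOOH inserts n repeat units -(O(CH2)5CO)- with no byproduct,
# so each unit adds one full caprolactone mass (C6H10O2).

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def formula_mass(counts):
    """counts: {element: number of atoms} -> molar mass in g/mol."""
    return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

CAPROLACTONE = formula_mass({"C": 6, "H": 10, "O": 2})  # ~114.14 g/mol

def adduct_mass(acid_mass, n):
    """Molar mass of RCO-(O(CH2)5CO)n-OH from acid RCOOH and n lactone units."""
    return acid_mass + n * CAPROLACTONE

acetic = formula_mass({"C": 2, "H": 4, "O": 2})  # hypothetical example acid
```

The same additive rule applies to the alcohol case, with the hydroxyl end group carried through unchanged.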
Senator Pete Kelly, a Fairbanks Republican, previewed a bill he is planning to introduce this week to reform the current Medicaid system. The bill won’t include a provision to expand Medicaid, he said during a press conference this morning. A group of Anchorage religious leaders and lay people are in Juneau to try to convince him and other skeptical lawmakers to change their minds on the issue. Senator Kelly said his Medicaid reform bill will feature Health Savings Accounts. A portion of the permanent fund dividends of Medicaid recipients would go into the accounts to pay for costs that are considered unreasonable: “If you got to an emergency room when you shouldn’t have, then that comes out of that Health Savings Account [and] if you self-refer to a specialist; if you use brand name drugs instead of a generic when they’re available, those kinds of abuses,” he said. The bill will also include a provision for managed care, a system for controlling health costs by managing how patients use health care services, he said. Full details won’t be available until the bill is formally introduced later this week. One thing Kelly’s bill won’t include is Medicaid expansion. He said that may come as a surprise to the Walker Administration. Health Commissioner Valerie Davidson did not respond to requests for an interview. Her department issued a short statement saying they will comment on the bill after they have a chance to review it. Kelly said he thinks reform should happen before expansion. “It’s a broken system,” he said. “I think everyone agrees that Medicaid is broken. I think it’s been broken for 30 years. And now to expand it and put more money into it, to bring more people into it, that’s certainly not going to help its brokenness.” Kelly will likely encounter a large group of Anchorage residents in Juneau early this week who will try to change his mind.
They are from Anchorage Faith and Action Congregations Together- or AFACT, a federation that represents 15 congregations and 10,000 congregants.Reverend Julia Seymour expects their diverse group of 14 representatives to stand out at the capital. She says their message is pretty simple:“We’re about honesty,” she said. “And the reality is that Medicaid expansion is an honest need for Alaskans, and religious and faithful people support that.”Reverend Seymour says Medicaid expansion has been a priority for AFACT for at least three years. In 2013, the group started publishing a small booklet explaining the complicated issue to congregants. AFACT decided to send representatives to Juneau this session, because it’s the first time the legislature has seriously considered the issue. Reverend Seymour is a pastor at Lutheran Church of Hope in Anchorage.Reverend Seymour said they will meet with as many lawmakers as possible on both sides of the aisle. “We’re hoping that we will come back from Juneau smarter about this issue,” she said. “With more knowledge about what’s going on with Juneau with the concerns of both the majority and minority caucuses and with a clear understanding of what needs to be done… to get Medicaid expansion in Alaska.”For Reverend Seymour, approving Medicaid expansion is the moral and ethical decision to make for the state’s future:“It’s about the health of Alaskans,” she said. “Healthy Alaskans are productive Alaskans. Productive Alaskans enjoy the gifts of creation and we have excellent gifts of creation in this state.”At the press conference, Senator Kelly said he didn’t think Medicaid expansion is a moral imperative. But he didn’t shut the door completely on the issue either. 
Kelly said this draft of the bill doesn’t include expansion, but talks on whether it, or another bill, should include it will continue for the rest of the session. “I’m one person with one bill, so I think expansion and reform are discussions that are going on with 60 people in this building, 61 including the governor. My bill just doesn’t have expansion in it.” Kelly’s Medicaid reform bill is tentatively scheduled to have its first hearing Friday. Reverend Seymour said when their members return to Anchorage they will regroup to consider their next steps and also pray for lawmakers to do their work. This story is part of a reporting partnership between APRN, NPR and Kaiser Health News.
Portsmouth Harbor Lighthouse: Located at the US Coast Guard Station in New Castle. Go inside and up to the top of the lighthouse for spectacular 360° views of the harbor and Seacoast Area. This is one of only two lighthouses in New Hampshire. Whaleback Lighthouse in Kittery Point harbor can be seen from the US Coast Guard station in New Castle or from the town pier in Kittery Point. Parks & Recreation Tour: Check out some hidden spots and great history. Prescott Park: Visit one of New Hampshire’s most beautiful parks, with gardens and fountains all set along the Portsmouth waterfront. Strawbery Banke: Tour one of New England’s oldest settlements. For a small fee, check out how the early settlers lived in New Hampshire’s infancy. Albacore Submarine Park: Go inside a real submarine that was active in the 1950s. Great for the seacoast’s naval heritage. Fort McClary State Park in Kittery Point, Maine: Go into a World War I fort and check out the spectacular views of the Maine coastline.
Q: Row locks - manually using them

I basically have an application that has, say, 5 threads, which each read from a table. The query is a simple SELECT TOP 1 * from the table, but I want to enforce a lock so that the next thread will select the next record from the table and not the locked one. When the application has finished its task, it will update the locked record, release the lock and repeat the process again. Is this possible?

A: The kind of approach I'd recommend is to have a field in the record, along the lines of BeingProcessed, that indicates whether the record is being processed or not. Then implement a "read next from the queue" sproc that does the following, to ensure no 2 processes pick up the same record:

BEGIN TRANSACTION

-- Find the next available record that's not already being processed.
-- The combination of UPDLOCK and READPAST hints makes sure 2 processes don't
-- grab the same record, and that processes don't block each other.
SELECT TOP 1 @ID = ID
FROM YourTable WITH (UPDLOCK, READPAST)
WHERE BeingProcessed = 0

-- If we've found a record, set its status to "being processed"
IF (@ID IS NOT NULL)
    UPDATE YourTable SET BeingProcessed = 1 WHERE ID = @ID

COMMIT TRANSACTION

-- Finally return the record we've picked up
IF (@ID IS NOT NULL)
    SELECT * FROM YourTable WHERE ID = @ID

For more info on these table hints, see MSDN
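The claim-next-record pattern in the answer can be mimicked in memory to see why the lock prevents double-pickup. In this Python sketch a list stands in for the database table and a mutex plays the role of the UPDLOCK row lock; the table and column names simply mirror the hypothetical sproc above:

```python
# In-memory sketch of the UPDLOCK/READPAST queue pattern: the mutex makes
# the select-then-update atomic, so no two workers can claim the same record.

import threading

table = [{"ID": i, "BeingProcessed": 0} for i in range(20)]
claim_lock = threading.Lock()

def claim_next():
    """Atomically pick the next available record, like the sproc's
    SELECT ... WITH (UPDLOCK, READPAST) + UPDATE inside one transaction."""
    with claim_lock:
        for row in table:
            if row["BeingProcessed"] == 0:
                row["BeingProcessed"] = 1
                return row["ID"]
    return None  # queue drained

claimed = []
claimed_guard = threading.Lock()

def worker():
    # Each of the 5 threads repeatedly claims and "processes" records.
    while True:
        record_id = claim_next()
        if record_id is None:
            return
        with claimed_guard:
            claimed.append(record_id)

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the run, every record has been claimed exactly once across the 5 workers, which is precisely the guarantee the UPDLOCK/READPAST combination gives in the database version.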
June 26, 2008

Justice Scalia sells out felon gun rights, but on what basis exactly?

Here are sets of quotes from the majority opinion in Heller that I have a hard time adding up:

We start therefore with a strong presumption that the Second Amendment right is exercised individually and belongs to all Americans. (Slip op. at 10, emphasis added.)

It was plainly the understanding in the post-Civil War Congress that the Second Amendment protected an individual right to use arms for self-defense. (Slip op. at 44, emphasis added.)

As the quotations earlier in this opinion demonstrate, the inherent right of self-defense has been central to the Second Amendment right. The [DC] handgun ban amounts to a prohibition of an entire class of “arms” that is overwhelmingly chosen by American society for that lawful purpose. The prohibition extends, moreover, to the home, where the need for defense of self, family, and property is most acute. Under any of the standards of scrutiny that we have applied to enumerated constitutional rights, banning from the home “the most preferred firearm in the nation to ‘keep’ and use for protection of one’s home and family,” 478 F. 3d, at 400, would fail constitutional muster. (Slip op. at 56-57, emphasis added.)

A broader point about the laws that JUSTICE BREYER cites: All of them punished the discharge (or loading) of guns with a small fine and forfeiture of the weapon (or in a few cases a very brief stay in the local jail), not with significant criminal penalties.... [W]e do not think that a law imposing a 5-shilling fine and forfeiture of the gun would have prevented a person in the founding era from using a gun to protect himself or his family from violence, or that if he did so the law would be enforced against him. The District law, by contrast, far from imposing a minor fine, threatens citizens with a year in prison (five years for a second violation) for even obtaining a gun in the first place. (Slip op. at 61-62, emphasis added.)
Summing up, it would seem that the majority holds that, pursuant to the Second Amendment, "all Americans" have an "individual right to use arms for self-defense." And, the Second Amendment would be most problematically transgressed when this right is severely restricted in the "home, where the need for defense of self, family, and property is most acute" through the threat of years in prison rather than just a minor fine. As regular readers know, I think all these assertions add up to making constitutionally questionable the threat of severe sentences on felons in possession of firearms. After all, felons are Americans with a need to protect themselves and their families through keeping guns in their home. And yet, all felons (even non-violent ones like Lewis Libby and Martha Stewart) face the threat of 10 years in federal prison for just possessing a firearm. Nevertheless, the majority opinion boldly and baldly asserts that "nothing in our opinion should be taken to cast doubt on longstanding prohibitions on the possession of firearms by felons and the mentally ill." (Slip op. at 54.) Really? How can that (unjustified and unsupported) dicta be squared with all that has been said before? To his credit, Justice Stevens properly asserts in this context that felons are not categorically excluded from exercising First and Fourth Amendment rights and thus the majority "offers no way to harmonize its conflicting pronouncements." Time and litigation will tell if holdings or dicta end up dominating the application of the Second Amendment in future cases. Comments Doug, since time immemorial, criminals have lost certain rights. It's that simple. The right to vote is precious, and it can be taken away. Posted by: federalist | Jun 26, 2008 11:26:09 AM So federalist, is it your position that a state can someone who commits perjury from practicing his or her religion? We've all read the trite point that criminals may lost certain rights.
The question becomes, then, whether they may arbitrarily lose rights that have no connection with what they've done. Posted by: | Jun 26, 2008 11:36:21 AM Let me try again: So federalist, is it your position that a state can ban someone who commits perjury from practicing his or her religion? We've all read the trite point that criminals may lose certain rights. The question becomes whether they may arbitrarily lose rights that have no connection with what they've done. Posted by: | Jun 26, 2008 11:38:02 AM Felons also retain their rights to be free from having troops quartered upon them in peacetime without their consent, and in wartime, except as provided by law, under the Third Amendment; their rights to the due process of law, and freedom from compulsory self-incrimination, under the Fifth Amendment; their rights to have the assistance of counsel, to confront the witnesses against them, and to have their fate in a criminal case determined by a jury, under the Sixth Amendment; their right to a trial by jury in Federal court in an action at common law, where the amount in controversy exceeds $20, under the Seventh Amendment; and their rights to be free from excessive fines, from cruel and unusual punishments, and from having to post excessive bail, under the Eighth Amendment. The Fourteenth Amendment prohibits states from abridging the privileges and immunities of citizens of the United States. Heller does not seem in any way to turn upon citizenship, and thus does not make prohibition possible if one is an alien. Therefore, what is there about the Second Amendment, or about its right to possess weapons one might reasonably use in self-defense (which seems to be the issue the Court would like us to focus on in Heller), that excludes felons from the protection of this provision of the Bill of Rights? Posted by: Greg Jones | Jun 26, 2008 11:45:59 AM There are certain rights that cannot be taken away. 
The "inherent right of self-defense" in one's home may be one of them. There are two separate issues/groups in this debate. First, there are violent felons and non-violent felons (and by non-violent, I mean truly non-violent like perjury, insider trading, tax violations, environmental violations, etc.) Second, there is "inside the home" versus "outside the home". One could picture a grid with two columns and two rows, with each box depicting the validity of taking away the right. In my opinion, non-violent felons, at a minimum, must retain their inherent right to self defense in the home. Outside of the home is not as certain, and this is where the "standard of scrutiny" is important. Violent felons should also maintain their inherent right to self defense in the home. However, IMO, they can lose it outside of the home, where the right is not "most acute." Posted by: DEJ | Jun 26, 2008 11:47:04 AM They did not use language "for example" or "such as" felons or mentally ill. The majority in SCOTUS used clear language on who may be restricted. Lautenberg seems to be in jeopardy, as well as the California "ugly gun" bans. Extremes of licensing requirements for purchasing/possessing a firearm are also out the door. Posted by: Mike | Jun 26, 2008 11:50:27 AM I wonder if we are jumping to conclusions. Did Scalia (in what seems to amount to dicta) foreclose the possibility that some firearm restrictions on felons are unconstitutional (particularly given the reliance on self-defense)? Perhaps he only means they are not necessarily unreasonable...there may be some leeway to require the feds to insert a "reasonable component" to felon firearm bans. (For instance, creating a reasonable application process for non-violent offenders to regain the right, putting time limits on the ban, or requiring the government to exercise reasonable and non-arbitrary discretion in deciding whether a certain convicted felon should be allowed to possess a firearm).
Such reasonable restrictions, IMHO, would be more reasonable than many restrictions placed on convicted sex offenders. I would not foreclose the possibility of someone convicted of a minor non-violent felony decades ago successfully challenging the ban. (Perhaps he could be an otherwise upstanding citizen, have a family of four, and live in a dangerous neighborhood too). Posted by: Nathan | Jun 26, 2008 12:00:39 PM Based on an initial reading of the majority opinion, the real "Heller challenge" lies in challenging a charge under 18 USC 924(c)(1)(A)(i). Why should someone (who is not otherwise a prohibited person) involved in a drug conspiracy face a separate 5 year mandatory minimum for exercising his fundamental right to self defense? Posted by: Anon. Law Clerk | Jun 26, 2008 12:18:40 PM The state is entitled to opine that someone who commits a felony is more likely to use the weapon for illegal purposes than for (lawful) self-defense purposes. Posted by: Steve | Jun 26, 2008 12:19:14 PM Mike - did you actually read the Court's opinion? Scalia specifically states that governments are allowed to require a license for the purchase or possession of a gun as long as they are not being denied for "arbitrary and capricious" reasons. And that was not dicta, but instead it was part of the holding (that DC would be required to grant Mr. Heller a license if he met their criteria). Make no mistake - this sort of thing is exactly why the NRA never wanted this case before the Supreme Court. Far from saying existing gun control laws are in jeopardy, the majority at the end of its opinion actually seems to be calling for more gun control laws to be passed! I wonder which one of the 5 justices was needed to get them to vote that a total ban was out. I repeatedly predicted that this opinion would ultimately be a big zero (and maybe even a hidden victory for gun control advocates because it would legitimize most forms of gun control short of an outright ban).
I think that is exactly what happened. It's really simple. It used to be that the ATF could reinstate firearm privileges, until Congress stopped funding the program in, I believe, 1992. I think it might be time to start funding that program again. The people who apply would most likely be reformed felons and would think twice before committing another felony. Posted by: noway | Jun 26, 2008 12:28:25 PM I'm not surprised at all that the felon in possession laws would not be affected, but are the federal laws banning felons from owning firearms subject to amendment now? I mean there are a lot of nonviolent offenders with old felony convictions, and I mean some are very old, that have straightened out their lives and should not be subject to a blanket federal prohibition. Having said that and taking note of 18 U.S.C. 921 (a) (20), I'm glad I live in Louisiana, where felons can own firearms if they keep out of trouble for 10 years from the date they complete their sentence, and in some cases of nonviolent, non-enumerated offenses they can get the right back upon completion of their sentence. See RS 14:95.1, Article 1 Section 20 of the Louisiana constitution and United States v. Dupaquier. See also the Louisiana first offender pardon statute. Posted by: Paul | Jun 26, 2008 12:31:47 PM I think the greater concern with the opinion is regulation of types of weapons. Scalia recognizes striking down the handgun ban is in tension with the prefatory clause. The majority provides absolutely no guidance on what types of guns may be regulated and what types may not. You may disagree with the majority's statement that this opinion does not disturb the present rulings upholding felon in possession laws, but the Court is fairly clear on that issue. They didn't say "we express no opinion," but said "no doubt should be cast." Rightly or wrongly, five Supreme Court justices have said (albeit in dicta) that the felon in possession cases are still good law notwithstanding Heller.
From page 55 of the slip opinion: We also recognize another important limitation on the right to keep and carry arms. Miller said, as we have explained, that the sorts of weapons protected were those “in common use at the time.” But arguably one of the reasons some of these guns are not "in common use" is because there are federal and state regulations which ban or severely limit their purchase. Apparently, the government may ban some guns, but not others, because they aren't in common use because they are illegal. In other words the guns may be banned because they have been banned, a wholly unsatisfactory conclusion. The court does say that the prohibition of dangerous and unusual weapons is permissible, but provides no guidance on what that means. Unusual suffers from the same problem as "common use." If unusual were enough, D.C. might have claimed pistols were unusual in D.C., and therefore might be banned because they have been banned. Dangerous might be workable, but is problematic. Consider for instance the sawed-off shotgun the Supreme Court considered in the Miller case. The sawed-off shotgun is not, in the ordinary, plain sense of the word, more dangerous. It does not fire faster, and the shorter barrel length decreases the energy of the pellets and diminishes the gun's effective range. But it does make the gun more concealable and so more dangerous in the sense that it is more adaptable to illicit use. But how about a high capacity magazine for a pistol? Does that make a pistol more dangerous? Dangerous might work for some obvious weapons (e.g., an AK-47 is more dangerous than a 9mm semi-automatic pistol), but how dangerous is dangerous enough to regulate? There are significant problems with its application. The majority seem to indicate the fully automatic versions of the M-16 may be banned, but does that mean the Government can ban the civilian semi-automatic versions?
I'm not sure that the civilian version is substantially less dangerous within the meaning of the word. It's more common, but again that may be because federal law has restricted the purchase of fully automatic weapons for some time. Posted by: NK | Jun 26, 2008 12:41:08 PM Mr. Heller is/was a resident of DC with no right to vote or representation in congress and as a consequence had to sue in federal court over a city ordinance? It seems to me the benefits/damage of this case are very limited. Did the supremes select this case because it was narrow to begin with? Posted by: John Neff | Jun 26, 2008 12:54:29 PM What I'm thinking is that congress may now amend the federal firearms law, specifically 922 (g) and 921 (a) (20), so that nonviolent felons can enjoy their Second Amendment right without having to qualify under such strict exemptions as laid out in 921 (a) (20).
Posted by: Paul | Jun 26, 2008 12:58:05 PM Hi, I'm a patriotic blogger, 7th generation American. I'm named for my 7th Great-Grandfather, so really more than 7 generations, but he was riding horses in the revolutionary war to the Continental Congress with important documents and such so they made the statue of him. Pigeons like it. :) Scalia is in felony violation of the USA Patriot Act as he persists in maintaining the false information on terrorism he provided in his recent supreme court decision regarding the Guantanamo and other terrorist detainees. So, how would this square with the (moderately clear) right to vote and losing that right upon the conviction of a felony? Is that also an area in which you would argue that we have a defect in constitutional reasoning, Prof. Berman? Posted by: Jonathan | Jun 27, 2008 1:05:55 PM He wouldn't argue that because he's probably aware that Section 2 of the 14th Amendment provides support for felon-disenfranchisement. Posted by: | Jun 27, 2008 1:34:41 PM DAB, I am a Colorado citizen that was wrongly convicted. Not my opinion, DAB, buddy, I have court transcripts that show it! To the rest: Whoever thought that all Americans have rights clearly has to be stoned. You lose rights at the whim of the system, even if you're not a criminal. They can strip you of more than just the right to defend yourself; they can (and have) strip you of the right to a fair trial. They can strip you of your freedom of religion. The ACLU fought and won a battle over a law that stopped felons from voting. Now they can vote. But the ACLU won't touch the right to defend yourself. We have to do that ourselves. Posted by: Colorado citizen | Jun 28, 2008 6:33:43 PM The now-unconstitutional sentencing guidelines already distinguish between felons. Firearm possession by a felon not convicted of a violent or drug crime is an automatic level 6 as long as the firearm is possessed for a lawful purpose.
It seems a small leap for the courts to allow non-violent felons to possess firearms... Posted by: Matt | Jul 3, 2008 6:16:35 PM Zack - Had you also read the opinion, Scalia quite clearly required a license to be issued to Mr Heller if he was not otherwise barred from one - and Scalia explicitly stated weapons are prohibited to felons and the mentally ill. He wrote that twice in his opinion. Clearly, there is no licensing requirement past that check that would pass his version of "constitutional muster". "Arbitrary and capricious" would be requiring training, the location of your home, your reason for purchase, etc. Fees in order to practice a constitutional right are already unconstitutional. Posted by: Mike | Jul 3, 2008 11:35:59 PM I'm a felon. When I took my Alford Plea to an attempted charge, my alleged crime wasn't considered violent. Because of a change in wording, I'm now, many years later, considered a violent offender. So according to many posters here, before the courts changed my status, it would have been A-OK for me to protect my family, myself, and my business with a firearm, but now that I'm a "violent" offender, I can no longer be trusted not to "go postal" and gun down my fellow citizens? Need I remind you that, were I to decide to break the law by killing people I'd not balk at the idea of illegally possessing a firearm? DAB: I'm just a second-class citizen Posted by: Never been to Whitechapel | Jul 23, 2008 8:45:01 AM I am a convicted felon, the lowest grade, non-violent. I can't protect my family. I was convicted of a non-violent crime, convicted for protection, and because of this I am now a convicted felon. I got 2 years but served 14 months for good behavior. The people I tried to help, well, they had to go to court as people who had the crime forced on them numerous times. The court found nothing wrong with the people; they had the proof, they said. Because I intervened, justice was served.
Homemaker, disabled at the time. Posted by: | Jul 28, 2008 7:33:08 PM How would the court ruling affect my one and only run-in with the law? They confiscated my rifle and made me forfeit it. What about a drug user (marijuana) in possession of a firearm? Legal rifle found in a home safe with a small bag of pot. I have federal sentencing in court next month. Possible 10 years. Is this reason for appeal? Posted by: ed | Jul 28, 2008 9:52:47 PM I've become more in touch with this issue over the past couple of years. I'm 52, and at age 17 in 1973 I was arrested and charged with the offense of burglary. 3 of us broke into a doctor's office. Myself and one of the others were caught. While out on bond I was arrested (not convicted) for possession of a couple of joints. The arrest caused me to be sentenced to 2 years in prison. I served my time and completed parole. At that time I was not particularly concerned about gun rights. I worked and have never been convicted of a crime except for a DUI in 1985. After the DUI I woke up, went to college, graduated with an MSW, and have worked as a clinical social worker in addictions, psychiatry, a college counseling center, and as a family therapist working with juveniles and their parents who are at high risk for criminal convictions. I'm married with no children and own my home in a suburb. We have had a few "strange" visitors but I've not been concerned with the need for a firearm to protect our home. I work with some very rough families and in some tough neighborhoods where self-defense awareness is a necessity. Now the best I can do is duck or accelerate the gas pedal. About 2 years ago I realized that I'm not in the shape I was in the past and if someone attempted to harm me or my family the likelihood of being able to defend us was pretty poor. I applied for a pardon but was denied because the governor is "very conservative". The prosecutor of my case supported my application for the pardon.
I've been researching and educating myself about the felon-with-firearm issue, and my opinion is that the underlying intent of this was to remove access to firearms in the urban ghettos. I consider it to be discriminatory because the law came about during the Civil Rights era and rioting in the late 60's. (BTW, I'm white). I'm adamant that restricting a felon's right to own firearms discriminates against them. The 2nd Amendment is the only right that is entirely restricted for a crime. I wonder if those who use religion as an excuse for their crimes should have their religious freedom restricted. Does an "inciting to riot" or "threat to harm" offense cause someone to lose their right to free speech? It doesn't. I wish this law would be overturned, but I know it's an apostasy to many that a felon should have a 2nd Amendment right. My best hope is a pardon, which is not likely unless I can make a large donation. Posted by: Bruce | Aug 6, 2008 8:37:41 PM Do like I do. I'm a convicted felon but I have firearms. Some laws you just ignore. I'd rather be jailed for shooting someone protecting my home than be 6 ft. under. Just be careful. Posted by: Glenn | Aug 15, 2008 6:11:41 PM Question: Can a member of a felon's family have a gun in the same home as the felon? If not, the member of that family loses their rights. Say man and wife: husband is a felon, wife a non-felon. Posted by: question | Aug 17, 2008 10:21:32 PM My son is a convicted felon and was charged with possession of a firearm, and the bad part is they didn't find a weapon on his person. They found it on the ground in the dark and said it was his. Mind you, this was a crime area where anyone could have put it there because police were around.
Now, they didn't do ballistics on the weapon to see if his fingerprints were on it. They also said he stole it from a police officer in Texas, then turned around and said they made a mistake and it was bought at auction. Someone is not telling the truth. They went by the word of the police, but they lie too; they are just human, like us. Can anyone tell me what the procedure is for convicting a felon with a firearm? A very mad mom at our justice system. Posted by: Tanya | Aug 21, 2008 11:53:45 PM This is tanya again email me [email protected] Posted by: Tanya | Aug 21, 2008 11:57:46 PM Louisiana law: what is the procedure for conviction? Posted by: Tanya | Aug 22, 2008 12:00:40 AM The united states justice system is just as bad as its health care system. When presidents and politicians can lie to the people on prime-time TV, what hope do we have that a normal citizen has a true chance of justice? Unless you're rich and famous or connected, you are out of luck. There is no hope for justice in America. Most people don't see the violations of rights and written law until it's too late. Like me, I thought a trial by jury of your peers was just that, "a trial by a jury of your peers." Well it's not. It's a trial by a jury of prosecutor-supported elite citizens supporting the desired outcome. Posted by: anon | Aug 30, 2008 5:03:12 PM Obviously the ban on felons possessing handguns is a hat-tip by Scalia to the "evolving standards" / "living Constitution" wing of the Court. [[I'm being tongue-in-cheek: I'm sure there is a good originalist argument against felons possessing handguns, but there's no consistent way for the minority in Heller to protest Scalia's conclusion using their own methodology]]. Posted by: AndyK | Aug 31, 2008 2:59:44 PM Scalia and this gang of co-conspirators didn't interpret the Second Amendment. They just patched up a "solution" to please some but essentially left the status quo unchanged.
If Scalia and his cronies had had basic independent judicial reasoning they would have recognized that Congress and the States have No Constitutional authority to "regulate" the Second Amendment because the Amendment grants no such right to either. In fact, nothing in the Second Amendment states that "Congress and the States, or Congress or the States, SHALL have power to enforce this Act by Appropriate Legislation". This is pure Tyranny no matter how Scalia puts it. As for the so-called "Interstate Commerce Clause", this act not only didn't give any right to Congress to "Criminalize" Interstate Commerce (Regulate Only) but it was foreclosed, became null and void, by the passage of the Amendments to the Constitution which clarified the limited power of Congress while expanding the greater power of the States and the People of every State. Scalia's mind is degenerating rapidly and has limited legal knowledge of what's really happening. When he says that "nothing in our opinion should be taken to cast doubt on longstanding prohibition on the possession of firearms by felons and the mentally ill." he is not telling the truth. What he says only applies to people convicted of federal Felonies and not to those convicted of State Crimes, who can by application of State Laws easily regain the right to vote, to run for office, and to serve in a jury, thereby re-acquiring the right to possess firearms and even satisfy federal rules. I believe that now it's time for a person with a federal felony to come forward and generate another review on this case. Otherwise at some point, possessing anything manufactured in another State could be a Felony for people already convicted of a prior felony!!! Posted by: Allisio Rex | Sep 19, 2008 2:48:40 PM Job: history researcher. "Law enforcement agencies and personnel have no duty to protect individuals from the criminal acts of others; instead their duty is to preserve the peace and arrest law breakers for the protection of the general public." (Lynch v.
NC Dept. Justice) ". . . a government and its agents are under no general duty to provide public services, such as police protection, to any particular individual citizen."--Warren v. District of Columbia, 444 A.2d 1 (D.C. App. 1981) If there is an individual right to self-protection, and there is no right to police protection, then how can the felon exercise the right to self-protection where police protection is denied? Do they become wards of the state, given a right to police protection, or does the state relinquish their right to self-protection, while simultaneously maintaining their power to not protect? There is a moral conundrum there. Are they less worthy of protection, or self-protection? Further, where they are pushed to the edges of society, is it any better for them, where they live in a more dangerous environment, to lack the power as well as the right to self-protection? If they can be punished for exercising the right to self-protection, is it still a right? My understanding, which is not from personal experience at all but just from talking with judges who do these, is that they issue hundreds or thousands of these, and that all the movants have to do is come in and say, basically, "I'm afraid of my boyfriend or my husband," and they get this order. Now, is there more required than that? The movant was found guilty based on no proof being required, per the above statement by federal court judge Shubb. Court record transcript of Jan 7th 2005, pages 155 to 167, and Jan 7th 2005, page 169: Court: "But what is your understanding of what has to be proven at the hearing, other than the fact that the woman is afraid of him?" Attorney Smith (government): "The statute does not speak to that; it gives the judge discretion." Posted by: larry | Oct 10, 2008 6:27:15 PM I am a convicted felon. Student Financial Aid Fraud. Can anyone make a legitimate argument as to how this affects my ability to safely and legally operate a firearm?
I was convicted of unauthorized use of a motor vehicle at age 18 in 1966 in Virginia. Recently the Governor of Virginia restored all my rights except the right to own a gun. Virginia says get it from my residence state. My residence state (La.) says get it from Virginia. Now I live and work in a hurricane-prone area and need to be on the job as essential personnel (federal) for hurricanes. Post-Katrina it was very scary to be in New Orleans with no self-defense. If anyone reading this decides they need a guinea pig to challenge this unfair denial of non-violent felons' right to own a gun, please contact [email protected] Posted by: jack | Nov 24, 2008 12:05:48 PM I'm retired United States Merchant Marine with 28 years' service, presently a federal blue-collar employee with over 12 years' service, aged 60, and still can't have a gun for home defense, etc. Posted by: jack | Nov 24, 2008 12:18:40 PM I too, am a convicted felon. The image that most people think of when someone says the term "convicted felon" is some nefarious person lurking in the shadows waiting to do evil things to others. But the reality is that as the web of laws grows larger and larger, so does the corral of malum prohibitum laws and victimless crimes for which we may all be prosecuted. In the early days, being a member of congress was only a part time job! I was convicted my second semester of college for possessing a small amount of marijuana with the intent to sell it to my peers. I spent four months incarcerated. A few years earlier, when I was 16, I was convicted for burglarizing a house I had never set foot into. I was guilty because I was an unwitting accomplice and told the authorities what had happened. Needless to say, the sum of my experiences on the wrong side of the justice system has been illuminating. The purpose of my post isn’t to rant about the lack of justice in the system.
Rather, I would like to shed some light on why the notion of a blanket ban on firearm possession by felons is a violation of their constitutional rights, which may safely be restored to them. The good news is that I got my life together. I graduated university with honors in the field of business and was hired by a large Fortune 500 company before ultimately pursuing a career in real estate. I now own a real estate brokerage and a manufactured home dealership. The process for obtaining these licenses took me 5 years while the state contemplated whether or not I was fit to oversee other licensees and ensure the public’s best interest. But, I’m a patient person and at last was allowed to operate a brokerage & dealership. With a little extra paperwork, I can meander through most encounters with the government. I have a steady girlfriend and am held in high regard by my peers, who are all accomplished professionals. My actions are that of an upstanding citizen. When Katrina hit, I watched the looting and the complete chaos erupt. Between Katrina and the attacks on 9-11, I began to realize that it is inevitable that events will take place in our lives for which our government cannot be there for us every time. However, I can be there every time, and therefore the ownership of firearms would be a reasonable exercise of one’s 2nd amendment rights. Only, I don’t have them anymore. So should I be predisposed to becoming the victim of a violent crime because of a couple of past convictions dating back over 14 years? I’ve kept my nose clean and even excelled in life. Moreover, had all this not taken place in Kalifornia, but rather another state, I would once again be able to own firearms. So Americans’ Right to Bear Arms does not seem equally protected under the law. Though it would be very easy to circumvent the law and possess a firearm, I have chosen to do the right thing. A pardon looks unlikely according to my former gun-rights attorney.
I’ve even looked into joining the military, as they have expressed the potential for restoring my rights in exchange for service. I’m keeping that option on the table, but at this stage of my life, and being the only kin responsible for the care of my 91 year old grandmother, I can’t help but wonder if there isn’t another way… I am currently looking for a solution to my legal issues. What must I do? What challenges must I overcome? Who can help? Posted by: Flavio | Dec 1, 2008 4:12:52 AM Was there a racist reason the Gun Control Act of 1968 was passed? Do the Black Panthers ring a bell? What about drug laws? Was there a racist element in passing those? It's a shame that the war on drugs and denying felons the right to protect themselves can come from laws that have an element of racism to them. If you don't know what I'm talking about: the GCA of 1968 was also called the "keep guns from n@ggers act". Posted by: | Dec 19, 2008 12:06:10 AM Here's a more poignant question: if in some states felons do not have the legal right to say no to a search without a warrant, what exactly are the legal rights of a felon? Posted by: Eric Holt | Mar 12, 2009 11:06:45 AM Scalia is wrong. Felons, even those who have committed violent felonies, do get back firearm rights following a State conviction, either by operation of State laws or by application. Only federal felonies preclude such remedies. This is unequal treatment under our laws. It's clear now that States finally protect our Civil Rights while Congress "stomps" on them. What a reversal! I would like to add that it is a common understanding shared by Constitutional lawyers that Congress has No authority to regulate firearms within a State, not only by virtue of the Second Amendment but of all the enumerated Amendments, which place a heavy restriction on what Congress can do. In fact the Amendments to the Constitution fully render the so-called "Interstate Commerce Clause" null and void.
Posted by: Geaorge Alleni | Apr 30, 2009 2:09:56 PM Hello, Whatever happened to the movement to restore rights after a sentence has been served? Does anyone know? Interesting post. I am a prior felon who lives in a not-so-good area of town (as most felons do, because it is hard to find a job with a record). I have spent many hours researching this issue. I would like to own a firearm to protect my home, but legally I can't and don't. I can't risk the consequences of owning one. Many years ago I asked my probation officer how I was to defend my home and he said "call the police" or "hide". I told him that the response time from the police in my area was about an hour and he told me "I better hide good then!". He thought it was unfair that we are not allowed to defend ourselves as well. The whole thing is ridiculous. I made a mistake many years ago in my life and now I am to be a second class citizen my whole life. I am a legal, productive citizen involved in politics (I have restored my voting rights) and family life. I am not the person I was a decade ago. I can't believe people that say they are for gun rights and then say they believe in "restrictions" as a form of infringement that directly contradicts the second amendment; they engage in "double speak" and "double think". We really only possess the rights that the lowest class of citizen in our country does. We set a dangerous trend when we start having classes of citizens in this country. Just remember that when they can restrict my rights, they set the precedent so they can restrict, and will eventually restrict, yours. Posted by: Chris | Jul 24, 2009 1:03:33 PM I am a legal researcher by occupation, and a felon. Here is one for you guys. I am preparing to bring suit against the feds for denying my right to firearms. I was convicted of one felony 15 years ago. Ten years ago Ohio granted me a Relief of Disability, O.R.C.
2923.14, which states I am "allowed to own and possess firearms as allowed by state and federal law...this does not apply to dangerous ordnance." In my NICS appeal, the feds denied me based on the "unless clause" of 922, stating that the state limited my firearms ownership by not granting dangerous ordnance. According to O.R.C. 2923.11, dangerous ordnance is exactly what the feds require all citizens to apply for with a class 3 form 4. Therefore, the state does not even have the power to grant permission for ownership or possession of dangerous ordnance. This creates several issues. First, the feds are denying my rights based on the state not granting me rights that THEY say the state does not have the power to grant! Next, the will of the Ohio legislature was to allow me to petition the court to be allowed to own and possess firearms, which I did and was subsequently granted. The feds' denial of my rights is a direct defiance of the will of the Ohio legislature. Also, the Gun Control Act itself is not constitutional. How does the interstate commerce clause give the feds the right to regulate the use and ownership of the product once the interstate commerce is complete? How about product items that were not shipped interstate but were strictly intrastate? (see Firearms Freedom Act and current Montana challenge). I am seriously doing this, working closely with my attorney, lobbyists, and sovereignty and firearms groups. If anyone would like to help me in developing a defense or whatever, please respond and email me. [email protected] My book, entitled The Second Amendment: the State, the Felon, the Right to Keep and Bear Arms, can be found at: www.scribd.com. It covers pragmatic experiences about the use and necessity of a firearm and the arbitrary state laws in Ohio and other states that define the soft felon, those that have never gone to prison, as one equal in temper and attitude to that of a murderer, rapist, or burglar.
For the most part the soft felon is normally not a recusant. Such behavior should be viewed as socially beneficial rather than inimical. Usually unbridled circumstances are the cause of the soft felon's criminal misfortunes. That is, external or perhaps internal forces, or both, beyond his or her immediate control. All soft felons need to band together to form a coalition for the purpose of the restoration of all civil rights in the USA and the right to keep and bear a handgun in the home for personal protection against unwanted intrusion. You can also go to www.webcommentary.com and read my commentaries on the subject as well. Chris, I feel your pain and wish you the best. It's starting to become more and more like a dictatorship rather than a free democracy when the feds are constantly trying to find loopholes in the system to attack our right to self defense. Instead of concentrating on making laws that work against the criminals of America, they are restricting law-abiding citizens' right to self defense. Ever research "The Black Laws"? Most enlightening on this and several other points of state-created ability to "lose" rights (as opposed to having privileges withdrawn or withheld). By the way, I am not "Black", just an observer of the ways bad laws become ensconced and worse. Posted by: Throsso | Aug 17, 2010 12:22:24 PM "The right to keep and bear arms shall not be infringed." There is no exception made for "convicted felons" of any stripe. ALL gun control is unreasonable and UNCONSTITUTIONAL. An unconstitutional statute is null and void. Cowards should protect themselves instead of leaving it to their government ass-wipers. Felon or not, buy a firearm (no NICS on a private transaction) and learn to use it against ANYONE who would try to harm you. REGARDLESS of the outfit they wear or authority they claim.
Bottom line, it's better to be judged by twelve than carried by six (and the loved ones who trust their lives to YOU would agree). "It is the duty of all good men to disobey unjust laws." Posted by: Thoroughly Provoked | Dec 28, 2010 7:06:40 PM Chris said: "I would like to own a firearm to protect my home, but legally I can't and don't. I can't risk the consequences of owning one." Consequences? In prison on your child's birthday, or free at your child's funeral. The choice is yours. Posted by: Thoroughly Provoked | Dec 28, 2010 7:20:42 PM If something is a right, no institution can legally take it away, so if felons don't have a right to keep and bear arms it is safe to assume that they are no longer "citizens" and therefore are either slaves or non-entities. It is my humble opinion that the second amendment does not say "unless the government doesn't want you to". If an institution or government entity has to give you permission to do something, it then becomes a privilege and the government has placed itself incorrectly above you, who should by law be the controller or boss of the government. Posted by: Eric Holt | Jan 14, 2012 3:05:50 PM An American: FYI to all out there, didn't see anyone mention this: the federal definition of "firearm" excludes anything from before 1898, including 'modern replicas'. So we have the right to keep and bear arms until 1898, anyway. That's some deadly weaponry, if you think about it. Also see "United States vs Simmons"; you might not be as felonious as you thought... Posted by: john m. | Aug 7, 2012 5:19:47 AM Post a comment In the body of your email, please indicate if you are a professor, student, prosecutor, defense attorney, etc. so I can gain a sense of who is reading my blog. Thank you, DAB
EU regulator recommends suspension of drugs over Indian data LONDON, Jan 23 (Reuters) - Europe's drug regulator said on Friday it had recommended the suspension of a number of drugs which were approved on the basis of clinical studies conducted at GVK Biosciences in Hyderabad, India. The European Medicines Agency said the recommendation was based on findings from an inspection that raised concerns about how GVK conducted studies at the Hyderabad site on behalf of the pharmaceutical companies. The move follows a similar decision from state regulators in France, Germany, Belgium and Luxembourg in December to suspend the marketing approval of 25 generic drugs due to concerns over the quality of data from clinical trials conducted by the Indian firm.
Western Savings and Loan Western Savings and Loan was an American financial institution founded by the Driggs family. The Driggs family came to Arizona in 1921 after trading everything they owned—a bank, drugstore, hotel, and wheat farm in Driggs, Idaho—for a section of cotton land in Maricopa County. Their timing was unfortunate as cotton prices plummeted just as their crop came in and they were forced to take jobs selling building and loan certificates. In 1929, the Driggs family pooled $5,000 to found the Western Building and Loan Association, which became Western Savings. Success and eventual failure Western Savings and Loan eventually became a $6 billion savings and loan institution. Western shared a position on the list of the nation's 100 largest savings and loans with other Arizona-based institutions — MeraBank was number 27 on the list, Western came in at 37th, Great American was 67th, and Pima was 82nd. But in 1989, Western Savings moved into second place — not for its size, but for the amount of its losses, with a $1.06 billion net deficit, following a substantial but smaller loss the previous year. Western Savings was taken over by the Resolution Trust Corporation, the federal receiver created for the savings and loan crisis bailout, in June 1989. In June 1990, Bank of America paid the Resolution Trust Corporation $81 million for Western Savings' $3.5 billion in deposits in 60 branches in Arizona and one branch in Salt Lake City, Utah, and converted the thrift into a commercial bank called Bank of America Arizona. In 1995, President Gary Driggs pleaded guilty to two felony charges and was fined $10,000 and placed on probation for five years. Accomplishments and recognition American Newcomen honored Western Savings and Loan Association in the year of the company's 40th anniversary.
Since it was formed by the Driggs family in the Spring of 1929, six months before the historic stock crash, Western Savings had at that time grown to become the largest savings and loan association in Arizona and among the 100 largest in the United States. The major objectives of the association were the encouragement of thrift and the promotion of home ownership. Over the years up to that point, more than 30,000 first mortgage loans had been made, totaling more than 390 million dollars, thus providing an important factor in the growth and development of Arizona. References External links Category:Defunct banks of the United States Category:Banks established in 1929 Category:1929 establishments in Arizona Category:Banks with year of disestablishment missing Category:Savings and loan crisis Category:Banks disestablished in 1990 Category:1990 disestablishments in Arizona
Singlet-triplet gaps in large multireference systems: spin-flip-driven alternatives for bioinorganic modeling. The proper description of low-spin states of open-shell systems, which are commonly encountered in the field of bioinorganic chemistry, rigorously requires using multireference ab initio methodologies. Such approaches are unfortunately very CPU-time consuming as dynamic correlation effects also have to be taken into account. The broken-symmetry unrestricted (spin-polarized) density functional theory (DFT) technique has been widely employed up to now to bypass that drawback, but despite a number of relative successes in the determination of singlet-triplet gaps, this framework cannot be considered as entirely satisfactory. In this contribution, we investigate some alternative ways relying on the spin-flip time-dependent DFT approach [Y. Shao et al. J. Chem. Phys. 118, 4807 (2003)]. Taking a few well-documented copper-dioxygen adducts as examples, we show that spin-flip (SF)-DFT computed singlet-triplet gaps compare very favorably to either experimental results or large-scale CASMP2 computations. Moreover, it is shown that this approach can be used to optimize geometries at a DFT level including some multireference effects. Finally, a clear-cut added value of the SF-DFT computations is drawn: if pure ab initio data are required, then the electronic excitations revealed by SF-DFT can be considered in designing dramatically reduced zeroth-order variational spaces to be used in subsequent multireference configuration interaction or multireference perturbation treatments.
---
abstract: 'The ionic-liquid-gating technique can be applied to the search for novel physical phenomena at low temperatures because of its wide controllability of the charge carrier density. Ionic-liquid-gated field-effect transistors are often fragile upon cooling, however, because of the large difference between the thermal expansion coefficients of frozen ionic liquids and solid target materials. In this paper we provide a practical technique for setting up ionic-liquid-gated field-effect transistors for low-temperature measurements. It allows stable measurements and reduces the electronic inhomogeneity by reducing the shear strain generated in frozen ionic liquid.'
author:
- Yamaguchi Takahide
- Yosuke Sasama
- Hiroyuki Takeya
- Yoshihiko Takano
- Taisuke Kageura
- Hiroshi Kawarada
bibliography:
- 'EDLTsetupbib.bib'
title: 'Ionic-liquid-gating setup for stable measurements and reduced electronic inhomogeneity at low temperatures'
---

Introduction
============
Electric-field-induced superconductivity has been obtained in various materials by using ionic-liquid gating.[@Ye10; @Bol11; @Uen11; @Len11; @Dub12; @Ye12; @Jo15; @Shi15; @Sai15; @Lu15; @Li16; @Cos16; @Zen18] However, there is a practical problem for such low-temperature measurements on EDLTs: frozen ionic liquids often fracture at low temperatures. This induces some detrimental effects on measurements, such as a sudden jump in the resistance-temperature curve (Fig. 1). A large electronic inhomogeneity possibly due to the local detachment of frozen ionic liquid from the sample has also been reported for WS$_2$ and MoS$_2$ EDLTs.[@Jo15; @Cos16] These problems are presumably due to the shear strain caused by the large difference in the thermal expansion coefficient between the frozen ionic liquid and the target sample or its substrate. In this paper we introduce an experimental technique that suppresses the shear strain and leads to stable measurements on EDLTs at low temperatures. Our setups for diamond and silicon EDLTs[@Yam13; @Yam14; @Yam16; @Sas17; @Sas172] are shown as examples. This technique will allow stable and efficient low-temperature experiments with EDLTs and studies of high-quality samples with reduced electronic inhomogeneity.

Results and discussion
======================

A key feature of our setup is a counter plate placed above the sample/substrate surface (Fig. 2). Ionic liquid is inserted between the counter plate and sample/substrate surface. A similar setup has been used in previous experiments.[@Bol11; @Dub12] We propose here that an adequate spacing between the sample/substrate surface and the counter plate can reduce the shear strain that appears when the device is cooled. Our idea is to compensate for the cooling-induced shrinkage of frozen ionic liquid in a mechanical manner by using the shrinkage of the counter plate support.
If the shrinkage of the support along the z axis is larger than that of the ionic liquid, the ionic liquid is compressed along the z axis and expands along the xy plane. If the shrinkage of the ionic liquid along the xy plane due to cooling cancels this expansion, there should be no shear strain along the xy plane. The counter plate can be used as a gate electrode if its surface is electrically conductive and proper wiring is made.

![Temperature dependence of the sheet resistance of 10 different diamond EDLTs. The resistance was measured with a four-point configuration. From top to bottom at the lowest temperature, the samples are numbered as B1-10. The surface orientation, ionic liquid, and gate voltage are as follows. B1: (100), DEME-TFSI, $-1.8$ V; B2: (100), DEME-BF$_4$, $-1.8$ V; B3: (111), DEME-BF$_4$, $-1.8$ V; B4: (100), DEME-TFSI, $-1.49$ V; B5: (100), DEME-BF$_4$, $-1.44$ V; B6: (100), TMPA-TFSI + HTFSI, $-2.58$ V; B7: (100), DEME-BF$_4$, $-1.96$ V; B8: (111), DEME-BF$_4$, $-2.4$ V; B9: (111), DEME-BF$_4$, $-2.2$ V; B10: (111), DEME-BF$_4$, $-1.8$ V. Boron-doped diamond was used as source, drain, and gate electrodes for B6. The diamond surface of B10 is atomically flat.[@Yam14] The curves measured while the sample was cooled down and warmed up are both shown for B2 and B10. (Gray thick lines are for warming up.) The resistance variation for different samples despite similar gate voltages is attributed to different amounts of charged adsorbates on the diamond surface.[@Yam13]](Fig4_rev.pdf){width="6.5truecm"}

Let us examine the adequate spacing between the sample/substrate surface and the counter plate. We assume that the counter plate and its support are thick and rigid enough that they are not deformed by external force. We also assume that the thermal expansion coefficient of the sample (or substrate) is small and can be neglected.
This is the case when diamond or silicon is used as a substrate or a sample, because at temperatures below 293 K the thermal expansion coefficients for diamond[@Sto11] and silicon[@Mid15] are less than $1{\times}10^{-6}$ and $3{\times}10^{-6}$ (K$^{-1}$), respectively, which are smaller than those of most other materials. This assumption is only for simplification of the calculation shown below. With a straightforward modification, our scheme can be applied to any material. The length variations of the frozen ionic liquid along the x direction due to the temperature change ${\Delta}T$ and along the z direction due to the mechanical force are given by $$\begin{aligned} \frac{{\Delta}x_\mathrm{IL}}{x_\mathrm{IL}}=\alpha_\mathrm{IL}{\Delta}T-{\sigma}_\mathrm{IL}\frac{{\Delta}z_\mathrm{IL}}{z_\mathrm{IL}},\\ {\Delta}z_\mathrm{IL}=-z_\mathrm{IL}\alpha_\mathrm{IL}{\Delta}T+z_\mathrm{sup}\alpha_\mathrm{sup}{\Delta}T.\end{aligned}$$ Here ${\alpha}_\mathrm{IL}$ and ${\alpha}_\mathrm{sup}$ are the thermal expansion coefficients of the frozen ionic liquid and the counter plate support, and ${\sigma}_\mathrm{IL}$ is the Poisson’s ratio of the frozen ionic liquid. For the shear strain caused by the temperature change ${\Delta}T$ to be minimized, $$\begin{aligned} \frac{{\Delta}x_\mathrm{IL}}{x_\mathrm{IL}}=0.\end{aligned}$$ Then, $$\begin{aligned} z_\mathrm{IL}=\frac{{\sigma}_\mathrm{IL}}{1+{\sigma}_\mathrm{IL}}\frac{\alpha_\mathrm{sup}}{\alpha_\mathrm{IL}}z_\mathrm{sup}.\end{aligned}$$ The value of Poisson’s ratio $\sigma$ for most materials is $0.3-0.4$. We use brass and copper for the support of the counter plate. The thermal expansion coefficient of copper is $10{\times}10^{-6}$ (K$^{-1}$) at 100 K and $15{\times}10^{-6}$ (K$^{-1}$) at 200 K.[@Nix41] It is difficult to find data on the thermal expansion of frozen ionic liquids at temperatures below their freezing point.
We directly observed the thermal contraction of droplets of ionic liquid (DEME-BF$_4$; freezing point: 238 K, melting point: 282 K[@Kim05]) on a hydrogen-terminated diamond surface under an optical microscope \[Figs. 1(d) and 1(e)\]. The diameter of the droplets shrank by $0.7{\pm}0.1{\%}$ when the temperature decreased from 200 to 8 K. If we assume that the thermal expansion coefficient of the frozen ionic liquid depends linearly on temperature, it is estimated to be $(35{\pm}5){\times}10^{-6}$ (K$^{-1}$) at 100 K and $(70{\pm}10){\times}10^{-6}$ (K$^{-1}$) at 200 K, which we use in the following calculation. These values are in the same range as those of organic charge-transfer salts, which are $(40-80){\times}10^{-6}$ (K$^{-1}$) at 100 K and $(40-80){\times}10^{-6}$ (K$^{-1}$) at 200 K.[@Mul02; @Sou08; @Fou13] The height $z_\mathrm{sup}$ of the support of the counter plate is $0.45-0.5$ mm in our experimental setup for diamond EDLTs. If we use these values, the thickness $z_\mathrm{IL}$ of ionic liquid should be $30-50$ $\mu$m for 100 K and $20-40$ $\mu$m for 200 K to minimize the shear strain. If a softer material with a larger $\alpha_\mathrm{sup}$ is used for the support (for example, polymer), then it is better to increase the ratio $z_\mathrm{IL}/z_\mathrm{sup}$. If the sample/substrate is fixed on the sample holder using adhesive tape, its large thermal expansion coefficient should also be taken into consideration. An optical microscope image of our setup for a diamond EDLT is shown in Fig. 2(c). The diamond is fixed using two copper claws, without the use of adhesive tape. As a counter plate, we used a Ti/Pt or Ti/Au deposited glass (or silicon) plate or a diamond substrate with a boron-doped layer on the surface. Here the counter plate also acted as a gate electrode. The thickness of the diamond differed from sample to sample because the original thickness of the diamond substrate and the amount of surface polishing differed. 
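As a numerical cross-check, the two estimates above can be reproduced in a few lines. The script below is only a sketch (the variable names are ours); it assumes, as in the text, that $\alpha_\mathrm{IL}(T)$ depends linearly on $T$, and then applies Eq. (4) with the values quoted for the copper-supported diamond EDLT at 100 K.

```python
# Back-of-envelope checks of the numbers quoted in the text (a sketch;
# all inputs are the values stated above).

# (1) Thermal expansion of the frozen ionic liquid from the droplet data:
# a 0.7% diameter shrinkage between 200 K and 8 K with alpha_IL(T) = c*T
# implies  c * (200**2 - 8**2) / 2 = 0.007.
c = 2 * 0.007 / (200**2 - 8**2)
alpha_il_100K = c * 100   # ~35e-6 K^-1, as quoted
alpha_il_200K = c * 200   # ~70e-6 K^-1, as quoted

# (2) Ionic-liquid thickness that minimizes shear strain, Eq. (4):
# z_IL = sigma_IL / (1 + sigma_IL) * (alpha_sup / alpha_IL) * z_sup.
def z_il(sigma_il, alpha_sup, alpha_il, z_sup):
    """Zero-shear-strain ionic-liquid thickness (same units as z_sup)."""
    return sigma_il / (1.0 + sigma_il) * alpha_sup / alpha_il * z_sup

# Copper support at 100 K: alpha_sup = 10e-6 K^-1, z_sup = 0.45-0.50 mm,
# sigma_IL = 0.3-0.4, alpha_IL = (35 +/- 5)e-6 K^-1.
z_min = z_il(0.3, 10e-6, alpha_il_100K + 5e-6, 0.45)  # mm
z_max = z_il(0.4, 10e-6, alpha_il_100K - 5e-6, 0.50)  # mm
print(f"alpha_IL(100 K) ~ {alpha_il_100K:.1e} K^-1")
print(f"z_IL(100 K) ~ {z_min * 1e3:.0f}-{z_max * 1e3:.0f} um")
```

The corners of the parameter ranges come out near $26-48$ $\mu$m, roughly consistent with the $30-50$ $\mu$m quoted above for 100 K.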
We adjusted the spacing between a sample and counter plate to be ${\approx}20-30$ $\mu$m each time by using some metal spacer plates with different thicknesses: 20, 30, 40, 50, 80, and 100 $\mu$m. This sample holder was also designed so that it can be sealed with indium in an Ar-filled glove box[@Bon00; @Bon01; @Brass] to prevent water contamination of the ionic liquid (Fig. 3). We have not observed any significant jumps in the temperature dependence of resistance of diamond EDLTs and could perform stable measurements at low temperatures with this setup. The temperature dependence of the resistances of ten different diamond EDLTs is shown in Fig. 4. The curves vary in a monotonic manner, although a few curves cross, possibly due to the difference in the surface crystallographic orientation. Furthermore, there is almost no difference between the resistance-temperature curves measured while the sample is cooled down and warmed up. This indicates that the local detachment of ionic liquid[@Jo15] is negligible during the thermal process. We observed an electric-field-induced insulator-metal transition and Shubnikov-de Haas oscillations in diamond with this setup.[@Yam13; @Yam14] An anomalous low-temperature magnetotransport of the electric-field-induced charge carriers was also observed in diamond with the (100) surface.[@Yam16] ![Optical image of a sample holder for silicon EDLTs. (a) The main part of the sample holder. (b) The lid of the sample holder. (c) A silicon EDLT ready for low-temperature measurements. First, small pieces of indium for electrical wiring and the seal of the sample holder are placed on the lid. Then, a silicon chip with a hydrogen-terminated channel, Hall bar electrodes, and a gate electrode[@Sas17; @Sas172] is fixed on the main part of the sample holder by a copper claw in an Ar-filled glove box. After a drop of ionic liquid is applied, the lid is screwed on.
This completes the electrical wiring, the sealing of the sample holder, and the insertion of the ionic liquid between the silicon and counter plate (lid) in a single step. The dimensions of the silicon chip are approximately 6.0 mm $\times$ 6.0 mm $\times$ 0.38 mm. (d) Optical image of the Hall bar and gate electrode on the silicon chip.](Fig5_rev.pdf){width="7truecm"} We performed a study of silicon EDLTs as well[@Sas17; @Sas172]. Another type of sample holder (Fig. 5) was fabricated for the silicon EDLTs for the following reasons. The silicon surface of the channel of the EDLTs is hydrogen-terminated to reduce the trap density. This hydrogen termination is crucial for the device operation[@Sas17; @Sas172] but, unlike the hydrogen termination of the diamond surface, is easily destroyed by air exposure. Therefore, the electrical wiring between the sample and sample holder cannot be performed in air for the silicon EDLTs. The sample holder is designed so that the electrical wiring can be performed using small pieces of indium in an Ar-filled glove box. The sample holder can also be sealed with indium in the glove box. This sample holder is made of polychlorotrifluoroethylene (PCTFE) and the lid acts as a counter plate. The counter plate support consists of $\approx$0.40 mm thick PCTFE and $\approx0.1-0.15$ mm thick indium: $z_\mathrm{sup}{\approx}0.50-0.55$ mm. The thermal expansion coefficient of PCTFE is $34{\times}10^{-6}$ (K$^{-1}$) at 100 K and $47{\times}10^{-6}$ (K$^{-1}$) at 200 K.[@PCTFE] The coefficient ($\alpha_v/3$) for indium is $27{\times}10^{-6}$ (K$^{-1}$) at 100 K and $28{\times}10^{-6}$ (K$^{-1}$) at 200 K.[@Smi64] Using Eq. 4, $z_\mathrm{IL}$ for the minimized shear strain is estimated to be $100-170$ $\mu$m for 100 K and $60-110$ $\mu$m for 200 K. We set $z_\mathrm{IL}\approx120-170$ $\mu$m in the actual setup.
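For the layered PCTFE-plus-indium support, the term $z_\mathrm{sup}\alpha_\mathrm{sup}{\Delta}T$ in Eq. (2) becomes a sum over the layers of the stack. This generalization is implicit in the estimate above; the sketch below (our notation, with the values quoted in the text for 100 K) applies it.

```python
# Eq. (4) generalized to a layered counter-plate support: alpha_sup * z_sup
# is replaced by the sum of alpha_i * z_i over the layers of the stack.

def z_il_layered(sigma_il, layers, alpha_il):
    """layers: iterable of (alpha, thickness) pairs for the support stack;
    returns the zero-shear-strain ionic-liquid thickness in the same
    units as the layer thicknesses."""
    support_shrink = sum(alpha * z for alpha, z in layers)
    return sigma_il / (1.0 + sigma_il) * support_shrink / alpha_il

# Values quoted for the silicon-EDLT holder at 100 K (thicknesses in mm):
# PCTFE 34e-6 K^-1, ~0.40 mm; indium 27e-6 K^-1, 0.1-0.15 mm;
# sigma_IL = 0.3-0.4, alpha_IL = (35 +/- 5)e-6 K^-1.
z_lo = z_il_layered(0.3, [(34e-6, 0.40), (27e-6, 0.10)], 40e-6)  # mm
z_hi = z_il_layered(0.4, [(34e-6, 0.40), (27e-6, 0.15)], 30e-6)  # mm
print(f"z_IL(100 K) ~ {z_lo * 1e3:.0f}-{z_hi * 1e3:.0f} um")
```

The corners come out near $94-168$ $\mu$m, matching the $100-170$ $\mu$m range quoted above.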
By using this setup and using ion implantation underneath the electrodes to reduce the contact resistance, we were able to measure detailed low-temperature transport properties of silicon EDLTs[@Sas172] (Fig. 6). The proposed method minimizes the shear strain in frozen ionic liquid, but the perfect elimination of this strain over the entire temperature range is difficult. This is because the temperature dependences of $\alpha_\mathrm{IL}$ and $\alpha_\mathrm{sup}$ generally differ. There may still be small remaining strains at low temperatures. The Shubnikov-de Haas oscillations observed in diamond EDLTs suggest a spatial inhomogeneity of charge carrier density and mobility at low temperatures.[@Yam14] Further work is necessary to elucidate whether this inhomogeneity has an intrinsic origin[@DezArXiv] or is caused by local distortion of the frozen ionic liquid due to residual shear strains. Detailed measurements of the thermal expansion coefficient and Poisson’s ratio of ionic liquids at different temperatures are also awaited. It may be possible to further reduce shear strain by setting the spacing $z_\mathrm{IL}$ so that the integral of $(1/x_\mathrm{IL})(\mathrm{d}x_\mathrm{IL}/\mathrm{d}T)$ (which depends on $\alpha_\mathrm{IL}(T)$, $\sigma_\mathrm{IL}(T)$, and $\alpha_\mathrm{sup}(T)$) between the temperature of interest and the freezing temperature of the ionic liquid would be zero.

Conclusions
===========

We proposed a practical method to reduce shear strain in frozen ionic liquid for stable measurements of electric double layer transistors at low temperatures. The reduction of shear strain was achieved by compensating for the cooling-induced shrinkage of frozen ionic liquid in a mechanical way using a counter plate and its support. The simple setup will be used for various materials and allow stable and efficient experiments at low temperatures.
In particular, it prevents frozen ionic liquid from detaching from the sample surface and thus prevents the device breakdown due to cooling. It will also reduce the electronic inhomogeneity caused by the shear strain and thus help to study more intrinsic properties of the target materials.

Acknowledgments
===============

We appreciate helpful comments from Y. Ootuka and thank E. Watanabe, H. Osato, D. Tsuya, S. Hamada and S. Tanigawa for device fabrication in the early stage of this study. This study was supported by Grants-in-Aid for Fundamental Research (Grant Nos. 25287093 and 26220903) and the “Nanotechnology Platform Project” of MEXT, Japan.
{ "citation": "@inproceedings{Keysers2020,\n title={Measuring Compositional Generalization: A Comprehensive Method on\n Realistic Data},\n author={Daniel Keysers and Nathanael Sch\"{a}rli and Nathan Scales and\n Hylke Buisman and Daniel Furrer and Sergii Kashubin and\n Nikola Momchev and Danila Sinopalnikov and Lukasz Stafiniak and\n Tibor Tihon and Dmitry Tsarkov and Xiao Wang and Marc van Zee and\n Olivier Bousquet},\n booktitle={ICLR},\n year={2020},\n url={https://arxiv.org/abs/1912.09713.pdf},\n}", "description": "The CFQ dataset (and it's splits) for measuring compositional generalization.\n\nSee https://arxiv.org/abs/1912.09713.pdf for background.\n\nA note about the validation set: Since it has the same distribution as the test\nset and we are interested in measuring the compositional generalization of a\n*model* with respect to an *unknown* test distribution we suggest that any\ntuning should be done on a subset of the train set only (see section 5.1 of the\npaper).\n\nExample usage:\n\n```\ndata = tfds.load('cfq/mcd1')\n```", "downloadSize": "267599061", "location": { "urls": [ "https://github.com/google-research/google-research/tree/master/cfq" ] }, "name": "cfq", "schema": { "feature": [ { "domain": "query", "name": "query", "presence": { "minCount": "1", "minFraction": 1.0 }, "shape": { "dim": [ { "size": "1" } ] }, "type": "BYTES" }, { "domain": "question", "name": "question", "presence": { "minCount": "1", "minFraction": 1.0 }, "shape": { "dim": [ { "size": "1" } ] }, "type": "BYTES" } ], "stringDomain": [ { "name": "query", "value": [ "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.film_costumer_designer.costume_design_for_film M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film 
M1\n}", "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.producer.films_executive_produced M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}", "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.edited_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}", "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 ns:film.film.prequel M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.written_by M0\n}", "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.sequel M0 .\n?x0 ns:film.film.written_by M1\n}", "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.film_costumer_designer.costume_design_for_film M1 .\nM0 
ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}", "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}", "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.director.film M4 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.editor.film M4 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.films_executive_produced M4 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M4 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3 .\nM0 ns:film.writer.film M4\n}", "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3\n}", "SELECT count(*) WHERE {\nM0 
ns:film.film.directed_by M1 .\nM0 ns:film.film.directed_by M2 .\nM0 ns:film.film.edited_by M1 .\nM0 ns:film.film.edited_by M2 .\nM0 ns:film.film.executive_produced_by M1 .\nM0 ns:film.film.executive_produced_by M2 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\nM0 ns:film.film.written_by M1 .\nM0 ns:film.film.written_by M2\n}" ] }, { "name": "question", "value": [ "Who wrote , edited , directed , and produced M1 , M2 , and M3", "Who wrote , edited , executive produced , and produced M1 , M2 , and M3", "Who wrote , executive produced , and directed M1 , M2 , and M3", "Who wrote , executive produced , and edited M1 , M2 , M3 , and M4", "Who wrote , executive produced , edited , and produced M1 , M2 , and M3", "Who wrote , executive produced , produced , and directed M1 , M2 , M3 , and M4", "Who wrote , executive produced , produced , and edited M1 , M2 , and M3", "Who wrote , produced , directed , and edited M1 , M2 , and M3", "Who wrote , produced , edited , and directed M1 , M2 , and M3", "Who wrote , produced , executive produced , and edited M1 , M2 , and M3" ] } ] }, "splits": [ { "name": "test", "numBytes": "5828528", "shardLengths": [ "11968" ], "statistics": { "features": [ { "path": { "step": [ "query" ] }, "stringStats": { "avgLength": 379.0212, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "11968", "numValuesHistogram": { "buckets": [ { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 
1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 } ], "type": "QUANTILES" }, "totNumValues": "11968" }, "rankHistogram": { "buckets": [ { "label": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.film_costumer_designer.costume_design_for_film M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}", "sampleCount": 59.0 }, { "highRank": "1", "label": "SELECT count(*) WHERE {\nM0 ns:film.film.directed_by M1 .\nM0 ns:film.film.directed_by M2 .\nM0 ns:film.film.edited_by M1 .\nM0 ns:film.film.edited_by M2 .\nM0 ns:film.film.executive_produced_by M1 .\nM0 ns:film.film.executive_produced_by M2 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\nM0 ns:film.film.written_by M1 .\nM0 ns:film.film.written_by M2\n}", "lowRank": "1", "sampleCount": 53.0 }, { "highRank": "2", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.sequel M0 .\n?x0 ns:film.film.written_by M1\n}", "lowRank": "2", "sampleCount": 44.0 }, { "highRank": "3", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.producer.films_executive_produced M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}", "lowRank": "3", "sampleCount": 42.0 }, { "highRank": 
"4", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.edited_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}", "lowRank": "4", "sampleCount": 41.0 }, { "highRank": "5", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 ns:film.film.prequel M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.written_by M0\n}", "lowRank": "5", "sampleCount": 39.0 }, { "highRank": "6", "label": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3\n}", "lowRank": "6", "sampleCount": 38.0 }, { "highRank": "7", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 
.\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.film_costumer_designer.costume_design_for_film M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}", "lowRank": "7", "sampleCount": 31.0 }, { "highRank": "8", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.executive_produced_by M2 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2\n}", "lowRank": "8", "sampleCount": 30.0 }, { "highRank": "9", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:film.film .\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.executive_produced_by M2 .\n?x0 ns:film.film.executive_produced_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}", "lowRank": "9", "sampleCount": 29.0 } ] }, "topValues": [ { "frequency": 59.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.film_costumer_designer.costume_design_for_film M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}" }, { "frequency": 53.0, 
"value": "SELECT count(*) WHERE {\nM0 ns:film.film.directed_by M1 .\nM0 ns:film.film.directed_by M2 .\nM0 ns:film.film.edited_by M1 .\nM0 ns:film.film.edited_by M2 .\nM0 ns:film.film.executive_produced_by M1 .\nM0 ns:film.film.executive_produced_by M2 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\nM0 ns:film.film.written_by M1 .\nM0 ns:film.film.written_by M2\n}" }, { "frequency": 44.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.sequel M0 .\n?x0 ns:film.film.written_by M1\n}" }, { "frequency": 42.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.producer.films_executive_produced M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}" }, { "frequency": 41.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.edited_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}" }, { "frequency": 39.0, "value": 
"SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 ns:film.film.prequel M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.written_by M0\n}" }, { "frequency": 38.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3\n}" }, { "frequency": 31.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.film_costumer_designer.costume_design_for_film M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}" }, { "frequency": 30.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.executive_produced_by M2 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2\n}" }, { "frequency": 29.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:film.film 
.\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.executive_produced_by M2 .\n?x0 ns:film.film.executive_produced_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}" } ], "unique": "8863" }, "type": "STRING" }, { "path": { "step": [ "question" ] }, "stringStats": { "avgLength": 68.0676, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "11968", "numValuesHistogram": { "buckets": [ { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 } ], "type": "QUANTILES" }, "totNumValues": "11968" }, "rankHistogram": { "buckets": [ { "label": "Who wrote , produced , executive produced , and directed M1 , M2 , and M3", "sampleCount": 1.0 }, { "highRank": "1", "label": "Who wrote , produced , executive produced , and directed M1 , M2 , M3 , and M4", "lowRank": "1", "sampleCount": 1.0 }, { "highRank": "2", "label": "Who wrote , executive produced , directed , and produced M1 , M2 , M3 , and M4", "lowRank": "2", "sampleCount": 1.0 }, { "highRank": "3", "label": "Who wrote , edited , produced , and 
directed M1 , M2 , and M3", "lowRank": "3", "sampleCount": 1.0 }, { "highRank": "4", "label": "Who wrote , edited , and directed M1 , M2 , M3 , and M4", "lowRank": "4", "sampleCount": 1.0 }, { "highRank": "5", "label": "Who wrote , directed , produced , and executive produced M1 , M2 , M3 , and M4", "lowRank": "5", "sampleCount": 1.0 }, { "highRank": "6", "label": "Who wrote , directed , produced , and edited M1 , M2 , M3 , and M4", "lowRank": "6", "sampleCount": 1.0 }, { "highRank": "7", "label": "Who wrote , directed , edited , and produced M1 , M2 , and M3", "lowRank": "7", "sampleCount": 1.0 }, { "highRank": "8", "label": "Who wrote , directed , and executive produced M1 , M2 , and M3", "lowRank": "8", "sampleCount": 1.0 }, { "highRank": "9", "label": "Who was influenced by and influenced M1 , M2 , M3 , and M4", "lowRank": "9", "sampleCount": 1.0 } ] }, "topValues": [ { "frequency": 1.0, "value": "Who wrote , produced , executive produced , and directed M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , produced , executive produced , and directed M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , executive produced , directed , and produced M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , edited , produced , and directed M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , edited , and directed M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , directed , produced , and executive produced M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , directed , produced , and edited M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , directed , edited , and produced M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , directed , and executive produced M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who was influenced by and influenced M1 , M2 , M3 , and M4" } ], "unique": "11968" }, "type": "STRING" } ], "numExamples": "11968" } }, { "name": "train", 
"numBytes": "40461180", "shardLengths": [ "95743" ], "statistics": { "features": [ { "path": { "step": [ "query" ] }, "stringStats": { "avgLength": 318.35504, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "95743", "numValuesHistogram": { "buckets": [ { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 } ], "type": "QUANTILES" }, "totNumValues": "95743" }, "rankHistogram": { "buckets": [ { "label": "SELECT count(*) WHERE {\n?x0 a ns:film.film .\nM1 ns:film.director.film ?x0 .\nM1 ns:film.editor.film ?x0 .\nM1 ns:film.producer.films_executive_produced ?x0 .\nM1 ns:film.producer.film|ns:film.production_company.films ?x0 .\nM1 ns:film.writer.film ?x0\n}", "sampleCount": 115.0 }, { "highRank": "1", "label": "SELECT count(*) WHERE {\n?x0 a ns:film.cinematographer .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}", "lowRank": "1", "sampleCount": 94.0 }, { "highRank": "2", "label": "SELECT count(*) WHERE {\n?x0 a ns:people.person .\nM1 ns:film.film.directed_by ?x0 .\nM1 
ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}", "lowRank": "2", "sampleCount": 93.0 }, { "highRank": "3", "label": "SELECT count(*) WHERE {\n?x0 a ns:film.actor .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}", "lowRank": "3", "sampleCount": 88.0 }, { "highRank": "4", "label": "SELECT count(*) WHERE {\n?x0 a ns:film.editor .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}", "lowRank": "4", "sampleCount": 86.0 }, { "highRank": "5", "label": "SELECT count(*) WHERE {\n?x0 a ns:film.director .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 
ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}", "lowRank": "5", "sampleCount": 84.0 }, { "highRank": "6", "label": "SELECT count(*) WHERE {\n?x0 ns:film.film.prequel M0 .\nM1 ns:film.director.film ?x0 .\nM1 ns:film.editor.film ?x0 .\nM1 ns:film.producer.films_executive_produced ?x0 .\nM1 ns:film.producer.film|ns:film.production_company.films ?x0 .\nM1 ns:film.writer.film ?x0\n}", "lowRank": "6", "sampleCount": 83.0 }, { "highRank": "7", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by ?x1 .\n?x0 ns:film.film.edited_by ?x1 .\n?x0 ns:film.film.executive_produced_by ?x1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies ?x1 .\n?x0 ns:film.film.written_by ?x1 .\n?x1 ns:people.person.parents|ns:fictional_universe.fictional_character.parents|ns:organization.organization.parent/ns:organization.organization_relationship.parent M0\n}", "lowRank": "7", "sampleCount": 83.0 }, { "highRank": "8", "label": "SELECT count(*) WHERE {\n?x0 a ns:film.producer .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}", "lowRank": "8", "sampleCount": 80.0 }, { "highRank": "9", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by ?x1 .\n?x0 ns:film.film.edited_by ?x1 .\n?x0 ns:film.film.executive_produced_by ?x1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies ?x1 .\n?x0 ns:film.film.written_by ?x1 .\n?x1 ns:people.person.spouse_s/ns:people.marriage.spouse|ns:fictional_universe.fictional_character.married_to/ns:fictional_universe.marriage_of_fictional_characters.spouses M0 .\nFILTER ( ?x1 != M0 
)\n}", "lowRank": "9", "sampleCount": 80.0 } ] }, "topValues": [ { "frequency": 115.0, "value": "SELECT count(*) WHERE {\n?x0 a ns:film.film .\nM1 ns:film.director.film ?x0 .\nM1 ns:film.editor.film ?x0 .\nM1 ns:film.producer.films_executive_produced ?x0 .\nM1 ns:film.producer.film|ns:film.production_company.films ?x0 .\nM1 ns:film.writer.film ?x0\n}" }, { "frequency": 94.0, "value": "SELECT count(*) WHERE {\n?x0 a ns:film.cinematographer .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}" }, { "frequency": 93.0, "value": "SELECT count(*) WHERE {\n?x0 a ns:people.person .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}" }, { "frequency": 88.0, "value": "SELECT count(*) WHERE {\n?x0 a ns:film.actor .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}" }, { "frequency": 86.0, "value": "SELECT count(*) WHERE {\n?x0 a ns:film.editor .\nM1 
ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}" }, { "frequency": 84.0, "value": "SELECT count(*) WHERE {\n?x0 a ns:film.director .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by ?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}" }, { "frequency": 83.0, "value": "SELECT count(*) WHERE {\n?x0 ns:film.film.prequel M0 .\nM1 ns:film.director.film ?x0 .\nM1 ns:film.editor.film ?x0 .\nM1 ns:film.producer.films_executive_produced ?x0 .\nM1 ns:film.producer.film|ns:film.production_company.films ?x0 .\nM1 ns:film.writer.film ?x0\n}" }, { "frequency": 83.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by ?x1 .\n?x0 ns:film.film.edited_by ?x1 .\n?x0 ns:film.film.executive_produced_by ?x1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies ?x1 .\n?x0 ns:film.film.written_by ?x1 .\n?x1 ns:people.person.parents|ns:fictional_universe.fictional_character.parents|ns:organization.organization.parent/ns:organization.organization_relationship.parent M0\n}" }, { "frequency": 80.0, "value": "SELECT count(*) WHERE {\n?x0 a ns:film.producer .\nM1 ns:film.film.directed_by ?x0 .\nM1 ns:film.film.edited_by ?x0 .\nM1 ns:film.film.executive_produced_by ?x0 .\nM1 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM1 ns:film.film.written_by 
?x0 .\nM2 ns:film.film.directed_by ?x0 .\nM2 ns:film.film.edited_by ?x0 .\nM2 ns:film.film.executive_produced_by ?x0 .\nM2 ns:film.film.produced_by|ns:film.film.production_companies ?x0 .\nM2 ns:film.film.written_by ?x0\n}" }, { "frequency": 80.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by ?x1 .\n?x0 ns:film.film.edited_by ?x1 .\n?x0 ns:film.film.executive_produced_by ?x1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies ?x1 .\n?x0 ns:film.film.written_by ?x1 .\n?x1 ns:people.person.spouse_s/ns:people.marriage.spouse|ns:fictional_universe.fictional_character.married_to/ns:fictional_universe.marriage_of_fictional_characters.spouses M0 .\nFILTER ( ?x1 != M0 )\n}" } ], "unique": "70043" }, "type": "STRING" }, { "path": { "step": [ "question" ] }, "stringStats": { "avgLength": 64.36601, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "95743", "numValuesHistogram": { "buckets": [ { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 9574.3 } ], "type": "QUANTILES" }, "totNumValues": "95743" }, "rankHistogram": { "buckets": [ { "label": "Who wrote and produced a prequel of M1", "sampleCount": 1.0 }, { "highRank": "1", "label": "Who wrote and produced M1 's sequel", "lowRank": "1", "sampleCount": 1.0 }, { "highRank": "2", "label": "Who wrote and produced M1 's prequel", "lowRank": "2", "sampleCount": 1.0 }, { "highRank": "3", "label": 
"Who wrote and produced M1", "lowRank": "3", "sampleCount": 1.0 }, { "highRank": "4", "label": "Who wrote and executive produced a prequel of M1", "lowRank": "4", "sampleCount": 1.0 }, { "highRank": "5", "label": "Who wrote and executive produced a film executive produced by M2 and M3 and directed by M4", "lowRank": "5", "sampleCount": 1.0 }, { "highRank": "6", "label": "Who wrote and executive produced M1 's sequel", "lowRank": "6", "sampleCount": 1.0 }, { "highRank": "7", "label": "Who wrote and executive produced M1 's prequel", "lowRank": "7", "sampleCount": 1.0 }, { "highRank": "8", "label": "Who wrote and executive produced M1", "lowRank": "8", "sampleCount": 1.0 }, { "highRank": "9", "label": "Who wrote and edited a sequel of M1", "lowRank": "9", "sampleCount": 1.0 } ] }, "topValues": [ { "frequency": 1.0, "value": "Who wrote and produced a prequel of M1" }, { "frequency": 1.0, "value": "Who wrote and produced M1 's sequel" }, { "frequency": 1.0, "value": "Who wrote and produced M1 's prequel" }, { "frequency": 1.0, "value": "Who wrote and produced M1" }, { "frequency": 1.0, "value": "Who wrote and executive produced a prequel of M1" }, { "frequency": 1.0, "value": "Who wrote and executive produced a film executive produced by M2 and M3 and directed by M4" }, { "frequency": 1.0, "value": "Who wrote and executive produced M1 's sequel" }, { "frequency": 1.0, "value": "Who wrote and executive produced M1 's prequel" }, { "frequency": 1.0, "value": "Who wrote and executive produced M1" }, { "frequency": 1.0, "value": "Who wrote and edited a sequel of M1" } ], "unique": "95743" }, "type": "STRING" } ], "numExamples": "95743" } }, { "name": "validation", "numBytes": "5875751", "shardLengths": [ "11968" ], "statistics": { "features": [ { "path": { "step": [ "query" ] }, "stringStats": { "avgLength": 383.1173, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "11968", "numValuesHistogram": { "buckets": [ { "highValue": 
1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 } ], "type": "QUANTILES" }, "totNumValues": "11968" }, "rankHistogram": { "buckets": [ { "label": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3\n}", "sampleCount": 68.0 }, { "highRank": "1", "label": "SELECT count(*) WHERE {\nM0 ns:film.film.directed_by M1 .\nM0 ns:film.film.directed_by M2 .\nM0 ns:film.film.edited_by M1 .\nM0 ns:film.film.edited_by M2 .\nM0 ns:film.film.executive_produced_by M1 .\nM0 ns:film.film.executive_produced_by M2 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\nM0 ns:film.film.written_by M1 .\nM0 ns:film.film.written_by M2\n}", "lowRank": "1", "sampleCount": 56.0 }, { "highRank": "2", "label": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 
ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.film_costumer_designer.costume_design_for_film M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}", "lowRank": "2", "sampleCount": 50.0 }, { "highRank": "3", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.sequel M0 .\n?x0 ns:film.film.written_by M1\n}", "lowRank": "3", "sampleCount": 39.0 }, { "highRank": "4", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.producer.films_executive_produced M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}", "lowRank": "4", "sampleCount": 37.0 }, { "highRank": "5", "label": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.director.film M4 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.editor.film M4 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.films_executive_produced M4 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M4 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 
ns:film.writer.film M3 .\nM0 ns:film.writer.film M4\n}", "lowRank": "5", "sampleCount": 33.0 }, { "highRank": "6", "label": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}", "lowRank": "6", "sampleCount": 32.0 }, { "highRank": "7", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.edited_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}", "lowRank": "7", "sampleCount": 31.0 }, { "highRank": "8", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.film_costumer_designer.costume_design_for_film M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}", "lowRank": "8", "sampleCount": 31.0 }, { "highRank": "9", "label": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 
ns:film.film.prequel M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.written_by M0\n}", "lowRank": "9", "sampleCount": 30.0 } ] }, "topValues": [ { "frequency": 68.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3\n}" }, { "frequency": 56.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.film.directed_by M1 .\nM0 ns:film.film.directed_by M2 .\nM0 ns:film.film.edited_by M1 .\nM0 ns:film.film.edited_by M2 .\nM0 ns:film.film.executive_produced_by M1 .\nM0 ns:film.film.executive_produced_by M2 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\nM0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\nM0 ns:film.film.written_by M1 .\nM0 ns:film.film.written_by M2\n}" }, { "frequency": 50.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.film_costumer_designer.costume_design_for_film M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}" }, { "frequency": 39.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M1 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.executive_produced_by M1 .\n?x0 
ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.sequel M0 .\n?x0 ns:film.film.written_by M1\n}" }, { "frequency": 37.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.producer.films_executive_produced M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}" }, { "frequency": 33.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.director.film M1 .\nM0 ns:film.director.film M2 .\nM0 ns:film.director.film M3 .\nM0 ns:film.director.film M4 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.editor.film M2 .\nM0 ns:film.editor.film M3 .\nM0 ns:film.editor.film M4 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.films_executive_produced M2 .\nM0 ns:film.producer.films_executive_produced M3 .\nM0 ns:film.producer.films_executive_produced M4 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M2 .\nM0 ns:film.producer.film|ns:film.production_company.films M3 .\nM0 ns:film.producer.film|ns:film.production_company.films M4 .\nM0 ns:film.writer.film M1 .\nM0 ns:film.writer.film M2 .\nM0 ns:film.writer.film M3 .\nM0 ns:film.writer.film M4\n}" }, { "frequency": 32.0, "value": "SELECT count(*) WHERE {\nM0 ns:film.actor.film/ns:film.performance.film M1 .\nM0 ns:film.cinematographer.film M1 .\nM0 ns:film.director.film M1 .\nM0 ns:film.editor.film M1 .\nM0 ns:film.film_art_director.films_art_directed M1 .\nM0 ns:film.producer.films_executive_produced M1 .\nM0 ns:film.producer.film|ns:film.production_company.films M1 .\nM0 ns:film.writer.film M1\n}" }, { "frequency": 31.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.directed_by M1 .\n?x0 
ns:film.film.directed_by M2 .\n?x0 ns:film.film.directed_by M3 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.edited_by M1 .\n?x0 ns:film.film.edited_by M2 .\n?x0 ns:film.film.edited_by M3 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M2 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M3 .\n?x0 ns:film.film.written_by M0 .\n?x0 ns:film.film.written_by M1 .\n?x0 ns:film.film.written_by M2 .\n?x0 ns:film.film.written_by M3\n}" }, { "frequency": 31.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 a ns:people.person .\n?x0 ns:film.actor.film/ns:film.performance.film M1 .\n?x0 ns:film.cinematographer.film M1 .\n?x0 ns:film.director.film M1 .\n?x0 ns:film.editor.film M1 .\n?x0 ns:film.film_art_director.films_art_directed M1 .\n?x0 ns:film.film_costumer_designer.costume_design_for_film M1 .\n?x0 ns:film.producer.film|ns:film.production_company.films M1 .\n?x0 ns:film.writer.film M1\n}" }, { "frequency": 30.0, "value": "SELECT DISTINCT ?x0 WHERE {\n?x0 ns:film.film.directed_by M0 .\n?x0 ns:film.film.edited_by M0 .\n?x0 ns:film.film.executive_produced_by M0 .\n?x0 ns:film.film.prequel M1 .\n?x0 ns:film.film.produced_by|ns:film.film.production_companies M0 .\n?x0 ns:film.film.written_by M0\n}" } ], "unique": "8859" }, "type": "STRING" }, { "path": { "step": [ "question" ] }, "stringStats": { "avgLength": 67.91536, "commonStats": { "avgNumValues": 1.0, "maxNumValues": "1", "minNumValues": "1", "numNonMissing": "11968", "numValuesHistogram": { "buckets": [ { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, 
"sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 }, { "highValue": 1.0, "lowValue": 1.0, "sampleCount": 1196.8 } ], "type": "QUANTILES" }, "totNumValues": "11968" }, "rankHistogram": { "buckets": [ { "label": "Who wrote , produced , executive produced , and edited M1 , M2 , and M3", "sampleCount": 1.0 }, { "highRank": "1", "label": "Who wrote , produced , edited , and directed M1 , M2 , and M3", "lowRank": "1", "sampleCount": 1.0 }, { "highRank": "2", "label": "Who wrote , produced , directed , and edited M1 , M2 , and M3", "lowRank": "2", "sampleCount": 1.0 }, { "highRank": "3", "label": "Who wrote , executive produced , produced , and edited M1 , M2 , and M3", "lowRank": "3", "sampleCount": 1.0 }, { "highRank": "4", "label": "Who wrote , executive produced , produced , and directed M1 , M2 , M3 , and M4", "lowRank": "4", "sampleCount": 1.0 }, { "highRank": "5", "label": "Who wrote , executive produced , edited , and produced M1 , M2 , and M3", "lowRank": "5", "sampleCount": 1.0 }, { "highRank": "6", "label": "Who wrote , executive produced , and edited M1 , M2 , M3 , and M4", "lowRank": "6", "sampleCount": 1.0 }, { "highRank": "7", "label": "Who wrote , executive produced , and directed M1 , M2 , and M3", "lowRank": "7", "sampleCount": 1.0 }, { "highRank": "8", "label": "Who wrote , edited , executive produced , and produced M1 , M2 , and M3", "lowRank": "8", "sampleCount": 1.0 }, { "highRank": "9", "label": "Who wrote , edited , directed , and produced M1 , M2 , and M3", "lowRank": "9", "sampleCount": 1.0 } ] }, "topValues": [ { "frequency": 1.0, "value": "Who wrote , produced , executive produced , and edited M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , produced , edited , and directed M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , produced , directed , and edited M1 , M2 
, and M3" }, { "frequency": 1.0, "value": "Who wrote , executive produced , produced , and edited M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , executive produced , produced , and directed M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , executive produced , edited , and produced M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , executive produced , and edited M1 , M2 , M3 , and M4" }, { "frequency": 1.0, "value": "Who wrote , executive produced , and directed M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , edited , executive produced , and produced M1 , M2 , and M3" }, { "frequency": 1.0, "value": "Who wrote , edited , directed , and produced M1 , M2 , and M3" } ], "unique": "11968" }, "type": "STRING" } ], "numExamples": "11968" } } ], "supervisedKeys": { "input": "question", "output": "query" }, "version": "1.2.0" } | Mid | [
0.5708245243128961,
33.75,
25.375
] |
~ Pontifex minimus War and Culture The countryside in Khuzestan (ancient Susiane/lowland Elam) near Ahwaz where 30 years ago the king of Babylon and the assembly of the land of Iran fought a terrible war. As part of my dissertation I have to talk about conscription and how well it functioned in the Ancient Near East, and that turned me to a classic article. As I was searching for it I found another which I want to talk about. Back in 1999, Norvell Atkine set out to explain to the American imperial elite why the “Arab armies” which they had armed and trained were so reluctant to fight the way that Americans told them to fight. These armies kept losing, so why were they rejecting help from more effective soldiers like him and his friends? “There are many factors—economic, ideological, technical—but perhaps the most important has to do with culture and certain societal attributes which inhibit Arabs from producing an effective military force.” When I read it the first time, I took away his lovely anecdotes about the culture clash between American military personnel and the Arab officers which they had been assigned to collaborate with. Atkine focusses on the armies of Mubarak’s Egypt, Jordan, Lebanon, Saudi Arabia, and the UAE. But a few years ago, Caitlyn Talmadge wrote a scholarly article on one of the Arab armies which he is less interested in: Saddam Hussein’s. Her article has an abstract, so I will let her speak for herself: Saddam’s Iraq has become a cliché in the study of military effectiveness—the quintessentially coup-proofed, personalist dictatorship, unable to generate fighting power commensurate with its resources. But evidence from the later years of the Iran-Iraq War actually suggests that the Iraqi military could be quite effective on the battlefield. What explains this puzzling instance of effectiveness, which existing theories predict should not have occurred? 
Recently declassified documents and new histories of the war show that the Iraqi improvements stemmed from changes in Saddam’s perceptions of the threat environment, which resulted in significant shifts in his policies with respect to promotions, training, command arrangements, and information management in the military. Threat perceptions and related changes in these practices also help explain Iraq’s return to ineffectiveness after the war, as evident in 1991 and 2003. These findings, conceived as a theory development exercise, suggest that arguments linking regime type and coup-ridden civil-military relations to military performance need to take into account the threat perceptions that drive autocratic leaders’ policies toward their militaries. To put it bluntly, Saddam spent his time in power worried that someone would toss him in his own torture chambers. After all, most of the governments in the region, including his Baˀath party, were descended from a group of soldiers who had overthrown the previous regime. So he set up policies to ensure that the army was not a threat to him: strictly limiting communication between units, requiring minor acts to be authorized from Baghdad, refusing to allow different types of troops to train together, and killing officers who were too popular. This kept him in power for 25 years and able to play warlord, even if it also meant that his adventures cost the lives of too many of his own soldiers for little or no gain. The only time that he relaxed these policies was the late 1980s, when it seemed like if the war continued, his regime might collapse. As soon as he had driven the Iranians back across the border and made peace, he treated the army just like he had before, because once again he was more worried about a coup from within than an invasion from without. And while Saddam was crazy (and perhaps not the sharpest knife in the drawer), his 25-year rule suggests that he knew how to stay in power.
This idea is an important one for military historians, who used to criticize soldiers, armies, or generals for not acting according to some modern ideal of efficiency or reason. But a Hellenistic king was expected to fight and fight often, and when he fought he was expected to lead the charge himself, because that was the way that Alexander had done it. This was strongest in the Seleucid empire, where ten out of 30 or so kings died in combat (Tuplin, “Hellenistic Kingship: An Achaemenid Inheritance?”). A Roman emperor knew that any general in charge of a large army might proclaim himself Caesar, so he had to think carefully about who he allowed to command more than a legion or two. This meant that commanders were not always the most competent generals, and that someone who was too successful might need to be transferred to a quiet province. Before we accuse someone of being irrational or incompetent, it’s wise to look at the environment they are living in, and what they can expect to be punished or rewarded for. Atkine talks a bit about political reform in a euphemistic way, but after living in Beirut for eight years he has a feeling that these problems are rooted in placeless, timeless “Arab culture.” Talmadge lets readers catch more glimpses of the kind of regime she is describing, and to her the military problems of Baˀathist Iraq were rooted in a particular political situation and changed when that situation changed. Whatever you think about their arguments, these two articles raise interesting questions and challenge each other’s answers. And reminding me of a terrible war in a harsh land, and the people caught up in it with no good choices, is not a bad thing either. | Mid | [
0.563786008230452,
34.25,
26.5
] |
Q: Is it possible to access static members class in C++ like in Java? In a YouTube video someone made two classes in Java like this: public class Var { static JFrame jf1; static int screenWidth = 800; static int screenHeight = 600; public Var() { } } public class Gui { public Gui() { Var.jf1 = new JFrame(); Var.jf1.setSize(Var.screenWidth, Var.screenHeight); } } As you can see he can access jf1 by just putting Var. in front of the variable. Can you access member variables in C++ like this as well? Or do I have to create a GetValue function which returns the variable I want to have in a different class? A: In C++, you need to use Var::jf1. The . syntax is used when you have an object on the left side. Also, you will need to define the member in your .cpp file: JFrame Var::jf1; Same for the other members. | High | [
0.6657929226736561,
31.75,
15.9375
] |
River Till, Northumberland The River Till is a river of north-eastern Northumberland. It is a tributary of the River Tweed, of which it is the only major tributary to flow wholly in England. The upper part of the Till, which rises on Comb Fell in the Cheviots, is known as the River Breamish. Its tributaries include Wooler Water, which originates in the Cheviot Hills, and the River Glen in Glendale. It meets the Tweed near Berwick-upon-Tweed and Twizell Bridge. According to local folklore: Tweed said to Till "What gars ye rin sae stil?" Says Till to Tweed, "Though ye rin wi' speed And I rin slaw Whar ye droon yin man I droon twa" Recent environmental projects have included an attempt to conserve the native brown trout. External links A walk along the River Till bank from Etal to Tiptoe Brown trout conservation project Local history Map sources for: - source of the Breamish and - confluence with the Tweed https://www.antonychessell.co.uk/Breamish and Till:From Source to Tweed, TillVAS,2014 | Mid | [
0.6532258064516121,
30.375,
16.125
] |
Q: How to adapt script tag to Angular? On my old website, I have had some problems with some bad guys who have tried to clone my site. Now I am migrating the application to Angular 7. One of the measures I implemented on the old page, to stop the cloning attempts, was a small JavaScript snippet that checks the hostname from the header. <script type="text/javascript"> if(!['mysite.com','testsite.com'].includes(window.location.hostname)){ window.location.href = 'https://google.com'; } </script> Can someone tell me how I can convert (rewrite, adapt) this JavaScript code so that it can be used in Angular? I want to put this code (if possible) in the source code, so that after running npm run build it stays integrated in the main.js of my website. A: In your index.html add this code, for example src/index.html: <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Stackoverflow</title> <base href="/"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="icon" type="image/x-icon" href="favicon.ico"> </head> <body> <app-root></app-root> <script type="text/javascript"> if(!['mysite.com','testsite.com'].includes(window.location.hostname)){ window.location.href = 'https://google.com'; } </script> </body> </html> | Mid | [
0.647214854111405,
30.5,
16.625
] |
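The accepted answer in the row above leaves the check as a readable inline script in `index.html`. The asker's stated goal — getting the check compiled into `main.js` — can also be sketched as a small guard function. This is a hedged illustration, not the accepted answer's approach; the allowed-host list and redirect URL are the placeholders from the question.

```typescript
// Allow-list of hostnames that may serve the app; anything else is
// treated as a clone and redirected away.
const ALLOWED_HOSTS: readonly string[] = ['mysite.com', 'testsite.com'];

function isAllowedHost(hostname: string): boolean {
  return ALLOWED_HOSTS.includes(hostname);
}

// Accepts any window-like object so the logic is unit-testable without a DOM.
interface WindowLike { location: { hostname: string; href: string } }

function enforceHost(win: WindowLike): void {
  if (!isAllowedHost(win.location.hostname)) {
    // Redirect clones away, as in the original inline script.
    win.location.href = 'https://google.com';
  }
}
```

Calling `enforceHost(window)` at the top of `src/main.ts`, before the Angular bootstrap call, gets the check bundled and minified into `main.js` by the build, unlike the `index.html` script, which ships verbatim.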
/* * Copyright (c) 2010-2017 Evolveum and contributors * * This work is dual-licensed under the Apache License 2.0 * and European Union Public License. See LICENSE file for details. */ package com.evolveum.midpoint.web.component.prism.show; import com.evolveum.midpoint.gui.api.component.BasePanel; import com.evolveum.midpoint.gui.api.util.WebComponentUtil; import com.evolveum.midpoint.model.api.ModelAuthorizationAction; import com.evolveum.midpoint.model.api.visualizer.SceneItemValue; import com.evolveum.midpoint.prism.PrismObject; import com.evolveum.midpoint.prism.PrismReference; import com.evolveum.midpoint.prism.PrismReferenceValue; import com.evolveum.midpoint.schema.constants.ObjectTypes; import com.evolveum.midpoint.util.exception.*; import com.evolveum.midpoint.web.component.data.column.ImagePanel; import com.evolveum.midpoint.web.component.data.column.LinkPanel; import com.evolveum.midpoint.web.component.util.VisibleEnableBehaviour; import com.evolveum.midpoint.web.util.ObjectTypeGuiDescriptor; import com.evolveum.midpoint.xml.ns._public.common.common_3.AuthorizationPhaseType; import com.evolveum.midpoint.xml.ns._public.common.common_3.ObjectReferenceType; import com.evolveum.midpoint.xml.ns._public.common.common_3.ObjectType; import org.apache.wicket.ajax.AjaxRequestTarget; import org.apache.wicket.markup.html.basic.Label; import org.apache.wicket.model.IModel; import javax.xml.namespace.QName; /** * TODO make this parametric (along with SceneItemValue) * @author mederly */ public class SceneItemValuePanel extends BasePanel<SceneItemValue> { private static final String ID_ICON = "icon"; private static final String ID_LABEL = "label"; private static final String ID_LINK = "link"; private static final String ID_ADDITIONAL_TEXT = "additionalText"; public SceneItemValuePanel(String id, IModel<SceneItemValue> model) { super(id, model); } @Override protected void onInitialize() { super.onInitialize(); initLayout(); } private void initLayout() { final 
VisibleEnableBehaviour visibleIfReference = new VisibleEnableBehaviour() { @Override public boolean isVisible() { SceneItemValue object = getModelObject(); return hasValidReferenceValue(object); } }; final VisibleEnableBehaviour visibleIfNotReference = new VisibleEnableBehaviour() { @Override public boolean isVisible() { SceneItemValue object = getModelObject(); return !hasValidReferenceValue(object); } }; final ImagePanel icon = new ImagePanel(ID_ICON, new IconModel(), new TitleModel()); icon.add(visibleIfReference); add(icon); final Label label = new Label(ID_LABEL, new LabelModel()); label.add(visibleIfNotReference); add(label); final LinkPanel link = new LinkPanel(ID_LINK, new LabelModel()) { @Override public void onClick(AjaxRequestTarget target) { if (!(getModelObject().getSourceValue() instanceof PrismReferenceValue)) { return; } PrismReferenceValue refValue = (PrismReferenceValue) getModelObject().getSourceValue(); if (refValue == null){ return; } ObjectReferenceType ort = new ObjectReferenceType(); ort.setupReferenceValue(refValue); WebComponentUtil.dispatchToObjectDetailsPage(ort, getPageBase(), false); } }; link.add(visibleIfReference); add(link); final Label additionalText = new Label(ID_ADDITIONAL_TEXT, new IModel<String>() { @Override public String getObject() { return getModelObject() != null ? getModelObject().getAdditionalText() : null; } }); add(additionalText); } private boolean hasValidReferenceValue(SceneItemValue object) { PrismReferenceValue target = null; if (object != null && object.getSourceValue() != null && object.getSourceValue() instanceof PrismReferenceValue) { target = (PrismReferenceValue) object.getSourceValue(); } if (target == null) { return false; } QName targetType = target.getTargetType(); if (targetType == null) { return false; } Class<?
extends ObjectType> targetClass = getPrismContext().getSchemaRegistry().getCompileTimeClass(targetType); return WebComponentUtil.isAuthorized(targetClass); } private ObjectTypeGuiDescriptor getObjectTypeDescriptor() { SceneItemValue value = getModelObject(); if (value != null && value.getSourceValue() != null && value.getSourceValue() instanceof PrismReferenceValue) { QName targetType = ((PrismReferenceValue) value.getSourceValue()).getTargetType(); return ObjectTypeGuiDescriptor.getDescriptor(ObjectTypes.getObjectTypeFromTypeQName(targetType)); } else { return null; } } private class IconModel implements IModel<String> { @Override public String getObject() { ObjectTypeGuiDescriptor guiDescriptor = getObjectTypeDescriptor(); return guiDescriptor != null ? guiDescriptor.getBlackIcon() : ObjectTypeGuiDescriptor.ERROR_ICON; } } private class TitleModel implements IModel<String> { @Override public String getObject() { ObjectTypeGuiDescriptor guiDescriptor = getObjectTypeDescriptor(); return guiDescriptor != null ? createStringResource(guiDescriptor.getLocalizationKey()).getObject() : null; } } private class LabelModel implements IModel<String> { @Override public String getObject() { return getModelObject() != null ? getModelObject().getText() : null; } } } | Mid | [
0.5383022774327121,
32.5,
27.875
] |
Q: nhibernate join on subquery I am trying to do a join on a subquery to another table I have the following entity: public class SomeClass { public virtual string KeyPart1 { get; set; } public virtual string KeyPart2 { get; set; } public virtual int VersionNo { get; set; } public virtual string ClassProperty1 { get; set; } public virtual string ClassProperty2 { get; set; } } I then have the following query to get me the latest version of each record: var subquery = QueryOver.Of<SomeClass>() .SelectList(lst => lst .SelectGroup(f => f.KeyPart1) .SelectGroup(f => f.KeyPart2) .SelectMax(f => f.VersionNo)); I am now trying to return the entire SomeClass for each of the results of the subquery. So far I have something like this: var query = QueryOver.Of<SomeClass>() .WithSubquery.Where(???) The SQL statement should look something like this when it is done SELECT cls.* FROM SomeClass as cls INNER JOIN (SELECT KeyPart1, KeyPart2, MAX(VersionNo) FROM SomeClass GROUP BY KeyPart1, KeyPart2) as sub ON sub.KeyPart1 = cls.KeyPart1 and sub.KeyPart2 = cls.KeyPart2 and sub.VersionNo = cls.VersionNo Can someone help me return the entire SomeClass record for each highest version? EDIT: Can the same thing be done using an exist statement? 
This will allow us to use something like: SomeClass classAlias = null; var subquery = QueryOver.Of<SomeClass>() .SelectList(lst => lst .SelectGroup(f => f.KeyPart1) .SelectGroup(f => f.KeyPart2) .SelectMax(f => f.VersionNo)) .Where(x => x.KeyPart1 == classAlias.KeyPart1) .Where(x => x.KeyPart2 == classAlias.KeyPart2) .Where(x => x.VersionNo == classAlias.VersionNo); var query = Session.QueryOver(() => classAlias) .WithSubquery.WhereExists(subquery); Which generates the following SQL statement: SELECT * FROM SomeClass cls WHERE EXISTS (SELECT KeyPart1, KeyPart2, MAX(VersionNo) FROM SomeClass cls2 WHERE cls.KeyPart1 = cls2.KeyPart1 and cls.KeyPart2 = cls2.KeyPart2 and cls.VersionNo = cls2.VersionNo GROUP BY KeyPart1, KeyPart2) This however also brings back all versions, but I thought it would be another good place to start. A: After a lot of trial and error I was able to get this working using WHERE NOT EXISTS. Hopefully this will help people with a similar problem. Here is the code snippet that will return the latest version of a particular record using QueryOver: SomeClass classAlias = null; var subquery = QueryOver.Of<SomeClass>() .SelectList(lst => lst .SelectGroup(f => f.KeyPart1) .SelectGroup(f => f.KeyPart2) .SelectMax(f => f.VersionNo)) .Where(x => x.KeyPart1 == classAlias.KeyPart1) .Where(x => x.KeyPart2 == classAlias.KeyPart2) .Where(x => x.VersionNo > classAlias.VersionNo); var query = Session.QueryOver(() => classAlias) .WithSubquery.WhereNotExists(subquery); var results = query.List(); | Mid | [
0.6094986807387861,
28.875,
18.5
] |
Passing down hunting wisdom Bob Wolverton is pleased to see his three grandsons all wearing T-shirts of his favorite team, Michigan State University. The Onsted resident and retired police detective knows how important sharing common family bonds can be, especially over multiple generations. So when the Michigan Department of Natural Resources started a youth hunting program a few years ago, it was a chance for Wolverton to share another one of his passions: hunting. The Department of Natural Resources and Environment youth hunt in late September allowed children ages 12 to 17 an early chance to harvest whitetail deer. For Wolverton, it is an opportunity to share his decades of hunting wisdom and experience as well as spend some quiet time away with his grandsons. “I think (hunting) is an activity that is good for them,” said Wolverton, who began hunting small game as a youth in the Blissfield area and went on his first deer hunt at age 14. “Frankly, the youth hunt opportunity that the DNR came up with is long, long overdue. It gives you a nice time of the year to have kids out. The weather is generally reasonable, if not downright pleasant. And when you have somebody, particularly kids, the first experience isn’t that they are freezing to death.” It was the third year for Eric, 16, the second for 14-year-old Garrett and the first for 12-year-old Brent. The three joined their grandfather and grandmother on a weekend trip to a remote cabin near Ludington, in Mason County. The term “remote” is not an understatement; the cabin has no electricity, no running water and no indoor plumbing.
“Not only that,” Wolverton said, “the area we are in at best has a very marginal cell phone signal, which I enjoy, to be honest with you.” What the area does have, however, is some near ideal terrain and conditions for whitetail deer. Wolverton said that environment, as well as the early season, makes for a more interesting and engaging hunting experience for his grandsons. “The game at this time of the year is in its most relaxed state,” he said. “It’s the most predictable pattern in their behavior. So the percentage of success is higher than it would be otherwise.” Eric, Garrett and Brent each completed hunter safety courses and hunted under the state’s apprentice license program. However, the lessons are only beginning, according to grandpa. “They need to know how to handle weapons they are going to be using,” Wolverton said. “And they also need to know a little bit about the anatomy of an animal. And how to make an effective kill shot, where to place the bullet so it efficiently kills so we can harvest the animal.” The learning experience also includes what to do after the kill. “I am very particular about how the animal is treated, how the carcass is field dressed and handled properly,” Wolverton said. “They all actually participate and help me with the body cavity and legs and those kinds of things. And the nice thing, of course, with teenage boys, is the recovery is much, much easier than if you are by yourself.” He said the main lesson of hunting is having patience, which is a lesson even Wolverton is still learning. “(Hunting) reinforces my patience, with which I can be short of with them,” Wolverton said. “And they don’t have as much patience as I would like them to have. That is a virtue for a hunter, as well as for most of us in our daily lives. It’s a learning process.” Garrett received a very good lesson in patience during his most recent trip. After his brothers each recorded a successful kill on Saturday, he was the only grandson without a deer. 
He also took a shot and missed, which further fueled his frustration. “He made a common mistake,” Wolverton said. “At the last instant, he lifted his head to see if he hit it… and missed it.” But on Sunday, Grandma Mary Ann, a hunter in her own right who also accompanied the group on the trip, took Garrett out to hunt on another part of the property. And this time, he remembered his grandfather’s advice. “That deer was about to turn around and leave, so I had to realize I had to take my shot kind of in a hurry,“ Garrett said. “I was quite nervous because I never shot one before. I was excited. My heart was pounding, a little shaky. … But once I did get mine, it was kind of like ‘hurrah!’” “He did what I’ve been teaching him to do,” Wolverton said. “There’s a quote attributed to one of the Old West gunfighters: ‘In a gunfight, you take your time quickly.’ When you hunt, you should be able to do the same thing. You don’t take a wild shot; you make sure it’s a good shot. But that moment of opportunity may be fleeting. And you need to act in that window of opportunity. And he did.” Garrett finished with the largest deer, a 150-plus-pound doe, while Eric shot a 120-pound doe and Brent a 130-pound, 3-point buck. All three deer will supply the family with plenty of meat during the winter months. “For us, hunting is a food-gathering activity as well as recreational,” Wolverton said. So, how did his grandsons enjoy the technology-free weekend? “I like being outdoors. It’s pretty fun hanging out with my grandpa,” said Eric, who added that having his two brothers along makes the experience even better. “I like to hunt. It’s a hobby I like to do. I play a lot of basketball, and you have to have a lot of focus out there. In hunting, when a deer comes around, you have to focus. You don’t want to lose it.” “It was kind of what I expected,” said Brent, who was the only grandson to record a kill in his first year. “I really didn’t expect that I was going to get (a deer). 
When I got one, I was pretty shocked. … But I really didn’t expect (not having) electricity.”

Wolverton said introducing youth to the outdoors is an important life lesson. “You ought to think about the type of outdoor activity, whether it’s hunting or fishing,” he said, “and one which is likely to have some degree of success. You should have at least a reasonably good chance of at least seeing (game), and potentially having the opportunity to do it. Interest is sustained by the activity and not stifled by the lack of activity.”

And what other lessons did the grandsons learn from their grandfather? “That you shouldn’t hesitate when you are hunting out there, and it’s God’s gift that you got the deer,” Eric said. “I just like the quiet,” Garrett said, “as long as it’s only for a weekend.”

“They need to understand respect for the web of life, and clearly you need to give the respect to the animal that you harvest,” Wolverton said. “I think it’s a time that I would hope they would treasure all of their lives, long after our generation is gone. They say that people remain alive as long as someone remembers them. It might give me a few more years.”
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.calsignlabs.apde.wearcompanion">

    <uses-feature android:name="android.hardware.type.watch"/>
    <uses-feature android:name="android.hardware.microphone" android:required="false"/>

    <uses-permission android:name="android.permission.WAKE_LOCK" />
    <uses-permission android:name="android.permission.VIBRATE" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />

    <application
        android:allowBackup="false"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@android:style/Theme.DeviceDefault">

        <meta-data android:name="com.google.android.wearable.standalone" android:value="false"/>
        <uses-library android:name="com.google.android.wearable" android:required="true"/>

        <service android:name=".WatchfaceLoader" android:enabled="true" android:exported="true">
            <intent-filter>
                <action android:name="com.google.android.gms.wearable.DATA_CHANGED"/>
                <data android:host="*" android:scheme="wear"/>
            </intent-filter>
        </service>

        <provider
            android:name="androidx.core.content.FileProvider"
            android:authorities="com.calsignlabs.apde.wearcompanion.fileprovider"
            android:exported="false"
            android:grantUriPermissions="true">
            <meta-data android:name="android.support.FILE_PROVIDER_PATHS" android:resource="@xml/paths"/>
        </provider>

        <activity android:name=".CompanionActivity" android:label="@string/title_activity_companion">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>

        <!-- Four services - two each of Canvas and GLES.
             Both start disabled and the correct one is enabled when sketch is run. -->
        <service
            android:name=".watchface.CanvasWatchFaceService$A"
            android:label="@string/watchface_label_canvas_a"
            android:permission="android.permission.BIND_WALLPAPER"
            android:enabled="false">
            <meta-data
                android:name="android.service.wallpaper"
                android:resource="@xml/watch_face" />
            <meta-data
                android:name="com.google.android.wearable.watchface.preview"
                android:resource="@drawable/preview_rectangular" />
            <meta-data
                android:name="com.google.android.wearable.watchface.preview_circular"
                android:resource="@drawable/preview_circular" />
            <intent-filter>
                <action android:name="android.service.wallpaper.WallpaperService" />
                <category android:name="com.google.android.wearable.watchface.category.WATCH_FACE" />
            </intent-filter>
        </service>

        <service
            android:name=".watchface.CanvasWatchFaceService$B"
            android:label="@string/watchface_label_canvas_b"
            android:permission="android.permission.BIND_WALLPAPER"
            android:enabled="false">
            <meta-data
                android:name="android.service.wallpaper"
                android:resource="@xml/watch_face" />
            <meta-data
                android:name="com.google.android.wearable.watchface.preview"
                android:resource="@drawable/preview_rectangular" />
            <meta-data
                android:name="com.google.android.wearable.watchface.preview_circular"
                android:resource="@drawable/preview_circular" />
            <intent-filter>
                <action android:name="android.service.wallpaper.WallpaperService" />
                <category android:name="com.google.android.wearable.watchface.category.WATCH_FACE" />
            </intent-filter>
        </service>

        <service
            android:name=".watchface.GLESWatchFaceService$A"
            android:label="@string/watchface_label_gles_a"
            android:permission="android.permission.BIND_WALLPAPER"
            android:enabled="false">
            <meta-data
                android:name="android.service.wallpaper"
                android:resource="@xml/watch_face" />
            <meta-data
                android:name="com.google.android.wearable.watchface.preview"
                android:resource="@drawable/preview_rectangular" />
            <meta-data
                android:name="com.google.android.wearable.watchface.preview_circular"
                android:resource="@drawable/preview_circular" />
            <intent-filter>
                <action android:name="android.service.wallpaper.WallpaperService" />
                <category android:name="com.google.android.wearable.watchface.category.WATCH_FACE" />
            </intent-filter>
        </service>

        <service
            android:name=".watchface.GLESWatchFaceService$B"
            android:label="@string/watchface_label_gles_b"
            android:permission="android.permission.BIND_WALLPAPER"
            android:enabled="false">
            <meta-data
                android:name="android.service.wallpaper"
                android:resource="@xml/watch_face" />
            <meta-data
                android:name="com.google.android.wearable.watchface.preview"
                android:resource="@drawable/preview_rectangular" />
            <meta-data
                android:name="com.google.android.wearable.watchface.preview_circular"
                android:resource="@drawable/preview_circular" />
            <intent-filter>
                <action android:name="android.service.wallpaper.WallpaperService" />
                <category android:name="com.google.android.wearable.watchface.category.WATCH_FACE" />
            </intent-filter>
        </service>
    </application>
</manifest>
Peter Lang’s ‘Solar Realities’ paper and its associated discussion thread have generated an enormous amount of interest on BraveNewClimate (435 comments to date). Peter and I have greatly appreciated the feedback (although not always agreed with the critiques!), and this has led Peter to prepare: (a) an updated version of ‘Solar Realities’ (download the updated v2 PDF here) and (b) a response paper (download PDF here). Below I reproduce the response, and also include Peter’s sketched analysis of the scale/cost of the electricity transmission infrastructure (PDF here).

———————————————–

Comparison of capital cost of nuclear and solar power

By Peter Lang

(Peter is a retired geologist and engineer with 40 years’ experience on a wide range of energy projects throughout the world, including managing energy R&D and providing policy advice for government and opposition. His experience includes coal, oil, gas, hydro, geothermal, nuclear power plants, nuclear waste disposal, and a wide range of energy end-use management projects.)

Introduction

This paper compares the capital cost of three electricity generation technologies based on a simple analysis. The comparison is on the basis that the technologies can supply the National Electricity Market (NEM) demand without fossil fuel back-up. The NEM demand in winter 2007 was:

20 GW base load power;
33 GW peak power (at 6:30 pm);
25 GW average power; and
600 GWh energy per day (450 GWh between 3 pm and 9 am).

The three technologies compared are:

1. Nuclear power;
2. Solar photo-voltaic with energy storage; and
3. Solar thermal with energy storage.

(Solar thermal technologies that can meet this demand do not exist yet. Solar thermal is still in the early stages of development and demonstration. On the technology life cycle, solar thermal is before “bleeding edge” – refer: http://en.wikipedia.org/wiki/Technology_lifecycle)

This paper is an extension of the paper “Solar Power Realities”.
That paper provides information that is essential for understanding this paper. The estimates are ‘ball-park’ and intended to provide a ranking of the technologies rather than exact costs. The estimates should be considered as +/- 50%.

Nuclear Power

25 GW @ $4 billion /GW = $100 billion (The settled-down cost of nuclear may be 25% to 50% of this figure if we reach consensus that we need to cut emissions from electricity to near zero as quickly as practicable.)

8 GW pumped hydro storage @ $2.5 billion /GW = $20 billion

Total capital cost = $120 billion

Australia already has about 2 GW of pumped-hydro storage so we would need an additional 6 GW to meet this requirement. If sufficient pumped hydro storage sites are not available we can use an additional 8 GW of nuclear or chemical storage (e.g. sodium sulphur batteries). The additional 8 GW of nuclear would increase the cost by $12 billion to $132 billion (the cost of the extra 8 GW of nuclear less the cost of 8 GW of pumped hydro storage; i.e. $32 billion – $20 billion).

Solar PV

Capital cost of PV system with 30 days of pumped-hydro storage = $2,800 billion. (In reality, we do not have sites available for even 1 day of pumped hydro storage.)

Capital cost of PV system with 5 days of sodium sulphur battery storage = $4,600 billion.

Solar Thermal

The system must be able to supply the power to meet demand at all times, even during long periods of overcast conditions. We must design for the worst conditions. We’ll consider two worst-case scenarios:

1. All power stations are under cloud at the same time for 3 days.
2. At all times between 9 am and 3 pm at least one power station, somewhere, has direct sunlight, but all other power stations are under cloud.

Assumptions: The average capacity factor for all the power stations when under cloud for 3 days is 1.56% (to be consistent with the PV analysis in “Solar Power Realities”; refer to Figure 7 and the table on page 10).
The capacity factor in midwinter, when not under cloud, is 15% (refer Figure 7 in “Solar Power Realities”). But the clouds move, so all the power stations need this generating capacity. To maximise the probability that at least one power station is in the sun we need many power stations spread over a large geographic area. If we have, say, 20 power stations spread across south-east South Australia, Victoria, NSW and southern Queensland, we would need 3,300 GW – assuming only the power station in the sun is generating. If we want redundancy for the power station in the sun, we’d need to double the 3,300 GW to 6,600 GW.

Of course the power stations under cloud will also contribute. Let’s say they are generating at 1.56% capacity factor. Without going through the calculations we can see the capacity required will be between the 1,600 GW calculated for Scenario 1 and the 3,300 GW calculated here. However, it is a relatively small reduction (CF 3% / 60% = 5% reduction), so I have ignored it in this simple analysis.

So, Scenario 2 requires 450,000 MWh storage and 3,300 GW generating capacity. It also requires a very much greater transmission capacity, but we’ll ignore that for now.

This would be the cost if the sun were always shining brightly on all the solar power stations. This is about five times the cost of nuclear. However, that is not all. This system may have an economic life expectancy of perhaps 30 years. So it will need to be replaced at least once during the life of a nuclear plant. So the costs should be doubled to have a fair comparison with a nuclear plant.

In order to estimate the costs for Scenario 1 and Scenario 2 we need costs for power and for energy storage as separate items. The input data and the calculations are shown in the Appendix. The costs for the two scenarios (see Appendix for the calculations) are:

Summary of cost estimates for the options considered

The conclusion stated in the “Solar Power Realities” paper is confirmed.
The capital cost of solar power would be 20 times more than nuclear power to provide the NEM demand. Solar PV is the least cost of the solar options. The much greater investment in solar PV than in solar thermal worldwide corroborates this conclusion.

Some notes on cloud cover

A quick scan of the Bureau of Meteorology satellite images revealed the following: This link provides satellite views. A loop through the midday images for each day of June, July and August 2009 shows that much of south-east South Australia, Victoria, NSW and southern Queensland were cloud covered on June 1, 2, 21 and 25 to 28. July 3 to 6, 10, 11, 14, 16, 22 to 31 also had widespread cloud cover (26th was the worst), as did August 4, 9, 10, 21, 22. This was not a rigorous study.

Note that, although this table includes calculations for the cost of a system with 3 and 5 days of continuous operation at full power, the technology does not exist, and current evidence is that it is impracticable. The figure is used in this comparison, but is highly optimistic.

———————————————–

Eraring to Kemps Creek 500kV transmission line

Each of the double circuit 500kV lines from Eraring to Kemps Creek can carry 3,250 MW. The 500kV lines are double circuit, 3 phase, quad Orange, i.e. 2 circuits times 3 phases times 4 conductors per bundle, i.e. 24 wires per tower. Orange is ACSR, Aluminium Conductor Steel Reinforced, with 54 strands of 3.25 mm dia aluminium surrounding 7 strands of 3.25 mm dia steel. Roughly 1/3 of the cost of a line is in the wires, 1/3 in the steel towers and 1/3 in the easements required to run the line.

Capital Cost of Transmission for Renewable Energy

Following is a ‘ball park’ calculation of the cost of a trunk transmission system to support wind and solar farms spread across the continent and generating all our electricity. The idea of distributed renewable energy generators is that at least one region will be able to meet the total average demand (25 GW) at any time.
Applying the principle that ‘the wind is always blowing somewhere’ and ‘the sun will always be shining somewhere in the day time’, there will be times when all the power would be supplied by just one region – let’s call it the ‘Somewhere Region’. The scenario to be costed is as follows:

Wind power stations are located predominantly along the southern strip of Australia from Perth to Melbourne. Solar thermal power stations, each with their own on-site energy storage, are distributed throughout our deserts, mostly in the east-west band across the middle of the continent. All power (25 GW) must be able to be provided by any region.

We’ll base the costs on building a trunk transmission system from Perth to Sydney, with five north-south transmission lines linking from the solar thermal regions at around latitude 23 degrees. The Perth to Sydney trunk line is 4,000 km and the five north-south lines average 1,000 km each. Add 1,000 km to distribute to Adelaide, Melbourne, Brisbane. Total line length is 10,000 km. All lines must carry 25 GW. Each of the double circuit 500kV lines from Eraring Power Station to Kemps Creek can transmit 3,250 MW so let’s say we would need 8 parallel lines for 25 GW plus one extra as emergency spare.

322 Comments

I’m aware of two broad approaches to solar thermal. One involves the focusing of sunlight using mirrors or lenses. The other is the solar chimney, which relies on temperature differentials at the top and the bottom of a very large chimney and has little to do with direct sunlight (although obviously the sun drives the atmospherics). I don’t know the exact facts but I am led to believe that the latter is only modestly affected by cloud cover and in fact it continues to produce substantial amounts of power at night even without any dedicated storage infrastructure or using quite passive storage via water-filled containers.
Can you inform me as to which version of solar thermal you are referring to in this article?

You are correct. There are actually about four main categories of solar thermal. They are described in the NEEDS analysis, which is referenced in the “Solar Power Realities – Addendum” paper. The NEEDS analysis looks at the various options and selected the Solar Trough as the reference technology for detailed costings. They explain the reasons for the selection.

How well do the solar towers and other meteorological reactors compare with conventional factories for electrical energy production?

• By their description it is evident that Power Stations with Meteorological Reactors (Solar Chimneys and Energy Towers) will be very big electrical production units, which will produce a guaranteed Electric Power profile year round. Thus they are comparable to conventional Power Plants (that use coal, oil, gas or nuclear fuels) and thus can replace them. But as they are located in deserts or semi-desert areas, far away from consumption locations (big cities or industrial plants), they need very good interconnection of electricity grids and this is already being done progressively for all the other renewable energies: wind, sun, OTEC… (Have a look, for instance, at the Desertec concept on http://www.desertec.org). Solar thermal power plants have been in use commercially at Kramer Junction in California since 1985. New solar thermal power plants with a total capacity of more than 2000 MW are at the planning stage, under construction, or already in operation.

• Other Renewable Power Plants (wind, solar concentrator, solar PVs, et al.) only produce when weather and meteorological conditions are optimum (enough wind but not too strong; for PVs, sunny days with few clouds but no production during the night) and thus are only electrical energy production units of non-guaranteed power output, and cannot replace the conventional Power Plants. Solar chimneys can!
• Due to thermal storage, Solar Updraft Chimney Power Stations can operate 24 h per day, 365 days per year, with their daily energy production following the day’s average solar irradiation. The daily power production profile is very close to the usual demand profile, and an aperture (or closure) mechanism allows them to produce more (or less) at on-peak (or off-peak) consumption hours.

• Electric power cannot be stored up and saved. During the hours at night and on the weekends when demand for electric power decreases, regular fuel-consuming power companies actually lose money because they cannot just slow down or stop the generators during these times. It is not feasible because powering down the turbines and then getting them back up to speed during the peak hours, even if it could be done within eight hours, would be more costly than letting them run. On the contrary, heat can be stored up and saved in special water-containing reservoirs or tanks under the greenhouse of the solar chimney power plants, and electrical output can be adapted to peak power demand.

• The only other renewable Power Plant having a similar behaviour to a Meteorological Reactor Power Plant is the Hydro Electric Power Plant. Their similarity is far deeper, as water can be stored upstream and used for on-peak demand. Water can also be stored in a second reservoir downstream, and pumped back upstream when electricity from nuclear plants is much cheaper (off-peak demand). Conversion yield is good.

• The optimum range of Power rating for the Solar Chimney Power Stations, due to the high dimensions, is 50 MW (Ciudad Real project in Spain), 200 MW (Buronga, New South Wales project in Australia), and 400 MW (GreenTower South African project in the Namib desert, Namibia). This range of Power (50 – 400 MW) also seems to be optimum for Floating Solar Chimneys and Energy Towers.
• For the appropriate places of installation these Meteorological Reactor Power Stations can annually produce electrical energy respectively from 150 GWh to 600 GWh.

The material in your post #4 appears to be copied from a promotion brochure. I’d suggest you study the NEEDS report as a first step. Then you’ll be in a better position to consider all the options. Of course, you’d also need to get a good understanding of the nuclear option, because that is the least-cost option by a long way.

An option with no new transmission might be thin-film PV with local storage, either a fridge-sized lead acid battery at home or sodium sulphur at substations. If dollar-a-watt predictions are true an average house roof could generate in the expected daily range 10 – 50 kWh for $50k and 20 kWh local storage might cost $5k. The household would have to carefully manage their winter needs, perhaps using fuel heating. Assuming we’re headed to 10 million households that’s $550 billion, still more expensive than 25 GW nuclear at $5 a watt. The underlying factor is not the need for storage so much as the need to greatly overbuild for winter generation.

“This system may have an economic life expectancy of perhaps 30 years. So it will need to be replaced at least once during the life of a nuclear plant. So the costs should be doubled to have a fair comparison with a nuclear plant.”

It is not linear. To make sense, you have to discount future costs/revenues – in particular here, revenues – to reflect interest. So years 30-60 of a nuclear reactor’s life are worth far less than years 0-30 – it is not double the economic value. See for instance table 6.D in the MIT ‘update on the cost of nuclear power’ working paper, for a stark illustration of what this financial effect does:

You would be absolutely correct if the comparison were being done on the basis of Levelised Cost of Electricity (LCOE). But the comparisons are simple; and are of just the capital costs.
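The discounting point can be made concrete in a couple of lines. This is a sketch only: the 8% real discount rate is an illustrative assumption, not a figure from either paper or the MIT study.

```python
# Present value of rebuilding a 30-year-life solar plant at year 30,
# compared with naively doubling the capital cost.
# The 8% discount rate is an illustrative assumption.

def pv_factor(rate, years):
    """Discount factor for a cash flow `years` from now."""
    return 1.0 / (1.0 + rate) ** years

replacement_share = pv_factor(0.08, 30)
print(round(replacement_share, 3))  # ~0.099
```

In NPV terms, the year-30 rebuild adds roughly 10% to the capital cost rather than 100% — which is why a straight doubling only makes sense in an undiscounted capital-cost comparison of the kind Peter is doing.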
By the way, although the paper mentions the need to double the capital cost to take into account the shorter life of the solar power station, this extra cost is not included in the comparison. It would need to be included in an LCOE analysis, as you quite rightly point out.

Credit Suisse published a pretty big study at the beginning of the year on the comparative costs of some of the likeliest alternatives. They mentioned the big factor for nuke was the level of regulatory compliance that would be imposed.

“We estimate the costs of nuclear power to be $61.87 per MWh. Capital costs per kW are difficult to come by, but recent data from the Keystone Center estimates a capital cost in the range of $2,950 to $4,000 per kW (2007), and FPL estimates a cost of $8,000 per kW for its Turkey Point project. Therefore, we assume $6,000/kW in our base case. We note, however, that if capital costs are on the low end of our estimates, the LCOE of power is only $35/MWh, which would be the lowest cost energy available. Any new nuclear plant would likely be built far from the energy demand, therefore transmission infrastructure investment would likely be required. The significant benefit of nuclear power is that there are no carbon emissions and the power is highly reliable, suitable for base load generation. The WACC of nuclear projects tends to be lower due to the high debt capital structure and loan collateral – utilities would not proceed with a nuclear build-out without federal loan guarantees. Nuclear power often appears to be the easy solution to growing energy demands and climate concerns, but the public opposition is a serious obstacle. As better options are developed for safe storage or reprocessing of used rods, we believe we will eventually start to see new nuclear power plants.”

jc, as I understand it, the FPL costs for Turkey Point are higher because they are escalated costs, rather than overnight.
It is quite right, according to my reading and contacts, that regulatory uncertainty is the big issue right now for nuclear builds. As one respected contact said: “As for price predictions, it’s not that hard to predict what they should cost, since ABWRs have already been built [in Asia]. That completely disregards how much Americans are being told they’ll cost, due to the lack of assurance that once they’re ordered they’ll be allowed to be built without repeated construction shutdowns, etc. Until utility companies feel confident that they’ll be able to build them and get them online expeditiously, it cannot reasonably be said that the competitive model is fully operational when it comes to nuclear power, anywhere in the States. I’d be glad to let the market decide if the court system didn’t let every zealot with a sign shut down a multi-billion dollar construction project. And believe me, if you build it, they will come. How to get beyond that? I’m not a lawyer, so I don’t know if it would be legal, but if there was a way to fashion legislation to allow construction to continue even through pending litigation as long as the builder has all the permits in a row, then I believe you’d see a lot of plants start going up.”

Peter – thanks. I found the following in that document and it basically answers my question: “Due to the uncertain perspectives of this technology, the absence of a reference project, and therefore the lack of cost and material data the solar updraft tower is not considered furthermore in this study.” In short, your discussion of solar thermal excludes consideration of the solar updraft tower. In which case I find any conclusion that “solar is very expensive” to be quite unsurprising. Of course the solar updraft tower, if ever built on a commercial basis, may not change that conclusion, but I suspect it might (although I also have little doubt that coal and nuclear would still win on cost).
TerjeP, I should do it more justice than this (perhaps in the future), but briefly, the solar chimney (updraft tower) stuff is nonsense – 200 MWe yield from a tower that is taller than the Burj Dubai? You’ve got to be kidding me. It’s so utterly fantastic, it’s not even worth crunching the numbers on.

I have no doubt that such a power plant has a significant commercialisation risk and so the cost of capital for the first plant will be high. However this really only applies to the first plant, and beyond that the construction costs and operational performance become the main factors.

“I have no doubt that such a power plant has a significant commercialisation risk and so the cost of capital for the first plant will be high. However this really only applies to the first plant, and beyond that the construction costs and operational performance become the main factors.”

Yeah. Operational performance. That’s the whole point, isn’t it. 200 MW from a project that size is crap. There’s no point even discussing it.

JC @ 11 writes: “Any new nuclear plant would likely be built far from the energy demand, therefore transmission infrastructure investment would likely be required.”

There is no reason other than politics to site nuclear plants far from where the power is used. You can build PRISM reactors in the middle of a city. The reactors themselves are subterranean and the generation infrastructure could be likewise, or could reside in regular-looking industrial buildings. A power plant with several PRISMs and a recycling facility would appear no more conspicuous than a small to mid-size industrial park. Even with Gen II plants they were often sited close to the areas of demand. See Indian Point just upriver from New York City, Prairie Island just 40 miles from St. Paul/Minneapolis, and there are numerous other examples. This is a non-issue, all the more so with IFRs. It’s another tremendous advantage they have over wind and solar.
The ‘power in your basement’ option looks neat if you already have gas-fired central heating in your basement and it needs an upgrade. However not many people in Australia have or need central heating. The overall efficiency of the system seems to rely on the fact that a major by-product of electricity generation is heat. Within its niche (which could be quite big in Europe and North America) it seems like a clever bit of kit.

The location debate is important, because the cost is important. The lower we can make the cost of electricity, the faster low-emissions generation technologies will replace fossil fuels. Also, the lower the cost of electricity, the faster electricity will displace oil for land transport. Oil used in land transport represents about 1/3 of our emissions. Electricity may power land transport directly (e.g. batteries) or it may produce synthetic fuels (hydrogen or other possibilities). Either way, the lower the cost of electricity the better, for all reasons. So I do not want to see the nuclear power plants located far from the demand centres. I want them close.

Combined heat and power (CHP) could be added to steam cycle, gas turbine and combined cycle as gas generation options. However, since Gorgon LNG sale contracts recently have been $30bn + $50bn + $70bn, Australia might be lucky to have any gas left. I guess it helps pay for imported gadgets. Australia needs a long-term policy on gas priorities: for ammonia production, peak electrical generation, CNG as a petrol/diesel replacement, and domestic use including CHP should it become popular. LNG exports would be last priority. Given the green chic of Australia’s politicians, gas-fired generation will probably expand several times over before nuclear is considered.
From figure 10 in the paper the output of the 200MW solar updraft tower at 6:30pm in winter is about 50MW (the output is still pretty steady at that level through the night). As such we would need a lot of towers to meet a peak of 33GW. Number of Towers = 33000 / 50 = 165 The capital cost of each tower is estimated in Table 3 to be 0.606 billion Euro per tower. So total capital cost would be:- Total capital cost = 0.606 x 165 = 99.99 billion euro. Converting to Aussie Dollars we have a figure of about A$170. Given the size of each tower they would need to be situated in remote areas. So there would be additional costs associated with transmission. If we take Peters figure for Solar thermal transission then we need an extra A$180. So total capital cost of powering the NEM using only solar updraft towers is by my calculation around $350 billion. Which is about three times the price of nuclear as calculated by Peter but still a heck of a lot cheaper than the other solar options. TerjeP, reading over that document, it’s certainly a fascinating technology and worth looking at a bit harder than I’d first thought. The output of the Spanish prototype was tiny (50 kW peak), so it’s difficult to know how realistic their non-linear scaling estimates for taller towers are. The 50 kW tower yielded 44 MWh over the course of a year, which gives a capacity factor of 10%, which isn’t all that great — that means you’d need ~50 x 1 km tall (7km diameter at base) 200 MW peak towers to equate to a 1 GW nuclear power station. Their simulations (Fig 10) with water-based thermal storage look much better than this figure, so it’s a matter of how much credence you put in the technical data of the demonstration plant vs simulations of potential operational potential of larger plants. As to cost, my points above are relevant (depends on ultimate real-world performance) but also it’s difficult to cost-out anything like this when structures of this size have never been built. 
So I’ll reserve judgement, but will follow any developments of this alternative solar tech with interest.

Luke – I blame the envelope. Thanks for fixing my maths. Still, at $860 billion it is a lot cheaper than the other solar options. Barry – I pretty much agree with everything in your latest comment. It is a technology that is worth watching but it entails a lot of unknowns. In particular it depends on their simulations being correct. I would have thought, though, that the basic physics isn’t that complex, and there is a lot of experience in the scaling of aircraft aerodynamics and the like. Still, there is nothing quite like real-world data.

Peter, your figure on Tantangara/Blowering pumped storage of about $5 billion for 9,000MW is slightly higher than what I had been estimating, but I was considering mainly much shorter pipelines (for example, Blowering/Talbingo increased Tumut 3 capacity to 6,000MW). It would seem that expanding the Snowy pumped hydro to 15GW capacity and TAS hydro to 4.4GW (by adding 2GW of reversible turbines), for a total of 20.15GW including the other 0.75GW already in use, is a realistic storage option for nuclear and renewable energy.

Your study of transmission costs is disappointing. The theory behind the wind-blowing-somewhere idea IS NOT to have the entire wind capacity moved from one side of the continent to the other. For example, WA would have 20% of the wind capacity (SA, TAS, VIC and NSW about the same, with a small amount in QLD), so on the observation that wind dispersed over the size of a state will at most generate 75% of capacity, WA would only ever produce 15% of capacity (9GW not 25GW), and some of this would be used locally (3GW), so at most 6GW would be exported east (even less with CAES), and not to Sydney but to Pt Augusta, with perhaps another 1-2GW moved to Adelaide. Sydney and Melbourne would get most power from pumped storage (moving much shorter distances).
When high winds exist in NSW and VIC, energy would be returned to Snowy, with 2-3GW to WA (if no wind in WA, most unlikely considering the 2,000km of good wind coastline). Your statement that 10,000km would have to carry 25GW totally misunderstands how grids work. Feeder lines will only have the capacity of the solar and wind farms, and none of these would be anything like 25GW. The major transmission links would be Snowy to Sydney, Snowy to Melbourne, Melbourne to Tasmania and Pt Augusta to Perth. We already have a large grid in SE Australia, but it would have to be increased. OCGT/CCGT and nuclear will probably be sited at existing coal-fired power stations using existing transmission lines.

“The 50 kW tower yielded 44 MWh over the course of a year, which gives a capacity factor of 10%, which isn’t all that great — that means you’d need ~50 x 1 km tall (7km diameter at base) 200 MW peak towers to equate to a 1 GW nuclear power station.”

The two things that always distract people with this technology are the size of the thing and the low solar efficiency. However, neither matters that much. What matters in the final analysis is cost and the output profile. The only reason that solar efficiency is so important in PV is that the associated casing and mounting costs are such a big proportion of the final cost. A smaller cell for the same power output has less add-on costs. But of course PV has a lousy output profile. Moonlight just ain’t that bright.

Using fuel cells instead of an engine nearly doubles the fuel-to-electricity efficiency, and more than doubles the ratio of electricity output to heat. The heat output from the fuel cell system is a reasonable match to domestic hot water (not heating) needs, so it makes sense in most places, not just Northern Europe in winter.

TerjeP, I’m not talking about efficiency, I’m talking about capacity factor relative to peak performance. This is useful for working out redundancy and the number required to build for a given average delivery.
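The capacity-factor arithmetic in the quoted passage can be checked directly; a small sketch, using only the prototype figures quoted in the thread (50 kW peak, 44 MWh/year):

```python
# Capacity factor of the 50 kW Spanish prototype, from the thread's figures.
hours_per_year = 8760
peak_kw = 50
annual_kwh = 44_000                # 44 MWh delivered over a year
capacity_factor = annual_kwh / (peak_kw * hours_per_year)   # ~0.10

# Number of 200 MW peak towers needed to average 1 GW at that factor.
towers_for_1gw = 1000 / (200 * capacity_factor)             # ~50
print(round(capacity_factor, 2), round(towers_for_1gw))
```

This recovers both the ~10% capacity factor and the ~50-tower figure; the simulated plants with thermal storage would, if the simulations hold, do considerably better.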
As Peter Lang has so clearly pointed out, minimal capacity is also useful to know. Which, while factually correct, is irrelevant to your main point and seemed to stand out as a veiled criticism. Perhaps I was still feeling a bit prickly due to some earlier comments made here. I did understand your main point and I do agree. Capacity factor is in fact the thing that makes me think the solar updraft tower would probably be superior to the alternative solar options.

TerjeP, my broader point was that 50 of these structures, equating to 1 gigawatt average capacity, would have a footprint on the landscape of ~2,000 km2. In addition, 50 x 1 km high spires would also pose a potential aviation hazard. The point is not that these shouldn’t or can’t be built, but it does illustrate the size of the engineering challenge (even if it is, fundamentally, just glass and steel).

The land cost issue does not seem to be overly significant. And the glass canopy would be several metres off the ground, so you could grow food on the land also. Obviously it is going to be a windy place to farm, but low-profile plants are not going to care and the wind speed would be tolerable. Essentially it is a big warm, wet and windy glasshouse that you can drive around in on a tractor. I can’t see the towers being a problem for aviation. Their location will be well mapped. And they can be lit at night. And they would be fat things that are hard not to see. I doubt the aviation issue is a challenge. Whether people like the look of them is an aesthetic issue that is hard to answer. However, nuclear has aesthetic issues also, relating to how people feel about nuclear. Personally I like big man-made structures. I’ve always quite liked the look of high-voltage transmission lines. I suspect that people would like them as much or as little as they like wind farms. However, solar updraft towers wouldn’t hog prime coastal locations in the way wind farms do.
I’d say let’s build one just to satisfy my aesthetic tastes and then go nuclear for the rest of our electricity needs.

I don’t really have a problem with the land or air footprint of solar updraft towers when these exist on low-value land. I don’t imagine too many aircraft will be flying low over the desert, and if they are, an installation that size will stick out like the proverbial [fill in your metaphor]. Build them 2km high for all I care, assuming it is cost-beneficial and technically feasible to do so. The real problem is the cost, both of construction and of connection to the grid. If current nuclear is about $3,000 per installed kW, then $300bn worth of non-nuclear needs to get you about 100GW of output of similar quality to the nuclear to break even. OK, you can throw in some allowance for higher running costs (labour, site management, uranium/thorium, public liability), but even so, if it only gets you 5% of that it’s not really in the game.

Re nuclear aesthetics, I like the low-angle aerial shots of the peloton in the Tour de France passing by a reactor. The overall impression is of health and harmony. On the other hand, coal stations have tar, heavy metals, uncontained radioactivity, smoke and smells. They are the Dark Satanic Mills of the modern era.

If, for example, you have a small group of agricultural villages not connected to a grid but which could benefit from solar panels, an anaerobic digester, and perhaps a small-scale 200kW wind turbine with a you-beaut DIY pumped hydro, it could be built for not very much and in not very long. Maybe the whole thing could cost $200k or less. A nuclear plant isn’t going to scale down to that setting very well, and it’s not as if you could build one in three months either, let alone connect it to a reliable grid most of the time.

@Terje that was a better image than I could find. I gather there are several nuclear power stations in France’s Loire Valley, which prides itself on fine food and wine.
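The break-even arithmetic above can be made explicit; a minimal sketch, assuming the thread’s $3,000 per installed kW figure for nuclear (a rough benchmark, not a vendor quote):

```python
# Break-even check: how much capacity does $300bn buy at $3,000/kW?
nuclear_cost_per_kw = 3_000                 # USD per installed kW (thread's figure)
budget = 300e9                              # $300bn spend
gw = budget / nuclear_cost_per_kw / 1e6     # convert kW to GW -> 100 GW

# If an alternative delivered only 5% of that capacity for the same money,
# its implied cost per installed kW is 20x higher.
alt_gw = 0.05 * gw                          # 5 GW
alt_cost_per_kw = budget / (alt_gw * 1e6)   # $60,000 per kW
print(gw, alt_cost_per_kw)
```

The comparison ignores capacity factor and running costs, as the comment itself notes, but it shows why an alternative delivering a twentieth of the capacity per dollar “isn’t really in the game”.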
I note the use of cooling towers despite abundant river water for direct heat exchange.

An excellent analysis of why solar can’t possibly power civilization. If only we had the water and geologic formations to make pumped-storage hydro dams and wash the solar panels every 10-20 days. It’s really wind that has proven to be more efficient and cost-effective, but even wind isn’t where it needs to be. If you’re looking for a compact, timely read that completely summarizes and explains the energy issues the world faces, you may be interested in my new book “The Nuclear Economy,” which just became available. All of the alternative energies are discussed, as well as peak oil, climate change, energy transitions, and 4th generation nuclear power.

Look up the “Potential for Building Integrated Photovoltaics” report. The IEA estimated that half of Australia’s electricity needs could be provided by 10% efficient building-mounted PV, i.e. you could provide a significant fraction of Australia’s electricity with zero land-use impact. If PV doesn’t come down to a competitive price, the 50% penetration argument is moot, let alone the limit-position argument.

Starting with coal, Terje’s pic shows all that is wrong with coal: huge emissions – specifically in the pic, heat – being allowed without consequence. We all know about the toxic emissions and the ash. Why are these incredibly indolent corporations allowed to waste so much heat, and why is it easier to pass the costs on to the customer than use CHP and/or Rankine cycle energy recovery? How is it that these corporations can threaten to close down or go offshore rather than spend money on plant which will save them money and reduce emissions?

PV costs are usually taken over 15 years, which is nonsense because the cells alone are guaranteed for 25 years. The ongoing costs for solar are minimal, whereas nuclear requires all sorts of ongoing costs for mining, enrichment, reprocessing, waste storage, decommissioning and insurance.
Nowhere have I seen a reliable assessment. How can you plan without one? I am definitely in favour of solar and definitely in favour of nuclear over coal, but most of the cost analyses I have seen so far on nuclear are people pushing their barrow, with rubbery figures being bent to the max.

What you need to be doing is pinning the government down to an energy plan. Obviously they haven’t got one and they need to be seriously embarrassed by this; Australia’s energy security, etc. If you can force them into one, then you can make submissions, influence policy etc. My opinion is that both the major parties are drunk on coal and fully intend to obfuscate its problems with crap like CCS and huge handouts and weak targets. They rightly reason that solar power and storage technologies will evolve enormously over the next 20 years. If they can suck the public into going with coal for a bit longer while building a lot of renewables to deal with the extra load of electrification of transport, some other mug government can deal with nuclear power. They don’t care about nuclear. There’s more money in coal.

Rather than a campaign for nuclear, we need a campaign against coal. Instead of always defending nuclear against ignorance, we should be attacking coal for greed, indolence, energy wastage, environmental vandalism, acid rain, mercury in our food, government handouts without accountability, fugitive methane emissions, and medical problems. Expose the true cost of doing business with coal and get them to pay for it.

Thank you for a fascinating and sobering series of articles. You, Peter, and Ted have persuaded me that renewables can’t supply the current (never mind BAU-projected increases in) energy requirements of the developed world on their own without vast and unrealistic expenditure of money, time and effort. The numbers seem pretty clear.
I’m sure that when recognition of the CO2 and energy supply problems reaches a critical mass, and the political will and money start to flow on the required scale, economic forces will do the rest and the nuclear option will indeed be widely deployed. Our current society functions on the basis of large amounts of instantly available energy, and without a major and disruptive reshaping of the way we live (which, incidentally, is what most greens seem to want, and may go some way to explaining their attitude to nuclear power), sources of power with high energy densities are going to be necessary.

But I’m a little uncomfortable with the impression I often get from reading this site: that nuclear power is the only viable FF alternative and that it should be pursued vigorously and as soon as possible, to the exclusion of all other options (and wind/solar in particular). Many articles and discussions seem to circle around this idea. As a layman, it’s difficult to know what to make of it; that viewpoint may well be true, but for me there are too many unknown unknowns. How about a broadening of the discussion to consider other pertinent issues? Otherwise, this blog risks becoming a nuclear advocacy site with an occasional bit of climate science commentary thrown in. These are the sorts of questions I have in mind (apologies if they’ve been discussed previously on the site, but not much is showing up with a basic search):

What about the other potentially non- (or low-) CO2-emitting high energy density option on the table, with a few hundred years left in it: coal with CCS? What role can gas play in reducing CO2 emissions, at least in the short term while we transition to nukes? What about Ted Trainer’s idea of ‘depowering society’ to the extent that renewables can meet energy demand? (I can see many problems with this, but would love to see a critique on the site.
More generally, articles exploring the demand side of the problem seem to be thin on the ground.) Accepting that renewables can’t supply the developed world’s energy needs in their entirety, do they have a role at all (in smaller isolated communities, in the developing world, etc)? How do smart grids work, and how much can be done with transmission systems/distributed storage/demand management etc to increase the number of viable options on the table?

Campaigning against coal is basically campaigning against ourselves every time we turn on a light or use an appliance. There really isn’t much point in agitating against coal. We need to stop pointing fingers at people suggesting they’re somehow evil and fast-track a move to nuke energy. It would give us the immense energy supply we’ll need going forward in a clean, cheap, reliable way. Renewables could be part of the suite, as that should ultimately depend on the market. However, one thing is certain going forward: we need immense supplies of energy, and nuke power is able to fulfill our needs.

Jc, I don’t agree with your reasoning. We are all pretty much stuck with our shonky supermarket duopoly, but campaigning against their poor pricing behaviour helps to keep them less shonky and inspires people to look for alternatives. On that subject, why is it easier for them to pass on to consumers the extra costs of their refrigeration than it is to put doors on like they do with the freezers? The average coal power station is only 35% efficient; combined heat and power is up to 90%. CHP would more than halve emissions or more than double coal’s power output, but nothing’s going to make them use it. As I said, the government is comfortable with coal, and the general population is more comfortable with coal than with nuclear, but they don’t know how bloody evil coal is!

My position is that first and foremost we need to power down and depopulate.
Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans. That’s my kids and grandkids we’re handing a miserable existence to.

I am one of the most extreme and radical advocates of the natural environment you’re ever likely to meet. I advocate the return of most of the land and sea currently devoted to agriculture and aquaculture/fishing to managed wilderness. I advocate a sharp sequestration of the majority of the natural ecosystem of this planet from casual human influence. I advocate devoting a considerable portion of economic output to the task of ensuring a flourishing biosphere under the management of humanity. Recognising that these noble goals can only be met by a civilisation with a vastly expanded resource base, I advocate a crash program of research into and implementation of nuclear power technology, genetically modified foodstuffs, artificial food, complete enclosed self-sustaining artificial environments, large-scale geoengineering, and space colonisation. Population reduction and powerdown, even if such counter-instinctual goals could be achieved (at whatever cost of despair), would leave us helpless to prevent the drift of the climate system and biosphere into whichever state it will evolve towards, given the damage already done. Turning our backs on the situation and committing racial suicide will not help.

finrod #55, that was a pretty stupid comment. Try again in the morning when you’re sober. You got the number wrong for a start; then “sooner stick with burning coal” than what?; and “evil bastard”, besides being completely untrue, what does that sort of offensiveness achieve except to diminish yourself?
SG, I’m sure it must come as an awful shock to you that after you’ve posted the carefully crafted thoughts you’ve been inspired to by pseudo-environmentalist literature, every word of it ringing with the guilt-laden mindset of our less secular ancestors, anyone would have the temerity to challenge your conclusion that we wicked humans had better depart the stage of natural history or else… or at least draw ourselves closer to the passive environmental role of other animals. This is what your path amounts to, and it will indeed lead to racial suicide if followed. Suicide, and ecocide by neglect, as we will have cast away any ability to actively influence the course of climatic events.

Matt #53: “But I’m a little uncomfortable with the impression I often get from reading this site- that nuclear power is the only viable FF alternative and that it should be pursued vigorously and as soon as possible, to the exclusion of all other options (and wind/solar in particular).”

It is my conclusion, from all of this, that nuclear power IS the only viable FF alternative. I am vitally interested in supporting real solutions that permit a rapid transition away from fossil fuels, especially coal (oil will, at least in part, take care of itself). If the conclusion is that wind/solar cannot meaningfully facilitate this transition, why bother to promote them? Now, I should make one thing quite clear. I am not AGAINST renewable energy. If folks want to build them, go for it! If they can find investors, great! Indeed, I’m no NIMBY, and would be happy to have a conga line of huge turbines gracing the hills behind my home, just as I’d be happy to have a brand spanking new nuclear power station in my suburb. But why should I promote something I have come to consider — on a scientific and economic basis — to be a non-solution to the energy and climate crisis? That doesn’t make sense to me.

To your questions: 1. Coal with CCS — doomed to failure. Why?
Because the only thing that is going to be embraced with sufficient vigour, on a global scale, is an energy technology that has the favourable characteristics of coal but is cheaper than coal. CCS, by virtue of the fact that it is coal + extra costs (capture, compression, sequestration), axiomatically fails this litmus test. It is therefore of no interest, and those who promote it can only do so on the basis of simultaneously promoting such a large carbon price that (a) the developing world is highly unlikely to ever impose it, and (b) if they do, CCS won’t be competitive with nuclear. CCS is a non-solution to the climate and energy crises.

2. Natural gas has no role in baseload generation. It is a high-carbon fossil fuel that releases 500 to 700 kg of CO2 per MWh. If it is used in peaking power only (say at 10% capacity factor), then it is only a tiny piece in the puzzle, because we must displace the coal. If it is used to displace the coal baseload, then it is a counterproductive ‘solution’ because it is still high carbon (despite what the Romms of this world will have you believe) and is in shorter supply than coal anyway. Gas is a non-solution to the climate and energy crises.

3. The developing world lives in Trainer’s power-down society already, and they are going to do everything possible to get the hell out of it. The developed world will fight tooth and nail, and will burn the planet to a soot-laden crisp, rather than embrace Trainer’s simpler way. Power down is a non-solution to the climate and energy crises.

4. It is nice to imagine that renewables will have a niche role in the future. But actually, will they? They don’t have any meaningful role now, when pitted in competition with fossil fuels, so why will that be different when pitted fairly against a nuclear-powered world?
I don’t know the answer, and I don’t frankly care, because even if renewable energy can manage to maintain various niche energy supply roles in the future, it won’t meet most of the current or future power demand. So niche applications or not, renewables are peripheral to the big picture because they are a non-solution to the climate and energy crises. 5. Smart grids will provide better energy supply and demand management. Fine, great, that will help irrespective of what source the energy comes from (nuclear, gas, coal, renewables, whatever). Smarter grids are inevitable and welcome. But they are not some white knight that will miraculously allow renewable energy to achieve any significant penetration into meeting world energy demand in the future. Smart grids are sensible, but they are not a solution to the climate and energy crises. To some, the above may sound rather dogmatic. To me, it’s the emergent property of trying my damnedest to be ruthlessly pragmatic about the energy problem. I have no barrow to push, I don’t get anything out of it — other than I want this problem fixed. I don’t earn a red cent if nuclear turns out be the primary solution. I don’t win by renewables failing. The bottom line is this — if this website is looking more and more like a nuclear advocacy site, then you ought to consider why. It might just be because I’ve come to the conclusion that nuclear power is the only realistic solution to this problem, and that’s why I’m ever more stridently advocating it. This is a ‘game’ we cannot afford to lose, and the longer we dither about with ultimately worthless solutions, the closer we come to endgame, with no pawn left to move to the back row and Queen. Jc, I don’t agree with your reasoning. We are all pretty much stuck with our shonky supermarket duopoly but campaigning against their poor pricing behaviour helps to keep them less shonky and inspires people to look for alternatives. 
Salient: Not to digress, but they aren’t making super-profits, as you assert. Coles sold itself because it wasn’t profitable, while Woolies is, though not spectacularly so. The competition watchdog looked into pricing, competition etc and found nothing alarming in the last inquiry. Negative aspects, according to the inquiry, are more about “nimbyism” and town planning laws stifling competition. In other words, things aren’t always as they appear.

“On that subject, why is it easier for them to pass on to consumers the extra costs of their refrigeration than it is to put doors on like they do with the freezers?”

Dunno. Perhaps it’s to do with attempting to provide a good customer experience as they see it. Windows and doors etc are really quite visually obstructive, I think.

“The average coal power station is only 35% efficient, Combined Heat and Power is up to 90%. CHP would more than halve emissions or more than double coal’s power output but nothing’s going to make them use it.”

I’m not sure that is as it appears. If you’re telling me that they could improve their efficiency with a straight-to-the-bottom-line positive hit of 35% and haven’t moved on it, then they are really dumb. I don’t believe Origin, AGL or the other operators are dumb, so there must be more to it. Don’t forget that you may get 35% more efficiency, but you also need to figure out if the renovation strategy is cost-effective and accretive to the bottom line. In other words, you don’t want to be spending (magnified example) $1 billion for a $3.5 million gain, as the return wouldn’t make it economic. You need to figure the cost of capital and the expected return. Potential “engineering efficiency” doesn’t always mean it would be profitable. In other words, don’t confuse “engineering efficiency” with “economic efficiency”; they are two different things, or rather may not arrive at the same conclusion.
“As I said, the government is comfortable with coal, and the general population is more comfortable with coal than with nuclear but they don’t know how bloody evil coal is!”

Polls don’t show that. Polls show people’s heightened concerns about AGW. You shouldn’t think of coal as “evil”. It’s given us a great deal of economic utility and provided us with an industrial civilization. What we realize now is that it comes with a cost, and the cost is that it’s increasingly likely to be screwing up the atmosphere, especially with giga-countries moving towards joining the rich world. This means we need to get loads of energy from elsewhere, and nuke power is increasingly likely the best alternative. Perhaps it isn’t; however, it should be in the suite of alternatives so the markets can determine the optimum choice or choices.

“My position is that first and foremost we need to power down and depopulate.”

That will come, possibly mid-century. China’s population, for instance, is a demographic time bomb, or rather a good thing in your eyes. Chinese demographics show that by mid-century China’s population will fall off a cliff, literally fall off a cliff, and become a nation of old geezers, and by 2100 it could be half what it is now. We also find the rich world’s population heading in the same direction.

“Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans. That’s my kids and grandkids we’re handing a miserable existence to.”

Why take such a stasist view of things, though? The technology curve is actually curving upward exponentially. The world will be an entirely different place in 50 years’ time. In 100 years, technology could make it unrecognizable, and if the tech curve continues, which it seems to be doing, the world of 2100 will compare to today the way today compares to 1800. There’s no reason to be so pessimistic.
Have you seen recent films of car shows around the world? Large numbers of electric cars or hybrids are making their way into the market very soon. GM recently introduced a demo hybrid that can do 230 miles a gallon. Take stock of things, as there’s no reason to be so pessimistic. We’ll get there in the end. Humans tend to bumble around but we generally end up making OK-ish decisions most times.

One thing worth noting in digging through Peter’s numbers is that even if we invented a technology that could store significant amounts of electrical energy at zero cost, it wouldn’t on the face of it change the conclusion.

Barry #61: Great summary. I haven’t been contributing much as this blog has become deeper and more of an engineering than a science blog (not that there is a real distinction between the two), but what is increasingly obvious is the HUGE gap between the level of detail on this blog and the level of detail in mainstream media. Politicians, media and green groups are still stuck trading cliches. Hopefully, there are channels of communication that will enable the detail of this blog to get through to the people who advise politicians … which hopefully includes you. Politicians need to actually lead and not make poll-driven policy because, particularly in Australia, poll-driven policy on energy sources will be simply wrong.

The business lobby is going in to bat for nuclear power, e.g. http://www.news.com.au/adelaidenow/story/0,27574,26060433-2682,00.html and their logic seems sound. However, they muddy the waters by seeing nuclear as an agent of economic growth associated with increased population and consumption. Few high-profile groups seem to be saying ‘let’s have nuclear power and a steady-state economy’. I think the reality in the next few years is that it will be difficult to hold the line on the economy, let alone grow it. The temptation will be to make do with existing coal plant and sneak in a few more mid-sized gas plants.
A gaggle of wind and solar installations will be put up, basically for show. Many in the public will content themselves with thinking we can adapt to AGW, or that renewables, carbon sinks or conservation will get us out of trouble. Until they lose their jobs, that is. Some kind of widely perceived crisis will be needed to instigate the first nuclear plant.

That would depend very much on where the power was. If the power in question was near a grid point, and the costs of the harvesting technology were low (wind is fairly cheap), then it would make wind or similar very competitive.

I think it’s helpful to be explicit about where you, and your blog, are coming from. Unfortunately, for those of us yet to fully work through the issues ourselves, an advocacy blog is less useful than a science commentary or ‘open discussion’ blog. But the numbers are what’s important, and I wouldn’t be surprised if I end up agreeing with most of your conclusions (though I would take issue with some of your assumptions about the developing world).

Jc #62: I haven’t time now to check this, but I’m pretty sure that CapEx for efficiency improvements under current corporate culture must show a payback in under 10 years. A very short-term view, IMHO. This would have to change now if the govt’s committed to coal for another 30 years or more. I agree with you on the historical benefits of coal, and I’m sure the world could live with a few highly efficient CHP stations, but I have no trouble demonising coal as it is currently being used.

You said: “That would depend very much on where the power was. If the power in question was near a grid point, and the costs of the harvesting technology were low (wind is fairly cheap) then it would make wind or similar very competitive.” This statement is totally wrong. Wind is nowhere near competitive even if transmission were free. Wind provides low-value electricity at very high cost. It is low value because it is highly variable and not controllable.
Consider this question: what price do you think a utility would be prepared to pay for wind power if it had the option to buy coal-fired power for $35/MWh instead? Would it be prepared to pay $10/MWh for wind power? The answer to the question ‘what would a buyer be prepared to pay for wind power in an open market?’ depends on many factors. One important one is the cost of the system enhancements needed to manage the intermittency of wind power on the network. This is a substantial cost.

You said “wind is fairly cheap”. Wind power is not cheap. It has to be mandated to force the distributors to buy it. If they do not buy enough, they pay a fine which is more than the cost of the power they were required by regulation to buy. Wind power is subsidised by more than twice its cost. Given that wind power saves very little in GHG emissions (refer to the “Wind emissions and costs” thread), I suspect wind power is actually very near to zero value. It may be negative if all the externalities were properly internalised.

SG @ 55: “My position is that first and foremost we need to power down and depopulate. Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans. That’s my kids and grandkids we’re handing a miserable existence to.”

And who, pray tell, is supposed to quit having kids to achieve the depopulation you promote? Don’t you see the blind irony in talking about your kids and grandkids in the same paragraph?

Here is a CSIRO study, “Greenhouse Gas Sequestration by Algae – Energy and Greenhouse Gas Life Cycle Studies” (http://www.csiro.au/files/files/poit.pdf), suggesting that production of biodiesel (and by inference, biomethane) might be competitive with the fossil equivalents. Elsewhere I’ve seen suggestions that raising energy crops to make biomethane ought to be cost-competitive.
The problem is, as I see it, providing enough fresh water. The reverse osmosis necessary to produce fresh water from sea water, also the pumping, needs only interruptible power; wind might do. Certainly worthy of further consideration. Peter – Fran was not discussing zero-cost transmission. She was responding to my comment regarding the impact of zero-cost electricity storage. As such I would not dismiss her comment too quickly. Obviously we will never have zero-cost electricity storage. However emerging technologies such as those that Eestor is rumoured to be working on are worth watching. Although probably more as a mobile energy store than as a stationary one. David – growing fuel diverts productive land away from growing food. A bad idea in my book. It’s true that the solar updraft tower I promoted earlier also takes a lot of land but it does not stop the land also being used for agriculture and neither does it have to sit on productive land. John – I can see arguments for zero or negative growth in our ecological footprint. However why you would set lower economic growth as an objective is beyond me. We should aim to both reduce our ecological footprint and increase economic growth. Jc #62 I haven’t time now to check this but I’m pretty sure that CapEx for efficiency improvements under current corporate culture must show a payback in under 10 years. Which is a 10% return. It would be interesting to see if this is a gross return before taking away expenses etc. Look, Salient, I’m very sceptical of stories like this that simply sound too good to be true, as they usually are. Put ourselves in a rational position. If you were the CEO of AGL or Origin and someone came to you and said they had an engineering method that could save 30% to the bottom line, why wouldn’t it be introduced, as a 30% accretion to the bottom line would be the equivalent of manna from heaven.
Tom Blees #70 C’mon Tom, get with the thread, I’ve already said back at #59 that without the baby bonus and immigration, we would be depopulating. No one HAS to “quit having kids”. People CHOOSE not to and we need to empower women in the developing world and bring them out of poverty so that they can CHOOSE to quit having kids also. As for your show of ignorance of my personal situation, it doesn’t do you any credit. I have one biological child and the other three come from my current wife’s previous marriage, something I had no say in but am happy to call them my kids. No blind irony there. Terje, I think to get ‘growth’ with reduced emissions we need a less materialistic measure of wellbeing than GDP. Essentially more stuff means more energy input. I don’t have the data handy but China’s boom circa 2002-2007 was accompanied by world-record coal use. Could they have done it without coal? In the near term we need to quickly replace coal and petroleum dependence with low-carbon alternatives. This is necessary even without climate change since oil output peaked in 2008 and coal will peak around 2030. Transport needs to be electrified, such as light rail and plug-in cars. All but two State capital cities will have desalination plants with a substantial power requirement. The ageing population will need extra thermal comfort to cope with severe cold snaps and heatwaves; see my link on another thread to ETSA’s prognosis. There will be regional food crises due to water problems and input costs. Thus we will need more energy to provide the goods and services we already take for granted. To do this I believe that personal mobility, electricity on demand and even our exotic diets will be compromised. In short, for most people things will get worse, not better. Jc #76, the problem I think is in the failure to account for future price rises due to a diminishing resource, something which they have contributed to greatly in wasting a lot of energy.
I am not an economist but there is probably a name for this. It’s the same wherever there is energy wasted that could be harvested. That wasted energy is contributing to an ever-rising price on a finite resource. I am sure it could never be an exact science, but corporations need to start thinking further ahead, by certain government incentives, and factor in reduced resource prices as part of the payback. There are millions of results for ‘CHP generating efficiency’; it’s not rocket science, so it’s just flawed accounting that has hindered uptake. I don’t want to labor the point. Look, if there were efficiency gains of 30% with straight bottom-line gains of the same or even less, any CEO would dive for it faster than the speed of light. Bottom-line earnings changes would immediately work through to the stock price and if these guys have stock options it would motivate them from a personal perspective. (And everyone has an IQ of 180 when it comes to money) :-) Here: Company X has a market capitalization of $7 billion, normalized on-going earnings of $500 million, trades on a price earnings multiple (PE) of 14 (average for the ASX 200 at present) and its stock price is $5.00. A direct potential 30% bottom-line improvement would have the following consequences (you could assume the PE stays the same as the 30% efficiency gains are recurring). Earnings rise to $650 million. A PE of 14 translates into a market cap rising to $9.1 billion and the stock price would rise to $6.50. This isn’t something even the stupidest CEO in the world would pass up. John: Terje I think to get ‘growth’ with reduced emissions we need a less materialistic measure of wellbeing than GDP. Why, John, as nuke power suggests we can have our yellow cake :-) and eat it. Essentially more stuff means more energy input. I don’t have the data handy but China’s boom circa 2002-2007 was accompanied by world record coal use. Could they have done it without coal? They could have but possibly not as cheaply.
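The constant-PE arithmetic in the worked example above can be sketched in a few lines. The figures are the illustrative ones from the comment, not real company data; the key point is that if the market keeps pricing the stock at the same earnings multiple, a recurring 30% lift in earnings lifts the share price by the same 30%.

```python
# Illustrative constant-PE valuation sketch (figures are hypothetical).
pe_multiple = 14.0    # price/earnings multiple, assumed to stay constant
price = 5.00          # current share price ($)
uplift = 0.30         # recurring 30% bottom-line improvement

eps = price / pe_multiple            # implied earnings per share
new_eps = eps * (1 + uplift)         # earnings rise 30%
new_price = new_eps * pe_multiple    # equals price * 1.30

print(f"${new_price:.2f}")           # $6.50
```

With the multiple held fixed, the multiplication and division by the PE cancel, which is why a 30% earnings gain maps directly onto a 30% share-price gain.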
The price of coal has moved from about US$10-15 a ton in the early part of the decade to about US$80-100 now. At one stage coal presented them with a compelling choice; however it doesn’t so much any more, which is why they are beginning to build reactors. John – you can have higher GDP without having more “stuff”. And if we recycle a greater proportion of what currently goes to landfill we can even have more stuff whilst reducing our ecological footprint. Especially so if energy is cheap and plentiful. GDP may not be the right measure for wellbeing but wanting to see GDP fall isn’t the right objective either. I’m afraid these comments aren’t particularly timely as I’ve not been keeping up to date. TerjeP: You started the thread by bringing up the subject of solar chimneys. If one is going to consider the principle underlying this approach to energy generation, do you not think that you may get more bang for your buck with atmospheric vortex engines than with vast chimneys? Neither approach can be considered in any way mature but the vortex engine, if it really worked on a large scale, would surely be cheaper to build. I also accept that neither technology is likely to be superior to the nuclear option. Finrod and Salient Green appear to be taking opposite extremes on the subject of population overshoot. Finrod is offering my grandchildren the prospect of confinement in controlled-environment cities or space colonies while Salient Green would seem to prefer them to live in Third World conditions with a probably less than 50% chance of reaching puberty. Neither prospect strikes me as particularly desirable. My own position, FWIW, is that we must strive towards a policy of zero population growth and, after that, a slow decline to half or less of our current levels.
However, the age profile of the world’s population is such that we cannot reach this goal quickly without a monumental increase in death rate (which it would be quite immoral to plan for but which might nevertheless happen if we don’t get our energy policy right). Without catastrophe, there is no way to stop population reaching 9 billion plus. This will require plentiful energy with high ERoEI. Given this, and more efficient use of such energy, it might even be possible for economic growth to continue and for the third world to catch up with the richer nations without the living standards of the populations of the latter having to diminish too far or at all. However, BAU is not an option. My huge concern is that many government spokesmen and economic commentators seem primarily focussed on economic growth while ignoring energy and climate constraints. Furthermore, some economists are encouraging higher birth rates or higher levels of immigration to counter the problems of ageing populations. It seems to me essential that rich societies find a way through the demographic transition without recourse to the production or import of more people. In the UK, we have a growing underclass of unemployable young who survive on welfare and rely on immigrants to do the work. In no way can this be deemed sustainable. I would be interested in the reactions of some of the self-professed left-wing commentators on this site to my remarks. I feel that left-wing governments are just as responsible for getting us into our current mess as are the multinational corporations that they love to hate. It is true that the former may have the more selfless motives. However, the road to hell is paved with good intentions. Are we compelled to act in the way we do because we are basically ruled by our animal drives, as are all other species, namely to perpetuate our genes in a selfish manner?
Alternatively, does the fact that we are unique in the animal kingdom in having consciousness allow us the possibility of an escape route from self-destruction? I guess we’ll soon find out. “Starting with coal, terje’s pic shows all that is wrong with coal, huge emissions – specifically in the pic, heat – being allowed without consequence. We all know about the toxic emissions and the ash. Why are these incredibly indolent corporations allowed to waste so much heat, and why is it easier to pass the costs on to the customer than use CHP and/or Rankine cycle energy recovery? How is it that these corporations can threaten to close down or go offshore rather than spend money on plant which will save them money and reduce emissions?” The second law of thermodynamics is not a mere suggestion, but cold harsh reality. The maximum efficiency at which a heat engine can operate is (Thot-Tcold)/Thot, where temperatures are absolute (e.g. Kelvin scale); if it were any other way you could build a perpetual motion machine that needed no fuel to produce infinite amounts of electricity once you got it started. Modern coal plants operate at ~40% efficiency using supercritical steam at ~820 kelvin under enormous pressure. If the rejected heat is at room temperature this particular coal plant could at most be ~63% efficient. Given that no one has invented a Carnot-cycle heat engine that is practical in the real world, this coal plant is pretty damn good at 40%. The steam you see billowing out of the cooling tower is not particularly hot. Water is sprayed into the cooling tower to evaporate and chill the cold side of the heat engine. In order to extract what little usable energy is left you would need a cold reservoir at or very close to room temperature capable of accepting ~2 GW of low-grade heat. This would be an enormous expense for very little gain. A CHP coal plant is problematic for all kinds of reasons.
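The ~63% Carnot limit quoted above is easy to verify. The sketch below uses the comment's figures of ~820 K supercritical steam on the hot side and an assumed ~300 K room temperature on the cold side:

```python
# Back-of-envelope check of the Carnot limit for a supercritical coal plant.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible to work; temperatures in kelvin."""
    return (t_hot_k - t_cold_k) / t_hot_k

eta_max = carnot_efficiency(820.0, 300.0)  # ~820 K steam, ~300 K room temp
print(f"Carnot limit: {eta_max:.1%}")       # ~63.4%, vs ~40% achieved in practice
```

This is why the remaining low-grade heat is so hard to exploit: the plant is already capturing a large fraction of what the second law allows.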
Firstly there’s the need to have a sufficient number of potential customers, which means the coal plant must be sited near a city, otherwise you end up throwing away nearly all the heat anyway. Secondly you will have to lower the efficiency of the coal plant because the cold reservoir of the CHP system is steam under significant pressure; since this is far hotter than the cooling tower you need to burn more coal to generate as much electricity. Thirdly, in most places demand is very irregular and most heat would still be rejected; quite a bit is needed in winter for space heating and only a little for water heating in summer. “The huge PV array announced for China is quoted at $3b/GW.” That’s outrageously expensive. The capacity factor for solar PV is ~20% for the very best places on the planet, compared to a typical capacity factor of 70% for coal and 90% for nuclear. 1 GW of solar produces as much power on average as ~280 MW of coal or ~220 MW of nuclear. Building 2 GW of solar in inner Mongolia also implies very long transmission lines, which you have carefully omitted from the $3b/GW cost estimate. The transmission problem is compounded by the fact that you’re only using these transmission lines very infrequently due to the intermittent nature of solar. If the suckage stopped here, it would be bad enough; but the project will not even be finished until 2019 (what was that about nuclear plants being too slow to construct to make a difference?) and if solar power is to ever replace baseload power you need to overbuild the system to deal with winter and weather as well as provide a significant storage system. That’s lovely dear, but it has no relevance whatsoever. China is building AP-1000 reactors at an expected cost of $1400/kW (and they expect it to drop) with Chinese labour and under a Chinese regulatory environment (which, unlike those of western countries, is not designed to deliberately add cost and risk to nuclear power).
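The capacity-factor comparison above is just nameplate power scaled by average utilisation. A short sketch, using the comment's figures (~20% solar, 70% coal, 90% nuclear):

```python
# Average output = nameplate capacity x capacity factor.
def average_output_mw(nameplate_mw: float, capacity_factor: float) -> float:
    return nameplate_mw * capacity_factor

solar_avg = average_output_mw(1000, 0.20)   # 1 GW solar -> 200 MW average

# Nameplate capacity of coal/nuclear needed to match that average output:
coal_equiv = solar_avg / 0.70               # ~286 MW of coal
nuclear_equiv = solar_avg / 0.90            # ~222 MW of nuclear

print(f"{coal_equiv:.0f} MW coal, {nuclear_equiv:.0f} MW nuclear")
```

This reproduces the "~280 MW coal or ~220 MW nuclear" equivalence quoted in the comment, and it is the same adjustment you would apply when comparing $/kW figures across technologies.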
“My position is that first and foremost we need to power down and depopulate. Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans.” This kind of casual evil is the worst kind. I bet you don’t even realize what kind of monster you are. Does anyone have reliable studies on hand to give good solid reasons why our current population is too high and much lower is optimum? I’m not referring to the commonly known ones such as we’re using up the world’s resources etc. Why is lower optimum? In lots of ways I think consumption is comparable to an instinct. One of the most effective ways to get around instincts is to cheat them. I’ll use an analogy to explain. People don’t have a baby drive per se, they have a sex drive. If you can give them an acceptable way to have lots of sex without having lots of babies, they’ll take it. If you just tell them to stop having sex, the sex drive will win out and all you’ll end up with is more babies. In a similar way people don’t have a CO2 emissions drive, they have a consumption drive. If you give them an acceptable way to consume lots of energy without emitting CO2, they’ll take it. If you just tell them to stop consuming energy, the consumption drive will win out and all you’ll end up with is more CO2 emissions. ;) ‘My position is that first and foremost we need to power down and depopulate.’ A lot of research has been done over the last 100 years on the latter problem. Many different technologies have been employed, at many different scales, under many different regulatory regimes and governance models. I can report that they were all completely successful. We know what depopulation looks like, and it’s not fun.
I agree with Salient Green though, we do need to depopulate, but under a powered-up condition, not powered-down, so that it can happen by choice and long-range planning, rather than being forced upon us through deeply unpleasant exigency. Soylent #84 it really is getting tiresome responding to dickheads who don’t read the thread properly. Re-read #78 for a response to your incredibly offensive, not to mention brainless and totally unexplained, response: “This kind of casual evil is the worst kind. I bet you don’t even realize what kind of monster you are”. This is typical of the hysterical, racist, growth-fetishist nonsense which always spews forth from those whose fortunes or religious beliefs hang from the obscene principle of ‘growth is good’. Clearly, I have stepped on some toes on this blog which is supposed to be about climate and energy but is being used by a few Cornucopians to further their delusional and destructive plans for continual ‘Growth’. As for the rest of your post, and I have already posted this, Google ‘CHP Energy Efficiency’ and you will see what can be achieved. Yes, siting is important, but the Europeans have always been ahead of us, and a lot smarter, and they are embracing CHP for future energy needs. My post on solar and nuclear costs was purely to demonstrate the incredible disparity in pricing. You completely missed the point that I am reservedly in favour of nuclear power but the costings are simply unbelievable. Your reporting of $1500/kW was completely unsubstantiated and only adds to the uncertainty of the costs of nuclear power. You are completely correct saying solar power in Mongolia will require significant infrastructure. Who said it wouldn’t? Nuclear power will also require considerable infrastructure. More. Much more. What I was saying is, let’s compare apples with apples, over the long term.
Can you put a price on Peace of Mind or Set and Forget, which seems to be a big part of the enormous expansion of Solar PV around the world? Jc asks for reliable studies on optimum population size and why lower is optimum. I would have thought that there is no definitive answer to optimum size. It will depend, to an extent, on individual perspectives. However, there must be an upper bound. Exponential growth is, by definition, unsustainable. My personal view as to why lower is better is based upon the very high proportion of net primary productivity that our species has co-opted to the detriment of other species. I find it depressing, for example, that the declining global population of wild dogs is only 5000. As a vet and gundog trainer, I can empathise with wild dogs. However, others may take a more anthropocentric point of view and not worry about other species unless their survival has importance for that of mankind. Finrod, on the other hand, appears to believe that we can both increase biodiversity and biomass of other species while maintaining or increasing our own numbers. I suppose, in theory and given unlimited cheap energy, this might be possible for a time. However, it is my personal view that our lives would then become so artificial as not to be worth living. In other words, there exists a range of views, none of which is necessarily wrong per se. Surely, however, most humans will wish to reproduce and it is imperative for our species’ survival that we live sustainably. It seems to me that it would be easier to achieve these goals with a stable population of less than 6 billion, and certainly less than 9 billion. It’s the transition period that will provide the real challenge. This may or may not prove surmountable. Douglas Wise #83 said “Salient Green would seem to prefer them to live in Third World conditions with a probably less than 50% chance of reaching puberty”. Are you sure you’re not Douglas Dumb?
How on earth did you arrive at that ridiculous conclusion? Our houses, transportation, businesses, industry and power generation waste huge amounts of energy. We can still have a modern society with far less energy wastage. The link below shows very clearly why economic growth is good, electricity is good, and therefore the cheaper electricity is the better for humanity. You can see, as an example, that the more electricity we use the lower is the infant mortality. Conclusion: if we want to reduce population growth (and save the planet) the more electricity we use the better, so the cheaper electricity is the better!!: I’m fairly thick-skinned and you didn’t unduly upset me. Nevertheless, thank you for your retraction in #92. I may have misrepresented your viewpoint when suggesting that you seemed to be advocating that my grandchildren live in Third World conditions with less than 50% chance of reaching puberty. However, you appear to believe that power-down and renewables will provide a sufficient solution. It might well be possible for rich nations to become much more efficient in their use of energy and allow their populations to sustain reasonable lifestyles if the balance of power remains as is. However, our energy is currently being gained at a higher price (falling ERoEI) and ERoEIs will fall further with peaking oil and coal and, certainly, with the introduction of renewables. Simultaneously, we are facing a growing population and competition from developing nations striving to bring their living standards closer to our own. I am writing as a UK citizen living on an overpopulated island with few and diminishing natural resources and governed by those who seem intent on exacerbating matters. I am sure your intentions are not to cause my grandchildren unnecessary anguish. It is merely that I think your prescription will inadvertently bring it about.
I could not argue my point of view better than John Morgan did in #88, namely depopulation in a powered-up condition. I agree with you that powering up and making no other changes will obviously be unsustainable. We have already seen the effects of the Green Revolution – more food leading to more people leading to more starving people. SG… no one wants your world of energy starvation. Clearly this is not the trend. People like air conditioning, some sort of television, having a refrigerator, lights, that sort of thing. People understand they live longer and suffer less this way. So… nations… *every nation*… every people, broadly speaking, need more energy because there is actually *not enough of it*, and certainly those that have it use it inefficiently (like burning fossil fuel for AUTOmobiles) and often waste it. But the overall trend, as it has been throughout every single advance in human history, is for more, denser energy, not less, diffuse energy. So the argument then is how to accomplish this with less greenhouse gas emissions, less carbon micro-particulate, better distribution and at far more abundant rates than we have now? I see nuclear as simply the *only* way to go. Secondly, your point about $1500/kW nuclear. You don’t ‘want’ to believe it or you factually know this isn’t the case? We have discussed on this blog many times before how the Chinese are doing *just that* with the AP1000 from Westinghouse. Twice that price is CHEAP. And no carbon. It probably won’t be necessary to have completely closed and sealed habitats for humans on this planet (although if it ever does become necessary it would be really good to be confident we know how to do it). It may be prudent to do water recycling and have artificial food technology. I don’t see my proposal as advocating ‘confinement’ any more than current policy, which restricts allowable human activities in national parks.
If we can return the farmlands to managed wilderness, there’ll be scope for allotting large tracts of land to human recreational purposes (including leading a quite rustic life if one desires it) while still expanding the land set aside for biological diversity far beyond anything practical today. Not many people in Australia regard the rules against cutting down trees in national parks for firewood as being an insufferable imposition on their rights. I’m just advocating that this principle be somewhat extended. There are a lot of people in eastern Africa who do regard laws against gathering firewood from national parks as such an imposition though, for the very good reason that they have no other source of fuel. The single most effective strategy to prevent deforestation in such areas would be a program of electrification so people have an alternative. That’s what I’m talking about… providing people with as many alternatives as possible, so our survival and that of the natural world doesn’t have to be an either/or situation. Genocide advocates such as Salient Green might occasionally point to demographic trends and claim that they don’t need to implement mass starvation or some more direct form of extermination to accomplish their program, but the fact is that the kind of demographic transition SG is talking about doesn’t ever happen until after a society has gone through modernisation and the transition to high energy usage. SG would presumably oppose such a process. The idea that we can get through this through ‘energy efficiency and conservation’ is delusional in the extreme. What’s going to happen if we need to launch a major geoengineering effort requiring great amounts of power to reverse a tipping-point crisis? We need a robust energy source to deal with these contingencies. Well, the biggest problem of species destruction in the developing world is: renewables. Mostly in the guise of charcoal production by burning down forests wherever they exist.
Human pressure on existing rain forest, brought on by both economic collapse and… oddly… agricultural ‘renewable’ biofuels like palm oils and sugar cane, has led to a huge destruction of habitat. A nuclear economy would be able to eliminate most wars for fossil fuels and most if not all of these detrimental renewable industries. Food is for people, not cars! At any rate, while all sorts of renewable projects get financing and play from every developed and developing country, the fight for fossil fuels rages on totally uninhibited by renewables. Political alliances between renewable and fossil interests are the bottom line of the day. A night doesn’t go by now on US network and cable TV without ads from BP, Mobil and the Gas and Oil Assn about the great virtues of “Solar, wind and natural gas; our vast resources in ‘clean coal’,” etc. Unfortunately the economic measure of GDP makes no sense; if one inefficiently wastes energy that makes the GDP go up. But energy efficiency is one of the strongest, easiest ways to help control even further AGW. The whole idea of baseload demand is spurious. If it weren’t for off-peak pricing, demand from 9pm-6am would be an even smaller fraction of daytime and early evening demand. The current pricing scheme, and the demand it generates, reflects the rigidity of a coal-based generation system that (in the terms used here) requires a lot of redundancy at night to be able to meet peak demands during the day. The analysis starts from the presumption that we should try to meet the same demand pattern with the same price structure as we have at present. Not surprisingly, it comes to the conclusion that we should adopt the generation technology most similar in its output pattern to coal, namely nuclear. A shift to solar and wind will require new pricing structures which (just as the present system does for coal) make renewable electricity cheap when it is plentiful and expensive when it is scarce.
Once this is taken into account, the analysis above is entirely invalid. There are other problems with the assumptions, which need a reality check. If this analysis were applicable in the real world, the pattern of new generation investment in the US (big growth in wind, a fair bit of solar, almost no interest in nuclear even with substantial subsidies) would be radically different. Can you give us an example of how the new renewables pricing structure will produce the cost mechanism ensuring that all industrial activities needed to sustain the power system are provided with what they need? Can we run the smelters with renewable power coming down the grid? Can we provide enough power (electric, or synthesised chemical fuel) to run the mines? Can we achieve replacement rate? JQ – I think your point is valid but only up to a point. You can institutionalise certain shifts in power consumption from daytime to night (or the other way); however dealing with downturns in supply, such as what happens when solar PV is subject to cloud, is less easy to tackle. And in any case Peter Lang based his peaking requirement on 6:30pm, not 9pm-6am. finrod #96, I think you are probably just a liar, but I am prepared to give you a chance to be genuinely mistaken if you can read the definition of Genocide, http://en.wikipedia.org/wiki/Genocide , and explain to me how freely choosing not to have kids, which is what I am advocating, can have you accusing me of genocide. Your previous hysterical accusations of ‘racial suicide’ peg you as a racist. If you were in any way sensible about the subject, you would see that the races most in peril are so because of overpopulation, such as in parts of Africa, and jungle tribes in South America and Indonesia. As an example, the pricing structure would have high prices for electricity on winter evenings and lower daytime prices, more or less the opposite of what we have now.
That means that the activities that currently use off-peak power because it is cheap (both domestic hot water systems and industries that operate night shifts) would have a strong incentive not to do so. Home heating would shift to systems based on stored heat rather than instant heat. Of course this would involve change. But consumption patterns change all the time in response to changing prices. And, it’s important to note that the discussion here is based on an all-renewable system which is decades away. In the transition, which will involve continuation of the long-standing movement from coal to gas, most of the peak-demand problems raised here are relatively trivial, since gas (low capital costs, high operating cost, easily turned on and off) is ideally suited to dealing with peaks in net demand. Conceivably, with a constant-output grid, every home and business could have a large battery. They could use their fixed inflow in real time, save some for later, buy some more or sell. I’d do it if batteries were cheap enough. I guess aluminium smelters would too, except that electricity via batteries costs an extra 10c per kWh. However aluminium smelters feel they are entitled to pay just 2c per kWh, which is one reason we need cheap baseload. Energy price increases need to be gradual enough to give us time to adapt and invest. finrod #96, I think you are probably just a liar, but I am prepared to give you a chance to be genuinely mistaken if you can read the definition of Genocide, http://en.wikipedia.org/wiki/Genocide , and explain to me how freely choosing not to have kids, which is what I am advocating, can have you accusing me of genocide.” SG, it’s not your advocacy of birth control which inspired me to peg you as a genocide advocate, it’s your ‘powerdown’ policy. This lunacy will inevitably cause billions of deaths, direct and indirect, if implemented. You may, however, have a point concerning terminology.
The definition of genocide given in the Wikipedia article you linked to is as follows: “Genocide is the deliberate and systematic destruction, in whole or in part, of an ethnic, racial, religious, or national group.” This definition seems implicitly limited to the mass murder and diminution of particular subsets of the human race, rather than the human race as a whole. What you are advocating has a broader, more cosmopolitan murderous application, so we arguably need a new term to cover it. Cosmocide? I’m up for suggestions. More from Salient: “Your previous hysterical accusations of ‘racial suicide’ peg you as a racist. If you were in any way sensible about the subject, you would see that the races most in peril are so because of overpopulation, such as in parts of Africa, and jungle tribes in South America and Indonesia.” The race referred to in my ‘racial suicide’ remark is the human race… but if you want to bring up racism, the homicidal impact of the policies you advocate would indeed fall most heavily upon the non-European peoples of the earth. I see you rather in the mould of a British Empire aristocratic elitist, casually disposing of the fates of brown-skinned peoples, secure in the knowledge that you can count on the carefully cultivated racism of the lower orders to shield you from too much criticism from those who figure out what you’re up to. I have late news for you. The world has moved on, and the divisions between first and third world people which you are counting on to dehumanise the great masses which would be the inevitable victims of your policy are dissolving. Finrod #105 said “SG, it’s not your advocacy of birth control which inspired me to peg you as a genocide advocate, it’s your ‘powerdown’ policy. This lunacy will inevitably cause billions of deaths, direct and indirect, if implemented.” That statement gives new meaning to the word ‘hysterical’. Please, show us some more of your ignorance by telling us what you think ‘powerdown’ means.
I suspect this will explain how you erroneously come to the conclusion that it would cause billions of deaths. SG @ #105: “That statement gives new meaning to the word ‘hysterical’. Please, show us some more of your ignorance by telling what you think ‘powerdown’ means. I suspect this will explain how you erroneously come to the conclusion that it would cause billions of deaths.” You’re the one trying to sell this lemon, SG. It’s up to you to define your terms and convince us it’s a good idea. Unless your definition of ‘powerdown’ allows for an actual increase in power production, though, the conclusions I have drawn certainly stand. My position is that first and foremost we need to power down and depopulate. Without this aim, vast amounts of cheap power will only enable us to go further into overshoot, robbing from future generations and ensuring a catastrophic cull of species in the natural world first, then humans. That’s my kids and grandkids we’re handing a miserable existence to. This is SG’s position. It means less energy and fewer people, is Malthusian and, while he doesn’t state it, people usually think of places like Africa when making statements like this. “Vast amounts of cheap power” IS what makes population control, family planning, contraceptives and sex education possible. It’s what gives incentives to farmers and others to have smaller families. It is vast amounts of cheap, abundant power that *allows* us to use our natural resources more intelligently, more efficiently and more for human needs, not less. “Powering down” actually means MORE wars, more poverty, and fewer human resources from which to draw the next Hawkings, Einsteins and Weinbergs. Genocidal or not, it’s a reactionary future of barbarism that SG is advocating, even if he thinks the opposite will result. We should get back to the thread in question.
I say this because there is not one nation, group of people, or proposal being discussed by any constituency that rhymes with SG’s dystopian future. This is for 9GW peak power, for 3 hours per day, from 6 hours pumping per day. Of course, if we pump for longer, can achieve a higher pumping rate than I have assumed, or if we produce less power, then we can generate for more hours per day. The cost per unit power is A$790/kW. This is still a preliminary estimate. I am still firming up numbers. The estimate I am doing will never be better than +/-25%. For comparison, I have interpreted the Electricity Supply Association’s chart, http://electricity.ehclients.com/images/uploads/capital.gif , to say pumped-hydro costs per unit power are in the range US$500/kW to US$1500/kW. So the costs for Tantangara-Blowering are in the middle of that range. That is to be expected because we are using existing reservoirs, so no dams or reservoirs have to be built. On the other hand, we’d have to bore three tunnels, each 12.7m diameter and 53km long. There is a lot more involved, of course. This length of tunnel is unusual for pumped hydro schemes. Peter, a long forgotten question… what is the efficiency loss for power into pumped storage vs power back again? The largest or second largest pumped storage facility in the US is the Helms Pumped Storage facility in California, built in conjunction with Diablo Canyon NPP to absorb off-peak baseload from the plant. These are two isolated reservoirs that have no river input to speak of. I believe if you run the upper reservoir dry, it’s 1,800MW for almost 2 weeks straight. I raise this because renewable advocates often get a bit peeved when it is suggested that every single storage scheme, from batteries to pumped storage to molten salt, is far better applied to nuclear energy than to renewables. Just a thought.
:) finrod #107, just as I thought, a cascade of aggressive, insulting bluster based on zero knowledge of the subject, apart from that which you dreamed up yourself. You have zero credibility. If you really want to know what power down means, and I don’t believe you do, then educate yourself. I’ve wasted enough time on you. Ditto David Walters. I am using 95% efficiency for generation and 80% efficiency for pumping. Those figures are reasonable ballpark figures to use. However, the pumps at Tumut 3 pump at a flow rate only slightly more than half the flow rate that is used for peak generation. Hence 6 hours pumping for 3 hours generation at full power. The power required by the pumps would be 6.4GW. It’s important to note that this power needs to be constant for several hours – wind won’t blow water up 900m. It would take 18 days pumping for 6 hours per day at full power to fill Tantangara’s active capacity. That Severance chap suggests the round trip efficiency for pumped hydro at one site is 78%. If $5/W is the backstop capital cost for nuclear, then I suggest all pumped hydro that comes in under that should be developed. An incentive would be to get a renewable credit under the 45,000 GWh target, even if most of the pumping effort could be attributed to coal power. A CO2 cap like the one we were supposed to have back in July should prevent abuse of pumped hydro RECs. SG @ #111: “If you really want to know what power down means, and I don’t believe you do, then educate yourself.” So you refuse to define one of the principal concepts of your policy. Can’t say I blame you. Given what ‘powerdown’ must necessarily entail in accordance with your “cheap power is bad” dogma, you know it’s going to be shot down in flames. The 78% round trip efficiency looks about right. However, the tunnel/shaft length is probably less than 5% of the length of the tunnel required to join Tantangara to Blowering at their deep ends. You lost me in the second paragraph.
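Those efficiency and pumping-time figures can be checked with a few lines of arithmetic. This is a sketch only; the 95%/80% efficiencies and the half-rate pumping are the ballpark figures quoted above, not measured values:

```python
# Round-trip efficiency and pumping time for the pumped-hydro scheme,
# using the ballpark figures quoted in the discussion.
gen_eff = 0.95            # generating efficiency
pump_eff = 0.80           # pumping efficiency
round_trip = gen_eff * pump_eff   # ~0.76, close to Severance's 78% figure

gen_hours = 3.0           # hours of generation at full power per day
pump_to_gen_flow = 0.5    # pumps move water at ~half the generating flow rate
pump_hours = gen_hours / pump_to_gen_flow   # 6 hours of pumping per day

print(f"round trip: {round_trip:.2f}, pumping hours: {pump_hours:.0f}")
```

Multiplying the two one-way efficiencies gives 76%, which is why the independently quoted 78% round-trip figure "looks about right".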
Remember that nuclear provides power 24 hours per day. The pumped storage is for peak power; it would provide power for 3 hours per day (at full power). So you cannot compare the two types of generation on a purely power basis. This project would be excellent in combination with nuclear. This new cost figure for 9GW of peak power reduces the cost of the nuclear option from $120 billion to $106 billion (refer to the article at the top of this thread). Regarding incentives and RECs, we should be rid of them. All they do is add cost and reduce economic efficiency. [I know you were responding to John N. but… pumped storage and nuclear will be built incrementally, even if Australia adopts a “Chinese nuclear steroid” approach and goes all out.] Thus, there will be a need to overbuild for nuclear as well, assuming Oz builds out to peak load. But even if it doesn’t, a 2 to 4 week fuel outage, rotated throughout a fleet of 16 or so LWRs (you came up with a gross national GW load, but not one based on quantity of reactors, unless I missed it), is going to require at least 2 reactors’ worth of power (for powering when one is down for fuelling, and for when another has a hiccup and trips). Thus, pumped storage can play this role if there is enough of it, to mitigate the needed 2-unit-down overbuild… assuming, of course, there IS a serious national grid, etc. Peter #109, a much more cost effective storage option would be to install one tunnel between Blowering and Tantangara (3,000MW) and a similar sized tunnel from Talbingo to Eucumbene (3,000MW), additional turbine capacity at Tumut 3 (to 4,500MW) and a small return pump from Blowering to Jounama. This would give 11,500MW capacity with a 5 day storage of 1,070GWh (Tantangara 150, Eucumbene/Talbingo 480, Talbingo/Blowering 240). Together with other dam flows of 500GWh/5 days, you could have, for $6.7 billion, >1,500GWh available over a 5 day period.
Using the data you provided for the PV farm at Queanbeyan and the wind data of 11 farms from the NEM, this would cover the lowest 5 day solar period (24GWh instead of an average 72GWh/day) and the lowest 5 day wind period (160GWh instead of an average 480GWh/day) IF they occurred on the same 5 days, with the use of the present 4,000MW of OCGT existing in eastern Australia. Thus OCGT would be used to generate at <0.10 capacity factor, so accounting for just 1.6% of power production. That’s assuming that solar power in northern Australia would perform as poorly as the Queanbeyan site and receive no advantage from solar power available in WA after sunset at Queanbeyan, or from more cloud-free days during June and July. We should not need much imagination to see that even dispersed PV solar can do considerably better than one farm at one poor winter location. The scenario described at the top of this thread is based on the NEM’s demand in July 2007. July was the month that experienced the highest peak demand (33GW), highest baseload (20GW) and highest average demand (25GW). Nuclear, without energy storage (and no fossil fuel generation), would cost $132 billion for the 33GW capacity needed to meet the peak demand without pumped hydro. With 8GW of pumped hydro, the system (nuclear and pumped hydro) would cost $106 billion, a saving of $26 billion. Nuclear and pumped hydro capacity would be perfectly suited to Australia’s situation. 25GW of nuclear would meet the average demand and provide an excess of 5GW to pump and store the excess energy generated during the times when the demand is at baseload levels. The pumped hydro would generate up to 9GW of additional power during the periods of peak demand. This explains why France has nearly the cheapest electricity in the EU, exports large amounts of electricity to most of the remainder of the EU, and enables the European networks to absorb the intermittent energy that is being generated by their highly subsidised and mandated renewable energy programs.
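The system-cost comparison above can be reproduced with simple arithmetic. This is a back-of-envelope sketch assuming the nuclear capital cost of roughly $4/W implied by $132 billion for 33GW; the pumped hydro cost is the $6.7 billion figure discussed elsewhere in the thread:

```python
# Cost comparison: nuclear sized for peak demand vs nuclear sized for
# average demand plus pumped hydro, using the figures quoted above.
nuclear_cost_per_gw = 132.0 / 33.0   # $ billion per GW, i.e. ~$4/W
peak_demand_gw = 33.0                # July 2007 NEM peak
avg_demand_gw = 25.0                 # July 2007 NEM average
pumped_hydro_cost = 6.7              # $ billion, Tantangara-Blowering estimate

nuclear_only = peak_demand_gw * nuclear_cost_per_gw                   # $132B
with_hydro = avg_demand_gw * nuclear_cost_per_gw + pumped_hydro_cost  # ~$107B
saving = nuclear_only - with_hydro   # ~$25B, quoted as ~$26B in the comment
```

The arithmetic gives about $107 billion and a $25 billion saving, consistent (within rounding) with the $106 billion and $26 billion quoted.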
I simply do not understand your figures. I am not sure if you have done the calculations or are simply throwing numbers around. They do not make sense to me. I’m still trying to work out some of what you were saying in a much earlier post on this thread. I haven’t given up on it. For example, in post #118 you say “and a similar sized tunnel from Talbingo to Eucumbene (3,000MW)”. But that statement is not correct. The same size tunnel would generate only 2,000MW, not 3,000MW. The reason is that the elevation difference is 600m, not 900m. Regarding incremental build, as Neil Howes points out, there are many possible pumped hydro sites. The most economic will be built first. I started looking at Tantangara-Blowering because of the high head and large storage capacity in each reservoir. If we wanted to, we could build that scheme with one tunnel at a time instead of three tunnels all at once. Or we could make smaller tunnels. However, the mobilisation costs for the 12.7m diameter tunnel boring machine are high. The tunnels make up half the cost of the project. So it makes sense to bore the three tunnels while the TBM is here. By the way, this scheme has sufficient storage in the smaller reservoir to handle eighteen of these 9GW pumped storage schemes, although we would never do that for a variety of reasons. But you could expand it incrementally for a long time. Regarding the need for extra reactors for redundancy and to allow for refuelling, I agree. The papers intentionally did not go to this level of detail. I stated in one of the papers that redundancy was excluded from the simple analysis I was conducting. The need for redundancy actually turns out to be much greater for the solar thermal option (option 2) than for nuclear. It’s more realistic to pursue solar with some vigour once nuclear power is in place. One day some outfit may agree to maintain a section of road so long as they can draw solar power from it.
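The correction follows from the linear dependence of hydro power on head at a fixed flow rate. A sketch of the arithmetic, not a design calculation:

```python
# Hydro power is proportional to head for a given flow rate, so the same
# tunnel (same diameter, same flow) that yields 3,000 MW at 900 m of head
# yields only two-thirds of that at 600 m.
power_at_900m_mw = 3000.0
head_ratio = 600.0 / 900.0
power_at_600m_mw = power_at_900m_mw * head_ratio   # 2,000 MW
```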
Heliostats may spring up in the desert, powering the circular sprinklers that water circular patches of crops, like in the deep tropical agriculture of Malaysia. Wind power might be used for ammonia production, which can be carried out intermittently. These things take time, and it’s not plausible that solar power could provide the energy for the industrial manufacturing that could put up the solar power plants. So it’s not anything one expects instant results from. It’s just very imprudent not to start sweeping away the obstructions to nuclear. We don’t need another enquiry. We know how the enquiries end up. They wind up with an outcome that guarantees inaction. But inaction doesn’t get the power bills to drop. It doesn’t get us reindustrialising. Since we know what the outcome of the enquiries is, it is clear that there is no need for another one. To have a big and growing nuclear industry is a really exciting prospect. Thousands and thousands of very meaningful jobs for intelligent people to get involved with. That’s a good thing even if it were only to draw them away from causing trouble. Peter #120, I have tried to do the calculations correctly. There is already a tunnel through Tumut 1 and Tumut 2, so that would add to the total pumped storage capacity, with slower pumping just via the new tunnel. Also, extra flow from Eucumbene to Talbingo allows extra flow through Tumut 3 and some storage flexibility in the active storage at Talbingo. I thought you had said the Tantangara to Blowering head was 600m. I was calculating a flow rate of 0.75ML/sec to give 3,000MW at 600m. I would have thought that there is no definitive answer to optimum size. It will depend, to an extent, on individual perspectives. However, there must be an upper bound. Exponential growth is, by definition, unsustainable. Thanks for your thoughtful response. Look, the only way humans seem to limit population growth is when they join the list of the wealthy.
So if you want to see long term, permanent reductions in population without coercion, we should strive to see everyone maintain a high living standard. Here’s my prediction: within 30 years countries will be vigorously competing with each other to attract young immigrants in order to anchor their failing social security systems. You said: “There is already a tunnel through Tumut 1 and Tumut 2 so that would add to the total pumped storage capacity.” There are no pumps in T1 and T2. These power stations cannot be converted into pumped hydro schemes (e.g. no downstream reservoir; even if there were, the inlet tunnels from upstream are at the wrong levels for pumping; the tailrace is not designed for pumping even if a downstream dam were built; downstream dams for T1 and T2, even if built, would have minuscule storage; the power stations are underground, so virtually impossible to modify without taking the whole Tumut generating capacity out of production for perhaps 2 to 3 years). It is absolutely a no-go option. Let’s put this to bed now. You say: “… that would add to the total pumped storage capacity, with a slower pumping just via the new tunnel. Also extra flow from Eucumbene to Talbingo allows extra flow through Tumut3 and some storage flexibility in the active storage at Talbingo.” Neil, we’ve discussed this repeatedly. I don’t understand what you are getting at with pumping from Talbingo to Eucumbene. Have you done the calculations? Why would we want to pump water out of Talbingo before it passes through T3? Talbingo should be maintained as near to full capacity as practicable to maximise the head, and therefore the power output per m3 of water used. Talbingo is kept a bit below full supply level to catch the water released through T1 and T2 and to hold the small amount of water pumped up at night by T3.
The water is released from Eucumbene and through T1, T2 and T3 in a controlled manner to maximise the power per m3 and also to meet other downstream needs for the water. There is no intention to use Talbingo for storage other than what I said above. That is what Blowering is for. I suspect Talbingo would never be allowed to fill to the point where it wastes water (i.e. spills it over the spillway) except by accident. If you want to improve the pumped-storage capacity of T3, I would suspect the best way would be to build a dam downstream from Jounama. There appears to be a suitable site which, from the maps, looks just about as good a profile as Jounama. If a dam were built at that site, it would increase the downstream storage for T3 by about a factor of 3. If you want to try again to explain what you are thinking, could you please lay out the calculations and explanations line by line so I can follow them. Have you costed your ideas? Have you allowed for the fact that the pumping is slower than the flow rate of peak power generation? Have you allowed for the fact that more power is needed to pump than to generate, and the pumping is against a higher head? Regarding the elevations of the reservoirs, I thought I gave you all the figures in a previous post. Just for now, I confirm: use 900m for Tantangara-Blowering and 600m for Talbingo-Eucumbene. We can get the pumped storage capacity we need. However, the problem is getting people to understand that wind and solar are simply not viable. They are draining our wealth for no good reason. That is the problem we face. That is the purpose of these papers – to explain the facts. It seems many people just don’t want to know. They are ignoring what is so blatantly obvious to anyone who is at all numerate. SG: “If you really want to know what power down means, and I don’t believe you do, then educate yourself. I’ve wasted enough time on you. Ditto David Walters.” I know what it means.
It means higher birth rates and even higher mortality rates. It means resource depletion (recycling is only practical with cheap energy); it means total deforestation as people fan out and do slash-and-burn agriculture on every last square inch of forest. It means untold suffering from which society may never recover. Re #124 Jc: I hope you are correct to assume that increasing affluence (if attainable) will automatically reduce fertility rates with no need for coercion. However, I would urge you to consider the writings of Dr Abernethy on this subject. She appears somewhat less sanguine. (Google Abernethy and demographic transition.) Re #128 Peter Lang: You state that the nuclear option is so superior to renewable options that this should be obvious to anyone who is at all numerate. Would that this were so. As a lay reader of this and other blogs, I have gradually arrived at the conclusion that, if anything can save us, it is a rapid transition to nuclear energy. You appear to think that opposition to nuclear power comes only from those who don’t want to know the facts. You are no doubt aware that the great majority of those who correspond on the RealClimate and Climate Progress blogs are opposed to a nuclear solution, and by no means all of them are innumerate. Their purported objections (unconvincing to me) relate to cost, time to deployment, sustainability and safety. I would conclude that you have done a much better job with your negative arguments, demonstrating why renewables are unsuitable for baseload power, than you have in deploying pro-nuclear arguments sufficient to change the minds of antis. It may be that we will have to await the deployment of the AP1000s in China before there is sufficient consensus, but time seems to be of the essence. Meanwhile, keep up the good work. I wish you every success. I took a quick look at your suggested site. It really doesn’t seem at variance with the comment I made. Here’s the thing….
people in poor countries tend to use large numbers of children as a social security net and cheap labor. Rich world people don’t. In fact, kids in the rich world are a bloody expensive “hobby”, and most people can’t have many expensive hobbies :-). You are no doubt aware that the great majority of those who correspond on the RealClimate and Climate Progress blogs are opposed to a nuclear solution and by no means all of them are innumerate. That’s true. However, I also think there is an ideological posture to this too. Some people who are obviously numerate may also desire a different world to the one we have or are heading toward. There are plenty of intelligent people who would prefer a less technologically complex world. Virginia Postrel wrote a book titled “The Future and Its Enemies”. She took the view that stasism comes from both the right and the left, and that the right/left dictum based on a traditional demarcation no longer holds. She viewed the people she called stasists, those who are anti-development and anti-technology, as the enemy. I think to a large extent that is true. I agree that Abernethy isn’t totally at odds with your perspective, but she does point out that it isn’t quite as straightforward as is sometimes suggested. My own observations relating, for example, to the UK and, to a lesser extent, Africa suggest that increasing prosperity often increases fertility rates. Materially successful Africans that I have encountered tend to have larger than average family sizes. Equally, in the UK, many self-made (not derogatory) millionaire entrepreneurs also have large numbers of children. The UK population is rising quite fast. This was initially due to increased immigration, but the increased reproductive rates of the immigrants has now become the major factor. This might suggest that breeding increases in response to rising aspirations, if only temporarily. I don’t know why people cite Africa when they talk about overpopulation.
It has a lower population density than Europe. It has fertile land and an abundance of resources. I presume it is because periodically we see images on the TV of people starving in Africa and assume (wrongly in my view) that this starvation is a product of overpopulation, when it actually has more to do with poor governance, poor property rights and oversized state sectors. However, perhaps it is because Africa still has some amazing wild animals that human populations are encroaching on. Wild animals the equivalent of which were driven to extinction in Europe long ago. Thanks for your response. I know you are already busy but wondered whether you could answer a few questions relating to the possible benefits of stranded renewables. Suppose that renewables are always more trouble than they are worth in the provision of grid power. I can go along with that, and can also accept that it is more important to consider ways of powering the grid with emissions-free fuel than to waste time looking at peripheral issues. However, it is these peripheral issues that I am now asking about. Under what circumstances can stranded renewables (with little or no storage facilities) provide utility and cost competitiveness? I am a biologist, not an engineer. As such, I am fairly clueless as to how industrial or synthetic processes can operate with an intermittent and unreliable energy source. I can see that a plastic extrusion plant might gum up big time if the sun went behind a cloud or the wind stopped blowing, but this degree of wisdom doesn’t get me far in any rational decision making process. Can stranded renewables be used to synthesise transport fuels or to desalinate water? In the Third World, where there may be very poorly distributed grids, would stranded renewables not be of use? Would you still argue that the installation of grids, powered by nuclear batteries, would work out cheaper? Do household solar thermal roofs in Northern Europe make economic sense?
I suspect that you may say no, because they cause unpredictability for grid operators when they unexpectedly underperform. In short, can you see any use for renewables at all? If so, what do you think their best uses are? You ask why people discuss Africa when they talk about overpopulation. I would have thought that the following might have something to do with it: 1) The continent with the highest birth rate. 2) UN prediction that only 25% of the continent’s population will be able to feed itself from its own agricultural production by 2025. 3) Falling fresh water reserves. You asked: “In short, can you see any use for renewables at all? If so, what do you think their best uses are?” Here is my short answer, off the top of my head. I’d say as follows: Yes, but only where they are economic without subsidies or being mandated by governments. There should be no mandatory renewable energy targets. There is a role for solar and wind power in remote sites. We should fund R&D and contribute to demonstration projects, but in an unbiased way, with the awarding of funds being made on the basis of projected return on investment. There is a role for solar and wind power in remote communities and in developing countries, but it is a very small role. It has to be very highly subsidised. It is far cheaper to use diesel. Few can afford to waste their scarce resources on renewable energy. Certainly not the developing world. They should be the last to get off fossil fuels. In fact, we need to help them to get onto electricity as fast as possible, even if they have to use fossil fuels to do so. The sooner they can get onto electricity, the sooner they will be able to afford to get off fossil fuels.
There will be no bypassing the fossil fuels step via renewable energy (hydro excluded, where it is available). Others discussing population growth rate, fertility rate, life expectancy, literacy, education and other UN Human Development Program statistics may also be interested in playing with the link given in post #93, if you don’t already use it. Many thanks for your prompt and concise answer. I have no reason to doubt the validity of your comments. All a bit depressing, though. It makes it all the more necessary to bet the farm on the success of nuclear, given that you have exempted all other practical options. Pity that few, if any, politicians or their advisors are prepared to come off the fence and fully commit to a nuclear strategy. Actually, pity is an understatement. They can’t, Zachery. Both parties are frightened stiff of being the first to come out and openly support the policy of including nukes in the suite of choices after the ETS. Labor won’t move as it has to worry about losing primary votes to the Greens, and the Libs won’t overtly run with a pro-nuke policy as they can’t unless there is strong bipartisan support from Labor. I always thought the initial move had to come from the ALP anyway. The crying shame is that I can’t imagine any of the heavyweight ministers not quietly supporting nukes anyway, other than, say, those heavily tied to the union movement. Nuke reactors would basically mean far less employment in that sector, as reactors essentially run themselves and would employ nearly all their front-line people from engineering disciplines, I would guess. Nuclear energy is actually very highly capital intensive, which means the labor content required to produce energy greatly diminishes. That’s not the way to the union movement’s heart, obviously. I’m not giving a political opinion here, it’s just as I see things, as I vote LDP wherever possible anyway.
Funnily enough, the obvious direction for a first world, highly developed nation such as ours is to move, or rather allow movement, towards capital intensive industries rather than favoring labor intensive sectors, as that is where higher incomes are. Renewables such as wind and solar are not highly capital intensive, by the way, as that sector requires a hell of a lot of maintenance. Population. Gawwwddd… what a god-awful discussion. The ‘brass tacks’ are harder to decipher. # Population growth in Africa from the emerging middle class usually takes a generation or two to even out. This is true in the UK, also brought up, as growth among 2nd generation immigrants is more or less the same as that of those of English/Welsh/Scottish nationality. Newly arrived immigrants carry over reproductive traditions. In Africa it is not so simple to state that growth doesn’t slow down with wealth. What you see in Africa is continued fertility rates *among tribally organized society*, not in urban areas. “Wealth” is not just “money” and “income”; it is a whole host of social ladders and support that does not require large numbers of children. In the teeming slums of India and Cairo, population growth *inevitably* goes down even among the poorest of the poor… with no “income” increase. Thus it is as much a function of urbanization as it is one of income. # Secondly, the idea that we “need less people” is simply utopia (or dystopia, depending on whether you take the Pol Pot approach to population control). Do we want to go down that path? Do we really even want to discuss this? # Thirdly, yes, there are all sorts of religious issues as well. Italy and Poland, both 99% Catholic countries, will have continued higher-than-European growth rates because of the influence of the Church.
So, a form of secularization is needed as well, but this comes *naturally* as people’s access to things like the internet, sex education, family planning, urban society, etc., all a function of wealth creation, all a function of more available energy, becomes ubiquitous. # Back to Africa. The commentator is 100% correct: starvation and environmental disaster is almost always “Africa” in the public mind. This is the result of the media. But problems ARE there; in fact they have almost zero to do with population density and everything to do with the legacy of colonialism, imperialism, tribalism, etc. If you look at an image of Africa at night, you’ll see exactly why the term “Dark Continent” is so appropriate. All these countries are searching for better means to electrify their societies, provide fresh drinking water and redistribute water resources. Africa has more water available than any continent but South America. But it’s not in the right places. That’s where Gen IV, high temperature reactors come in. We could build them along the coasts of northern Africa to provide drinking water and power. What, pray tell, is wrong with that vision? Life can be good for MORE people. You do this by making society wealthier, not fewer in number. Nuclear reactors employ more people per MW than coal does at the level of the plant. Far more people are employed, however, in the whole supply train for coal: from mine to plant. Nuclear actually employs a lot of union members, probably slightly over half, from operators to mechanics, to communication and control technicians, to radiation technicians and health employees. But engineering is very high, as jc notes. There are almost no transportation costs associated with fuel or waste to and from nuclear plants.
But if you look at the building of nuclear power plants, and assume an ongoing nuclear energy development program from components to raw materials to construction of the reactors (Gen III reactors, that is), then I would bet there are FAR more people employed in nuclear as a whole than in coal. The Liberals might lose some votes if they went nuclear. I don’t expect the Labor party would. And then the Liberals would be reduced to feebly tagging along. You cannot make decisions on the basis of how many people you think might be employed, David. That’s one rabbit that you don’t want to chase, since it makes it sound like you are perversely going for the high cost option. It’s cost-effectiveness that must be the criterion. Alfred, I agree about jobs. I was merely stating what I believe to be the case. Actually, fewer workers in any system is a case FOR that system, not one against it. It speaks to efficiency as measured in labor-power sold to the employer for a given MW of output. I didn’t raise this; I believe JC did. The issues as I see them are: I think there will be no nuclear decision in Australia for another five years unless there is a crisis. Even the decision due next year on the Olympic Dam expansion will probably degenerate into a lengthy squabble. If a first reactor site was announced, the same crowd who invaded Hazelwood power station yesterday would no doubt make trouble. (Apparently lignite is to be exported – whoopee!) The easiest thing for politicians to do is impose the lightest of carbon penalties as a token gesture. Meanwhile, mid-sized gas generators will regularly come online without fanfare and a few wind farms will line the routes of Sunday afternoon drives. Pollies happy, greenies happy. Shame about the astronomical electricity prices, though. As you know, David Walters, you and I agree on this, though I’d make phasing out crude-oil-for-transport as important an objective as phasing out coal-for-energy.
The environmental and social footprint of resort to crude oil is at least as bad in practice as that of coal, and arguably worse. And since you mention it above, DW, I do disagree with the general thrust of your remarks on population. It is clear that we will need to taper, stabilise and ultimately reduce population sharply over the next 150 years if biodiversity is to be maintained and humanity is to acquire substantial margin for adapting to those parts of climate change we can't foreclose. I'd like to think that come 2160 population would be on the low side of 5 billion and continuing to edge lower each decade. Regarding population and the benefits of electricity, I'll just mention this link again because it seems some contributors are not actually aware of the statistics. Gapminder is a lovely package that pulls UN data and charts it. You can hit 'Play' and it runs through the data as a video, so you can watch the statistics change over time. You can select what data to display on the two axes, which countries to include, and log or linear scales. Here is an example showing that the more electricity we use, the lower the infant mortality. Conclusion: if we want to save the planet, the more electricity we use the better, so the cheaper electricity is the better. Peter #125, I will make a last attempt to explain the calculations of the maximum storage capacity of the Snowy using 120-140 km of tunnels (>12 m bore, as you suggest for Tantangara/Blowering). A flow of 1 ML/sec (3,600 ML/h) dropping 100 m delivers about 970 MW of power. Thus one tunnel on the ~900 m Tantangara-Blowering drop will use about 0.33 ML/sec (1,200 ML/h) to generate 3,000 MW of power, and an active storage of 140,000 ML will allow 116 h of production, or 116 x 3 = 348 GWh of total storage. Eucumbene has up to 4,800,000 ML and Blowering 1,600,000 ML potential (not sure of the active storage, but assuming Blowering can store 1,000,000 ML with suitable booster pumps).
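Neil's figures can be sanity-checked with the standard hydro relation P = ρ·g·Q·h; the heads and volumes below are the ones quoted in the comment, not independent engineering numbers:

```python
# Sanity check of the pumped-hydro figures above using P = rho * g * Q * h.
# 1 ML/s = 1,000 m^3/s; heads and volumes are as quoted in the comment.

RHO = 1000   # kg/m^3, density of water
G = 9.81     # m/s^2, gravitational acceleration

def hydro_power_mw(flow_ml_per_s, head_m):
    """Ideal (lossless) power in MW for a given flow (ML/s) and head (m)."""
    flow_m3_per_s = flow_ml_per_s * 1000
    return RHO * G * flow_m3_per_s * head_m / 1e6

# 1 ML/s dropping 100 m: ~981 MW ideal (the ~970 MW quoted includes losses)
print(hydro_power_mw(1.0, 100))

# 0.33 ML/s over the ~900 m Tantangara-Blowering drop: ~2,900 MW (~3,000 quoted)
print(hydro_power_mw(0.33, 900))

# 140,000 ML of active storage at 1,200 ML/h lasts ~116 h,
# giving 116 h x 3 GW = ~350 GWh of stored energy
hours = 140_000 / 1_200
print(hours, hours * 3)
```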
If we assume we keep 140,000 ML of capacity for Tantangara, we have an unused potential of 860,000 ML in Blowering. In no way was I suggesting the existing Tumut 1 & 2 be used to pump back to Eucumbene. Rather, adding a separate >12 m tunnel between Talbingo and Eucumbene capable of 0.33 ML/sec when generating (with a slower return pumping rate), plus a small 6 km pumping system from Blowering to Jounama (1 ML/sec, 10-30 m head), would allow water to flow in both directions between Blowering and Eucumbene. I think the present Tumut 1 & 2 flow rates are 0.24 ML/sec (theoretically 1,500 MW, but at lower efficiency only 1,200 MW). The new tunnel would allow 0.33 x 600 = 2,000 MW for a total generating CAPACITY of 3,200 MW, plus Tumut 3 (1,500 MW), for a total capacity of 4,700 MW. How much energy can be stored? At full operation, total flow into Talbingo will be 0.57 ML/sec and outflow to Tumut 3 will be 1.1 ML/sec, so the Talbingo active storage (160,000 ML) will be drained at the rate of 0.63 ML/sec (2,300 ML/h), giving 70 h at 4.7 GW, or 330 GWh. After this, Tumut 3 would have to reduce output to 750 MW, and another 700,000 ML could flow from Eucumbene to Blowering at 0.57 ML/sec to generate about 3,950 MW for another 320 h, or about 1,200 GWh. Thus the total Tantangara and Eucumbene system could generate up to 7,700 MW with a total storage of (348 + 330 + 1,200) = 1,880 GWh. Adding another 2,300 MW of reversible turbines at Tumut 3 would give a short-term output of 10,000 MW, a 3-day output of 7,700 MW and a much longer output (weeks) at 3,950 MW. Based on your cost of $6.7 billion for the 9 GW Tantangara project, this would be a similar cost, or $4 million/GWh of total storage, or $8 million/GWh of 5-day storage.
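Totting up the three stages of this estimate reproduces the headline figures (all inputs are the numbers quoted in the comment):

```python
# Tally of the staged Snowy storage estimate above. All inputs are the
# figures quoted in the comment, not independent engineering numbers.

stage_gwh = {
    "Tantangara: 116 h x 3 GW": 116 * 3,        # 348 GWh
    "Talbingo drain: 70 h x 4.7 GW": 70 * 4.7,  # ~330 GWh
    "Eucumbene: ~320 h x ~3.95 GW": 1200,       # quoted as "about 1,200 GWh"
}
total_gwh = sum(stage_gwh.values())
print(total_gwh)              # ~1,880 GWh, as quoted

# Cost per unit of storage at the assumed $6.7 billion project cost
cost_per_gwh = 6.7e9 / total_gwh
print(cost_per_gwh)           # ~$3.6 million per GWh (quoted as ~$4M/GWh)
```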
Together with the other 4,000 MW of non-pumped hydro power, the 740 MW of existing pumped storage, an additional 2,000 MW of turbines added at existing hydro, and the existing 4,000 MW of OCGT (NEM only), we would be able to get through any combined low-solar (assuming av 8,000 MW peak) and low-wind period (assuming 24,000 MW average), using very small amounts of NG (1-2% of present CO2 emissions). The 10% over-capacity in wind would be mainly used to replace pumped-hydro losses. Most transmission additions would be Sydney and Melbourne to the Snowy (if solar was PV), 3,250 MW from Perth to Pt Augusta, and an increased Bass-Link (400 km). On the 'pearls of wisdom to swine' principle, I will not respond directly to the childish goading of finrod and soylent, but some here with a bit of class may still be wondering about 'power down', especially if all you know of the principle is the hysterical disinformation provided by those two orcs. No-one in their right mind would ever suggest that the third world power down, and you know who that poignant little fact is directed at. However, we can't bring the third world up to our standard of consumption, and it's not just energy constraints. Anyone who thinks this planet can support 8 billion people at first-world consumption rates, and that all it would take is lots of cheap energy, is truly incapable of rational thought. It is the first world that must be subjected to 'power down'. Far from causing billions of deaths, as a couple of loonies have stridently asserted, the process will ensure that billions will NOT die off. All it involves is living with less consumption: being careful about things like food miles, waste and excessive use of chemicals, localising, and using passive heating and cooling. There's much more if anyone's interested; they can go to Ted Trainer's site http://www.permaculturevisions.com/TedTrainerssite.html or google him for articles he has written.
Fran, the 'thrust of my argument' about population is based on what the real factors and effects of population growth are as they relate to production (food, power, land, etc). Things are not always as they seem. There are whole areas of the Philippines, to cite one example, that have returned to jungle and forest after millennia of human occupation, because the distortion of the Filipino economy now has over 50% of that population living in urban areas. Distortion brought on by globalization in Haiti has had the opposite effect, and the human pressure on remaining forests has virtually eliminated trees from that country (in this case the substitution of trees/charcoal for propane/butane gas and an increase in goat herding). I've avoided, and will continue to avoid, discussing here whether population 'control' as a "target" is a good thing or a bad thing. My point is that population is inevitably a function of the mode of production and always has been, regulated by poverty, urbanization, access to technology, etc. That is, it's an effect of these factors, not the primary cause of the problem. I said all that to say this: for there to be a discussion of serious family planning (and I'm FOR that), the huge social distortions created by what is called "economics" in developing countries are going to have to change. As someone pointed out, no one is saying Germany or the UK is "too dense"; if they are, I've never, ever heard it expressed in the media. Population pressures can only be discussed as part of serious family planning in a democratic society. This hardly exists, with globalization and the religion of free trade and "let the market decide" being the modus operandi in the world today. Until that changes, 'reducing' population growth to some arbitrary number is like arguing over whether we should grow grapes or blueberries in our controlled greenhouse on Mars when we set up a colony there.
I'm not sure, Fran, what it is you disagree with me on about the use of fossil fuels for transportation. I'm against 'em, as you are, yes? "No-one in their right mind would ever suggest that the third world power down and you know who that poignant little fact is directed at." No? You failed, it seems, to distinguish between first world and third, so what is one to think? I looked at the cute village life in the link you provided, SG. Cute. How are the 20 million people in and around London supposed to get on with their lives while living like actors in Sherwood Forest? Seriously, this is a 'catholic' solution, in the literal sense of forced universality: you have to have total social buy-in to such a utopia for it to work. Cities just go bye-bye? Again, rhetoric and hyperbole aside, your vision of the world living in pastoral villages requires 100% buy-in and the rejection of every social norm I can think of (like, I WANT to live in SoHo in London…). Who is to enforce this wonderful new life the website promotes? BTW, a 5 MW LFTR would be *ideal* to power the example villages. Just a thought. David PS: to see how remote a possibility this is, Engels' "The Origin of the Family, Private Property and the State" would be worth a read, to show we're not going in the direction you want us to. Western countries' electorates will never power down willingly. Western governments would have to apply coercion to achieve such an objective. No political party legitimately seeking power would ever go that far, as they would be swept out of office for a generation. Despite a form of ETS being in operation in Europe for nearly a decade, no Euro country has seen a secular drop in power demand or supply. The obvious corollary to a permanent drop in power demand is a corresponding permanent drop in GDP.
Every single western government at the moment is expending billions of dollars to avoid a deep recession and get economies moving again, so a drop in living standards is not even close to becoming a realistic policy anywhere in the world. Given that, the best thing to do is to meet demand for cheap and abundant energy in the way that is least damaging to the atmosphere, and allow living standards to continue rising. "I'm not sure, Fran, what it is you disagree with me on about the use of fossil fuels for transportation. I'm against 'em, as you are, yes?" I suppose I was inferring that when you said "phasing out coal, as priority No. 1" you meant that this was a greater priority than phasing out liquid fossil fuels … On the broader question of population, I'd be for measures that would have the effect of reducing population in the longer run through attrition, as fertility fell below an average of 1.0 births per woman, but I wouldn't be for setting specific targets. "On the broader question of population, I'd be for measures that would have the effect of reducing population in the longer run through attrition, as fertility fell below an average of 1.0 births per woman, but I wouldn't be for setting specific targets." You wouldn't have a problem with that in Western countries. But how would you implement it around the world, in places where a male is the child of choice, without creating all sorts of social dislocation, as in China where there is an estimated 300 million excess males in the younger generations? Yes, I think coal is priority No. 1. Coal is the biggest stationary source of GHG and kills hundreds of thousands of people every year directly. It's true more people die in wars over liquid petroleum, but those deaths are not as intrinsic to it as coal's are to coal. I'd concede, though, that they are of equal malevolence. The difference, however, is that coal burning is a highly centralized, utility form of power; liquid fuels are just the opposite.
The social investment by *individuals* in their cars is paramount. NNadir on the Daily Kos always calls it the "CarCULTure". True enough. All that for this: I think it will be, from every angle (socially, politically, technologically), a lot easier to phase out coal than liquid fuels. Now, if you were to WRITE SOMETHING :) here, in a completely different thread, on, say, biodiesel and synfuel, I'm all for it. But I don't consider the liquid fuel problem to be in any way in "competition" with anything in terms of power generation. It's parallel to the electricity discussion. Alfred, it's all relative. There is no doubt that coal saves millions of lives each year by providing a ready and reliable source of energy and higher standards of living. Yet if you can provide that same level of service via other means that have the same (or better) features as coal (a concentrated store of energy, easily transported, reliable baseload, cheap to supply, able to operate at large energy scales and yet be compact and able to be housed close to demand centres, etc.), yet don't suffer from the damaging effects of coal pollution (both direct and indirect, however you might weigh those relative risks), then you save many additional lives. I'm quite certain that David was talking about the additionality that an energy source like nuclear power provides. "All that for this: I think it will be from every angle: socially, politically, technologically a lot easier to phase out coal than liquid fuels." Doubtless, and the technological challenges are considerable. I strongly believe the key to this lies less in new technology and more in reconfiguring the design of major population centres, so as to make cheap, efficient, effective, high-quality mass transport (much or all of which could be put onto an electric grid) available to nearly everyone.
If we design suburbs properly, everyone should be able to walk, use a bike or a local bus to do pretty much everything they need, and get their shopping delivered, or at worst use a small EV. So strategy number 1 is to make it possible for people to largely give up their cars and use grid-powered vehicles. Then your biofuels only have to shoulder the load for vehicles for which grid power would not be feasible. Since this would be the minority of demand, producing biofuels at the necessary scale from algae would be feasible. @the deathmonger who calls him/herself Salient Green: "No-one in their right mind would ever suggest that the third world power down and you know who that poignant little fact is directed at. However we can't bring the third world up to our standard of consumption, and it's not just energy constraints. Anyone who thinks this planet can support 8 billion people at first world consumption rates, and all it would take is lots of cheap energy, is truly incapable of rational thought." This planet is stocked with resources of energy and matter vast beyond your woeful intellectual grasp. There is enough extractable uranium and thorium in the earth's crust to support a human population far larger than the current level until the death of the sun. There are no fundamental shortages of any significant material resource. All it will take to survive in great high-tech style is some engineering skill, good management, and the will to rally these resources to that cause. In the end, people will respond to a positive message with much greater enthusiasm than to all your talk of limits, restrictions and 'powerdown' with all that it implies, which you are not willing to describe explicitly (and for very good reason), but which I and others will not let you ignore. "So strategy number 1 is make it possible for people to largely give up their cars and use grid powered vehicles." It's not so simple, Fran.
Decades upon decades of poor planning decisions and rank NIMBY policies by various state governments have made it extremely difficult to give up cars without causing great hardship for a lot of people. Most of the burbs in Australia and the US force people to buy cars in order to commute to work and do other sundry tasks. The fact is that people don't really go joy-riding in cars on Sundays any more. Cars are used to support modern standards of living: commuting to work, taking kids to school, shopping. Most people live too far from transit points to even contemplate public transport; otherwise life would be very difficult. Public transport like rail is particularly useful for carrying large numbers of people in straight lines, as in Tokyo, or for taking people from Midtown Manhattan to the Downtown area. However, in Australia work commuting is extremely diffuse, with people travelling in all sorts of directions to get to work these days. Alan Moran from the CIS (?) published a study which showed that only 15% of work commutes these days run down a straight line, as the majority of people no longer commute from burb to CBD. Most in fact travel from burb to burb in all sorts of directions. This makes the public transport option very difficult given the way our cities are planned. One way to counter such problems would be to remove height restrictions in the cities and attempt to create Manhattan-like living, which incidentally is actually very green, as cars are really more of a nuisance in Manhattan than a necessity. I lived there for 15 years, and it was only after being there for 5 years that I bought a car, which I rarely used; I ended up hating writing a monthly cheque for garaging fees for something of little use to me. Here's a great piece in City Journal, a free-market NYC-based think tank publication, that talks about this and about why bad planning decisions in places like California make the country less green.
It talks about how housing in Texas costs about $200,000 to $250,000 for an average home, while the same home costs about $500,000 in California once it's loaded up with all sorts of planning restrictions. The weather in parts of California is very conducive to living without much heating or a/c for most of the year, while Texas has shocking weather in comparison. "Green Cities, Brown Suburbs: to save the planet, build more skyscrapers, especially in California." David Walters #157: "You failed it seems to distinguish between first world and third…thus what is one to think?" One would think that the third world uses little and sometimes no power, that I did mention raising the third world out of poverty, and that Barry has recently referred to Ted Trainer, which should have raised a fair bit of awareness. No matter; people make erroneous assumptions all the time. It's their behavior based on those assumptions that I take issue with. Anyway, I correctly pegged you as a class above the others. The 5 MW LFTR sounds very elegant. I am a bit of a fan of them. The trouble is, by the time they could realistically be mass-produced, solar PV and lithium or other storage technology will be much more advanced. On your PS, I realise it is a remote possibility, but I firmly believe it is the right thing to do. I think there is much more likelihood of our business and political leaders taking us into a resource and environmental crisis, from which position the first world will be too self-involved to care a whit about the possibly billions dying in the third world. It will not stop me from raising awareness of the issues. Alfred Nock #162 said, on coal killing hundreds of thousands: "No it saves tens of millions of people every year David. Now what can you possibly be talking about?" That had to be irony, right? If not, let's put it realistically.
The ENERGY from burning coal enables millions of lives to be saved, but the EMISSIONS from burning coal cause hundreds of thousands of deaths, probably many millions of health disorders in humans, and untold damage to the natural world. I very substantially agree with your perspective: increasing urban densities is a key strategy in reducing the energy cost of providing urban people with the services they need. I don't think it all has to be high-rise, if by high-rise we mean more than about six storeys, and I think there is scope for a mix of densities, not excluding villas … But 30 people per ha is too low; something like 100 is closer to the mark. I think there are things we could do in the interim. Since people have cars and there is existing road infrastructure, it would make sense to build large car parks (capacity 3 x 5,000 = 15,000 vehicles) at or just before major choke points and service these with buses to the city centre. In Sydney, for example, this would seriously unclutter the motorways, allowing those for whom the service was not useful a free run. In the longer run this would encourage car pooling. You could put retailing and housing into these buildings for extra utility, and even have wind/solar PV on the top and plug-in recharge facilities in them. A second thing I'd do is change the basis on which cars are put on the road.
I'd reduce the registration charge to a nominal fee and abolish fuel taxes, but charge everyone a distance-based fee based on:
a) how much CO2 (assumed pro rata at $100 per tonne) and other pollutants came from the tailpipe, with a credit for lifecycle offsets from properly benchmarked biofuels;
b) the traffic volumes where and when they were driving (a GPS device would be installed to track this);
c) their accident/road-compliance/driver-competence profile;
d) the tare of their vehicle.
As to the design of suburbs, I'd have them designed like a peer-to-peer bus network diagram, so that each suburb would be a node off a major connecting road (MCR). There would be just two ways in and out (one at each end of the suburb, and only one connecting to the MCR), and to pass through the non-MCR connector you'd need a local tollway-style tag. This would stop rat runs but allow local flexibility to go to adjoining suburbs by car. Streets would carry only local traffic; everyone else would be forced onto the MCRs or mass transit. Of course, on foot or by bicycle you'd be able to move freely past bollards and gates, through parks, etc … Option 1, Talbingo-Blowering, is clearly the best option. Option 4, the Tumut 3 Expansion, is the least attractive. Option 2 is preferred to Option 3. The options are in order of preference. I suspect the best program would be to proceed with Option 1 first. Option 2 could be built at a later date. Neither of these options interferes with or compromises the existing T1, T2 and T3 development; they can all run in parallel. The T3 Expansion could be added at a later date; however, I suspect there would be other more attractive options. I do not believe Eucumbene-Talbingo would ever be viable. It would share the limited storage capacity of Talbingo with T3. This would compromise the efficient and flexible operation of T3 (T3 is currently our biggest pumped storage scheme and was always one of the most efficient of the Snowy generation assets).
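The four-part fee above can be sketched in a few lines. Every rate except the $100/tonne CO2 price is an invented placeholder, not a proposed figure:

```python
# Hypothetical sketch of the distance-based road fee described above.
# Only the $100/tonne CO2 price comes from the comment; all other rates
# and the 0..1 congestion/risk scales are illustrative assumptions.

def road_fee_per_km(co2_g_per_km, congestion, risk, tare_kg):
    """Per-km fee combining the four components a) to d) above."""
    co2_fee = co2_g_per_km / 1e6 * 100.0   # a) $100 per tonne of tailpipe CO2
    congestion_fee = 0.05 * congestion     # b) assumed $/km at peak congestion
    risk_fee = 0.02 * risk                 # c) assumed poor-record surcharge, $/km
    tare_fee = 1e-5 * tare_kg              # d) assumed road-wear charge by mass
    return co2_fee + congestion_fee + risk_fee + tare_fee

# A 180 g/km car, moderate congestion, average risk profile, 1,500 kg tare:
print(road_fee_per_km(180, 0.5, 0.5, 1500))   # ~$0.068 per km
```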
I've inserted my responses within your text. [NH] I will make a last attempt to explain the calculations of the maximum storage capacity of the Snowy using 120-140 km of tunnels (>12 m bore, as you suggest for Tantangara/Blowering). [PL] 130 km of tunnels (with steel lining and surge shafts in similar proportion by length to Tantangara-Blowering) would cost $4.4 billion. This cost does not include pumping or generating stations. The cost would be higher if the average length of the tunnels is shorter, which it would be. [NH] A flow of 1 ML/sec (3,600 ML/h) dropping 100 m delivers 970 MW of power. [PL] A flow of 1,000 m3/s dropping 100 m delivers 981 MW excluding efficiency losses, or 932 MW at 95% efficiency (and excluding head loss due to tunnel friction; head loss depends on tunnel diameter, length and the roughness of the tunnel surface). [NH] Thus one tunnel on the ~900 m Tantangara-Blowering drop will use about 0.33 ML/sec (1,200 ML/h) to generate 3,000 MW of power, and an active storage of 140,000 ML will allow 116 h of production, or 116 x 3 = 348 GWh of total storage. [PL] The design, calculations and cost use the same flow rate as Tumut 3, that is 1,133 m3/s. Flow for one tunnel would be 377 m3/s for 3,000 MW. Tantangara would have storage for 58 h of generation at peak power. It would take 111 h to fill by pumping. [NH] In no way was I suggesting the existing Tumut 1 & 2 be used to pump back to Eucumbene; rather, adding a separate >12 m tunnel between Talbingo and Eucumbene capable of 0.33 ML/sec when generating (with a slower return pumping rate), and a small 6 km pumping system from Blowering to Jounama (1 ML/sec, 10-30 m head), would allow water to flow in both directions between Blowering and Eucumbene. [PL] The Talbingo-Eucumbene tunnel, with generating and pump station, would cost $2.3 billion (very roughly). It would generate 6 GW. Flow rate (m3/s): generating = 377; pumping = 200.
[PL] The pumping system from Blowering to Jounama would be 20 km (not 6 km, because it needs to draw from the deep end of Blowering). The hydraulic head is 86 m from Blowering MOL to Jounama MSL (not 10-30 m). The flow rate of pumping from Blowering for the 1,500 MW new T3 (the smaller option), at half the pumping rate of the new T3, would be 300 m3/s. The flow rate for the 3,000 MW new T3 (the larger option), at half the pumping rate of the new T3, would be 600 m3/s. [PL] We'd need to build a new dam downstream from Jounama dam to make this work. The new dam would approximately triple to quadruple the active storage capacity of Jounama Reservoir. Rough cost estimate: $100 million. [PL] Rough cost for a T3 power increase of 1,500 MW = $1.9 billion; for an increase of 3,000 MW = $3.6 billion. [NH] The new tunnel would allow 0.33 x 600 = 2,000 MW for a total generating CAPACITY of 3,200 MW, plus Tumut 3 (1,500 MW), for a total capacity of 4,700 MW. [PL] The Talbingo-Eucumbene tunnel could generate 2,000 MW, plus T1 + T2 generating CAPACITY of 2,600 MW, plus Tumut 3 (1,500 MW), for a total capacity of 4,100 MW. Your last two paragraphs remind me of the expression "you can't make a silk purse out of a sow's ear". It looks to me as if you are prepared to advocate to the Australian and state governments that they should commit to a wind power system that depends on using all the stored hydro energy in the country just to get us through three days of low wind and sunshine. What happens when a second event occurs within a few days? It should be plain as day by now that wind and solar are simply not viable. They are not economic. They are not low cost. A while ago the wind power advocates were arguing that 'the wind is always blowing somewhere'. I get the impression from your previous posts that you now argue that 'the wind is always blowing everywhere'. Neil, your figures simply do not add up.
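The corrected [PL] figures can be cross-checked. The head for the Tantangara-Blowering tunnel is not stated in this exchange, so rather than assume one, we can solve for the head implied by "377 m3/s for 3,000 MW" at the stated 95% efficiency:

```python
# Cross-check of [PL]'s corrected hydro figures. The tunnel head is not
# stated above, so we solve for the head implied by the quoted numbers
# rather than assuming one.

RHO, G, EFF = 1000, 9.81, 0.95   # water density, gravity, stated efficiency

def power_mw(q_m3s, head_m, eff=EFF):
    """Power in MW for a flow (m^3/s) over a head (m) at efficiency eff."""
    return RHO * G * q_m3s * head_m * eff / 1e6

# 1,000 m^3/s over 100 m: 981 MW ideal, ~932 MW at 95% (matches [PL])
ideal = power_mw(1000, 100, eff=1.0)
lossy = power_mw(1000, 100)
print(ideal, lossy)

# Head implied by one tunnel at 377 m^3/s delivering 3,000 MW:
implied_head = 3000e6 / (RHO * G * 377 * EFF)
print(implied_head)   # ~854 m, roughly the Tantangara-Blowering drop
```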
You do not have 33 GW of generating capacity to meet peak when the wind is not blowing and the sun is not shining. I'd also add that it is not acceptable to draw down hydro storage that your wind generators did not store. This storage must be maintained for emergency use and grid stabilisation. The power you can draw on is only what you've stored by pumping. Face it: wind is simply not going to work. However, all is not lost, because there is a far better option. All we have to do is get past the irrational hang-ups. @Alfred: I think coal was one of the most important progressive technological developments in human energy history. It was, and is, vitally important. Among the fossil fuels there is probably nothing coal does that gas can't do better, except the production of coke for the steel industry. But as Barry noted, the accumulated facts surrounding coal show it to be detrimental when *other*, superior energy sources are available, like nuclear. As coal is the largest stationary source of GHG emissions, phasing out coal (and other fossil fuels) needs to be priority No. 1 for climate and energy activists. @Fran: I too see the future as a grid-based auto future and, possibly, biofuels. The other issue is to give people incentives to use public transportation (like making it free, for example). But in the US only 6% of the population uses public transportation, so we have to make it more available, obviously. There are other major hindrances to getting people out of their cars in the US, chief among them suburbia. Most of the US population lives in somewhat diffuse, largely suburban residential neighborhoods. I do, for example, living outside of SF. For me to get to work, I'd have to take a BART train and then a streetcar. It takes 1 hour and 15 minutes door to door. By my truck it takes 14 minutes. Wanna guess what I do?
This is true for many people, and it will take generations of change to make the US population of 300 million friendlier to mass transportation. Perhaps we should put the reservoirs you are suggesting on top of the solar towers :) To help you understand what you are suggesting, and so you can do some of your own calculations, below I'll give you the formulae to calculate the volume of water and the height from upper to lower reservoir needed to get 1 kWh of energy, and the flow rate needed to get 1 kW of power. If you haven't already, you might like to read the "Solar Power Realities" paper. It shows the area that would need to be inundated, at 150 m height above the lower reservoir, to provide our energy demand for a day. There is also a problem with putting sea water in reservoirs on land: how do we prevent infiltration of salt water into the groundwater? I love your ideas, but much of what you are suggesting is very well understood. A great background on how to do some simple calculations yourself is provided by David MacKay in his book "Sustainable Energy – without the hot air". You can access the whole book from the blogroll list at the top left of any of the BNC web pages. Here is the formula: Power = flow rate x density of water x acceleration due to gravity x hydraulic head (height). Power in W = m3/s x 1,000 kg/m3 x 9.81 m/s2 x m (divide by 1,000 for kW). "Most transmission additions would be Sydney and Melbourne to Snowy (if solar was PV) and 3,250MW from Perth to Pt Augusta and an increased Bass-Link (400km)." I don't agree with this statement. More on this below. Also, in post #35 you said: Your study of transmission costs is disappointing. The theory behind the 'wind blowing somewhere' idea IS NOT to have the entire wind capacity moved from one side of the continent to the other.
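The formula above can be wrapped in a few lines; note that the raw product of the SI units gives watts, so a division by 1,000 is needed for kW:

```python
# Hydro power and storage from the formula above:
# power = flow rate x water density x g x head; SI units give watts.

def hydro_power_kw(flow_m3s, head_m):
    return flow_m3s * 1000 * 9.81 * head_m / 1000   # /1,000 converts W to kW

def stored_energy_kwh(volume_m3, head_m):
    """kWh released by draining a volume of water through a given head."""
    return hydro_power_kw(volume_m3, head_m) / 3600  # 1 kWh = 3,600 kJ

# Flow for 1 kW at the paper's 150 m head: ~0.00068 m^3/s, i.e. ~0.68 L/s
print(1 / hydro_power_kw(1, 150))

# Volume for 1 kWh at a 150 m head: ~2.45 m^3
print(1 / stored_energy_kwh(1, 150))
```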
For example, WA would have 20% of the wind capacity (SA, TAS, VIC and NSW about the same, with a small amount in QLD), so, on the observation that wind dispersed over the size of a state will at most generate 75% of capacity, WA would only ever produce 15% of total capacity (9 GW, not 25 GW), and some of this would be used locally (3 GW), so at most 6 GW would be exported east (even less with CAES), and not to Sydney but to Pt Augusta, with perhaps another 1-2 GW moved on to Adelaide. Sydney and Melbourne would get most power from pumped storage (moving over much shorter distances). When high winds exist in NSW and VIC, energy would be returned to the Snowy, with 2-3 GW to WA (if there were no wind in WA, which is most unlikely considering the 2,000 km of good wind coastline). Your statement that 10,000 km of line would have to carry 25 GW totally misunderstands how grids work. Feeder lines will only have the capacity of the solar and wind farms, and none of these would be anything like 25 GW. The major transmission links would be Snowy to Sydney, Snowy to Melbourne, Melbourne to Tasmania and Pt Augusta to Perth. We already have a large grid in SE Australia, but it would have to be expanded. OCGT/CCGT and nuclear would probably be sited at existing coal-fired power stations, using existing transmission lines. I disagree with most of this. The statements are correct for a grid supplied by reliable generators, like fossil fuel, nuclear and hydro, but not for intermittent generators like wind and solar. To make this easier for our readers to follow, let's consider a scenario with wind power for generation and pumped hydro storage in the Snowy Mountains for energy storage. The wind farms have no on-site energy storage. The wind farms are distributed along the south coast from Perth to Melbourne. We can have several days of very low levels of generation. Occasionally there is no generation at all. At other times one or more areas may be generating at near maximum output.
Regarding the sizing of the transmission lines: if the wind power advocates want to be able to include the full power output of a wind farm in their average capacity factors and average power outputs, then the transmission line must be sized to carry the full capacity of the wind farm, not just its average output. Similarly for a region of wind farms: the transmission lines must be able to carry the full capacity of all the wind farms if we want access to all the power when they are generating at full power. If we ever need all the power that Western Australia's wind farms can generate, we must size the transmission system to carry all that power. With intermittent generators we can have the storage at the generator (e.g. chemical energy storage), centrally located (e.g. pumped hydro), or a mixture. For the case where the storage is located at the generator (such as with solar thermal) and there is sufficient storage for the power station to provide continuous power on demand throughout the year (even through several days of overcast conditions), the transmission line will be sized to carry the peak power that would be demanded from that power station. The transmission lines must be able to carry that power to the demand centres. For the case where the storage is centrally located (e.g. pumped hydro), the transmission line will be sized to carry the peak power output that would be supplied by any region of wind farms. The main transmission lines would run from the generators to the central storage site. The enhancements to the grid from the pumped storage sites to the demand centres would be less significant (relatively). The transmission system requirements to support intermittent renewable energy generators will be very costly. The paper attached at the top of this thread shows that the cost of the transmission system for the solar thermal option would be greater than the total cost of the nuclear option.
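The sizing point can be made concrete with a toy example; the capacity factor and hourly profile below are illustrative assumptions, not figures from the thread:

```python
# Why a transmission line must be sized to a wind farm's capacity rather
# than its average output: a line sized to the average clips every
# strong-wind period. Capacity factor and hourly profile are made up.

capacity_gw = 3.0
capacity_factor = 0.30                      # assumed, for illustration
average_gw = capacity_gw * capacity_factor  # 0.9 GW average output

hourly_output_gw = [0.2, 1.5, 2.8, 3.0, 2.6, 0.4]   # a sampled windy day

delivered_avg_line = sum(min(p, average_gw) for p in hourly_output_gw)
delivered_full_line = sum(min(p, capacity_gw) for p in hourly_output_gw)
print(delivered_avg_line, delivered_full_line)
# The average-sized line passes 4.2 GWh of the 10.5 GWh generated:
# most of the windy-day energy, which the averages count, is thrown away.
```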
I certainly agree that the problem won't be easy — price signals on both fuel and motor vehicle usage will be needed, along with coextensive measures to relocate people in such a way that high-quality services can be supplied cost-effectively. In my own case, I spend an average of 55 minutes each way by car in preference to a walking + public transport journey that would take about 70 minutes. (I do carpool, though.) If what I outlined above were in place, my carpool journey time would probably fall to about 35 minutes and the public transport journey to not much more (maybe 40 minutes). Peter Lang (178) — I don't know enough about Australia to work out actual estimates. Here we have considerable hydro, with winter weather doing the pumping. The hydro provides backup for the wind being installed under a tax incentive plan. BPA has already stated that they cannot provide backup for more than 20% of the Pacific Northwest grid; that amount of wind is projected to be reached in 2025 CE. I've seen a photograph of some of the Nullarbor coastline and the plain beyond. Other than the distance to consumption, maybe that would work as a place to locate sea water reservoirs. As for soaking into the ground, there are several methods to keep that from happening rather inexpensively. People have an infinite number of possible suggestions that all look great until they are costed. There is no point at all in chasing many of these suggestions. The bedrock under the Nullarbor plain is limestone. It is cavernous. The cost of sealing a reservoir is totally prohibitive. It is clear from your suggestions that you have no appreciation of the volumes of water involved, or of the area that would need to be inundated. Have a look at the Solar Power Realities paper. This will give you some perspective. David, intermittent renewables are totally uneconomic, and less environmentally benign than nuclear. So why do you keep pushing them?
Peter Lang (182) — Wind is apparently the choice here in the Pacific Northwest. There is a paper indicating that, once all the costs (actually all of them) are included for the historical record in the USA, nuclear has cost around $0.25–0.30 per kWh. So that does not look so economic to me. Perhaps in the future nuclear will be cost effective, but so far it does not seem so to me. Peter #175, 179, Thank you for some of the corrections for the flow rates of Tumut 1 and 2 and their power outputs. I am not sure why you cannot envision Eucumbene to Talbingo and Talbingo to Jounama/Blowering acting as one system, with Talbingo providing a buffer for short-term increased power outputs similar to what is available now. You seem to now agree that we could store rather large amounts of energy (several days' supply) in the Snowy with the existing dams, which was my point. The issue of meeting peak demand for 1-6 hours is separate from providing 1,200GWh to cover widespread cloud/low-wind conditions. The former is an issue of capacity (GW), the latter of storage energy (GWh). You are still missing the issue of wind/solar farms being dispersed, and the supposed need for 10,000km of 25GW transmission lines. Take the case of Perth having a 3GW capacity wind farm to the south, a 3GW capacity solar farm to the north, and a 3GW transmission line to the east to Adelaide and on to the Snowy. Because Perth and Adelaide (with 3GW of local wind farms) consume about 3GW at peak, the 6GW of wind farms and 3GW of solar are never going to require more than 3GW of transmission capacity from Perth to Adelaide. Adelaide is linked to Melbourne and on to Tasmania hydro, and Melbourne is linked to the Snowy. In the case where wind farms are generating at maximum in WA and SA (about 75% of 6GW), the maximum load would be <3GW for Perth to Adelaide and 65% of total power consumption (i.e. using about 8GW of the 12GW storage capacity).
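The capacity-versus-energy distinction drawn above can be made concrete with a toy calculation. The 1,200GWh figure is from the comment; the 8GW plant rating is an assumed example, not a figure from the thread.

```python
# Power (GW) limits how fast a store can deliver; energy (GWh) limits how long.
storage_energy_gwh = 1200.0  # energy limit, figure quoted in the comment
plant_power_gw = 8.0         # assumed pump/turbine rating, for illustration

hours_at_full_power = storage_energy_gwh / plant_power_gw
days_at_full_power = hours_at_full_power / 24.0

print(f"{hours_at_full_power:.0f} hours at {plant_power_gw:.0f} GW")
print(f"that is about {days_at_full_power:.1f} days of continuous full output")
```

A store can fail either way: too little power to meet a 1-6 hour peak, or too little energy to ride out several days of widespread cloud and low wind. The two limits are independent.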
Major energy flows do not have to move from one end of the grid to the other, just minimum energy flows; for wind this would be about 10% of capacity, less for solar unless all of the solar was in one location. A similar grid would be highly desirable for nuclear power. For example, if Perth had 3 x 1GW reactors there would be a small chance that one of the three would have an unscheduled outage while a second was on scheduled shutdown, so 2GW from the east coast would make sense. The other alternative is to keep 2-3GW of OCGT capacity on standby, the same solution that would be used to provide insurance against continent-wide cloud cover and continent-wide low wind occurring on the same day. As someone who has always been keen on pumped storage, and especially keen on seaboard pumped storage (since you save yourself the cost of a lower reservoir), I'm sympathetic to your argument here. Yet the cost of the lower reservoir is only one of the challenges. Fairly obviously you need lots of head pressure, so the ideal location will have topography at high elevation close to the shoreline. It will also have to offer quite a bit of scope to be modified to accommodate a very substantial volume of water, which implies that it is structurally very sound and has a large, fairly flat area (or one that could be made so). Of course you don't want this place to be a long way from the demand for power or from a grid point, otherwise transmission costs become a factor, and ideally you'd want to be close enough to do desalination cost-effectively, since then you can spread the cost to water users. This tends to narrow your options sharply. Consider also the quantity of concrete and steel you're going to need to retain the volume of water you have in mind. There's a huge embodied energy cost right there. Storing, for argument's sake, 0.1 teralitres (100 gigalitres) of water means holding roughly 100 million tonnes.
Assuming you think you can contain 1 cubic metre of water securely with 0.25 cubic metres of reinforced concrete, your major cost will be the 25 million cubic metres of concrete, each cubic metre of which weighs about 2,500kg. That topography is going to have to be very strong indeed. I don't know how much this would cost to build, but I'm guessing $100 per tonne wouldn't be high, and might well be low. And of course you haven't bought any pumps or turbines or other equipment yet. Assuming a head of 100m, there's 27.2GWh — a little more than one hour of Australia's average power. I'll address your points one at a time; it's too difficult doing it in one large post. I am not sure why you cannot envision Eucumbene to Talbingo and Talbingo to Jounama/Blowering acting as one system, with Talbingo providing a buffer for short-term increased power outputs similar to what is available now. Likewise, I am not sure why you cannot see that it is the least uneconomic of the four options and, in addition, it imposes constraints and reduces the efficiency of the existing assets, as I have explained. I have not attempted to cost the loss, but I suspect it is substantial. You seem to now agree that we could store rather large amounts of energy (several days' supply) in the Snowy with the existing dams, which was my point. That is a misrepresentation of my position. I agree that we can from a pure physics perspective, but my position relates to the cost-effectiveness of the proposals. I agree there is substantial untapped energy storage in existing structures; however, I am not sure how much is viable to develop. I also believe the requirements for storing energy from intermittent generators are very different from those for storing energy from reliable generators, which can pump at a constant rate through the hours of the night when baseload is less than average daily demand. In part this issue relates to the transmission, where we are poles apart (so to speak).
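The seaboard-reservoir arithmetic above can be checked with the standard potential-energy formula E = mgh. The 100 million tonnes of water, 100m head, 0.25 m³ of concrete per m³ of water, and ~2,500 kg/m³ concrete density are all taken from the comment.

```python
# Check of the seaboard pumped-storage arithmetic above.
mass_kg = 1.0e11   # 100 million tonnes of water (about 100 GL)
g = 9.81           # gravitational acceleration, m/s^2
head_m = 100.0     # assumed head

energy_j = mass_kg * g * head_m
energy_gwh = energy_j / 3.6e12   # 1 GWh = 3.6e12 J
# roughly 27 GWh, matching the 27.2GWh figure in the text

# Concrete to contain it, at 0.25 m^3 of concrete per m^3 of water:
water_m3 = mass_kg / 1000.0          # 1 tonne of water per cubic metre
concrete_m3 = 0.25 * water_m3        # 25 million m^3
concrete_tonnes = concrete_m3 * 2.5  # at ~2,500 kg per m^3

print(f"stored energy: {energy_gwh:.1f} GWh")
print(f"concrete: {concrete_m3 / 1e6:.0f} million m^3 "
      f"({concrete_tonnes / 1e6:.1f} million tonnes)")
```

Note the scale: a structure consuming tens of millions of tonnes of concrete buys only about one hour of Australia's average demand.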
Secondly, you say “we can store rather large amounts of energy”. The active capacity of the reservoirs is not the constraint. The constraint is how much we can pump per day. The economic viability depends largely on the length of the tunnels required to connect the existing reservoirs. The tunnels are the high-cost item; they comprise about 50% of the cost of the Tantangara-Blowering facility. Tantangara can store 58 hours of energy at full generation capacity. However, that assumes Tantangara is used for nothing else. It means a lot of the water that Tantangara catches and diverts to Eucumbene would be lost: it would be spilled over the Tantangara spillway and run down the Murrumbidgee. So this loss of water (i.e. energy) should be factored in. I haven't done that. So your statements are misleading; they are not a correct interpretation of what I said. I do admit that the power of the Tantangara-Blowering facility did surprise me. That does look to be a potentially viable option, although what I've done is a very preliminary, purely desktop, analysis. I have some overseas colleagues checking my calculations and costs. It will be interesting to see what comes back. By the way, do you have any costs? You mentioned that you do for the Blowering-Jounama and Tumut 3 expansion project. Are you willing to post them here? I'd particularly like to see any costs you have relating to the following: 1. Civil component of a new Tumut 3 power station 2. Headrace excavation or tunnel and inlet structure 3. Penstocks (same as T3) 4. Turbines (same as T3) 5. Generators (same as T3) 6. Six pumps (same as the three in T3) 7. Tailrace excavation 8. Pumps for Blowering to Jounama 9. Pipes for Blowering to Jounama 10. New dam downstream from Jounama Dam. The issue of meeting peak demand for 1-6 hours is separate from providing 1,200GWh to cover widespread cloud/low-wind conditions. The former is an issue of capacity (GW), the latter of storage energy (GWh). I agree.
The point I was making about power is that, for the scenario I have analysed (i.e. the NEM demand in 2007, and no fossil fuels), we need the generation capacity to meet peak demand. I also added that we cannot rob the energy stored in the Snowy, because it is required for the maintenance of grid stability and for emergencies. The Snowy is constrained by the amount of water entering its dams. Recently the Snowy's capacity factor was 14% for a year, because of the lack of water inflow. So we cannot rob that water to try to make wind and solar power look viable. Wind and solar power need to stand on their own. So I am adding a new constraint to my scenario: the intermittent generators can draw what they have stored, but no more. If we need to add very large amounts of storage capacity (as we would for intermittent renewables), then Eucumbene-Blowering (tripled) would be the way to go. On the other hand, Tantangara-Blowering would be more than sufficient to allow nuclear to provide the total NEM demand (2007), as laid out in the paper “Solar Power Realities – Addendum” and summarised in the overview at the top of this thread. To support intermittent renewables, we need 33GW of power and 1,350GWh of energy storage (for three days). To support nuclear, we need 8GW of power and about 50GWh of energy storage. Quite a difference! And that storage required for renewables is on top of the far higher generation costs and the far higher transmission costs. This is my reply to the last part of your post #184. I hope this clarifies the issue, although I suspect we are a distance apart on this, in part due to the different scenarios we are analysing. I think you want to consider the scenario of a potential position and generation mix in 2030. What I've been doing, and to keep consistency with the other papers I'd prefer to stick with it for now, is to consider the technologies that are available now and that could provide the NEM's 2007 demand without burning fossil fuels.
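The storage comparison quoted above (33GW and 1,350GWh for renewables versus 8GW and about 50GWh for nuclear) works out to roughly a fourfold power gap and a 27-fold energy gap:

```python
# Storage needed to back each option, using the figures from the comment above.
renewables_power_gw, renewables_energy_gwh = 33.0, 1350.0
nuclear_power_gw, nuclear_energy_gwh = 8.0, 50.0

power_ratio = renewables_power_gw / nuclear_power_gw        # ~4.1x
energy_ratio = renewables_energy_gwh / nuclear_energy_gwh   # 27x

print(f"power ratio:  {power_ratio:.1f}x more pump/turbine capacity")
print(f"energy ratio: {energy_ratio:.0f}x more stored energy")
```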
So that, if we really wanted to make the changes quickly, we could, and we'd have some idea of the cost of the options. Having said that, below is my response to the last part of your post #184. You are still missing the issue of wind/solar farms being dispersed, and the supposed need for 10,000km of 25GW transmission lines. Take the case of Perth having a 3GW capacity wind farm to the south, a 3GW capacity solar farm to the north, and a 3GW transmission line to the east to Adelaide and on to the Snowy. Because Perth and Adelaide (with 3GW of local wind farms) consume about 3GW at peak, the 6GW of wind farms and 3GW of solar are never going to require more than 3GW of transmission capacity from Perth to Adelaide. The premise is false. You are not looking at the problem correctly. Following is the way to analyse it. The situation is that there is zero or near-zero wind over the wind farms in eastern Australia. The only place with wind is SW Western Australia. We are dealing with the wind farms at the moment; leave the solar power stations out of it. They are totally uneconomic. The average demand in the eastern states is 25GW. We will store energy in pumped-hydro storage when demand is less than 25GW and release energy from pumped-hydro storage when demand is more than 25GW. So we need transmission lines with 25GW capacity. By the way, this assumes that all the wind farms have their own on-site storage, and that this storage is sufficient to allow them to provide enough power to meet the 25GW demand at all times. If the wind farms do not have their own on-site storage, the transmission line needs even more than 25GW of capacity. Adelaide is linked to Melbourne and on to Tasmania hydro, and Melbourne is linked to the Snowy. These links are totally inadequate. They can't even handle the transient flows we have on a relatively stable, fossil-fuel-powered system, let alone on a fully wind-powered system. The two interconnections from South Australia to Victoria are 200MW and 250MW.
They would have to be increased to 25GW capacity (less SA demand) to transmit the power from WA. In the case where wind farms are generating at maximum in WA and SA (about 75% of 6GW), the maximum load would be <3GW for Perth to Adelaide and 65% of total power consumption (i.e. using about 8GW of the 12GW storage capacity). I don't follow this bit. Anyway, the scenario we are considering is the case where the only power is coming from WA, not from SA. Major energy flows do not have to move from one end of the grid to the other, just minimum energy flows; for wind this would be about 10% of capacity, less for solar unless all of the solar was in one location. The scenario is that we have a demand of 25GW in the eastern states and the only wind farms generating are in WA. So we need to transmit 25GW. A similar grid would be highly desirable for nuclear power. For example, if Perth had 3 x 1GW reactors there would be a small chance that one of the three would have an unscheduled outage while a second was on scheduled shutdown, so 2GW from the east coast would make sense. Transmission from the eastern states is one option to provide the necessary redundancy. There are other options: for example, five 600MW units instead of three 1GW units. It depends on which is the least cost. The transmission lines need a redundant line also. The other alternative is to keep 2-3GW of OCGT capacity on standby, the same solution that would be used to provide insurance against continent-wide cloud cover and continent-wide low wind occurring on the same day. We'd need 25GW of OCGT back-up for wind (less the hydro generating capacity and less the transmission capacity from WA)? The wind and solar power outages are frequent. The sort of scenario you paint for the nuclear outages would be rare. We do have to have sufficient back-up to cover for them, but it is not the same situation as with wind, where outages are a frequent occurrence. Anyway, it is quite likely that Australia would not adopt large nuclear units.
To facilitate the change from coal to nuclear, smaller power reactors that are more closely matched to our coal-fired units may be better. The nuclear/grid issues were worked out long ago. The management and capital cost issues of a grid supplied by nuclear power are totally insignificant compared with the problem of trying to manage intermittent renewables. Option 1, Talbingo-Blowering, is clearly the best option. Option 4, Tumut 3 Expansion, is the least attractive. Option 2 is preferred to Option 3. The options are in order of preference. I suspect the best program would be to proceed with Option 1 first. Option 2 could be built at a later date. Options 1 and 2 would not interfere with or compromise (much) the existing T1, T2 and T3 development; they can all run in parallel. Option 4, T3 Expansion with pumping from Blowering, could be added at a later date. However, I suspect there would be other, more attractive options. I do not believe Eucumbene-Talbingo would be viable. It would be sharing the limited storage capacity of Talbingo with T3. This would compromise the efficient and flexible operation of T3 (T3 is currently our biggest pumped storage scheme and was always one of the most efficient of the Snowy generation assets). The main constraint on Tumut 3 is the insufficient downstream storage. This problem would be exacerbated by the proposed extension. I suspect the new dam would be virtually mandatory for this option to be considered. Peter #191, It's a valid point to have a theoretical simulation of power demand in 2007, but it should consider the whole of Australia. The reality is that it's going to take 20-30 years to replace all coal-fired power, so saying we have to have a solution now that uses no FF is a bit restrictive. It would make more sense to compare coal replaced by CCGT with other options such as all nuclear, all wind, or mixes of two or more.
To elaborate: replacing FF-generated electricity with wind power alone would need about three times this (25GW NEM and 2.5GW WA, plus considerable off-grid NG power generation, for example LNG, the goldfields mines and alumina refining). For simplicity let's say this is 28GW average (85GW capacity) for wind. QLD would have just a few per cent and TAS up to 15%, with WA, SA, VIC and NSW each about 20% of this capacity (17GW in WA). How much transmission capacity is needed from WA to eastern Australia? Clearly not 25GW. The wind regions of WA cover 3,000km, so the maximum output would be considerably less than the 75% output of the 13 NEM farms. Let's say 70% of capacity 99% of the time, with a small amount of power shed (5% of output 1% of the time), or 11.9GW maximum. But WA uses about 2.5-4GW, so the maximum available for export would be 9.4GW. Since WA has limited pumped storage, they may want 3GW of CAES available to ensure that an HVDC link to SA could be used to move up to 6.4GW to SA. This is about 8% of the capacity of the entire grid. One region never has to move 25GW; of the 6.4GW, 2-3GW would be used in SA and the other 3-4GW would go to other cities if no other wind were available, or go to pumped storage in the Snowy or TAS if other regions had adequate wind. SA, VIC and NSW have more options if they are the only high-wind regions: most power would be used locally, with the surplus (9-10GW) going to other regions, so SA would be exporting energy to WA, VIC and NSW while these regions also drew on storage. For short-term power (GW) the size of storage is not relevant. As for storage capacity, there is no reason why this cannot be replaced in weeks. Data from 13 wind farms shows that there are long periods of wind power higher than average, where pumping could be used, and only short periods of little or no power; for example, 1 July to 13 September has a one-day (8/7) and a 3-day (15,16/7,17/6) low-wind period separated by 6 days, and then 13 good wind days before the next low-wind day (30/7).
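The WA export figure above follows from a simple chain of percentages. All the inputs are the comment's own estimates (85GW national wind capacity, 20% of it in WA, 70% maximum simultaneous output, 2.5GW of local WA demand), not established data.

```python
# Reproducing the WA export estimate in the comment above.
national_capacity_gw = 85.0
wa_share = 0.20
wa_capacity_gw = national_capacity_gw * wa_share              # 17 GW

max_simultaneous_fraction = 0.70  # assumed smoothing over ~3,000 km of coast
wa_peak_output_gw = wa_capacity_gw * max_simultaneous_fraction  # 11.9 GW

wa_local_demand_gw = 2.5
wa_export_gw = wa_peak_output_gw - wa_local_demand_gw         # 9.4 GW

print(f"WA capacity:    {wa_capacity_gw:.1f} GW")
print(f"WA peak output: {wa_peak_output_gw:.1f} GW")
print(f"exportable:     {wa_export_gw:.1f} GW")
```

The disputed point in the thread is not this arithmetic but the premise: whether the line should be sized for this smoothed maximum or for the full demand when no other region is generating.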
That's without considering any wind power from northern NSW or from WA. Pumping would take 1.5h to restore the water used for every 1GWh generated (for example, Tumut 3 has 3 turbines that use 80% of output in pumping, at 80% efficiency = 64% round trip). The other point about pumped storage is that it would always operate from the grid, which is usually stable power. I am not sure why you think the grid would be unstable. It's a valid point to have a theoretical simulation of power demand in 2007, but it should consider the whole of Australia. The reality is that it's going to take 20-30 years to replace all coal-fired power, so saying we have to have a solution now that uses no FF is a bit restrictive. It would make more sense to compare coal replaced by CCGT with other options such as all nuclear, all wind, or mixes of two or more. There are an infinite number of alternative ways to do these analyses, and an infinite number of alternative approaches we could propose we could or “should” take. You seem to be missing the main point of the exercise. The main point was to show the economic viability, or lack thereof, of the intermittent renewable energy technologies for providing us with low-emissions electricity generation. The central point of the exercise would become less clear and less obvious to most people the more complicated we make the analysis. Also, the main point would get lost if we attempted to look into the future and guess what might be. As we look into the future the main point gets missed as we argue about: what technologies might be available; what the costs might be in the future; what the total demand and the demand profile might be; what the emissions might be; and a host of other ‘maybes’. You and I don't even agree, within orders of magnitude, on what transmission capacity is needed to transmit solar power from the deserts to the demand centres. And all this is using currently available technologies and their current costs.
What chance would we have of making any headway if we were attempting to guess what might be in the future? To reinforce this point, consider the number of alternative options that have been proposed on this blog site as to what I should have considered instead of what I did. Here are a few: solar thermal chimney; chemical storage; CAES; pumped storage using windmills pumping water into lined reservoirs on the Nullarbor Plain; smart grid; bio-gas. If we look into the future, the options are endless. We'd be buried in arguments about assumptions and minutiae and get nowhere. The whole point would be buried. I sometimes wonder if that is, perhaps, the aim of some of the blogs. The point of the exercise was to keep the analysis sufficiently simple that most people could check the calculations themselves. There are many, many sophisticated analyses being done and published all the time, but most people's eyes glaze over. They do not understand the assumptions or the inputs, and so cannot check them. If people want to see the outputs of the sophisticated modelling forecasts, there is seemingly no end of them. To elaborate: replacing FF-generated electricity with wind power alone would need about three times this (25GW NEM and 2.5GW WA, plus considerable off-grid NG power generation, for example LNG, the goldfields mines and alumina refining). For simplicity let's say this is 28GW average (85GW capacity) for wind. QLD would have just a few per cent and TAS up to 15%, with WA, SA, VIC and NSW each about 20% of this capacity (17GW in WA). How much transmission capacity is needed from WA to eastern Australia? Clearly not 25GW. The wind regions of WA cover 3,000km, so the maximum output would be considerably less than the 75% output of the 13 NEM farms. Let's say 70% of capacity 99% of the time, with a small amount of power shed (5% of output 1% of the time), or 11.9GW maximum. But WA uses about 2.5-4GW, so the maximum available for export would be 9.4GW.
Since WA has limited pumped storage, they may want 3GW of CAES available to ensure that an HVDC link to SA could be used to move up to 6.4GW to SA. This is about 8% of the capacity of the entire grid. One region never has to move 25GW; of the 6.4GW, 2-3GW would be used in SA and the other 3-4GW would go to other cities if no other wind were available, or go to pumped storage in the Snowy or TAS if other regions had adequate wind. SA, VIC and NSW have more options if they are the only high-wind regions: most power would be used locally, with the surplus (9-10GW) going to other regions, so SA would be exporting energy to WA, VIC and NSW while these regions also drew on storage. For short-term power (GW) the size of storage is not relevant. As for storage capacity, there is no reason why this cannot be replaced in weeks. Data from 13 wind farms shows that there are long periods of wind power higher than average, where pumping could be used, and only short periods of little or no power; for example, 1 July to 13 September has a one-day (8/7) and a 3-day (15,16/7,17/6) low-wind period separated by 6 days, and then 13 good wind days before the next low-wind day (30/7). That's without considering any wind power from northern NSW or from WA. Pumping would take 1.5h to restore the water used for every 1GWh generated (for example, Tumut 3 has 3 turbines that use 80% of output in pumping, at 80% efficiency = 64% round trip). The other point about pumped storage is that it would always operate from the grid, which is usually stable power. I am not sure why you think the grid would be unstable. Sorry Neil, I do not agree with this. I think we have discussed it repeatedly, and I am not keen to go around the buoy all over again. I believe the papers, and the subsequent discussions on this thread, address your points. In short, you are still using averages to hide the problem of the intermittency of wind.
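The pumping figures quoted above (80% pumping efficiency times 80% generating efficiency) imply a 64% round trip, which is where the roughly 1.5 hours of pumping per hour of generation comes from. The 1GW pumping rate below is an assumption for illustration, not a Tumut 3 figure.

```python
# Round-trip efficiency of pumped storage, using the figures in the comments.
pump_efficiency = 0.80
generate_efficiency = 0.80
round_trip = pump_efficiency * generate_efficiency  # 0.64

# Energy that must be pumped in to deliver 1 GWh back to the grid:
energy_in_gwh = 1.0 / round_trip  # about 1.56 GWh

# At an assumed 1 GW pumping rate, that is about 1.6 h of pumping per hour
# of full-rate generation, close to the 1.5 h quoted in the thread.
pumping_rate_gw = 1.0
hours_pumping = energy_in_gwh / pumping_rate_gw

print(f"round trip: {round_trip:.0%}")
print(f"pumping time per GWh delivered: {hours_pumping:.2f} h")
```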
There are periods where there is no, or little, wind over SE Australia (see the chart in the “Wind and carbon emissions – Peter Lang Responds” thread; it highlights the irregular output from wind). So we either have no generation, or perhaps a contribution from WA. Since we need to supply power to exactly meet demand at all times, the balance of the power has to come from energy storage. When there is no wind power we need to draw 33GW of power from energy storage. You say you can recharge the energy storage quickly. To do that you need transmission capacity from every wind farm equal to each wind farm's total capacity! Without that, the maximum recharge rate you can achieve is limited by the transmission. The cost of what you propose would be much higher than for the scenario used in the analysis described in the introduction to this thread. Also, we have to have reliable, steady power to pump. Therefore, much of the wind power that is available when the wind is blowing couldn't be used; it would be wasted. I hope you will focus on the total system and the costs of a total system that can meet all the constraints and requirements. On a separate point, could you please say whether you have cost figures for your estimates for the Tumut 3 enhancement you propose, and whether you are prepared to share them (see the end of my post #187)? Peter, In the last table within the “Appendix – Cost Calculations for Solar Thermal” section, under “Cost for 25GW baseload power, through…”, it shows a dramatically reduced Collector Field cost ($1487B vs. $8583B) only because of a disproportionately small increase in storage capacity. Could this be right, and would scaling up the storage further reduce the overall cost? Thanks, Bunion. Peter, I am finding this too detailed for me to follow, but may I venture this remark: you are winning by an impressive margin. The question that will arise in many minds, though, is how robust is this margin?
If non-intermittent renewables (biogas etc.) are incorporated; CCS gas and coal are allowed, in reasonable amounts; maybe even non-CCS gas and coal (why not? Under a proper international deal, we'll be paying others to save the planet — nothing wrong with that); more demand-side management, if feasible; and the whole mixture optimised — does nuclear still win, and by how much? Thank you for this post. Your suggestion of expanding the scope is noted; I'll answer that in another post. Here are a few quick, off-the-top-of-the-head comments: 1. The most prospective non-hydro resources are wind, solar PV and solar thermal. The solar options are 20 to 40 times higher cost than nuclear. That means they are totally out of contention; not worth any further consideration. Wind power with gas back-up saves very little GHG emissions and requires the full capital cost of the gas generation system PLUS the full capital cost of the wind generators, PLUS massive extra expenditure on the grid and distribution systems. If, instead of gas-fired back-up, we use energy storage – either centralised (e.g. pumped hydro) or at the generators (e.g. chemical storage, perhaps CAES on the Nullarbor) – we will have very high energy storage costs and very high transmission costs. In summary, wind power provides low-value energy at high cost and saves little GHG emissions. All it does is save some fuel. It's a dud. So the most prospective non-hydro renewable technologies are all uneconomic by very large margins. 2. I don't believe CCS has any real prospect of succeeding at the scale required. I expect there will be many demonstration projects around the world because they are the political “in thing”, just as wind and solar are. Let's not waste time debating CCS. 3. “More demand-side management”. Yes, of course. That is always important. It was known to be important in the early 1990s and was an important part of ABARE's modelling for the Ecologically Sustainable Development (ESD) policies.
The idea of ‘smart grids’ was a hot topic back then (under different names). The smart meters, which are starting to roll out nearly 20 years later, were an important recommendation from those days. This gives some idea of how long it takes to actually implement these sorts of ideas. I was involved in all that ESD work back in the early 1990s. I recall the strongly held views of certain groups arguing that we could achieve most of the ‘Toronto Targets’* by implementing efficiency improvements and demand-side management. ABARE said “give us the numbers and we'll include your proposals in the models”. The proponents couldn't give figures. Despite this, ABARE did its best to model the suggestions. ABARE did a lot of good modelling (see Dr Barry Jones et al). But the forecasts that were based on long-term trends and their projections of economic growth were the ones that proved correct. This is what ABARE believed would be the case. As ABARE and other more pragmatic and rational groups argued at the time, it is easy to say what we could do to improve efficiency in the existing systems (known at that time as “no-regrets” measures), but what we cannot foresee are the new technologies that will increase the demand for electricity. * Toronto Targets – “Australia will reduce its CO2 emissions to 20% below 1988 levels by 2005 …” (subject to a caveat that said: as long as business is not disadvantaged). Unfortunately, the government of the day had a policy that nuclear energy was banned and was not to be mentioned in reports by the bureaucracy. We seem to be in much the same position now as we were in 1990. It is amazing to me to see how much of what was proposed in those days is being repeated again now. Many of the posts on the BNC web site from the renewable energy, smart grid, DSM and efficiency-improvement enthusiasts are very similar to what was being said in the early 1990s. We are going around the same loop, 20 years later. 4.
Alexi, I've kept your best suggestion until last. You said: …maybe even non-CCS gas and coal (why not? Under a proper international deal, we'll be paying others to save the planet — nothing wrong with that); This really is the key suggestion, and this is what I would like world policy and Australia's policy to be. We want an international free trade agreement that includes greenhouse gas emissions, managed by the WTO. This would be the least-cost way to reduce the world's greenhouse gas emissions. Everyone knows that: the economic modelling for the IPCC says it clearly, and Stern and Garnaut say it too. The problem is the politics. If we did go this route, as you suggest, it would generally be a lower-cost option for Australia to contribute to other countries reducing their emissions than to cut our own emissions massively and suddenly – initially. This is true despite the fact that Australia is near the highest GHG emitter per capita. The reason it is true is that some other countries' industry is less efficient than ours (although that is changing rapidly). Still, we do have to get the African and other developing nations over the hump onto electricity first, and then into reducing their emissions. So it would be best, from a world emissions perspective, for Australia to buy permits (freely traded internationally) until it gets to the point where it is cheaper for us to clean up our own act. Of course there will be a lot we can and must do all the time; I'm not denying that. I'm just saying the best way for the world to cut GHG emissions is the way that is most economically efficient. Great suggestions, Alexi. Thanks for the opportunity to get outside of the nuclear/renewables/transmission box. But, having had a little peek at the outside world, I probably should get back in my box now. Wind available 50% of the time at 4 cents/kWh; lifetime 20 years.
CCGT available 100% of the time at a variable cost (the varying cost of gas) but assumed to average 9 cents/kWh, including carbon offsets purchased; lifetime 20 years at 100%. Combining these provides power at an average of 6.5 cents per kWh with only half of the carbon dioxide to be offset, this for 20 years. The CCGT is then paid off, so the cost of running it drops dramatically, and it can still run 50% of the time for another 20 years before it has to be refurbished/replaced. I’d like to say some more in response to this comment of Neil Howes’ (#194): Peter #191, It’s a valid point to have a theoretical simulation of power demand in 2007, but it should consider the whole of Australia. The reality is that it’s going to take 20-30 years to replace all coal-fired power, so saying we have to have a solution now that uses no FF is a bit restrictive. It would make more sense to compare coal replaced by CCGT with other options such as all nuclear or all wind or mixes of 2 or more. The reasons I used the scenario described in the papers (2007 NEM demand, current technologies and their current costs) for the simple analyses I’ve done so far are: 1. to keep it simple (so non-specialists can follow the assumptions and calculations); 2. to minimise the opportunity for distracting arguments about minutiae; that is, to head off, to the extent possible, the virtually unlimited number of likely arguments about the assumptions regarding future demand, demand profile, technology options available, which will be the most prospective, and the capital cost of each technology at some time in the future; 3. to allow us to make use of available, current, detailed data; 4. I chose to use the NEM demand, rather than whole-of-Australia demand, because we do not have the detailed demand and supply data for the whole of Australia. We can get the 5-minute generation and demand data across all the NEM and for all the individual generators – even for most of the wind generators.
There is no such data freely available for Western Australia (that I am aware of). 5. Importantly, as I commented in post #200, I believe we are in a similar position now as we were in about 1991 regarding the technology options, the costs, the government policies and the politics. So it is informative to consider what Australia’s electricity generation mix might have been in 2009 if our political leaders (with bi-partisan support) had endorsed nuclear power in 1991 and taken a bipartisan, pro-nuclear policy to the 1993 election. This is where we could be now: a. Greenhouse gas emissions some 20% lower than they are; b. 5 GW of nuclear power operating (one reactor in each of the mainland states), a further 5 GW coming online about now, and another 5 GW under construction and coming online over the next 5 years. So, by 2015 we would have 15 GW of nuclear generation, and 20 GW or more if we wanted by 2020. c. I do not believe it is irrelevant to look back like this at what could have been. Because, from my perspective, we are in a similar position now as we were in about 1992, and about to repeat the same mistake we made back then. We are now a year at most from the next federal election. The government seems intent on going to that election with an anti-nuclear policy. In 1992 we were in a similar position. The opposition’s policy was to allow nuclear as an option. The Government used that position as an effective divisive tactic to help it win the election. Nuclear was off the agenda for the next 14 years, and is now off the agenda again. I see a very similar situation right now. I can foresee another long delay. d. Instead of some 95% of electricity generation related research effort in our universities, CSIRO and others, and modelling by ABARE, ACIL Tasman, MMA and many other modelling consultancies, being dedicated to renewable energy, they would have been mostly working on nuclear energy. So we’ve had 20 years of research with low return on investment.
What a waste of our resources! I cannot do the sort of modelling analysis you are suggesting. But many others are churning out modelling exercises all the time, applying a wide variety of assumptions. I am intending to do a (relatively) simple projection of what we could achieve by 2030 in terms of CO2 emissions and cost. I intend to remove existing coal fired power stations as they reach 40 years of age, and to replace these and provide extra capacity to meet demand with these options: CCGT; wind + OCGT + pumped-hydro storage; nuclear + pumped-hydro storage. I will work on current capital costs for the technologies. The figures will be at 5-year increments from 2010. Pat Swords is one of the engineers of the first Irish revolution, the one that turned his country into the Nº1 European performer. Now he tells us, in a few chosen words and visuals, how the Irish miracle is being disengineered into chaos and poverty. “But many others are churning out modelling exercises all the time, applying a wide variety of assumptions.” — would you recommend any particular one to look at? I am looking for ammunition against the Green argument that a mix of technologies will tackle intermittency easily. I look forward to hearing more on the real world experience of working with the simple cycle GTs and CCGTs. What is the real world practicality of using CCGTs to back up fluctuating wind power? I received a report a few days ago of the actual rate of change of wind power output being experienced for the total of all the wind farms on the NEM in August. The maximum rates of change were: up = 100MW/5min, down = 115MW/5min. The ramp up rate exceeded 50MW/5min 13 times in August. The ramp down rate exceeded 50MW/5min 9 times in August. First, we have to ask how the ISO uses GTs now.
For the most part, both OCGTs and CCGTs are integrated into a grid that is largely conventional thermal units, many with load changing capabilities and fairly good predictability of what the load will be throughout the day. This means there is a huge “elasticity” of generation and, the bigger the grid, the more elasticity. Now, most GTs are ‘baseloaded’. This means the opposite of the usual grid jargon: it means they get turned on (either for peak, or because some expected load didn’t arrive for a variety of reasons) and go to their ‘load limit’. This is essentially what they were built for. The CCGT plays this role also but has better load changing capabilities because there are, basically, two power plants in one: a GT and a steam turbine, the latter with governor valves that can respond to load. But more importantly, a unit such as the wildly popular GE Frame 7 uses a remarkable controller called a Mark V (now Mark VI) which can actually regulate the firing of the GT to control the steam turbine for a specific total MW target…and do so VERY fast. The big issue with these suckers is that they are limited in how *low* they can go without tripping off line. Always tricky even with a Mark VI. When the CCGTs were *conceived and designed* they were done so as highly efficient *peaking* generators that had a secondary role of multi-hour, even multi-day, *baseload* generators. OCGTs were never conceived of as load changers at all, even though they can do it. In the industry, efficiency is not measured as a percentage. It’s measured in *heat rate*. The heat rate of 99% of all simple cycle GTs is very, very bad. 10,000 is a number that is very common. This is the same as my 40 year old conventional, crappy, gas thermal unit. From what I remember, the HR of a brand new simple cycle GE Frame 7 is about 9,200 (this needs to be referenced, for sure). This also sucks.
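Since the discussion runs on heat rate rather than percentages, the conversion is worth seeing: heat rate in Btu per kWh divides into 3,412 Btu/kWh (the thermal equivalent of one kilowatt-hour) to give thermal efficiency. A quick sketch using the figures quoted above (the CCGT value is my own ballpark assumption, not from the comment):

```python
# Convert heat rate (Btu of fuel burned per kWh generated) to thermal efficiency.
# 3412 Btu is the energy content of 1 kWh, so efficiency = 3412 / heat_rate.
BTU_PER_KWH = 3412

def thermal_efficiency(heat_rate):
    """Fraction of fuel energy converted to electricity."""
    return BTU_PER_KWH / heat_rate

# Heat rates from the comment above, plus an assumed ballpark for a modern CCGT.
for label, hr in [("typical simple-cycle GT", 10_000),
                  ("new GE Frame 7 (simple cycle)", 9_200),
                  ("modern CCGT (assumed)", 6_500)]:
    print(f"{label}: heat rate {hr} -> {thermal_efficiency(hr):.0%} efficient")
```

So a 10,000 heat rate is roughly 34% efficiency, 9,200 is about 37%, and a combined cycle in the 6,000s gets above 50%, which is why the "very, very bad" label fits simple-cycle machines.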
What sucks more is when it goes down on load, say, from its 172MW (at sea level) down to its minimum at about 110MW. The heat rate starts going up to about 12,000 or higher. In other words, the expense of running a simple cycle unit down on load is really, really bad. I believe this is true of the most advanced GT out there, the LMS100 from GE, which is designed to only run in simple cycle mode at a very efficient heat rate (8,000 I think). It is not being marketed as a load changer. So…the more CCGTs you have running, the more elastic load changing, generally, you have. Can a *lot* of OCGTs and CCGTs handle the wild fluctuations of rapidly changing wind? Yes. The operating word is “lots”. This means that despite the generally low heat rate of CCGTs (5,000s to 7,000s) and their ability to follow load, prodigious amounts of natural gas will be burned, uneconomically, to accommodate the wind’s eclectic and temperamental output. Thank you for this reply. It is very interesting and informative. It’s really great to receive comments from people who have worked at the ‘coal face’. There are many others contributing on the BNC web site too. It’s great. You have enlightened me with your post. I am surprised by what you say about the relative suitability of OCGT and CCGT for load following. I do also note your very important last paragraph. Hi Peter, well, I don’t have documentation with me, but I don’t understand the numbers they present. To wit: • The coal generator is a base load plant that runs all the time. It has a cost structure of high capital costs and low fuel costs. I agree in general here. • The CCGT is an intermediate generator. Compared to the base load generator it has lower capital costs but higher fuel costs. This is what I was saying about its initial design and intent, its “marketing” so to speak. It is however increasingly used AS a baseload generator, but it can be easily taken off line at night. So it’s highly flexible.
But its heat rate is as good as or better than *any* other thermal unit’s, which in some cases, depending on the price of gas, can make it *cheaper* than coal. Rarely, but true. • The OCGT is a peaking generator that is optimum for low capacity factor usage. Yes, it’s a peaker and this is in line with what I noted. But its “capacity factor” is…well, not a good term to use. This is where industry jargon is much better and more appropriate: its *availability* by definition needs to be 100% for it to function as a peaker. The real-world capacity factor, that is, what it actually runs *as determined by the load*, may be low, but that is irrelevant. Its function is different from a base load plant’s. Further down the page is this statement: • OCGT has the lowest average cost at operating capacity factors of less than 14%; • CCGT the lowest average cost for operating capacity factors between 14% and 55%; Probably true…I’m not sure how they parse these numbers, but ideally the ISO pays the operator of the OCGT, via rate increases, for ONLY *availability* and nothing else. They also pay for all fuel costs as well. This means the *less* it runs, the better off everyone is: because it implies better scheduling of base load facilities, outages, etc. It means all nuclear is running, hydro is available, gas and coal thermal units are online, etc. This is why it’s important for people to stop thinking of all MWs as equal; they are not. I am particularly resentful of some renewable advocates who want, willy-nilly, to keep these plants running or available or as a permanent part of the mix, as if there are zero costs or the costs are incidental. They are not. As California’s own usage has shown, natural gas production for electricity generation is going up, and going up every year, because of the wide scale, ISO approved use of both OCGTs and CCGTs. Generally, the CCGTs are used, as I noted above and in my previous comment, *as* baseload plants, running 24/7 if gas prices are, as they are now, low.
As this huge, rapidly growing sector of the energy market (MUCH bigger than wind or solar, I might add) expands, these assets become “obligatory run” units. As the renewable share of the system’s ‘capacity’ goes from single digits to double digits, we have to pay for more and more of these ‘cheap’ GTs…but because of the unreliability of the renewables (still, to this day, NO industrial storage for renewables, including pumped storage), MORE and MORE gas is burned. The gas companies LOVE this. For every MW of renewables, they get to build 2 to 3 MW of NG plants. What’s not to like? I’ve just heard something from a solar/wind presentation that sounded unbelievable. Basically, the presenter said that if we changed all power plants to nuclear then the water used to cool them would raise the temperatures of the oceans by 1 to 2 degrees and cause similar problems to those of global warming. A person next to me remarked that coal plants are cooled by water the same way nuclear plants are, so why haven’t we heard anything about the hot water that comes from them? Is there anything to these concerns? It is indeed unbelievable that this rubbish is being presented as fact. The comparison to coal plant heating is quite correct. The effect is real, but LOCAL. The river just downstream of the plant will be a little warmer than it should be if direct cooling is used. If cooling towers are used, water temperatures are not affected, but water is used (evaporated) so there is less of it in the river. The GLOBAL effect is undetectable because energy releases from power plants are so small compared to the solar energy absorbed by the earth. The solar promoters are right that there is a huge quantity of solar energy available, but neglect how difficult it is to collect this dilute resource, compared to the much smaller but very concentrated energy resources of fossil and nuclear fuels. Peter, I will go over them this weekend when I have more time.
They require a serious looksee. I’m not an economist…at all…but I know some general things about the issue from my experience. Some of this stuff should be looked at by our friends on Kirk Sorensen’s blog as well, at energyfromthorium.com, for feedback. It is possible to use dry cooling towers. These are available and have been installed in several, indeed many, locations. I suspect these are a bit more expensive initially, but obviously they only heat up the air. Regarding the rotating reserve, around here these reserve units are sent signals from the grid operator every two seconds: power up a little, power down a little. In this way the reserve units are always ready to go online in case of need. Peter, Thanks for all the details in #200. All of it edifying, and the historical bit is fun. But as the answer to my question, not fully convincing. The question was whether nuclear still wins if the renewable mix is optimized; and if yes, by how large a margin. I accept you aren’t doing the modelling required for a thorough optimization. Still there may be something you could do. A quick robustness analysis would be, for example, to take a case for wind-with-backup and add a little solar to it. What happens? Etc etc. That may seem like too much work. So maybe take someone else’s optimized case for renewables, and compare it to your case for nuclear? (Now that’s a crazy idea.) Thank you for the leads in #208. A quick hop through them was unavailing, but I’ll look more thoroughly later. It came up today at a brown bag lunch presentation on “green jobs” at the City of Sunnyvale campus. Silicon Valley is a region with a large number of solar PV companies. I don’t have the name of the speaker on hand. Can you give me more specifics on what “orders of magnitude” means so I can have an explanation with numbers to counter this false claim?
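One way to put numbers on the "orders of magnitude" point is to compare the waste heat of all thermal power plants against the solar energy the Earth absorbs. The figures below are my own round-number assumptions (roughly 2 TW of average world electric output, tripled to primary heat; 238 W/m2 of absorbed solar flux), not from the presentation or the thread:

```python
# Scale check: waste heat from thermal power plants vs. absorbed solar energy.
# All inputs are round-number assumptions for illustration.
EARTH_SURFACE_M2 = 5.1e14          # Earth's total surface area
ABSORBED_SOLAR_W_PER_M2 = 238      # mean absorbed solar flux

plant_heat_w = 3 * 2e12            # ~2 TW electric -> ~6 TW of primary heat
solar_w = ABSORBED_SOLAR_W_PER_M2 * EARTH_SURFACE_M2

ratio = plant_heat_w / solar_w
print(f"plant heat is {ratio:.1e} of absorbed solar")
```

The ratio comes out around 5e-05, i.e. four orders of magnitude below the solar input, which is why the claimed 1 to 2 degrees of ocean warming from plant cooling water cannot be right at the global scale.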
In the last table within the paragraph “Appendix – Cost Calculations for Solar Thermal,” under the section “Cost for 25GW baseload power, through…”, it shows dramatically reduced Collector Field cost ($1487B vs. $8583B) only because of a disproportionately small increase in storage capacity. Could this be right, and would scaling up the storage further reduce the overall cost? Good question. Someone is checking. I believe the calculations are correct, but it is a fictitious scenario because solar thermal does not yet have the capability for even 1 day of storage, let alone 3 days or 5 days. The collector field capacity required is calculated from the capacity factor. The capacity factor rises over longer periods (see the paper “Solar Power Realities” for more details on this – click on the link at the top of this thread). For 1 day, the capacity factor used in the calculation is 0.75%, for 3 days it is 1.56%, and for 5 days it is 4.33% (these are based on the actual capacity factors at the Queanbeyan Solar Farm; see the “Solar Power Realities” paper). So less collector field capacity greatly reduces the cost, because the collector field is by far the largest cost item. Yes, if we could have more storage, the costs would be reduced substantially. Again, I refer you to the “Solar Power Realities” paper for more on this. That paper shows that the minimum cost using pumped hydro is for the case with 30 days of storage (of course, no one has this amount of storage potential, so again it is a theoretical calculation). However, if we used NAS batteries, the least cost would be with 5 days of storage. That is because the batteries are much more costly than the pumped hydro. The real point of all this is that solar is totally uneconomic. It is not even worth considering.
The comparison to meet the same demand (our 2007 demand) would be: nuclear = $120 billion, solar PV with pumped hydro = $2,800 billion, solar PV with NAS batteries = $4,600 billion, solar thermal = can’t be done at any cost! Mark, my pleasure. And, it turns out, I’ve stretched it. Looking at current production of electricity, I am almost right: “orders of magnitude”. Looking at potential production when the whole world is developed, and producing and consuming energy to American or Australian standards, and assuming nuclear power as the source for ALL energy, it begins to look like a bit of a concern. I compute it at 0.014 deg. C for the current electricity production, but 0.23 deg. C for the future prosperous world. Here are the estimates. First, look at the heat system. In global warming analysis, they are worrying about things on the order of 1 Watt per square metre. Doubling of CO2 is thought to cause 4 Watts per square metre. (BEFORE any feedbacks, including water vapor. Just take the atmosphere as it is and enrich it with CO2.) The current imbalance is thought to be 1.5 W/m2. Earth is estimated to respond to the 4 Watts per sq.m from CO2 doubling with 3 degrees C of warming – eventually, when it has been given the time to heat up, and when most (but maybe not all – this is a complex issue) feedbacks have been allowed to play out. However, may I do a hypothetical 10 kW first. Were 10 kW continuously produced per person, that’s 120,000 W per 1,000,000 m2 (about 12 people per square kilometre of the Earth’s surface), or 0.12 W/m2. That’s 30+ times smaller than the estimated 4 W/m2 from a CO2 doubling. Comparing to the 3 degrees of warming CO2 should cause, we get 3/30 = 0.1 degrees C. If this much electric energy were produced the way it is now in nuclear plants, roughly three times more – 23 kW – of raw thermal energy would be being produced (15 kW of it wasted as heat). So with the whole world consuming energy as Aussies do now, that would be 23 kW of raw thermal energy production per person: 2.3 times more than the hypothetical 10 kW above. So 0.1×2.3 = 0.23 degrees C.
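Alexei's arithmetic above can be replicated in a few lines; the 4 W/m2 forcing and 3 C response are his stated assumptions, and the warming is scaled linearly from them:

```python
# Reproduce the waste-heat warming estimate: scale the 3 C response to a
# 4 W/m2 CO2-doubling forcing down to the human waste-heat flux.
CO2_DOUBLING_FORCING = 4.0   # W/m2 (before feedbacks, as stated above)
CO2_DOUBLING_WARMING = 3.0   # degrees C

def warming_from_flux(flux_w_per_m2):
    return CO2_DOUBLING_WARMING * flux_w_per_m2 / CO2_DOUBLING_FORCING

# Hypothetical 10 kW per person at ~12 people per km2 of Earth surface:
flux = 10_000 / (1_000_000 / 12)       # = 0.12 W/m2
print(warming_from_flux(flux))          # ~0.09 C, rounded to ~0.1 C above

# 23 kW of raw thermal output per person (2.3x the hypothetical flux):
print(warming_from_flux(flux * 2.3))    # ~0.21 C; quoted above as 0.23 C
```

The small differences from the quoted 0.1 C and 0.23 C come from rounding 4/0.12 to 30 in the comment; the conclusion is unchanged.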
Not to worry too much – global warming is far worse – but not to ignore it either. If we got lots and lots of power from nuclear fission or fusion, wouldn’t this contribute to global warming, because of all the extra energy being released into the environment? That’s a fun question. And because we’ve carefully expressed everything in this book in a single set of units, it’s quite easy to answer. First, let’s recap the key numbers about global energy balance from p20: the average solar power absorbed by atmosphere, land, and oceans is 238 W/m2; doubling the atmospheric CO2 concentration would effectively increase the net heating by 4 W/m2. This 1.7% increase in heating is believed to be bad news for climate. Variations in solar power during the 11-year solar cycle have a range of 0.25 W/m2. So now let’s assume that in 100 years or so, the world population is 10 billion, and everyone is living at a European standard of living, using 125 kWh per day derived from fossil sources, from nuclear power, or from mined geothermal power. The area of the earth per person would be 51 000 m2. Dividing the power per person by the area per person, we find that the extra power contributed by human energy use would be 0.1 W/m2. That’s one fortieth of the 4 W/m2 that we’re currently fretting about, and a little smaller than the 0.25 W/m2 effect of solar variations. So yes, under these assumptions, human power production would just show up as a contributor to global climate change. By email, George Stanford said this: “Approx. global population: 7E9. Average solar power hitting the earth’s surface at ground level = 1 kW / m^2 x pi x (6400 km)^2 = 1.3E14 kW. That’s 18.4 MW per person from the sun. – – – – – – In 2007, the U.S. used 101 quads of energy = 101 x 2.93E11 kWh = 3.0E13 kWh, for an average power usage of 3.4E9 kW. Pop. of US = ~3.00E8. Thus average power consumption per person = 3.4E9/3.0E8 = 11 kW.
– – – – – – Thus if the whole world used energy at the per capita rate of the U.S., that would be adding 11 / 18,400 = 0.06% to the total energy input to the biosphere. (BTW, that’s about 6 times the rate at which geothermal energy reaches the surface.)” Now, based on our best estimate of climate sensitivity, you get 0.75 C per W/m2 of forcing, so MacKay’s estimate of 0.1 W/m2 would predict a warming of 0.075 C, which is a bit smaller than Alexei’s estimate — but that’s only for fast feedback sensitivity, so you might want to double it for equilibrium, which gives 0.15 C. Wow, and thank you very much. Let me see if I can say this a bit more simply. If by magic, say, we could instantaneously get rid of every single source of man-made CO2 emissions from power generation and replace that with nuclear, then we trade off degrees of rise in global temperature for 1 to 3 tenths of a degree of rise. However, there is NO concern if we heat the globe up 1 to 3 tenths of a degree. So it’s a non-issue. Peter, sorry we’re using your thread for this discussion. Hopefully you aren’t cross with us. Barry, thanks, correction taken. Long-term sensitivity could well be double, i.e. 6 degrees per CO2 doubling. It is prudent to double my numbers. Mark, My numbers apparently agree with MacKay’s. His case exactly matches my “hypothetical” – he takes twice the population but half the power production. You should double my numbers, though, to be prudent, as Barry reminded us. And, do not dismiss too readily a 0.1-0.3 degree C temperature rise. Not if combined with temperature rise from other sources. “Non-issue” it is not. But you’re right that it is dwarfed by the CO2 danger. Why in the world do they have that one single wind turbine sitting there next to all 8 of the Pickering reactors? What on Earth is it supposed to accomplish? Is it supposed to be some kind of marketing tool?
Besides the great cost involved in providing 24/7 electricity with solar, I understand that solar power has quite a bit larger CO2 emissions than nuclear. Would solar power related CO2 emissions be as problematic to global warming as the hot water from nuclear power plants? Mark, I have never looked into that, and do not know which factors must be reckoned with. I could look up some numbers and make some estimates, but I could easily miss important factors. Like this one: do we have to emit CO2 while making solar panels? Maybe not. Even if CO2 must be produced, it could be sequestered. CCS is a big expense for coal power; but for solar-panel making, my gut feeling is, it should be affordable. It uses a probabilistic approach. I am not impressed with their p10, p50 and p90 values for the future generating technologies. They look to me to be clearly biased against nuclear and pro renewables. That would make sense given the strong representation of renewables researchers in overseeing the study. However, this may lead you to some of the other studies. The NEEDS report (link provided above) explains that the present state of the art is about 7.5 hours of storage with trough technology, which is their selection of the most prospective solar thermal technology. They project that 16-hour storage will be achieved by 2020. However, we need 18 hours just to get through one night in winter. We’d need at least 3 days of storage for solar to be considered as a baseload generator. So the position is that no matter how much money we throw at it, we just do not have the technology yet. Besides the great cost involved in providing 24/7 electricity with solar, I understand that solar power has quite a bit larger CO2 emissions than nuclear. Would solar power related CO2 emissions be as problematic to global warming as the hot water from nuclear power plants?
For the non-fossil-fuel burning technologies, the CO2 emissions come from the mining, processing, milling, manufacturing, construction, decommissioning, waste disposal and the transport between all these steps. Most of the emissions come from all the processes related to steel and concrete, and the emissions are roughly proportional to the mass of these materials per MWh of energy generated over the life of the plant. There is much more material involved per MWh for renewables than for nuclear. So higher emissions from renewables. Also recall that solar and wind require a massive overbuild to be able to produce the energy we need during cloudy and low wind weather. Furthermore, nuclear power stations have an economic life in the order of three times that of renewable technologies. Put it all together and you find that the solar thermal power station emits about twenty times more than nuclear, about 1/3 as much as a coal fired plant and a little less than a CCGT plant. Nuclear power plants also have some emissions from the uranium enrichment process. As this is due to electricity use, it is negligible when the electricity is generated by nuclear power. However, it often shows up as a significant component in many studies using electricity generated by fossil fuels. In this case it is still less than the contribution from construction. Further to my post #236 in answer to your question in post #231, links to the pdf articles are included in the article at the top of this thread; these will give more information and should answer some of your questions. I think you are being a tad naughty in your attribution of emissions. The only fair way to speak of emissions is as a relationship between output of power and CO2e. The fact that solar thermal and wind don’t have equivalent CF to nuclear is relevant to the quality of the power, but not the CO2 footprint, so you can’t include overbuild assumptions. I also don’t see where you get your life of plant calculations.
Since no commercial solar thermal plants are in operation, AFAIK, we can’t say they will only last 20 years, and although it may well be wise to upgrade wind farms if better materials and technology for harvest arise in the future, there’s no reason to suppose a wind farm can’t last 60 years. Even if you have to change some of the gears or rotor parts, that’s not the same as building an entirely new plant — more like replacing components in a nuclear plant. Thank you for your comment. There are some good points to get my teeth into in this post. I think you are being a tad naughty in your attribution of emissions. Maybe. Let’s see. The only fair way to speak of emissions is as a relationship between output of power and CO2e. I’d say the only fair way to compare emissions from different technologies is on a properly comparable basis. One such fair basis is to compare GHG emissions per unit energy (e.g. t CO2-eq/MWh) over the full life cycle (note: not a fuel cycle analysis, which is often used and is biased towards renewables – watch out for that one). Another, better, way is on an equivalent energy value basis. This is because a MWh of energy from a wind farm is not the same value as a MWh of energy from a baseload plant, or a peaking plant. The energy from the wind farm is almost valueless. No one would buy it if they weren’t mandated to do so. The fact that solar thermal and wind don’t have equivalent CF to nuclear is relevant to the quality of the power, but not the CO2 footprint, so you can’t include overbuild assumptions. Not true. Consider the solar power station. The emissions per MWh calculated by Sydney Uni ISA for the UMPNE report were for a solar plant with a given capacity. They calculated the emissions for all the material and divided that by the MWh the plant was expected to generate over its life.
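The division just described, total embodied emissions over lifetime output, can be sketched as follows. The figures are placeholders for illustration only, not the ISA/UMPNE study's values:

```python
# Life-cycle emissions intensity: embodied CO2 divided by lifetime generation.
# All numbers below are illustrative placeholders, not study data.
HOURS_PER_YEAR = 8760

def intensity_t_per_mwh(embodied_t_co2, avg_output_mw, life_years, overbuild=1.0):
    """t CO2-eq per delivered MWh; overbuild multiplies the embodied emissions."""
    lifetime_mwh = avg_output_mw * HOURS_PER_YEAR * life_years
    return embodied_t_co2 * overbuild / lifetime_mwh

base = intensity_t_per_mwh(1_000_000, 1000, 30)
overbuilt = intensity_t_per_mwh(1_000_000, 1000, 30, overbuild=10)
print(base, overbuilt)  # the second is exactly 10x the first
```

The point the calculation makes concrete is that intensity scales linearly with the installed capacity needed to deliver a given energy output, which is the crux of the overbuild disagreement in this exchange.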
So if you need twice or ten times as much installed capacity to get the energy output you need, then you have all that extra GHG emission embedded in the extra materials. The emissions increase in direct proportion to the amount of materials used in the plant. A bigger plant for the same energy output means more emissions per unit energy. I also don’t see where you get your life of plant calculations. Since no commercial solar thermal plants are in operation, AFAIK, we can’t say they will only last 20 years, and although it may well be wise to upgrade wind farms if better materials and technology for harvest arise in the future, there’s no reason to suppose a wind farm can’t last 60 years. The life of plant calculations come from the NEEDS report. However, they are commonly quoted figures, usually 20 to 25 years for solar. However, as you say, we do not have evidence for that because none have been around long enough to demonstrate it. I suspect it will turn out to be much shorter than what the optimistic researchers are claiming. Wind farms are already being pulled down and there are attempts to sell the old, outdated structures and turbines to developing countries. No one is buying. The intention is to replace them with bigger and better wind generators to make better use of the site. Because the new structures are bigger, everything has to be replaced: the foundations have to be much bigger, and the structure and the transmission lines too. It is a complete replacement job. So all the emissions embedded in the original wind farm components and site work have to be divided by a shorter economic life. We now find they were actually much higher per unit energy than estimated originally. The same is the case for solar. It will be out of date long before 20 years and will become uneconomic. Even if you have to change some of the gears or rotor parts, that’s not the same as building an entirely new plant — more like replacing components in a nuclear plant.
As explained above, wind generation equipment is being totally replaced already. Nuclear plants are upgraded and uprated, but that is not a wholesale replacement of the structure. Thanks Fran. It is good to have the opportunity to answer these questions. Another way to look at it is emissions avoided during power production. I once read an article claiming (from memory): every kilowatt hour produced by wind replaces a kilowatt hour produced by CO2 emitting coal plants. Now, as we have seen, that’s just not true. In simplified terms: due to their intermittent nature, 1GW (nameplate capacity – because that’s what the public is told they produce) of wind/solar cannot replace a 1GW coal power plant; the coal plant stays operational (or is replaced with a new one) and very little CO2 emission is avoided. However, a 1GW nuclear power plant CAN replace the 1GW coal plant, therefore ALL of the emissions from the now closed coal plant are avoided. (I’ve excluded embodied emissions here – out of my league – but when you consider the renewable option could require the building of wind/solar plants AND a new coal plant, the ‘one out, one in’ nuclear option has got to be better on that count too.) You could say then, the failure of wind/solar power to be able to replace CO2 emitting power sources, GW (nameplate) for GW, means they have high indirect emissions associated with them that nuclear power does not. Put it all together and you find that the solar thermal power station emits about twenty times more than nuclear, about 1/3 as much as a coal fired plant and a little less than a CCGT plant. Fran is correct that this statement needs more explanation. I was referring to the 1,600GW of solar thermal capacity needed to produce 25GW of baseload power throughout the year. That is an overbuild of 64 times. This means 64 times as much steel, concrete, transport etc for this plant as for just 25GW of peak capacity.
The sentence quoted should be restated as follows: “Put it all together and the solar power station with the capacity described in the ‘Solar Power Realities’ paper emits about twenty times more GHG than nuclear, about 1/3 as much as a coal-fired plant and a little less than a CCGT plant per MWh on a life cycle analysis basis.” First off, thanks again to all. Second, has the heating of H2O by nuclear power plants, and the problem it poses to global warming, been adequately addressed? Are there any more constructive thoughts about this? If this became a problem, could reactors be built that would diminish this effect, maybe using that heat for something else before putting the water back in the water supply? “Second, has the heating of H2O by nuclear power plants and the problem it poses to global warming been adequately addressed?” The heat energy put out by nuclear power plants, or any other kind of thermal plant for that matter, is so minuscule in comparison to the other energy flows through the ocean and atmosphere that this is a non-issue. From my perspective, the effect of the heat energy released by nuclear and by burning fossil fuels (they are roughly the same per unit of electricity generated) is a way-down-in-the-weeds issue. It is about as relevant to climate change as the ongoing release of natural geothermal energy. They are both so small that they can be ignored in all the analyses we are doing now. We must apply the Pareto Principle (see link) if we are going to make any headway. Mark, 243: not sure what exactly you mean. Do David B. Benson’s 220 and Luke’s 217 answer at least part of your question? Why specifically H2O heating? Are you concerned about H2O evaporation, it being a greenhouse gas? You may want to re-phrase. You requested/suggested some modelling be done. Neil wanted to see the projected CO2-eq emissions and capital expenditure at 2020 and 2030 for the options we’ve been discussing.
Alexei suggested some sensitivity analyses to consider mixing various proportions of the various technologies. I am going away for about two weeks, so I will not get any of this completed for at least the next three weeks. This report http://www.aciltasman.com.au/images/pdf/419_0035.pdf provides projected unit costs for energy and power, and provides much of the other information needed for detailed modelling. I do not believe some of the unit cost figures are what would actually apply if we were to get serious about implementing low-emissions, low-cost electricity generation. Neil, I’ve started on your suggestion. I tried to keep it simple. But it isn’t. The further I go, the more complicated it gets. For each technology, projected efficiencies, unit costs, and CO2-eq emissions per MWh change over time. The capacity credit for wind power has to change as the proportion of wind power changes. The capital expenditure needs to include the cost of ongoing replacement of existing plant. For the BAU case I needed to include the cost of replacing coal-fired power stations at 40 years of age with new coal plant at that time, with the applicable projected emissions factors and unit costs. It’s not simple, but I am progressing with it. The pumped hydro paper is being reviewed. I haven’t received feedback yet. I’ve now received a reply from one of the people who is checking my draft pumped hydro paper. He has checked the calculations, the cost figures (ball park) and the calculated revenue. He says I have significantly under-estimated the tunnel costs. He also says the power must be estimated on the minimum head, not the average head. He says: One would have to assume that the available head is between the minimum operating level at Tantangara, MOL = 1,207, and the full supply level at Blowering, FSL = 380, because any operator would have to guarantee 95% reliability for his peaking power. Thus, the gross head for power generation is MOL – FSL = 827 m.
… P computes to be P = 7,860 MW. I had calculated 8,994 MW from the average head difference and lower friction losses in the tunnels. He also checked my cost estimates and says: “… the construction costs may be closer to $15 billion than the $7 billion you have estimated, which will bring the cost per installed kW back into the range of $2,000/kW, which is about what pumped storage schemes cost these days.” Lastly, he sums up by saying: I do not mean to discourage you, but the capital expenditure for a pumped storage scheme between Tantangara and Blowering seems prohibitive because of the scale of the investment, the high up-front costs and the long period for investors to recover their money. Unfortunately, politicians and banks take a much shorter view of life when it comes to political or financial gains, and it seems to me that your idea, as much as I like hydro, seems to be condemned to the ‘not economical’ basket. The person who has done this check for me has been investigating and building hydro schemes all his life, and still is. I believe there is an important message here for Neil Howes and the other readers who are very keen that renewables are implemented. Enthusiasm and belief will not make RE economically viable. We frequently go too far with our beliefs, and force our politicians to make dreadful mistakes. Pumped hydro is not viable, yet renewable advocates want to argue for it in an attempt to make wind and solar appear viable. Solar thermal is not viable, but its advocates want to push for subsidies for it despite the costs. Wind is twice the cost that advocates say it is: all the recent wind farms are costing around $2.2 million/MW to $2.5 million/MW. Thank you, Peter Lang, for all your diligence and hard work in answering the many comments and queries elicited by your excellent posts. I hope you are going on a holiday for your two weeks away – you certainly deserve one! Alexei, I think you are asking me for more than I can do.
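The reviewer's 7,860 MW comes out of the standard hydro power relation, P = ρgQHη. The gross head (827 m) is quoted above, but the flow rate and combined efficiency below are my own illustrative assumptions, chosen only to land in the same range; they are not figures from the review:

```python
# Hydro power from head: P = rho * g * Q * H * eta
rho = 1000.0      # water density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
head_m = 827.0    # gross head, MOL - FSL, from the review quoted above
eta = 0.90        # ASSUMED combined turbine/tunnel efficiency
q_m3s = 1070.0    # ASSUMED flow rate, m^3/s

power_w = rho * g * q_m3s * head_m * eta
print(power_w / 1e6)  # ~7813 MW, same order as the reviewer's 7,860 MW
```

The sensitivity to head is the reviewer's point: using the larger average head instead of the minimum inflates P (here, by roughly the ratio of the heads), which is how an 8,994 MW estimate becomes 7,860 MW.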
Applying the Pareto Principle, you can see from the papers so far provided: 1. Wind power saves little GHG emission compared with nuclear; has a very high avoidance cost (>$800/t CO2-eq) compared with nuclear ($22/t CO2-eq); is high cost and generates low-value energy (see previous posts). If you look at the chart near the end of the “Cost and Quantity of Greenhouse Gas Emissions Avoided by Wind Generation” paper you can see this information. And that is for the nearest to economic of the renewable energy technologies. The others are worse. 2. Solar power (both PV and thermal) is totally uneconomic compared with nuclear. It is 20 to 40 times higher cost than nuclear to produce the equivalent output. The “Solar Power Realities” and the “Solar Power Realities – Addendum” papers show this. So there is little to be gained by mixing and optimising technologies that are uneconomic by a factor of 20 to 40 and have higher emissions. I believe the information for the comparison you want is available in the papers already posted on the BNC web site. We know that there is value in having about 8GW of pumped hydro combined with nuclear. That reduces the cost of the nuclear option by about 10% compared with nuclear only. 3. Transmission costs, alone, to support renewable energy are far higher than the total cost of the nuclear option. The cost of transmission for the renewables is presented in the article at the top of this thread. It shows that just the trunk transmission lines for solar thermal in the deserts and for wind farms located along the south coast of Australia ($180 billion) cost more than the whole nuclear option ($120 billion). And that is just for the trunk lines. The whole transmission system upgrade needed to handle renewables would probably be twice the cost of the trunk lines. I’d argue the information you are asking for is already available. It is a matter of getting to understand it.
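The avoidance-cost metric behind the $800/t vs $22/t comparison above is the extra cost of the low-emission option divided by the emissions it actually avoids relative to the fossil baseline. The formula is standard; every number fed into it below is an illustrative stand-in, not an input from the papers:

```python
def avoidance_cost(cost_option, cost_baseline, ef_baseline, ef_option):
    """$/t CO2-eq avoided; costs in $/MWh, emission factors in t CO2-eq/MWh."""
    return (cost_option - cost_baseline) / (ef_baseline - ef_option)

# Illustrative coal baseline: $40/MWh at 1.0 t/MWh.
# A modestly dearer option that avoids most emissions scores well:
print(avoidance_cost(60.0, 40.0, 1.0, 0.1))   # ~22 $/t

# A dearer option whose backup means little emission is avoided
# (small denominator) scores very badly:
print(avoidance_cost(110.0, 40.0, 1.0, 0.9))  # ~700 $/t
```

Note how the denominator dominates: an option that avoids little CO2 per MWh gets a huge avoidance cost even at moderate price premiums, which is the mechanism the wind paper's chart illustrates.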
We have to be careful not to make so many mixes and matches that we simply confuse everyone. There is one thing that Neil Howes asked for, and I agree it would be helpful. That is, the CO2 emissions and capital expenditure at key intermediate dates on the path to total removal of fossil fuels from electricity generation. Neil asked for these values at 2020 and 2030. I am working on providing them at 5-year intervals from 2010 to 2050. But it will take me some time to complete that. Peter, thank you for the effort and patience… I do not at all want to distract you from that other equally, or more, worthy dimension that you’re going to explore. So, the following is not intended as further prodding, but merely information: with your encouragement that “the information you’re asking for is already available”, I’ll keep looking. For now, the best unimpeachable comparison that I can make for nuclear-vs-renewables is: nuclear with hydro storage and storage-mandated transmission costs versus CCS gas and coal, wind, and solar, in any proportion between the three; NO storage; NO storage-mandated transmission; comparison being by cost per kWh, assuming all capacity is always used, with no intermittency problem. The Cambridge professor David MacKay has proposed that in order to decarbonise Britain entirely by 2050, we must slash energy consumption by 50%, increase renewables (mainly wind) 20-fold – and also build more than 60 new nuclear stations. Note that this is not an either-or strategy: we need every tool we have got to throw at this problem. From http://www.marklynas.org/2009/8/12/nuclear-power-challenging-the-green-party Well, David MacKay’s strategy may well work. The operative term is “slash energy consumption by 50%”. If you built 60 new nuclear stations, however, you wouldn’t need to slash energy consumption by 50%; you could probably increase it. Outside of a serious Pol Pot approach to consumption, these features of energy starvation are, in a way, barbaric and unnecessary.
The approach to solving climate issues is to figure out what we want to do and develop a serious plan, not one where everyone is an automaton, ready to ‘sacrifice for the good of all’, and we all live in what is essentially a neo-Malthusian world. Why don’t British environmentalists come out and say: here are the major carbon emitters and why (coal, transportation, etc.), and begin to address each one with nuclear or other non-carbon solutions that allow for an *expansion* of energy usage while making things cleaner, greener and more efficient? Alas… “And if all of America adopted the same energy efficiency policies that California is now putting in place, the country would never have to build another power plant.” From the site whose link you provide. David, this is so wrong it’s hard to know where to start. California adopted the energy efficiency program in the 1970s into the 1980s. What efficiency FAILED to account for was *growth*!!!!! Efficiency brought down some, and held down overall, per-capita increases in energy use. But it can ONLY do that. Once you increase population and increase the *economy*… NOT building plants is *exactly* why we had this huge transfer of wealth under deregulation in 2000/2001!!! If we had built gas plants and/or nuclear plants, there would have been no energy crisis, period (outside of an increase in gas prices, which really started the whole thing). The *reliance* on “efficiency” was a total and absolute disaster for California, and this web site *boasts* about how well it works. My, my. California today is building over 10,000 MW of CCGTs. So much for “efficiency”. I think MacKay’s modelling was based on assumptions about build times, the patterns of energy usage, and a view of sustainability as what would allow for 1000 years of energy usage at the European level of about 125 kWh per person per day on a world scale. 125 kWh per person per day? Hmmm…. I use about 256 kWh a month. Average US home, no AC but a 50-inch flat screen.
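The two figures being compared at the end of that exchange are different quantities, as a later comment clarifies: MacKay's 125 kWh/day is total energy per person (transport, heating, a share of industry), while 256 kWh/month is one household's electricity bill. A quick unit conversion, assuming a 30-day month, shows why they are an order of magnitude apart:

```python
household_kwh_per_month = 256.0
household_kwh_per_day = household_kwh_per_month / 30.0
print(household_kwh_per_day)   # ~8.5 kWh/day of domestic electricity

mackay_kwh_per_day = 125.0     # per person, ALL energy uses, not just power
print(mackay_kwh_per_day / household_kwh_per_day)  # ~15x larger
```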
You sure about that? At any rate, the point, Fran, is that none of what he looks at can work without this “efficiency” model. At the end of the day it cannot, by definition, account for growth. There is simply no getting around that. On a per capita basis, without parsing MacKay’s numbers, there is going to have to be a vast increase in per capita energy use. I see no way around it. I think his world view is flawed. Again, we need to look at our goals, sectionalize them out to achievable ends and work up from there. MacKay is in the Lovins school of ‘negawatts’. I lived through that, as Lovins was writing about how glorious Governor Brown’s efficiency models were working (and they were, as it happens) and then *poof*. The state grew and that ended that. Efficiency needs to be placed in its proper context. Viewed from a military perspective, efficiency is but one tactic to use. As is conservation. The strategy, as opposed to tactics, involves the issues of energy growth, economic growth, nuclear and/or renewables, etc. I’d say the only fair way to compare emissions from different technologies is on a properly comparable basis. One such fair basis is to compare GHG emissions per unit energy (e.g. t CO2-eq/MWh) over the full life cycle. Just so, assuming you can get reliable, pertinent data. […] Another, better way is on an equivalent energy value basis. This is because a MWh of energy from a wind farm is not the same value as a MWh of energy from a baseload plant, or a peaking plant. The energy from the wind farm is almost valueless. No one would buy it if they weren’t mandated to do so. I disagree, and not only because your statement is too sweeping. It is true, as I noted, that non- or less-dispatchable sources are of less value, in much the same way frequent flyer miles aren’t as valuable as their redeemable value in notional cash terms. Trying to factor in overbuild to have like with like, and mapping CO2 from that, simply looks like special pleading.
It’s more honest to say: sure, the lifecycle analysis of wind is about 5 g CO2 per kWh, but when considering feasibility this is not the only or even a decisive consideration. Wind is a poor match for many of our energy usages because it is insufficiently dispatchable, and limited by site constraints which impose ancillary costs, such as line connection, that don’t apply to more conventional sources. Unless we can do without the utility offered by conventional sources in favour of the utility of intermittent sources, one can really only compare the CO2 footprints of things that can operate in lieu of the sources of energy we wish to replace. With this caveat, one can point out that we humans are not merely interested in energy of any quality and quantity, any more than we are interested in water or nutrients or shelter of any quality or quantity. Even those of us who see lowering CO2 emissions as a paramount consideration in energy policy cannot be indifferent to other feasibility considerations. Self-evidently, if each tonne of CO2e avoided/permanently sequestered using wind, for example, costs ten times as much as each tonne of CO2e avoided/permanently sequestered using some other source that has five times the CO2e intensity of wind, then we are, ceteris paribus, still way ahead using the second energy source in preference to wind, because for a given spend we can still double our reduction. And there would be places where resort to wind and PV would be the best solution: small non-grid-connected rural villages, where on-cost, build time and the capacity to maintain a solution locally are key considerations, and where on-demand power is not as important as it is in large conurbations and can be met adequately by resort to ADs with waste biomass as feedstock. The fact that the solution doesn’t scale up isn’t really relevant to its feasibility, unless one wanted to argue that this should be done on a world scale.
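The ceteris-paribus argument above (a source can have several times wind's CO2e intensity and still beat it, if its cost per tonne avoided is much lower) is a per-dollar abatement comparison. A stylised sketch, with arbitrary numbers standing in for the "ten times as much per tonne" premise:

```python
# Abatement bought per dollar depends only on $/t avoided, not on
# the source's raw CO2e intensity (intensity is already folded into
# the avoidance-cost figure). Numbers are illustrative only.
budget = 1000.0                  # arbitrary spend, $
wind_cost_per_tonne = 100.0      # $/t CO2e avoided (illustrative)
other_cost_per_tonne = 10.0      # a tenth of wind's, per the argument

wind_tonnes = budget / wind_cost_per_tonne      # 10 t avoided
other_tonnes = budget / other_cost_per_tonne    # 100 t avoided
print(other_tonnes / wind_tonnes)  # 10.0
```

On these stylised inputs the cheaper-per-tonne source buys tenfold the abatement for the same spend, which comfortably covers the "at least double our reduction" claim in the comment.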
I understand there is some island off the coast of Denmark that has done this, and well done them. I believe we should stay away from overselling nuclear or overstating the constraints on resort to renewables. A candid and compelling case, in comparative utility, for nuclear over most renewables already exists without putting our thumbs on the scales. David @260: MacKay’s 125 kWh/day figure is total energy use, including transport and a per capita share of commercial/industrial usage, not just domestic electricity consumption. His major efficiency gains are from replacing today’s cars and trucks with electric vehicles and electric mass transit wherever possible, and from replacing gas-fired space/hot-water heating with solar thermal (works, just about, even in our climate) and heat pumps. His main aim is to make people aware of the scale of the challenge, so that it becomes obvious to everyone that objecting to wind farms AND nukes AND lifestyle changes is an untenable position. He acknowledges that the most economic solution is just to build lots of nukes, and sets out what it will cost, in money, disrupted landscapes and reduced comfort, if you don’t like that solution. For those who want the facts about the actual wind power output from ALL the wind farms on the NEM, you can now download it in CSV (see link below). The following is an extract from an email that arrived just this morning: (Peter L, as of a couple of days ago, Andrew has now captured the balance of the data from the large wind farms. You will remember that one of your blog contributors noticed that there was a discrepancy between the total installed capacity of Andrew’s set and the listed total installed capacity. St Halletts 1 & 2, Snowtown, Clement’s Gap and others are separately categorised on the NEMMCO/AEMO site. These are now extracted and listed.) My thanks and congratulations to Andrew Miskelly for achieving this. I wonder why AEMO can’t provide this capability.
In fact, why can’t we mine the data in Gapminder: http://www.gapminder.org/ then click on ‘explore the world’. “A new study by Xi Lu of Harvard University calculates that wind power in the U.S. could potentially generate 16 times the nation’s current electricity production. The study limits potential wind farm locations to rural, nonforested sites (both on land and offshore) with high wind speeds.” From the October 2009 issue of Scientific American, page 28. Do you believe there is any question about the sustainability of nuclear fuel over 1000 years? Do you believe wind, solar or other renewables are more sustainable than nuclear? If so, do some calculations on powering the world with these technologies; calculate the quantities of materials required and where they will come from. Calculate the area of land that would have to be mined and the quantities of earth moved. Do the same for all parts of the process chain. The problem is that RE advocates concern themselves only with the fuel. That is why the comparisons must be made on a life cycle analysis basis. Nuclear is far more sustainable over the long term than solar and wind. Crunch the numbers. “Energy efficiency is THE core climate solution, Part 1: The biggest low-carbon resource by far.” This statement is just as wrong now as it was in 1991 to 1993, the last time we had the opportunity to implement policies to build nuclear, and let it slip away. This belief was pushed then, accepted by the government, and has proved to be wrong. ABARE’s modelling at the time, and many other pragmatic voices, said it was wrong, but voices like yours won the day. We lost 20 years then, and if this voice wins again we may lose another 20 years.
There are some very important issues regarding affects on local climate from wind farms mentioned in the conclusion of this paper which your quote omits : —- “The potential impact of major wind electricity development on the circulation of the atmosphere has been investigated in a number of recent studies (22, 23). Those studies suggest that high levels of wind development as contemplated here could result in significant changes in atmospheric circulation even in regions remote from locations where the turbines are deployed.” “In ramping up exploitation of wind resources in the future it will be important to consider the changes in wind resources that might result from the deployment of a large number of turbines, in addition to changes that might arise as a result of human-induced climate change, to more reliably predict the economic return expected from a specific deployment of turbines.” —- The effect on local climate, particularly for farmers hosting turbines and their neighbouring farms, is a significant issue that must be researched before there is any further widespread deployment of industrial scale wind energy developments. The fact is that industrial scale wind energy still requires a significant amount of research (environmental / ecological / health etc.) to understand the negative impacts of deployment. For some more links regarding local climate effects see my recent post #187 on Wind and carbon emissions – Peter Lang responds. For some comments from IPCC regarding industrial scale wind energy research requirements see post #154 on the same page. For some important research, in addition to Peter Lang’s, regarding CO2 emissions / geographic diversity effects see my posts #141 & #144 on the same page : Peter, you forgot to provide a link to Andrew Miskelly’s wind data in CSV format. Bryen, the other thing David B’s statement ignores is whether it is practical to harness this energy. I have no doubt there is huge wind and wave potential on top of solar. 
Indeed, the earth receives vastly more solar energy each year than humans require. That is not the problem: the problem is in economically harvesting, storing and redistributing it as useful electricity, as the recent posts in this blog have repeatedly and patiently tried to point out. “Do you believe there is any question about the sustainability of nuclear fuel over 1000 years?” MacKay in his discussion distinguishes resort to uranium used in LWRs, assuming only RARs for uranium and not including resort to ocean-based uranium. Unsurprisingly, LWRs based on RARs are not sustainable for 1000 years at current usage. Of course we will take what we need, so this doesn’t settle the matter. FBRs, IFRs, thorium and, if necessary, seawater recovery will all be followed in preference to going without, so my answer is “yeah but no” (ack: Little Britain). And no, I don’t believe such renewables (even in concert with energy-usage avoidance and efficiency) as are currently available offer a ubiquitous and maintainable low-environmental-footprint solution, or on these criteria are as feasible as resort to nuclear power. In some settings, though, they surely do, though this is very much an exception rather than a rule. OTOH, in concert with nuclear, some renewables (e.g. 2nd-gen biofuels) would be more sustainable than they are now. Thanks for the reminder about the RAE study. Not to dismiss it in any way, but it is a bit old now. Nonetheless, it did set the stage. The RAE did not have access to real live operational data as we do, but it is excellent backup evidence. Real, live, operational data? Have a look at what Andrew has been up to – you’ll have to query the database with your own set of dates. Warning: ask for about a month of data at any one query. The amount there is enormous. The link is: http://www.landscapeguardians.org.au/data/aemo/ Bryen, AEMO does not provide access to its data in a way that anyone with a normal IQ can access.
My comment about Gapminder is in the hope that someone might work out how to mine the AEMO data so it can be accessed and displayed in Gapminder. Phew!! I’ve only had time to skim the incredibly rich conversation you’ve all been having. I have been in the Flinders Ranges for the last 2 weeks. I’m sure other countries have had similar arguments/discussions in years gone by, and they’ve obviously come down on the side of nuclear as their best chance of having a cost-competitive and adequate future energy supply. That’s why 33 countries are already producing 16% of the world’s energy total with nuclear, and a further 20 countries are building reactors now. Can’t we in Australia curtail our debate and follow the example of all of these countries in the not too distant future? We are far enough behind already in securing a clean, green baseload energy supply. The alternative, as you all know, is to keep burning filthy coal. We need to phase out coal over the coming decades and phase in nuclear. Those panicked by the thought of that should not be too worried, even if they have coal shares. We can still keep mining the stuff and use it for fertilizers, pharmaceuticals, liquid fuels etc. We just need to stop burning the confounded stuff for power, clean or otherwise. Had nuclear power not been so vilified by the likes of Nader, Toynbee and Caldicott over the last 30 years, world nuclear power would probably be at 30%+ and we wouldn’t need the economy-crippling ETS that we currently face. And what price any meaningful agreement at Copenhagen? Rudd’s already written that off, as indeed he should. Could I ask all of you to write to Rudd, your local member, Opposition parliamentarians etc. and TELL them to get their heads out of the sand, and to start using our world’s biggest uranium reserves and world’s best waste disposal site [both in South Australia] for our own and the planet’s good?
We need a bit of vision from our leaders here, and for them to start worrying about the next generation and not the next election. I regard Rudd/Wong as very poor on climate change issues, even putting aside the exclusion of nuclear power from the discussion. Garrett is probably as useless an Environment Minister as there has ever been. I now think that wind power is likely to be a bit player, most suited for interruptible power usages, but also just to energize the grid somewhat; around here, about 20% of total supply, because we have lots of hydro to back it up. Similarly for solar PV when the price comes down in a decade or so. I also favor using biomethane in oxy-fuel CCGT with CCS to begin removing some of the excess CO2 for sequestration. Creating the pure oxygen could be powered by wind, with storage tanks, in some locations. The idea of connecting PV, ST or wind directly to the grid is a nonstarter. It just injects too many potential problems: brownouts, blackouts, surges etc. The only possibility of reasonable utilization is buffering the low-energy renewable output through storage. Use the panels, mirrors or windmills to charge up batteries, heat salt, or pump air or water directly, and then release the energy into the grid. This is the only predictable and consistent way to provide baseload power, but I’m sure it will be very expensive. I believe you are correct. Intermittent renewables must have on-site energy storage, and sufficient energy storage that the power station (wind, solar, wave power, etc.) can provide reliable power, on demand, with the same reliability as fossil fuel, nuclear and hydro-electric generators. As you say, the cost of such a system would be very high. For example, to meet the NEM’s demand with nuclear (plus 8GW of pumped hydro energy storage) the capital cost would be about $120 billion. To do the same with solar PV and on-site chemical storage would be about $4.6 trillion.
To do the same with solar thermal is currently not physically possible and not likely to be for decades. I’ve just been looking at the Wivenhoe pumped hydro scheme near Brisbane. It pumps for 7 hours to provide 5 hours of generation. It pumps from about midnight to about 6 am and meets peak demand during the day and evening. It is on standby for the remainder of the day, about 12 hours, spinning and ready to provide almost instant power whenever needed. The power generated must be sold at at least 4 times the cost of the power used for pumping. The relevance of all this is that pumped hydro is a perfect match for coal and nuclear generation, but not for intermittent renewables: there is no way that the pumps can be turned on and off to make use of the intermittent power, the power provided by the wind farms is far too expensive, and, fatally, there is no way that pumped hydro can store the amount of energy that would be needed to make intermittent renewables reliable. I’m still on holidays and will work on my undertaking for Alexei and Neil Howes when I get back home. That assignment is to show the total capital expenditure, CO2 emissions, CO2 avoidance cost, and other stats, at 5-year intervals from 2005 to 2050, for six scenarios. The six scenarios are: 1. Business as usual (energy demand as per ABARE projections); Scenarios 2 to 6 are for reducing coal-fired generation by 2GW per year from 2012, with the supply discrepancy to be provided by: 2. CCGT; 3. CCGT to 2020, nuclear added at 1GW per year to 2030 then at 2GW per year, with the discrepancy filled by CCGT; 4. Wind and gas, where gas is 50% CCGT and 50% OCGT; 5. Wind and pumped hydro; 6. Wind and on-site storage (with NaS batteries). The NEEDS report (see link in the article at the top of the thread) reviewed the solar thermal technologies, selected the most prospective (solar trough) and analysed it further. NEEDS projected that 16 hours of energy storage may be feasible by 2020.
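The Wivenhoe figures above imply both a round-trip efficiency and a minimum price spread. This sketch assumes, for simplicity only, that pump and turbine ratings are equal, so the 7-hours-pumping-for-5-hours-generation ratio approximates the round-trip efficiency; that assumption is mine, not stated in the comment:

```python
pump_hours = 7.0
gen_hours = 5.0
round_trip_eff = gen_hours / pump_hours   # ~0.71 if ratings are equal
print(round_trip_eff)

# Break-even on energy alone: the selling price must exceed the
# buying price divided by efficiency. At the 4x spread quoted above:
buy = 1.0
sell = 4.0
margin = sell - buy / round_trip_eff      # revenue per unit sold
print(margin)  # ~2.6, the surplus that must cover capital and O&M
```

This is why pumped hydro pairs so well with baseload plant: cheap, predictable off-peak pumping energy is exactly what coal and nuclear provide overnight and intermittent sources do not.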
We need 18 hours of energy storage to get through one night in winter, and at least 3 days to enable intermittent generators to supply baseload power through overcast periods in winter. There are literally thousands of possible options being investigated. None is even close to being commercially viable. The solar thermal option is more than 20 times the cost of nuclear to provide our power needs. It is not worth the time and effort to investigate it further at this stage. If someone can provide cost figures from competitive bids and/or from commercial, operating solar thermal power stations that can provide baseload power throughout the winter months, including through extended overcast periods, I’ll be pleased to include them in the simple analyses I am doing. Hi Peter, the BZE team are about to release their 200-page Zero Carbon Australia (ZCA) plan in May. While there will be other interesting facts about the transport and building sectors, I guess this blog is mainly about baseload power supply. For their energy mix they’ve chosen to model today’s wind and solar thermal (but are open to other forms as they commercialise). From their PDF, pages 9 and following, they discuss a 60% solar thermal (with biogas backup) and 40% wind mix. So again, no one technology does the work alone. They count the 40% wind penetration as ‘baseload’. Have you modelled biogas backup for the longer 3-day periods? From the above it seems you want the solar thermal technology to do it all on its own, and that isn’t the model the renewables proponents are proposing. They readily admit there will be weather challenges, but rather than build 10 times the power plants they need, they simply switch to a gas backup. Mate: I’m not very technical, but even I am left wondering if some of your article above is a straw-man debunking strategies none of the renewables guys are proposing? I don’t have time for that.
I’d rather hear what is actually possible according to the technologies actually proposed by either side, not reductio ad absurdum arguments that straw-man the other’s position. E.g.: you guys don’t propose digging expensive 5-mile-deep tunnels clad in platinum to store the nuclear waste forever, as you NEED that waste as fuel to burn! But I’m sure I’ve heard Dr Caldicott interview people proposing something as ridiculous to deal with nuclear waste, and I’m left grinding my teeth and shouting at my iPod, “But they’re going to USE the waste, you silly moo!” So if Peter is right on nuclear at only $4 billion/GW capacity AND if BZE are right on a 60% solar thermal (with biogas backup) and 40% wind grid, then nuclear still wins as far as price is concerned. My “Black Swan” comment for the day? What is politically feasible. $300 billion won’t destroy Australia’s economy. Over 10 years it is only $30 billion a year. (Political diversion: Dr Mark Drummond’s PhD calculated that we’d save about $50 billion a year in duplication if we abolished state governments and had only one parliament for Australia, not 8. Interestingly, both Bob Hawke and John Howard recently agreed that this would have been a preferable model for Australia.) “I don’t have time for that. I’d rather hear what is actually possible according to the technologies actually proposed by either side, not reductio ad absurdum arguments that straw-man the other’s position.” That is painfully obvious to all. You have no time for the grunt-work of dissecting the elements of each new ‘renewables’ scheme put forward by the same bunch of scammers who disappointed you the last time, to see if it’s going to hold water, but all the time in the world to trawl the net for such schemes to run to others with, and herald whatever it is this time as the coming of the Heavenly Kingdom. Errr, no. I just happen to be fairly busy lately and am limited in how much reading time I get, so I listen to podcasts.
I also just happened to be listening to the BZE podcast yesterday (while helping the in-laws get ready to move), and the podcast was all about their upcoming plan release in May. So I knew where the site is, and quickly found their summary PDF and the pertinent pages. If BNC had a podcast I’d listen to that as well. (One day I hope you’ll get bored of attacking my motivation and straw-manning my character.)

You have no time for the grunt-work of dissecting the elements of each new ‘renewables’ scheme put forward by the same bunch of scammers who disappointed you the last time to see if it’s going to hold water

Well, I’m limited technically, but after a fair bit of reading back in my earlier peaknik days I developed a checklist of questions I try to ask about alternative energy (to oil mainly). It’s not great, but I was just trying to formulate an easy checklist to help other non-technical peakniks explain why no substitutes for oil could do the job with the liquid fuels infrastructure we currently have.

From the above it seems you want the solar thermal technology to do it all on its own, and that isn’t the model the renewables proponents are proposing. They readily admit there will be weather challenges, but rather than build 10 times the power plants they need, they simply switch to a gas backup. … Mate: I’m not very technical, but even I am left wondering if some of your article above is a straw-man debunking strategies none of the renewables guys are proposing?

No, it is not a strawman. It is a ‘limit analysis’ so you can see through the fog of the renewable advocates’ argument that when one renewable doesn’t work we turn to another. First we need to know what is the cost of each renewable on its own. Then we need to combine them to find the total cost. This paper looks at the solar renewable as a limit position. The previous papers looked at wind. You need to understand the process and follow through the series of articles.
It is a ‘limit analysis’ so you can see through the fog of the renewable advocates’ argument that when one renewable doesn’t work we turn to another.

I don’t see how debunking something no-one ever proposed helps clarify the situation. When the solar thermal shuts down, they propose that the evening wind (at a certain average cents / hour) will probably take over for a while, heat from the liquid salt backup thermal storage can be quickly despatched as necessary throughout the night, and if we have some freak week across the continent, we’ll dig into our compressed biogas tanks a bit. These are all known technologies. Critiquing a completely unrealistic, exaggerated strawman of the renewables plans does as much for the credibility of these arguments as Dr Caldicott does for her anti-nuclear cause. I’m amazed at the obfuscation from both sides.

Eclipsenow, if you don’t understand the concept of defining the boundaries, I can’t help you. If you want to understand, you do need to put a bit of time into reading the actual articles, rather than just arguing about the comments posted here. You asked for some references a day or so ago. I provided some. You said you’d bookmarked them to read in the future. Apparently you haven’t yet and now you’re onto raising another issue. I get the impression you are more interested in chucking firecrackers than in trying to understand.

Sorry mate, but you’re the one avoiding the issues. Maybe you need to actually review an actual renewables plan, and not debunk nonsense that no-one is proposing. I have bookmarked the links you referred to, but in amongst a career-change, running our design studio, and helping my in-laws sort through all their ‘stuff’ I don’t have much time for reading… but can fit in listening to podcasts while I attend to some of this stuff. If you have a podcast or 2 for me to listen to, I could check that out.
As I already said in another thread, Stanford University have some interesting talks on nuclear that I’ll be catching up on while packing ‘stuff’. (If ever anyone needed a reminder that Western civilisation consumes too much unnecessary junk, try helping your in-laws prune back for a small retirement village apartment. It’s a real education.)

PS: “Defining the boundaries” is unnecessary as the BZE team are well aware of them. Their team involves dozens of engineers and energy experts who have drawn up their 200-page plan for release in May. They are aware of the boundaries, and have worked around them… and costed them, and say they have a plan for $300 billion. You say you have a nuclear plan much cheaper, but I’d love to see the plans for storing the really long-term waste and what the economics of that is. I’d love to hear the Amory Lovins characters have a debate over the actual nuclear costings, and what areas I might have forgotten to check. (I’m still getting over the fact that there still is long-term waste with Gen4 reactors. I was so sold on the idea, from multiple online articles about Gen4, that there was no long-term waste and the misunderstanding that it would all be pretty much safe within 500 years.) If BNC and BZE were to duke it out via a series of podcast debates, then that might be educational for all involved. “The truth will out”.

I’m still getting over the fact that there still is long-term waste with Gen4 reactors. I was so sold on the idea, from multiple online articles about Gen4, that there was no long-term waste and the misunderstanding that it would all be pretty much safe within 500 years

For goodness’ sake, I wonder why one tries to explain anything to you. You are the most frustrating commenter on this blog, bar none. You’re apparently not listening and not willing to critically evaluate even basic scientific explanations. Some advice — try to think on these matters and to evaluate data in a rational manner.
Try the Socratic method and start asking yourself some questions. How ‘hot’ is IFR fuel after 500 years? What does a long half-life mean? If I hold a lump of uranium in my hand, what will happen? And so on. If you can’t do this, then Finrod is most certainly right – you’re playing us for suckers and never had any intention of taking a considered and rational view on nuclear power issues.

Barry, I do listen (when it’s explained in English) and have changed my blog accordingly. Now over on the Life time of energy in your hand thread where the waste issue came up, there were quite a few interesting posts, some of which I kind of understood, and some of which were fairly technical and required a general science degree, and maybe even something more specific to nuclear interests, to truly understand. As a layperson with an arts and welfare background I am very interested in the bottom line for society, and have dumped many of my earlier objections to nuclear power, which I now see as rather cliché. So the fact that I don’t get some of the more technical explanations as to why certain types of waste might be dangerous and others are not is not really my fault, but the responsibility to communicate this clearly lies with the communicator. Some commenters at BNC occasionally act as high-level priests initiated into the arcane arts, snubbing their noses at those who aren’t. But if you wish to communicate to non-technical activists like myself and have the nuclear power debate move forward, then maybe answering those questions in an intelligible manner for the uninitiated might help.

I’m still getting over the fact that there still is long-term waste with Gen4 reactors. I was so sold on the idea, from multiple online articles about Gen4, that there was no long-term waste and the misunderstanding that it would all be pretty much safe within 500 years.

We’ll find uses for that small portion of uber long-lived FPs.
I wonder if it couldn’t be mixed in with paint or structural material to provide a radiation hormesis effect as a public health measure, much as fluoride is added to drinking water.

Woah, I thought it was a joke, but there’s even a wiki. “Consensus reports by the United States National Research Council and the National Council on Radiation Protection and Measurements and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) have upheld that insufficient human data on radiation hormesis exists to supplant the Linear no-threshold model (LNT). Therefore, the LNT continues to be the model generally used by regulatory agencies for human radiation exposure.”

My son recently produced a sales catalogue of which he was very proud. On reading it, I became incandescent by his description of fluorescent lights as flourescent. I suppose that I’m going the way of incandescent lights – my age and concern over correct spelling are making me obsolete. My son was indignant at having his mistake pointed out to him and blamed his computer for having a defective spell checker.

eclipsenow, the LNT model is what’s commonly called a null hypothesis. It doesn’t need any evidence, whereas the hormesis hypothesis must accumulate sufficient evidence to overturn this null. It has a fair amount already, whereas the LNT still has none. But it needs to keep building that body of work. Not fair, but the way some folks like to frame statistics (I prefer multi-model inference with no pre-conceived null).

I received HDs for my sociology essays, and could see how sociological surveys were weighted one way or the other from the values implicit in the ‘leading questions’ put to the public, but when it came to statistical analysis of the results… left that to the maths gurus. So, as this is not really on the topic, I might just pass on the ‘multi-model inference’ statistical modelling if that’s ok.
(I know it will come as a huge shock to you, but I’m just being honest as to how completely I’m not wired in that direction.) ;-)

eclipsenow – When life began on Earth almost 4 billion years ago, background radiation levels were five times higher than those we experience today. Life adjusted well, as it did to all other forms of energy to which it was exposed – heat, light, electromagnetic. This adjustment took two forms. The first suggests that exposure to low doses of radiation actually stimulates repair mechanisms that protect organisms from disease and may actually be essential for life. The second involves the development of the biochemical systems that protect organisms against the noxious effects of ionizing radiation. One thing life did not apparently do was to evolve an organ that can detect radiation. This lack of a radiation sense points to the fact that living organisms have no need to detect such a low-risk phenomenon. Indeed, ionizing radiation only seems exotic and mysterious to some people because it was not discovered until relatively recently, unlike light and heat, say. It is nevertheless nothing more than another form of energy. The perceived distinction has serious negative consequences but has no scientific basis. However, for statistical reasons the LNT cannot be falsified and so the precautionary principle has been adopted at an unacceptable societal cost.

Barry, I’d argue that the LNT is not the null hypothesis. The null hypothesis is that low-level radiation is harmless. All studies that I am aware of are reasonably consistent with this. The exceptions favour hormesis, which asserts that low-level radiation provides some health benefits. This has been demonstrated in some projects like the nuclear shipyard study. LNT for low-level radiation has never been demonstrated as far as I know.

Joffan – The definitive proof of the LNT model is to disprove that a risk-free threshold exists and to disprove a quadratic risk/exposure function.
This is the LNT null hypothesis. Threshold is a concept borrowed from toxicology, in which a human being can accept a certain amount of a potentially toxic substance up to a certain dose without harm, and then after a “threshold” dose, harm occurs. “Linear” simply means that for a given increment of additional dose, a fixed amount of additional increased risk occurs. A broad look at the available data demonstrates that there appear to be certain levels of radiation exposure that confer no harm to human beings, but then at some point the risk of cancer rises precipitously. In other words, there appears to be a finite threshold, and beyond that threshold there appears to be an increased risk for cancer according to a nonlinear quadratic function. Therefore, the null hypothesis to the LNT model remains yet to be disproved. Note that this is essentially a Catch-22 situation, because the hypothesis is poorly formed, since there is no stated lower bound at all. It is, however, not necessary to prove or disprove the LNT null hypothesis if the hormesis null hypothesis can be disproved, and that IS possible.

I have three hypotheses for exposure to radiation levels that are consistent in magnitude with natural background levels:

1: Increasing benefit
2: No effect
3: Increasing harm

Which of these should I select as my null hypothesis? It seems obvious to me that hypothesis #2 is the correct choice. The data is consistent with this, so this should be the basis for any further action. If I use the same three hypotheses for radiation in the range of 100-1000 times natural background, I would still select #2 as my null hypothesis, but now the data would disprove it and support hypothesis #3, so that becomes the basis for future action.

Joffan – There is logic, and then there is politics – science is not exempt. The ‘official’ null hypothesis for LNT is the one I stated in the first paragraph of my previous comment.
It’s official, because it is the only one that can be set looking at the LNT in isolation. This is where the politics comes in. Any rational examination of the problem would reject the whole damned hypothesis as ill-formed, and strike another one similar to the one you stated. However the radiation health sector, for any number of reasons (none of them logical or scientific), cannot do this.

@ 27 April 2010 at 8.09 said:
Mate: I’m not very technical, but even I am left wondering if some of your article above is a straw-man debunking strategies none of the renewables guys are proposing?

@ 28 April 2010 at 8.54 said:
I don’t see how debunking something no-one ever proposed helps clarify the situation. … Critiquing a completely unrealistic, exaggerated strawman of the renewables plans does as much for the credibility of these arguments as Dr Caldicott does for her anti-nuclear cause. I’m amazed at the obfuscation from both sides.

@ 28 April 2010 at 9.47 said:
Sorry mate but you’re the one avoiding the issues. Maybe you need to actually review an actual renewables plan, and not debunk nonsense that no-one is proposing.

@ 28 April 2010 at 12.57 said:
Some commenters at BNC occasionally act as high-level priests initiated into the arcane arts, snubbing their noses at those who aren’t. But if you wish to communicate to non-technical activists like myself and have the nuclear power debate move forward, then maybe answering those questions in an intelligible manner for the uninitiated might help.

The issues you are raising have been discussed at length in the comments on these threads. I note you’ve bookmarked the paper but haven’t yet read it. I’ve responded to your comments and question, but understand that my explanation may not have made sense to you. I’ll make another attempt to answer your question below.
If this is not sufficient, can I persuade you to read the article, and the preceding articles that it builds on, and also perhaps follow through the discussion on the threads, as these discuss the points you are raising?

The reason for the limit analysis – that is, looking at just solar power rather than a mix of renewable energy generators – in the first instance is so we can get an understanding of the mistakes and misinformation that is being propagated by the solar power advocates. One of the most important mistakes is doing calculations on the basis of the average capacity factor over a year. Using an average capacity factor instead of the minimum capacity factor under-estimates the cost by a huge amount. Here is the explanation, in layman’s language.

The average capacity factors from an actual solar farm are: annual = 13%, 3 months of winter = 9.6%, the worst days in winter = 0.75%, at night = 0%. The “Solar Power Realities” paper considered the option of all power being generated by solar power and using energy storage to supply the electricity when the sun is not shining. No one is suggesting this is a scheme that would be built (other than advocates like David Mills), but this is a way to look at the real costs of solar. You can downscale from providing all electricity to providing just 1 GW or 1 MW or whatever you like. The principles apply generally. The principle is that you cannot use average capacity factors. You must look at how you will provide the power when the solar plant is generating at its minimum capacity factor.

As I mentioned, the “Solar Power Realities” paper looked at the situation with solar generators and energy storage. It considered two storage options: pumped hydro and NaS batteries. NaS batteries are the least-cost battery option at the moment. The “Emission Cuts Realities” paper considers a simple mix of renewable energy technologies together with gas back-up for wind power.
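The capacity-factor point can be made concrete with a few lines of arithmetic. This is an illustrative sketch, not taken from the papers themselves: it uses the capacity factors quoted above (annual average 13%, worst winter days 0.75%) and an assumed constant 1 GW demand, and shows how badly sizing on the annual average understates the plant required.

```python
# Illustrative sketch: nameplate capacity needed to carry an assumed 1 GW
# demand, sized from a capacity factor. Figures quoted in the comment above:
# annual average 13%, worst winter days 0.75%.

DEMAND_GW = 1.0

def required_capacity_gw(capacity_factor):
    """Nameplate GW needed to deliver DEMAND_GW at this capacity factor."""
    return DEMAND_GW / capacity_factor

sized_on_annual_average = required_capacity_gw(0.13)
sized_on_worst_days = required_capacity_gw(0.0075)

print(round(sized_on_annual_average, 1))   # 7.7
print(round(sized_on_worst_days, 1))       # 133.3
# Sizing on the annual average understates the required plant ~17-fold,
# which is exactly the mistake the comment says solar advocates make.
print(round(sized_on_worst_days / sized_on_annual_average, 1))  # 17.3
```

The GW figures here are only for illustration; the underlying point is the ratio between the two sizings, which follows directly from the two capacity factors.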
Lastly, let’s consider, in a really simple way for clarity, the situation with a mix of renewables to provide our power needs. We must remember that the power must be provided at the instant we need it. Let’s say we need to deliver 1 GW of power on demand (just to keep this simple).

Let’s start with 1 GW of solar PV. The capital cost is around $10 billion. We find we have no power at night and almost no power at some times on some days (heavily overcast). So we need to add something else to provide the 1 GW power when it is demanded. So we add 1 GW of wind power. The capital cost is about $2.6 billion. But we find the sun isn’t shining and the wind isn’t blowing. So we add 1 GW of wave power. I don’t remember the capital cost but let’s say $10 billion. But then we have times when the sun isn’t shining, the wind isn’t blowing and the sea swell is small. We are now up to $22.6 billion.

To link all these dispersed generation systems, we need a massively expensive electricity grid and we still don’t have dispatchable power (power that can be supplied when the user demands it). So we have to add either energy storage, or fossil fuel back-up, or dispatchable generators like biomass, geothermal or nuclear. Biomass is expensive, requires enormous land area and has its own environmental problems. The type of geothermal energy that Australia is attempting to develop has not been developed anywhere in the world yet. It may or may not eventuate as a commercial proposition. The world has been working on it for nearly 40 years and we have not advanced much in that time. There are still no commercial power stations anywhere in the world.

So why not simply skip all this nonsense and go straight to nuclear? The capital cost of the 1 GW would be around $4 billion, with all the impediments to nuclear remaining in place, or perhaps around $2 to $2.5 billion if the imposts were removed and we had a genuine level playing field for electricity supply.
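As a quick check of the running total in the comment above (the figures are the round numbers the commenter quotes, and the wave-power cost is explicitly the commenter's guess, not a sourced number):

```python
# Capital costs quoted above for ~1 GW of each technology, in $ billions.
capital_costs_bn = {
    "solar PV": 10.0,
    "wind": 2.6,
    "wave (guessed figure)": 10.0,
}

renewables_total = sum(capital_costs_bn.values())
nuclear_bn = 4.0  # quoted cost for 1 GW nuclear with current impediments

print(round(renewables_total, 1))               # 22.6
print(round(renewables_total / nuclear_bn, 2))  # 5.65
```

So even before grid, storage and backup costs are counted, the quoted mix is several times the quoted nuclear figure, which is the comparison the comment is driving at.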
Given that nuclear is about 10 to 100 times safer than our current electricity generating system, and is far more environmentally benign than any (including wind and solar), why don’t we just cut through all the irrational arguments and go straight to nuclear – preferably by removing all the impediments to it?

I have to laugh at the pathetic attempt by the Old Greens to find some way, any way, to avoid nuclear power. They are no longer even bothering to mount their usual pathetic attacks against nuclear energy, so thoroughly have those tired arguments been debunked. But they will not give up, and desperately hope their renewable dreams can still be shown to be superior, even as they begin to see the truth. Do you know what I think? They are afraid of nuclear energy because its acceptance will show everyone the magnitude of their error. They know that their followers will realize that they have been backing the wrong side, and as always in these cases will turn on their leaders like a pack of dogs.

Suppose you are building a house. You have a variety of construction materials to choose from – timber, brick, steel beams, glass, tile, etc. You obviously expect to use a mix of these materials. But you can’t begin to design that mix unless you understand the characteristics of the individual materials. How strong are they? How much do you need? How much do they cost?

Peter is trying to build an energy system. On his design palette, he has fossil fuels, wind, solar, hydro, nuclear. But he can’t design with these design elements unless he understands their individual characteristics. How much power can they provide? How reliable are they? How much do you need? How much will it cost? And, in this case, how much CO2 will they produce? To understand his design elements, Peter has done the equivalent of designing a glass house to understand the limits of using glass as a building material. He’s done the same with wood and steel.
These design exercises have probed the qualities and limits of the design elements. He has then followed up with a further design exercise where he builds from various combinations of materials, and compared the different structures in terms of strength, cost, build time, and waste. By analysing each renewable technology individually, he’s also thrown light on the characteristics of an integrated system. Unfortunately the wind and solar components turn out to be the equivalent of wet cardboard and cured ham, and he’s found that if you build a house out of these materials, you’re still going to need just about as much brick and steel as a normal house, if you want it to stay standing, even if you use a combination of ham and cardboard.

If it all pans out the way you say, DV8, I might join you in that. If the objections to nuclear proliferation and waste are dealt with as easily as some on this list imagine, I’m all for it. (IF).

@ Peter Lang, thanks for that. Let’s just say at this stage I’m very sympathetic to nuclear power. One last exercise. I’m not saying the following is costed and competitive with today’s nuclear, but I’d question the synergies you suggest. Why 100% wind + gas backup? The papers coming out at the moment suggest that they build enough wind to be around 40% of the grid as baseload, and then the solar thermal operates with biogas backup. The thermal turbines on the solar plant are already there. Just turn on the bio-gas taps and cook up the steam and the plant keeps operating. It prevents needing to build a whole new biogas plant & turbine, which would otherwise be necessary in the 100% wind + biogas system you have suggested above. (If the biogas actually comes from biochar it’s a carbon-negative system as well.) Sure, after the growing seasons you’d probably have to brew up one heck of a lot of biogas for storage, but that storage would probably not have to make up 100% of the storage we use.
Don’t forget the V2G cars are coming that can charge whenever the wind is blowing, and then sell back when the grid demands it. If we use Better Place battery-swap systems, the price is gratis of Better Place… they have included the batteries in the price / km of their public charging points and battery-swap charges (which are already almost half the price of oil). As my car sticker says, “My next car will run on the wind”. (Free Better Place propaganda sticker… if you want them to go nuclear, have a chat with Shai Agassi and I’ll put one of those on my car instead. My focus is Better Place and Australian independence from oil. I like the wind idea, but not if it really is distracting from the debate we NEED to have on nuclear.)

Lastly, some are saying wind is cheaper than coal, IF we don’t have to cost a backup system. Say we have a baseload nuclear capacity with wind power mainly charging our cars. Could that be economically competitive?

This is going on and on and on and you simply are not getting any of it. Can I beg you to have a go at answering your own questions. Just do a bit of thinking, and perhaps a bit of research for yourself.

If each house becomes a generator of solar and wind power there are minimal transmission costs!

Just a completely unrealistic use of resources. Where’s the warp drive? OR FUSION REACTORS – NOT Fission?
BBC America Picks Up Season Two of Fantasy Series Atlantis

BBC America has picked up season two of new fantasy co-production, “Atlantis,” from the creators of “Merlin” and “Misfits.” On November 23, “Atlantis” delivered the highest rated series premiere ever for Supernatural Saturday with 838,000 total viewers. The 13-part series premieres Saturdays at 9:00pm ET on BBC America. In the UK, “Atlantis” is the biggest new Saturday night drama series to launch across all BBC channels since 2006, even up on the launch of hit show “Merlin.”

“Atlantis” is an Urban Myth Films production for BBC Cymru Wales co-produced with BBC America. Executive producers are Johnny Capps, Julian Murphy and Howard Overman for Urban Myth Films. The show is executive produced for BBC Cymru Wales by Bethan Jones.

Richard De Croce, SVP Programming, BBC America says: “‘Atlantis’ is off to a fantastic start and is the centerpiece of our Saturday nights for the next twelve weeks. We’re committed to bringing our viewers even more entertaining episodes inspired by Greek mythology from Howard and the talented team in season two.”

Capps, Murphy and Overman said: “We are all thrilled at Urban Myth Films that ‘Atlantis’ has been re-commissioned and look forward to continuing the legend next year.”

In the show, far from home and desperate for answers, Jason washes up on the shores of an ancient land. A mysterious place; a world of bull leaping, of snake haired Goddesses and of palaces so vast it was said they were built by giants – this is the lost city of Atlantis. The series follows the adventures of Jason (Jack Donnelly), Hercules (Mark Addy) and Pythagoras (Robert Emms) who battle against some of the most famous names of Greek legend, often in unexpected guises. The show also stars Jemima Rooper as Medusa, Aiysha Hart as Ariadne, Sarah Parish as Pasiphae and Juliet Stevenson as The Oracle. All of these characters will return in season two.
Q: GWT application bandwidth congestion issue

I have been curious about this for a long time with GWT applications: why is bandwidth consumption high during the first run on the server, and why does it decrease so much after that?

A: Because the whole application (loads of JS/image/CSS) is loaded at start up. Additional calls to fetch data are made via AJAX. Search the interwebs for GWT bootstrapping to learn more. You can improve said bootstrapping using code splitting and client bundles. See the GWT documentation: http://code.google.com/webtoolkit/overview.html
StartChar: tilde
Encoding: 438 732 182
GlifName: tilde
Width: 818
Flags: W
HStem: 1222 124<429.843 565.411> 1302 124<252.645 387.748>
VStem: 106 606
AnchorPoint: "Top" 409 1080 mark 0
LayerCount: 4
Back
Fore
SplineSet
228 1216 m 5x60
 106 1280 l 5
 142.655813953 1348.27631579 210.023255814 1426 319 1426 c 4x60
 429 1426 458 1346 499 1346 c 4
 541.988764045 1346 569.550561798 1391.15 590 1432 c 5
 712 1368 l 5
 676.344186047 1300.72368421 607.976744186 1222 499 1222 c 4xa0
 388 1222 360 1302 319 1302 c 4
 276.011235955 1302 248.449438202 1256.85 228 1216 c 5x60
EndSplineSet
Layer: 2
Layer: 3
EndChar
Uprooted | The ongoing quest for identity

Posted by WMHT Web Editor

Uprooted tries to answer the question: what is today’s Armenian identity and its evolution over time? A personal and passionate film searching for answers in today’s world. Coloring the quest for identity, with interviews from unique and eclectic sources, exploring in parallel the lives of everyday people and their customs as we try to define the critical elements of identity and transmitted cultural memory. Interspersed with musical punctuations, interviews with a wide spectrum of Armenians and prominent academics, the film guides us through the general concepts of identity while I narrate the bridges between segments.

I have pursued the identity theme in all my films, as it is the critical and defining element of individuals and cultural collectives. I have further developed the identity theme in four films, as cultural memory transmitted through music. Do myths and legends reinforce or distort cultural identity?

"I have been filming Uprooted, part three of my “Armenian Trilogy,” over the last few years and the film takes us on its final journey to new corners of the quest for identity."

The survival of Armenian identity is at the core of every aspect of Armenian life, as it has been mine. In the first film of my “Armenian Trilogy,” “Armenian Exile,” I explore my neglected and sometimes forgotten identity. In the second, “My Son Shall be Armenian,” with five traveling companions I explore the question of identity as we meet Genocide survivors in Armenia. In part three of the Trilogy I probe deeper into the question of identity and transmission of memory. This quest is every cultural group’s inescapable journey, ultimately bridging all cultures in their path to thrive.
Q: Capturing objects in Django Url

Suppose I have this model:

    class animals(models.Model):
        name = models.CharField(max_length=20)

and I make 3 objects of it, say ob1, ob2, ob3 with ob1.name = "cat", ob2.name = "dog", ob3.name = "cow".

Now if I have a url like this: www.domain.com/cat or www.domain.com/dog, how do I capture /cat or /dog from the url and check it against the names of objects of class animals? I am trying to implement a view function that takes a parameter from the url, e.g. object.name, and executes according to that object. Any help is appreciated.

A: Use named groups. It’s possible to use named regular-expression groups to capture URL bits and pass them as keyword arguments to a view.

urls.py:

    from django.conf.urls import patterns, url

    urlpatterns = patterns('',
        url(r'^(?P<name>\w+)/$', 'my_view'),
    )

views.py:

    def my_view(request, name=None):
        # get a model instance
        animal = animals.objects.get(name=name)

Hope that helps.
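To see the capture mechanism itself without a full Django project, here is a standalone sketch using plain `re`. It mirrors what Django does with the named group in the URLconf above; the helper name `resolve` is mine for illustration, not a Django API.

```python
import re

# Same pattern as the urls.py above: (?P<name>\w+) names the captured group,
# which Django would pass to the view as the keyword argument `name`.
url_pattern = re.compile(r'^(?P<name>\w+)/$')

def resolve(path):
    """Return the captured keyword arguments for a path, or None on no match."""
    match = url_pattern.match(path)
    return match.groupdict() if match else None

print(resolve('cat/'))  # {'name': 'cat'}
print(resolve('dog/'))  # {'name': 'dog'}
print(resolve('/'))     # None -- \w+ requires at least one character
```

Whatever the group captures ('cat', 'dog', 'cow') is exactly the value the view receives and can look up against the model's `name` field.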
PARIS — In 2012, when Charlie Hebdo editors defied the government’s advice and published crude caricatures of the Prophet Muhammad naked and in sexual poses, the French authorities shut down embassies, cultural centers and schools in about 20 countries.

“Is it really sensible or intelligent to pour oil on the fire?” asked Laurent Fabius, the foreign minister at the time. But Charlie Hebdo’s editor, Stéphane Charbonnier, who died in the attack on the paper’s offices Wednesday, was not deterred.

[Image: An October 2014 cover depicted immigration critics.]

Week after week, the small, struggling paper amused and horrified, taking pride in offending one and all and carrying on a venerable European tradition dating to the days of the French Revolution, when satire was used to pillory Marie Antoinette, and later to challenge politicians, the police, bankers and religions of all kinds.
Fancy Horse Makes Blizzard $2 Million in Four Hours - orborde http://www.1up.com/do/newsStory?cId=3178849 ====== Aron Doesn't second life allow people to develop their own in-game designs and sell them? Seems like a logical extension for WoW, where one of the principal motivations appears to be a virtual materialism (isn't the real one bad enough?) Probably only a matter of time before one of the most valuable items is something certified to have been worn by some real-life celebrity's avatar. ~~~ binarymax Virtual monetisation is an excelent placeabo for consumer vanity. I honestly believe physical materialism to be a much worse than virtual materialism (even if it is less socially acceptable in the non-virtual mainstream). This is because aside from the energy used to maintain the virtual space, it has no physical waste (such as wrapping, receipt, carrier bag, and the eventually discarded object). ~~~ Aron That's an interesting point. I imagine the topic of virtual vs. physical consumption would make for a good thesis from some enterprising econ student. You focus on the externalities of waste. I agree that's important. Of course, the very fact that its socially looked down upon in comparison is relevant, as a lower reputation hurts one's economic prospects. Maybe that's a zero-sum game though. | Mid | [
[*Update: This headline was changed. For a full explanation, scroll to the bottom.] Reason contributor Wendy McElroy and liberal feminist Jessica Valenti debated campus sexual assault, rape culture, and due process at Brown University on Tuesday afternoon. The debate preemptively generated student protests, alternative events, and even a statement from Brown President Christina Paxson. These reactions had one thing in common: disdain for McElroy's perspective that rape is the work of a small number of serial predators, rather than a cultural phenomenon. Paxson lamented that view in her campus-wide email, writing, "I disagree. Although evidence suggests that a relatively small number of individuals perpetrate sexual assault, extensive research shows that culture and values do matter." McElroy's contrarian perspective on rape was in fact so traumatizing for certain members of the campus that they felt they needed to create alternative events. Some students organized a "BWell Safe Space." According to The Brown Daily Herald: Students who may feel attacked by the viewpoints expressed at the forum or feel the speakers will dismiss their experiences can find a safe space and separate discussion held at the same time in Salomon 203. This "BWell Safe Space" will have sexual assault peer educators, women peer counselors and staff from BWell on hand to provide support. No student should feel the need to be protected from an opinion. But those who sought further insulation from McElroy's perspective were invited to attend another alternative event, which promised "The Research on Rape Culture." Samantha Miller of the Foundation for Individual Rights in Education explained why this nonsense is insulting to students, as well as the debate participants: Given the debate organizers' prior arrangements to provide support to anyone who actually felt the need for it, Paxson's choice to counterprogram the event makes little sense in terms of "emotional safety." 
But it makes all the sense in the world if you assume the real goal is to provide an intellectual cocoon for students—an effort to create an ideological bubble on campus in which students' beliefs will be free from challenge. It's a miracle the debate even took place at all, considering how allergic Brown seems to be to constructive discussion of controversial topics, but McElroy and Valenti were able to make their points. McElroy's main argument, according to The Herald: McElroy said rape culture exists in places like parts of Afghanistan where "women are married against their will" and "murdered for men's honor" but not in North America, where "rape is a crime that's severely punished." What's more, those who politicize rape and assert the existence of rape culture imply that all men are guilty or that the accused do not deserve due process, McElroy said. It is unacceptable that men can now be disciplined for rape through college hearings based on a preponderance of evidence rather than the traditional criminal justice standard of guilt beyond a reasonable doubt. "Let's not build justice for women on injustice for men," McElroy said, closing her talk. And Valenti's: Valenti never tackled the question of whether a preponderance of evidence or guilt beyond a reasonable doubt should be the standard for conviction of men in college hearings, but she did talk about other aspects of sexual assault as it relates to college campuses, such as the fact that alcohol plays a role in most sexual assault incidents. "Alcohol is not the problem," Valenti said, chuckling at the notion. "What we need to discuss is the way rapists use alcohol as a weapon to attack and then discredit their victims." Rapists benefit from others' insistence that a victim's inebriation is to blame for his or her assault, she added. Both speakers addressed how students might move forward in eliminating rape and sexual assault on campus.
"Stopping someone from telling a rape joke or saying they got 'raped' by a test" would be a start, Valenti said, but she also urged students to hold university administrators responsible for addressing rape on campus. Since the college already saw fit to rebut McElroy, I will only deal with Valenti. I find her view on rape not only misguided, but positively deleterious to the cause of lessening sexual assault. The idea that stopping someone from telling a joke is "a start" to preventing rape is utter nonsense. People jokingly say, "you're killing me," when they don't get what they want; it doesn't mean they anticipate being murdered. When I say that I was beaten up in an argument, I don't mean that I suffered physical pain. Professing to have been "raped by a test" may be an off-color remark, but it has nothing to do with actual sexual assault. Pretending otherwise is ludicrous. Valenti's cavalier attitude about alcohol abuse is even worse. No one paying serious attention to the campus rape problem could conclude that "alcohol is not the problem." Binge drinking and alcohol-induced incapacitation are the conditions under which campus rape occurs. In fact, Valenti knows this, since she admits that alcohol is the rapist's weapon of choice. A teen culture of responsible alcohol consumption would be the best deterrent to sexual assault, and we should be discussing strategies for fostering that (like lowering the drinking age!). Telling students that dangerous drinking is just some random side effect is not merely dishonest, but actually dangerous. Only in the warped world of the modern college campus—where protecting students' delicate feelings and upholding liberal orthodoxy is more important than giving them the truth about rape and alcohol abuse—could Valenti's views escape criticism while McElroy's earned an official condemnation. Updated at 1:05 p.m.
ET: Valenti tells me on Twitter that my headline is a distortion of her position and that she never asserted rape jokes cause rape. I based my headline on The Brown Daily Herald's news story, which reported: Both speakers addressed how students might move forward in eliminating rape and sexual assault on campus. "Stopping someone from telling a rape joke or saying they got 'raped' by a test" would be a start, Valenti said, but she also urged students to hold university administrators responsible for addressing rape on campus. It seemed to me that Valenti was saying that if we want to reduce sexual assault on campus, we should start with the rape jokes. She declined to speak with me further about her piece, but did provide the text of her speech, which can be viewed here: So "social license to operate is foundational to rape culture," and "stopping someone when they are telling a rape joke," weakens that social license. To my eyes, that's a confusing way of saying that abolishing rape jokes is what we should be doing to stop rape. But I have amended the headline to more perfectly encapsulate exactly what Valenti said, based on her prepared remarks rather than the news article.
#!/bin/sh
#
# Folding@Home Rank
#
# Parameters:
#
#   config   (required)
#   autoconf (optional - only used by munin-config)
#
# Magic markers (optional - used by munin-config and some installation
# scripts):
#%# family=contrib

statefile=$MUNIN_PLUGSTATE/plugin-fah_rank.state

if [ "$1" = "config" ]; then
    echo 'graph_title Folding@Home Rank'
    echo 'graph_args -l 0 --base 1000'
    echo 'graph_category htc'
    echo 'graph_vlabel rank'
    echo 'rank.label rank'
    echo 'rank.type GAUGE'
    echo 'rank.max 12000'
    exit 0
fi

rank=$(wget "http://vspx27.stanford.edu/cgi-bin/main.py?qtype=userpage&username=8d" -q -t 1 -T 5 -O - | grep -E "<TD> <font size=3> <b> [0-9]* </b> of [0-9]* </font></TD>" | sed 's/.*<font size=3> <b> \([0-9]*\) .*/\1/')

if [ -z "$rank" ]; then
    if [ -f "$statefile" ]; then
        echo "rank.value $(cat "$statefile")"
    fi
else
    echo "$rank" >"$statefile"
    echo "rank.value $rank"
fi
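The wget | grep | sed pipeline above pulls the user's rank out of an HTML table cell. The same extraction can be sketched in Python; this is an illustrative stand-in for the sed expression, and the function name `parse_rank` and the markup shape are assumptions carried over from the script's own pattern, not anything the plugin itself provides:

```python
import re

# Regex mirroring the script's grep/sed pair: capture the first number in
# "<font size=3> <b> RANK </b> of TOTAL </font>".
RANK_RE = re.compile(r"<font size=3> <b> (\d+) </b> of (\d+) </font>")

def parse_rank(html):
    """Return the rank as an int, or None when the pattern is absent
    (the shell script falls back to its statefile in that case)."""
    match = RANK_RE.search(html)
    return int(match.group(1)) if match else None

# A table cell shaped like the one the script greps for:
sample = "<TD> <font size=3> <b> 1234 </b> of 98765 </font></TD>"
print(parse_rank(sample))  # -> 1234
```

Returning None on a miss mirrors the script's statefile fallback: when the page layout changes or the fetch fails, the last known value is reported instead of nothing.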
Up to 240 new jobs could be on their way to a village near Derby if plans to build office blocks at a business park on a former farm are given the go-ahead. Bowler Adams LLP, which owns Badger Farm Business Park, in Willowpit Lane, on the outskirts of Hilton, has applied to South Derbyshire District Council to build a pair of two-storey office buildings at the site. According to the plans, the two buildings will be identical - with each property providing almost 2,000 sq metres of office space. Each property would also be served by 65 parking spaces, plus 10 cycle spaces. The plans state that Bowler Adams wants the new buildings because it needs the extra space to meet demand. The new offices would be built to the south of the existing office buildings at Badger Farm (Image: google) Planning documents submitted by Beckett Jackson Thompson Architects on behalf of Bowler Adams state: “Being unable to satisfy demand Bowler Adams is applying to provide additional accommodation on site, which will be able to work in conjunction with their existing building. “The new offices will provide accommodation for up to 240 office-based staff, providing both full-time and part-time jobs to the local community.” The plans state that the new office blocks will be designed to match the existing office buildings at the site. The application states that the proposed buildings would match those already on site (Image: Derby Telegraph) Back in 2008, permission was granted on appeal for an egg packing station at Badger Farm. However, that development was never built. The height of that building was 9.6 metres. According to the office plans, the buildings will be lower in height, reducing their visual impact.
This architects' drawing shows where the new office building would be built (Image: Beckett Jackson Thompson Architects) The plans state: “The proposed buildings are to enable the expansion of an existing business in its provision of flexible office accommodation, adjacent to and integrally serviced by the existing facility, and constructed on land in the ownership of the existing business. “All these factors directly shape the nature and commercial viability of the proposal.” Badger Farm Business Park is already home to a number of businesses. Last year, national multi-technical services provider SPIE UK moved out of Derby to a new 4,500 sq ft office at Badger Farm. Bowler Adams started developing the land following the sale of John Bowler Eggs. Now named Bowler Eggs, the firm rents an 8,000 sq ft building at Badger Farm along with another company, Noble Energy (formerly Bowler Energy). Today, Bowler Adams has a diverse portfolio of work in residential, commercial and agricultural land, alongside farming, an app business, renewable energy and finance.
Put this into perspective...I do a ton of Rose IP autographs and pay in the 150.00-180.00 range per auto....Legends of the Field had a signing with him last week and were charging 400.00 per auto on balls, shoes and jerseys!!!!!! So, I think it's safe to say that this jersey will go up in value!
Archive for July 1st, 2010

China’s Purchasing Managers’ Index (PMI) fell to 52.1 in June from 53.9 in May, reports the BBC, but the figures suggested the [manufacturing] sector was still expanding rather than contracting. The report attributes the – relative – slowdown to government efforts to cool the property market and to curb bank lending. The central government insisted on larger down-payments on new homes and made it harder for investors to buy several homes. The BBC also quotes observers as saying that the faltering global recovery was affecting China’s output. Xinhua explained early in June that the PMI is one of the leading economic indicators. Simply put, when the number is above 50, the economy is in a state of expansion. In the opposite case, the number says that the economy is contracting (简单来说,若该数据高于50%,反映经济正处于扩张;反之,则说明经济衰退;而数据越高,则说明经济扩张速度越快). The manufacturing industry’s May PMI was down to 53.9 per cent from April’s 55.7 per cent, writes Xinhua, and when looking at the individual indices, comparing May and April, ten*) indices had dropped – particularly in terms of new orders from customers (新订单指数, from 59.3 to 54.8) – and the only exception among a total of eleven indices was the finished-products inventories index, which had actually risen, writes Xinhua, and reassures its readers by quoting HSBC China’s (汇丰中国) chief economist Qu Hongbin (屈宏斌) as saying that the manufacturing industry’s PMI indicated the effectiveness of the [government’s] austerity measures which alleviated the risk of overheating. Besides, in another article, also of early June, Xinhua wrote that most of the recent drop in the PMI was seasonal (季节性). When adjusted for seasonal influences, there was no obvious downward momentum.
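The 50-point rule Xinhua spells out can be stated as a tiny classifier. This is a purely illustrative sketch; the function name and the handling of a reading exactly at 50 are my own additions, not part of the article or the index's methodology:

```python
def pmi_signal(pmi):
    """Interpret a PMI reading per the usual rule of thumb:
    above 50 signals expansion, below 50 contraction."""
    if pmi > 50:
        return "expansion"
    if pmi < 50:
        return "contraction"
    return "unchanged"

# The readings quoted above:
print(pmi_signal(52.1))  # June 2010 -> "expansion"
print(pmi_signal(53.9))  # May 2010  -> "expansion"
```

Both 2010 readings sit above 50, which is why a falling index could still be reported as continued, if slower, expansion.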
The economy would maintain a rather fast growth rate, and if there was a slight drop in growth numbers, and a [comparatively] strong one in PMI, this was only showing that imported inflation pressure was easing (在扣除季节性影响之后,回落势头并不明显,预计未来经济将继续保持较快增长,但增长水平或将略有下降,这其中,新购入价格指数大幅下降,表明输入型通胀压力得到有效缓解). One news agency, two victorious but somewhat contradictory messages: so, how effective are the government’s measures to keep economic growth sufficiently cool? Inflation, even if not imported any more, edged higher in May, exceeding the official target of 3 percent for the year, amid some initial signs that the world’s major developing economy’s investment has slowed. The People’s Bank of China (PBOC) puts it this way: China’s economy is very likely to maintain steady and rapid growth in 2010, with more positive factors than last year boosting the economy, but the nation’s economy still faces a complex domestic and international situation. If the Chinese government’s expectations are as vague as their macroeconomic tools, they can’t possibly be caught on the wrong foot. The CCP rules China on many different levels, and the cadres’ tools amount both to taking some advice from economists and to taking very different governmental views – from one central and many local perspectives – on what would be desirable goals in terms of growth numbers and their composition – and on what kind of results should be expected from the tools applied respectively. But while no results may come unexpectedly under these conditions, they can be rather undesirable – both for the central and the provincial governments. While the central government’s budget looks fairly balanced, except for the past year or two, the provincial governments’ finances are a completely different story. The Chinese stimulus programs were decided in Beijing, at least nominally.
But the implementation was arguably a provincial affair, with the provinces, or more specifically the provincial-government-owned investment companies, generating the flow of money – and incurring the corresponding public debt. In March, Northwestern University’s Victor Shih told journalists in Beijing that as of November 2008, some 8,000 local investment companies took loans of at least 11 trillion – seven times the total revenues of local governments. The central government was facing a choice between quickly issuing restrictions on the flow of money, or letting bad loans and inflation spread. “China’s leadership may prefer to let inflation rise, and to continue to make the banks lend money. They may not even wish to allow a healthy contraction, before determining the next generation of party leaders.” If the foreign reserves Beijing has accumulated during the past decades are to play a role in recapitalizing China’s banks, there is certainly no open talk about it yet. And interestingly, party and state chairman Hu Jintao (胡锦涛), during the G20 summit in Toronto last week, seemed to join Timothy Geithner, Lawrence Summers, and other US economists and politicians in urging a cautious – if any – exit from the stimulus programs launched since the beginning of the global financial crisis in 2008: “We must act in a cautious and appropriate way concerning the timing, pace and intensity of an exit from the economic stimulus packages and consolidate the momentum of recovery of the world economy”, The Herald Sun quotes Hu.

The Economic Cooperation Framework Agreement signed by Taiwan and China in Chongqing yesterday is a serious threat, Singapore’s Beijing-leaning United Morning News (联合早报) quotes South Korean media and experts.
The Korea International Trade Association (KITA) released an “After-ECFA Response Program” on June 29, pointing out that tariff reductions on more than 500 Taiwanese products, among them machinery, petrochemicals, and automotive spare parts with a value of about twelve billion US dollars, were a big blow to South Korean exporters. Apparently in cooperation with the Korea Institute for International Economic Policy (KIEP), the response program finds that among the twenty top products exported to China by Taiwanese and South Korean companies – liquid crystal displays, petrochemicals, semiconductors, and office equipment – there are fourteen items which rank high both in Taiwan’s and South Korea’s exports to China. The preferential treatment of Taiwanese products would immediately weaken South Korean competitiveness, United Morning News quotes KITA. East Asia Daily (this name apparently refers to donga ilbo, a South Korean paper which also runs an English, a Japanese and a Chinese language edition) is quoted by United Morning News as commenting that, facing Taiwan taking away the Chinese market, South Korea should sign a free-trade agreement with China. Also, as relations with Taiwan had been distant since South Korea’s establishment of diplomatic relations with China in 1992, South Korea should, by improving relations with Taiwan, seek a common approach with Taiwan to enter the Chinese market. Taiwan News writes that it should come as no surprise that the country most impacted by changes in cross-strait relations is Japan, which is seriously concerned that any excessive “leaning to one side” by Taiwan toward the PRC will tilt the balance of power in East Asia in Beijing’s favor.
[…] In particular, Japanese analysts are concerned that the reversal of the previous administration of the Taiwan-centric Democratic Progressive Party’s pro-Japan and anti-PRC stance toward the restored KMT government’s adoption of a “pro-China and anti-Japan” stance could have serious implications for Japan’s substantive interests in the Taiwan Strait and may add weight to the “China factor” in Tokyo’s policy-making regarding Taiwan. Even if Chen should be misquoted here, this statement certainly reflects Beijing’s position. And it may reflect an irreversible trend of Taiwan moving into China’s orbit. But this isn’t only up to China. So far, Japan’s, America’s, and probably everyone’s main concern seems to have been not to displease Beijing. ECFA should be read as a signal that letting Taiwan down would come at a price, just as well. Standing by some moral principles will be costly. But in the end, the costs of mere opportunism would be much greater.
1. Field of the Invention

The present invention relates to an electronic device containing plural modules in a housing which has plural inner surfaces. The present invention also relates to a method of reducing multi-path fading in a housing that has plural inner surfaces.

2. Description of Related Art

In an electronic device, boards contained in a housing have been generally connected to each other with wiring. Alternatively, the boards and/or circuit elements provided on the board have been connected to each other with a wiring pattern. In a case of using the wiring and/or the wiring pattern, it has been difficult to improve a transmission speed based on any interference between the items of wiring by spurious radiation, electromagnetic induction and the like as well as any variations in an amount of delay at respective items of wiring. If a space occupied with the wiring or wiring pattern is made small when downsizing the electronic device, large interference between the items of wiring may arise. According to an inner structure of the electronic device, a position where wiring is to be located may be set. This deteriorates flexibility in a design of the electronic device, so that it may be difficult to design the electronic device. Japanese Patent Application Publication No. 2004-220264 has disclosed an electronic device in which wireless communication can be performed without any wiring or the like to enable data transmission to be performed with a high speed.
30 September 2009

Saint Jerome, whose memorial we celebrate today, is perhaps best known for his translation of the Bible into Latin (the Vulgate) and for his celebrated line, "Ignorance of the Scriptures is ignorance of Christ." Jerome was a prolific letter writer, full of many great insights and quips. Here are just a few I have selected for you:

"Man's nature is such that truth tastes bitter and pleasant vices are esteemed" (Letter XL).

"Indeed it is dangerous to pass sentence on another's servant, and to speak evil of the upright is a thing not lightly to be excused" (Letter XLV).

"I often discoursed on the Scriptures to the best of my ability: study brought about familiarity, familiarity friendship, friendship confidence" (Letter XLV).

"...people are more ready to believe a tale which, though false, they hear with pleasure, and urge others to invent it if they have not done so already" (Letter XLV).

"Our opinion of you is like your opinion of us, and each in turn thinks the other insane" (Letter XLV).

"Let them know us [clergy] as comforters in their sorrows rather than as guests in their days of prosperity" (Letter LII).

"Change your love of necklaces and jewels and silk dresses to a desire for scriptural knowledge" (Letter LIV).

"The face is the mirror of the mind, and eyes without speaking confess the secrets of the heart" (Letter LIV).

"I groaned to hear his tale, and by silence expressed far more than I could with words" (Letter CXVII).

"Marriage is a raft for the shipwrecked, a remedy that may at least cure a bad beginning" (Letter CXVII).

"Nothing is happier than the Christian, for to him is promised the kingdom of heaven: nothing is more toil-worn, for every day he goes in danger of his life. Nothing is stronger than he is, for he triumphs over the devil: nothing is weaker, for he is conquered by the flesh" (Letter CXXV).
"If the merchants of this world undergo such pains to arrive at doubtful and passing riches, and after seeking them in the midst of dangers keep them at the risk of their lives, what should not Christ's merchant do who sells all he has to buy the pearl of great price, and with his whole substance buys a field that he may find therein a treasure which neither thief can dig up nor robber carry away" (Letter CXXV)?

There is a particular passage for which I am looking. If I find it, I will post it for you later tonight.

28 September 2009

Just a few moments ago one of my former students and soccer players tagged me in a note he posted on Facebook of an article he wrote for the student newspaper, The Bulldog Bark. He wrote:

I was heading back to my study hall, after going to put some homework away, when I decided I'd see if Fr. Daren was around to just hang out and talk with him. A few steps later I stopped, realizing that Fr. Daren was gone. No more random hellos in the hallway, no more chess club games, no more soccer practice with him as assistant coach! It might not have happened to you yet, but you'll probably "realize" that Papa D has left sometime soon, like when you see someone drinking a can of Dr. Pepper, go to eat at Buffalo Wild Wings, or when you go to church. No one wanted to give him up, but he had to go. Even though he's gone, the time he spent with us was the greatest! He was a great priest, and an even better friend. If you're ever feeling the "Papa D blues", there's always texting, phone calls, Facebook, and Virden is only 2 hours away if you have time to visit. We know he'll do well in Virden, and hopefully he'll come back to visit Effingham very often! Keep him in your prayers and send him a text to say good luck!

I don't know what to say. I do miss those kids. I suppose that's a fourth gift the Lord sent my way today.

Those insightful Dominicans have an excellent post on one of my favorite virtues: eutrapelia.
It's one of the reasons I still sneak away for soccer games, as I hope to do again tomorrow, and maybe even on Thursday, too. If you haven't read Hugo Rahner's book Man at Play: Or Did You Ever Practice Eutrapelia, now's a good time to do so.

This evening I had the pleasure of convoking again the former Parish Finance Council. After requesting their continued service - and receiving affirmative responses - I will be happy to reappoint each of the members (in the morning). I mention this because the Lord has sent two great gifts my way today. The first came in the morning in the form of an envelope from the Office of the Master of the Liturgical Celebrations of the Supreme Pontiff. In it was my ticket to help with the distribution of Holy Communion during the Mass of canonization of Blessed Damien of Molokai. Strange as it may seem, the meeting of the finance council is the second of these two great gifts. It is a group of people whose wisdom and guidance I will seek readily, and they have - in only one meeting - demonstrated their effectiveness. They are eager to help and have offered what is - in my judgment - very good counsel. And now I will close the day with a third good gift: a bowl of fresh pineapple.

When I was in Effingham one of the friars gave me a holy card that I recently rediscovered. It tells the brief story of the Servant of God Simon Van Ackeren, O.F.M., who entered the Franciscan Order in Teutopolis and died in Effingham:

The seventh child in a family of twelve children, Lawrence Van Ackeren was born at Humphrey, Nebraska, on February 17, 1918. Even as a boy he stood out by reason of his spirit of prayer and his love of our Lord in the Blessed Sacrament. After completing the grade school, he wanted to go to the Franciscan preparatory seminary at Oak Brook, Illinois; but he had such a hard time with his studies that he was told to finish high school first. In September, 1936, he was admitted to the preparatory seminary and joined the fourth-year students.
But by Christmas he realized that he did not have sufficient talent to pursue the required studies for the priesthood, and he applied for admission as a Franciscan lay brother. Toward the end of January, 1937, he was sent to St. Joseph Theological Seminary, Teutopolis, Illinois, and was invested as a Third Order Brother about a month later, receiving the name of Brother Simon. His ankle started to bother him about a year after he arrived at Teutopolis, and he began to walk with a slight limp. Soon afterwards, the limb became too painful and he could scarcely walk. He was taken to St. Anthony Hospital in nearby Effingham and received treatments for a month, but his ankle failed to respond. He returned to the seminary on crutches, and was permitted to make his profession as a Third Order brother on March 4, 1938. The next day he left for St. Louis to consult a specialist. After three weeks he came back, his ankle in a cast. The verdict was tuberculosis of the bone. Soon his general health began to fail. On the last day of April he went to the hospital in Effingham. There the doctors found that he had galloping consumption and gave him only a short time to live. Brother Simon's condition quickly grew worse, and he was anointed on the sixth day after his arrival at the hospital. The next few days his strength failed rapidly. About ten o'clock on the night of May 10, while the sister on night duty was with him, his innocent soul winged its way to heaven. Though he was only a Third Order brother for little more than a year, Brother Simon has gained a greater reputation as a saint and intercessor in heaven than any other deceased member of the Franciscan Province of the Sacred Heart. During his illness and suffering no one heard an impatient word escape his lips; and he never ceased praying. His sunny smile never wore off. 
His greatness consisted in doing the little things well - doing them with extraordinary and always cheerful willingness, fidelity, charity, patience, and piety. "Being made perfect in a short space, he fulfilled a long time" (Wisdom 4, 13). As it was Brother Simon's delight to help others in life, so he has continued to help others in a remarkable manner also after his death. Innumerable favors have been reported and attributed to his intercession. Strangely enough Brother Simon is gaining a growing reputation as a missionaries' broker and a helper in financial difficulties. Favors are reported also from sick persons who have gained health or alleviation from ill health through a novena made in his honor. If you have a special prayer request, why not ask Brother Simon for the help of his prayers? I will ask his assistance this morning for a student concerned about taking his religion test today. There is a novena asking Brother Simon's intercession: O Lord, in these days wherein souls are hungering for pleasure and devoured by greed, and refuse to renounce themselves to take up your Cross and follow you, you have raised in our midst Brother Simon, who during his short life kept his eyes on your passion and, responding to your call, gave himself to you. Touched with this excess of charity and spirit of renunciation in a world of ingratitude, you have deigned, O Lord, apparently as a sign of approval, to make him a champion of your Cross. We beseech you, O Lord, to make known the power of intercession reserved to your servant by hearing the prayers we are saying in union with his, and to grant us not only the petition of this novena, but also the grace to follow you, who are the Way, the Truth, and the Life. Amen. After this prayer, you are to pray five Our Fathers, five Hail Marys, and five Glory Bes, in honor of the Five Holy Wounds. And, while you are at it, please offer the prayer for his beatification: O, Jesus, you love the meek and humble of heart. 
Hear the prayers we offer you in honor of your humble servant, Brother Simon. Approve the cause of his beatification. Through his merits and intercession may we receive the favors we seek. We ask this in your name, Christ our Lord. Amen.

I woke this morning right about three o'clock to some buzzing or ringing sound that I could not immediately determine. I thought it might be the telephone in the bedroom I am in (which apparently wasn't plugged in until about seven o'clock yesterday evening). That was not it. Then I thought it might be a smoke alarm somewhere in the house. That was not it, either. When I returned upstairs I could tell the sound was coming only from the area of my bedroom, by the door and, after some pondering and checking of electronic things, inside the wall. It is apparently a cricket that is afraid neither of pounding on the wall nor of the television. I am at a loss as to what to do. It is so loud that if I moved to a guest room I would still hear the cricket loud and clear. Now I know why Saint Francis of Assisi told the cricket to stop singing. I tried; it is not listening to my request.

27 September 2009

Looking around the rectory I have realized that I am in drastic need of a new mop and broom, or a Swiffer wet or dry, or some other such item to be utilized in the cleaning of the kitchen and bathroom floors. What do you most recommend? I'd like something simple that also does a good and thorough job. If that means a regular mop and bucket, so be it.

25 September 2009

The Diocese of Springfield in Illinois was created in 1853 from the territory of the former Diocese of Alton, which was formerly the Diocese of Quincy. The See City was transferred with the changing modes of transportation; the coming of the railroad made Springfield a city easy to reach without relying on the mighty Mississippi River, on whose banks sit the cities of Quincy and Alton.
All of this is a set up for the news of a truly profound moment in the life of the Church of Springfield in Illinois to take place tomorrow in the former Cathedral of the Diocese of Alton, the church of Saints Peter and Paul, when Steven Thompson will be ordained to the Order of Deacons at the hands of His Excellency, the Most Reverend Victor Balke, Bishop of Crookston and a son of the Diocese. I ask your prayers for Steven and for the Diocese as Bishop Balke ordains him for service in the Church, with the view of Steven's ordination to the Priesthood of Jesus Christ. I know Steven to be a good man of prayer and look forward to ministering with him in the years ahead. His quality was reaffirmed for me at the recent clergy convocation when he and I spent one of the sessions in an excellent conversation. He will be a good and holy deacon and priest, with the help of your prayers. I regret that I will be unable to be present for his ordination; I will have to preside at a burial here in Virden at the time of his ordination. I cannot say that the past few days have been busy per se, but they have been full. I have spent them largely cleaning out drawers and closets in the house, sorting through various papers to see what is important and what is not, learning about the parish finances and unpacking (which seems to be least on the list). I have met several very friendly and helpful parishioners this week who have very generously offered to help me in whatever way they can. I would like to take them up on their offers, but am not quite sure what to have them do. I am delighted to have a Deacon who lives in the parish but is assigned to another parish north of here. He attends daily Mass here since he works in town and has been a great help to me; it's also very nice to have a deacon at daily Mass. He has served on the finance council in the past and helped me considerably Monday morning by running through various financial matters.
I am happy to say that I have a very capable and efficient secretary. She is new to the post so we are learning together. In many ways this is a blessing since we can organize the office together so both of us know where we put things. She works part-time as things in the parish are rather quiet. We have only 169 families here in Virden and only 127 families in Girard. The combined number of actual parishioners - at least according to the books - is 669. Wednesday morning the current and former secretaries and I sat down for a couple of hours to go through several things in the parish, from files to finances. I felt much more comfortable after our meeting. We also have a very faithful, dedicated and thorough sacristan at the parish who sets up for the Masses and various liturgical celebrations. He has worked as the sacristan for many years and knows just about all there is to know. Tuesday morning he took me on a lengthy tour of the parish complex and told me more information than I could retain. I will have to ask him several questions later on. Several years ago the parish began a period of twenty-four hours of Eucharistic Adoration following Mass on Wednesdays. I could not be more pleased to have this in the parish, especially considering our smaller size. I am confident that it will bring many rich blessings from the Lord. I have also made it back to Effingham this week for two soccer games, both of which, I am happy to say, the boys won. When I went to the game yesterday I also stopped in for a haircut. The woman I go to does a great job and I am not sure I want to try another person. A good haircut is not always easy to find. Life in Virden has certainly been an adjustment. The city has - according to the population sign when you enter town (which seems a bit high) - 3,500 citizens. The bank closes daily at 3:00 p.m. (which caught me quite by surprise Wednesday afternoon when I went to have my name put on the parish accounts).
There is no grocery store (though I hear one is being planned). I am not sure if we have a dry cleaners yet. We do not have a McDonald's (which does not bother me much), but we do have a Dairy Queen and a Star Hardee's. In terms of shopping, we have a Family Dollar (or a Dollar General, I do not remember which). Springfield is just twenty-five minutes north and that drive somehow does not now seem very long. Being from Quincy, where anything more than a seven minute drive is long, this feels very strange to say, but it is true. I suppose I have adapted well to my new surroundings. Virden is a quiet town and peaceful and I do not believe it has a stop light. Less than ten percent of the population has a bachelor's degree and fewer than three percent have advanced degrees. Here in Virden we are 150% more likely to have a tornado than the rest of the country, but we are 97% less likely to experience an earthquake than the rest of the country. Today I intend to finish settling into my office (I finally found my printer cable last night) and then work on the kitchen. Sometime today I will have to go to Springfield to pick up several supplies for the secretary and I also have to shop for groceries sometime. 21 September 2009 During the celebration of the Holy Mass, the Reverend Monsignor Carl A. Kemme, Administrator of the Diocese of Springfield in Illinois and a good friend, publicly installed me as Pastor of Sacred Heart parish in Virden and of St. Patrick parish in Girard. Many positive comments have been received on the simple but profound ceremony. I am deeply grateful for Msgr. Kemme's support, encouragement and prayers. His presence yesterday was a great help to me and to the parishioners. My favorite part of the ceremony involved placing my hand on the Book of the Gospels as I made the Oath of Fidelity. I'm happy to say the parishioners seem sincerely happy that I have been sent to them.
They have been most gracious and welcoming and I think we will grow well together in love of the Lord Jesus Christ. Apparently, the talk around town - even among some who aren't Catholic - is that the lights in the rectory are on. That's a good sign; my presence is noted, even if I sometimes leave town and forget to turn off a light or two :) Today has been a good, productive and informative day. After Mass I cleaned the sacristy a bit. After unpacking a bit more in the house, I met with a parishioner and member of the finance council to talk through the parish finances. That meeting was very good and helpful. Soon I'll convoke the pastoral council and set to work on what needs to be done, both temporal and spiritual. 18 September 2009 On Sunday Sacred Heart parish in Virden and St. Patrick parish in Girard will welcome the Diocesan Administrator, the Reverend Monsignor Carl A. Kemme, a native of Shumway. Although I took canonical possession of my parishes this past September 15th, the memorial of Our Lady of Sorrows, he will install me as Pastor in a public way. Monsignor Kemme will introduce me to the parish (though they've already briefly met me) and will hand me the keys to the church (I think). I believe Bishop Lucas' letter of appointment will be read, after which I will make the profession of faith: I, Daren J. Zehnle, with firm faith believe and profess everything that is contained in the symbol of faith: namely, I believe in one God, the Father, the Almighty, maker of heaven and earth, of all that is seen and unseen. I believe in one Lord, Jesus Christ, the only Son of God, eternally begotten of the Father, God from God, Light from Light, true God from true God, begotten not made, one in Being with the Father. Through him all things were made. For us men and for our salvation he came down from heaven: by the power of the Holy Spirit, he was born of the Virgin Mary, and became man.
For our sake he was crucified under Pontius Pilate; he suffered, died and was buried. On the third day he rose again in fulfillment of the Scriptures; he ascended into heaven and is seated at the right hand of the Father. He will come again in glory to judge the living and the dead, and his kingdom will have no end. I believe in the Holy Spirit, the Lord, the giver of life, who proceeds from the Father and the Son. With the Father and the Son he is worshipped and glorified. He has spoken through the prophets. I believe in one, holy, catholic and apostolic Church. I acknowledge one baptism for the forgiveness of sins. I look for the resurrection of the dead, and the life of the world to come. Amen. With firm faith I believe as well everything contained in God's word, written or handed down in tradition and proposed by the Church - whether in solemn judgment or in the ordinary and universal magisterium - as divinely revealed and calling for faith. I also firmly accept and hold each and every thing that is proposed by that same Church definitively with regard to teaching concerning faith or morals. What is more, I adhere with religious submission of will and intellect to the teachings which either the Roman Pontiff or the college of bishops enunciate when they exercise the authentic magisterium even if they proclaim those teachings in an act that is not definitive. I will proclaim the Gospel and Monsignor Kemme will preach, explaining the office of a pastor and the meaning of the rites. After his homily, I will renew the promises I made on the day I was ordained. Monsignor Kemme will address certain questions to me, to which I will respond affirmatively: My dear brother, in the presence of the people whom you are about to receive into your care, I ask you to renew the promises you made at your ordination.
Are you resolved that under the guidance of the Holy Spirit you will without fail live up to your responsibility to be the faithful co-worker of the order of bishops in shepherding the flock of the Lord? R/. I am. Are you resolved that in praise of God and for the sanctification of the Christian people you will celebrate the mysteries of Christ devoutly and faithfully, and in accord with the tradition of the Church? R/. I am. Are you resolved that in preaching the Gospel and teaching the Catholic faith you will worthily and wisely fulfill the ministry of God's word? R/. I am. Are you resolved that you will bind yourself ever more closely to Christ, the high priest who for us offered himself to the Father as a spotless victim, and that with Christ you will consecrate yourself to God for the salvation of your brothers and sisters? R/. I am. Do you promise respect and obedience to [the Diocesan Bishop and his successors]? R/. I do. May God who has begun this good work in you bring it to fulfillment. After I renew my promises, Monsignor Kemme may lead me around the church to the principal locations of the church: the chair, the tabernacle, the baptismal font and the confessional. At some point I will also place my hands upon the Book of the Gospels and renew the Oath of Fidelity: I, Daren J. Zehnle, on assuming the office of pastor of Sacred Heart Parish, Virden, Illinois, and of Saint Patrick Parish, Girard, Illinois, promise that I shall always preserve communion with the Catholic Church whether in the words I speak or in the way I act. With great care and fidelity I shall carry out the responsibilities by which I am bound in relation both to the universal church and to the particular church in which I am called to exercise my service according to the requirements of the law. In carrying out my charge, which is committed to me in the name of the church, I shall preserve the deposit of faith in its entirety, hand it on faithfully and make it shine forth.
As a result, whatsoever teachings are contrary I shall shun. I shall follow and foster the common discipline of the whole church and shall look after the observance of all ecclesiastical laws, especially those which are contained in the Code of Canon Law. With Christian obedience I shall associate myself with what is expressed by the holy shepherds as authentic doctors and teachers of the faith or established by them as the church's rulers. And I shall faithfully assist diocesan bishops so that apostolic activity, to be exercised by the mandate and in the name of the church, is carried out in the communion of the same church. May God help me in this way and the holy Gospels of God which I touch with my hands. Monsignor Kemme will be present at the 8:15 a.m. Mass in Virden and at the 10:00 a.m. Mass in Girard for this rite. I know some of my new parishioners are readers of this blog; would any of you like to take pictures for me? 17 September 2009 As you can probably tell by the recent lack of posts, this past week has been a busy blur. After soccer practice Friday evening about thirty of the high school students came to the rectory to help load my personal effects onto the trailers and into the trucks that would take me to Virden the next day. It took about three hours to get everything loaded, most likely simply because there were too many of us there to coordinate things well and I hadn't planned on so many (you'd think I might've learned after Monday's turnout...). Typically in such situations you end up, as it were, with too many chiefs and not enough Indians; we had too few of either but plenty of jesters. Toward the end it became a bit hectic, but I'm glad so many came to help. Saturday morning I celebrated my last Mass at St. Anthony's as Parochial Vicar. After Mass, one of the parishioners gave me a farewell gift of delicious chocolate chip cookies. That morning the soccer team played against Mater Dei high school at Bulldog Field.
The boys played the best game I've seen them play even though they lost 1-3. The boys came onto the field wearing wristbands made from athletic tape with an overlapping D and Z on them in imitation of my initials on a wax seal they found when helping to pack my things. I was very touched and knew then they intended to play that game for me. I was humbled and proud. Just before the game, they presented me with a ball bearing each of their signatures and jersey numbers. It was a most fitting gift. After the game we hopped into the vehicles and made our way to Virden after a mostly uneventful drive. Twelve students accompanied me and three others met us in Virden that evening. It took only one hour to unload the trucks and trailers, much to my surprise. I concelebrated Mass with Father Sperl at 5:30 and joined the parishioners afterwards for a potluck dinner to thank Father Sperl for his ministry over twenty-six years and to welcome me. The parishioners welcomed me very warmly and were very hospitable to the students who came to help me move. It was a good and relaxing evening and I look forward to meeting more of the parishioners in the coming days and weeks. The students and I returned to the rectory and continued unpacking for a bit before retiring for the night. I celebrated Mass at 8:15 Sunday morning and then continued unpacking. That day happened to be the parish's annual fried chicken dinner so my helpers and I attended the dinner and enjoyed a delicious meal.
I was really impressed with the food and the organization of the event. After more unpacking we left for Effingham so they could finish their homework. The priests of the Diocese gathered in Effingham this week for their annual convocation, which concluded this morning. I am now in the rectory in Virden and have spent a good part of the evening unpacking my library; I still have a way to go. In the morning I have to make an unplanned and quick return to Effingham to pick up some dry cleaning I forgot to pick up before I left this afternoon. I'll also pack up some Christmas decorations that I intended to pick up Sunday afternoon. In the afternoon I'll meet with my secretary and see where our conversation leads. Saturday I will be in Springfield teaching a class on the Creed for the lay ministry formation program and Sunday morning the Diocesan Administrator will come to install me as pastor of these two parishes. Hopefully next week will be a bit slower than this week. This past Monday afternoon the high school students did an excellent job packing up most of my things and placing them in the garage in preparation for the loading of the trucks that will take place this evening following soccer practice (and maybe a quick bite to eat). I still have a few things to finish packing this morning, but all is well underway. My furniture (a bedroom set, two bookcases, two chairs and a couple of side tables) will be loaded this evening, as well. In the morning I will celebrate my final Mass here as the Parochial Vicar. The soccer team has a match here later in the morning. After the game, we will hop in the trucks and make our way to Virden. Several of the students will be accompanying me to help unload the trucks and unpack the boxes to help me get settled in as quickly as possible; I'm one of those sorts that does not function well with clutter and needs to be at least somewhat settled in.
Yesterday and today I feel rather in a daze, with much to do but uncertain what should be done first and when. My mind and emotions are mixed on this last full day here in this parish. With the recent and delightful news that Fr. Leo Patalinghug of Grace Before Meals, whom I had the pleasure of meeting at World Youth Day 2008 in Sydney and who very kindly links to my blog, defeated the Iron Chef Bobby Flay in a recent episode of the Food Network's Throwdown with Bobby Flay, I can't help but offer a small reflection on food and the presence of God. Wednesday afternoon I was making a batch of the Roman speciality sauce all'amatriciana - very simple and wonderfully delicious - for the soccer team's pasta night that evening. I had been in the kitchen for some time preparing the sauce when our housekeeper came into the kitchen and said, "It smells delicious in here!" I was a bit struck by her words because at the time I did not notice the smell; I had simply grown accustomed to it and noticed it no longer. To remedy this I went upstairs for a few moments and when I returned to the kitchen the delicious scent could not be missed. She was right. Pancetta, garlic, tomatoes, salt and pepper: what's not to like? Prayer and our recognition of the presence of God in our lives are often like this. Sometimes we grow "used" to God's presence, we grow "used" to the "routine" of prayer and do not notice its effects, until we step outside of God's presence or stop praying. Then, once we enter back in, we realize what we had all the while but did not notice. Let each of us, then, not stay out of the kitchen, but hop right in. 07 September 2009 As the chaos of the past week and a half comes - thankfully - to an end, the chaos of the next few days begins. I am glad to report that I have been able to rest the past couple of days and have happily slept through the last two nights, something I hadn't done in about a week.
After celebrating a funeral Mass late this morning, I set to work for an afternoon of packing my belongings in preparation for my move to Virden this Saturday. About fifteen of the high school students came to help. I was a bit surprised by the number of them, and very grateful, especially considering most of them stayed for the four hours I had planned to use for packing. We started off really well and kept basically organized, but there were only about six of them at first. Some of them I put to work in my office and packed it they did. They packed more of it than I intended, but all is well and will save me more work later. The others I set to work in my library and we now have that nearly finished. As the afternoon moved along and more students came to help, we became more and more disorganized. I really did not expect so many helpers all at the same time; I expected them to be coming and going throughout the day. I felt rather overwhelmed as several of them would ask me at the same time what else they could pack or sort. Their willingness to help either shows their affection for me, or their readiness to be rid of me ;) I still have to pack my electronic equipment (television, stereo, computer, etc.) and clothes, and a few other odds and ends that I'm not quite sure how to pack. I also have a "junk drawer" or two to sort through (don't we all?) and a closet to go through that has things in it I'm not sure I've set eyes on since I arrived here four years ago. All of the packed boxes have been moved to the garage and are ready to be loaded onto the trucks and trailers Friday afternoon. I'm amazed at the generosity of the students. Of all things they could have been doing on a beautiful afternoon free of school, they chose to spend it helping me pack. I'm also amazed at the speed with which they can work, when they put their mind to it. Tomorrow morning I will drive to Springfield to concelebrate the funeral Mass for my Pastor's father.
Afterwards I will return to Effingham for a bit of packing and a soccer game. I'm not sure how much blogging will be done during the remainder of the week. If I don't post much, know that it is because I'm packing and saying farewell and not because I'm abandoning the blog. 05 September 2009 ...or just what the Dr ordered (a little tribute to Rocky and Bullwinkle). Last night around eighty high school students turned out for the Bring Your Own Dr Pepper party thrown as a farewell bash. I had more fun last night than I've had all week. It was a much needed night. I don't believe I've ever seen quite as many empty Dr Pepper cans in one place before and I've certainly never played Apples to Apples with so many people all at once. The party began a little after 7:00 p.m. and ended about 10:30 p.m.; many of the students had sporting events early this morning and I couldn't have stayed up any longer if I tried. Those three hours reminded me that even though there is much on my plate, as it were, right now, I can only do one thing at a time. Over the course of the week I failed to carve out time simply for fun, though I daresay I've no idea how I could have fit it in anywhere if I had tried. Pliny the Younger once said: As in my life, so in my studies, I consider it most fitting for a true man to mingle a mild and cheerful spirit with my more serious mood, so that seriousness should not fall away into melancholy nor jest into mere license. Guided by this principle, I now and then interrupt my more serious work with jollity and play. I neglected jollity and play and began to feel quite overwhelmed. It is a lesson I hope I don't soon forget. So, I'm living life right now one Mass at a time. I've celebrated two Masses already today and will celebrate one more in a couple of hours. Tomorrow I will celebrate three Masses and Monday it looks as though I will celebrate two Masses, as well as on Tuesday (though I'm not concerned about Tuesday).
If you've done the math, you'll see that over the span of three days (Saturday, Sunday and Monday), I will celebrate eight Masses (one Saturday and one Monday Mass, two funerals, three Sunday Masses and one memorial Mass on Sunday night). We announce with profound regret the death Friday, September 3, 2009, in Effingham, Illinois of: Mr. Leo Joseph Enlow, Sr. Father of the Reverend Monsignor Leo Enlow, pastor of St. Anthony of Padua Parish, Effingham, Illinois. Visitation will be at Staab Funeral Home, 1109 South 5th Street, Springfield, Illinois, on Monday, September 7, 2009 from 4:00 p.m. - 7:00 p.m. A prayer service will be at 3:30 p.m. The Concelebrated Mass of Christian Burial will be at Blessed Sacrament Church, 1725 South Walnut Street, Springfield, Illinois, on Tuesday, September 8, 2009 at 11:00 a.m. Priests wishing to concelebrate should be present by 10:30 a.m. 04 September 2009 This morning my day began at 6:40 a.m. with the ringing of the telephone. "Good morning, St. Anthony's," I answered rather groggily, fearing news of yet another funeral. The voice on the other end asked, "Is anyone in the office yet?" "No," I answered, with no small trace of irritation and frustration - and even a bit of anger - in my voice. "It isn't even seven o'clock." "Okay," came the reply, and the caller hung up. Thus I expected nothing less than an absolutely miserable day, and the first few hours of the day did not disappoint this expectation, leaving me to wonder what else could go wrong in one day. But thanks be to God the day improved remarkably once I made my First Friday visits. One of the women I visit is a dear friend and a tremendous woman of faith and prayer. As we said our goodbyes she reached to the table by her chair and handed me a small prayer book. As she did so she told me she wanted me to have it and opened it to a page with two signatures. When I read the signatures I was left quite speechless and unbelievably grateful.
The first of the signatures reads, "+ John Cardinal Cody, Archbishop of Chicago." The second reads, "All for Jesus M Teresa, MC" Many years back she attended a conference at which Blessed Teresa of Calcutta spoke. She asked the then-living saint to sign her prayerbook and she did. The day can now only get better. I will soon try to finish the first of three funeral homilies and then have the oil changed in my car before soccer practice. After soccer practice some of the players and I will go out for dinner before a long awaited party. When I first announced my transfer, one of the senior soccer players told me he was going to have a BYODP (Bring Your Own Dr Pepper) party for me at his house and we would play Apples to Apples and other such games and have a great time. His parents consented and he is hosting it tonight; many of the high school students plan to attend as a farewell bash. It should be a hoot. 03 September 2009 This afternoon about 3:10 p.m. my pastor's father fell asleep in Christ surrounded by his children. I was privileged to have returned to the rectory just before he died and was with him and his family at the end. He was a good man, filled with much love and humor, and will be very much missed. This morning I learned of the unexpected death of another of our parishioners, who will likely be buried on Monday or Tuesday. She was one of those I visited on First Fridays. Please keep her, and her family, in your prayers. I told the Pastor of her death and mentioned that I thought next week would be as bad as this week. He replied, "It can't be." I hope he's right. I'm not sure who the patron saint is of overly stressed and exhausted clergy, but I suspect Saint John Vianney will certainly be happy to intercede for us. Please ask him to pray for us. Pray that we remain calm and are filled with the strength to see to our duties and to provide pastoral care for our people.
Pray that my pastor can be of comfort to his siblings as they grieve for their father, who remains with us this morning. Pray that I will be able to pack my belongings by Friday afternoon. Throughout the course of this month, please remember these intentions of the Holy Father Pope Benedict XVI: General: That the word of God may be better known, welcomed and lived as the source of freedom and joy. Mission: That Christians in Laos, Cambodia and Myanmar, who often meet with great difficulties, may not be discouraged from announcing the Gospel to their brothers, trusting in the strength of the Holy Spirit. Lord Jesus, present in the Most Blessed Sacrament, and living perpetually among us through Your Priests, grant that the words of Your Priests may be only Your words, that their gestures be only Your gestures, and that their lives be a true reflection of Your life. Grant that they may be men who speak to God on behalf of His people, and speak to His people of God. Grant that they be courageous in service, serving the Church as she asks to be served. Grant that they may be men who witness to eternity in our time, travelling on the paths of history in Your steps, and doing good for all. Grant that they may be faithful to their commitments, zealous in their vocation and mission, clear mirrors of their own identity, and living the joy of the gift they have received. We pray that Your Holy Mother, Mary, present throughout Your life, may be ever present in the life of Your Priests. Amen. 02 September 2009 Sunday afternoon the parish hosted an open house for me to allow the parishioners to wish me well before things became too chaotic (for which I am very grateful now). The above picture was taken before the last 9:15 a.m. Sunday Mass I celebrated in the parish. The lector and I must have just exchanged a humorous comment when the picture was taken. It was a very touching afternoon, filled with many tears and much laughter. I will miss this parish immensely.
You can see a few pictures from the reception here. With the preaching of my farewell homily at two Masses prior to the farewell reception, the day was very emotional and quite exhausting. The day ended with dinner with two families followed by the beginning of my packing. Three of the high school students came by to help begin packing my library. After an hour and a half of packing, we were almost halfway through my books. Another group of students and I worked a bit more on the books last night and tonight, and are now almost finished with my books; I think we have only the ones in my office left. The books are requiring a lot more boxes than I anticipated. The students have proven both good company and good helpers, and for their generous help I am very grateful. As all of this was happening, the health of my pastor's father continued to decline. Today he was placed in hospice care and is with us in the rectory, together with my pastor's immediate family, who have been with us for several days now. Please keep his father, and all of them, in your prayers. His family is a delight and it is always good to have them here, even in these difficult days. I taught my pastor's classes at the high school Monday and Tuesday and had a funeral yesterday, as well. He has a funeral tomorrow and I have another funeral on Saturday while he has two weddings that day. I wish that there was more I could do to ease his schedule, but I'm not sure what else I can do, since I have my own pressing duties to attend to. You might be able to guess that these past few days have been filled with much chaos, as the exceptional circumstances are tended to at the same time as the usual daily work. Consequently, we are both very tired and, at least for me, a bit stressed. I've slept precious little the past three nights and have had only a few moments of "down time" each day. I'm in need of a holiday, but cannot possibly take one. Please keep me also in your prayers. 
I leave the parish next week Saturday and hope to be able to find time to pack up my belongings before then... At the rate things are going that seems less and less likely, but it must be done. On a happier note, a much awaited package arrived for me on Monday from the land of rainbows: Last night we had our first non-tournament soccer match of the season. I'm delighted to say that the Vandalia Vandals fell to the St. Anthony Bulldogs on Bulldog Field. Well done, boys! Our coach wrote up the following for the local newspaper: The St. Anthony High School Soccer Team hosted the Vandalia High School Vandals in Varsity and Junior Varsity soccer matches. In the Varsity game the Bulldogs opened scoring when Riley Westendorf passed to Doug Field who then passed it back to Westendorf who placed an outside shot into the corner of the goal. St. Anthony led 1 - 0 until about the 15:00 minute mark when Doug Field placed a "direct free kick" past the waiting Vandalia goal keeper. The free kick was awarded after a Vandalia player pushed off a St. Anthony player. The goal resulted in a 2 - 0 lead for St. Anthony. Riley Westendorf scored again on a cross from Michael Kabbes when the ball deflected off of a Vandalia player. Vandalia then posted their first goal when a St. Anthony player accidentally stopped the ball with his hand inside their penalty box. When a team commits a foul or hand-ball inside their own penalty box, a penalty kick is awarded to the opposing team. The ball was placed 12 yards from the goal and it was one vs one, Codey Norris (Vandalia) vs Gary Hanner (St. Anthony). Codey Norris shot a driving shot to Gary Hanner's right side which was too much for the senior goal keeper. St. Anthony later added three more goals from John Kay, Michael Kabbes, and Aaron Wall to end the match St. Anthony 6, Vandalia 1. The Junior Varsity Match was an evenly matched bout between the same schools. St.
Anthony was able to squeak out another victory for the JV squad, led by sophomore Ryan Willenborg and junior Hayden Esker. St. Anthony's next match is this Thursday at 4:30 (Varsity) and 6:00 (Junior Varsity) against East Richland. Come out and support the 2009 St. Anthony Soccer Team. Please, Lord, a saint! About Me Father Daren J. Zehnle, J.C.L., K.C.H.S., a priest of the Diocese of Springfield in Illinois, serves as Pastor of St. Augustine Parish, Ashland; Director of the Office for Divine Worship and the Catechumenate; Adjutant Judicial Vicar; and as Diocesan Judge for the Diocesan Tribunal. | Mid | [
0.5789473684210521,
33,
24
] |
Determination of the aromatic hydrocarbon to total hydrocarbon ratio of mineral oil in commercial lubricants. A method was developed to determine the aromatic hydrocarbon to total hydrocarbon ratio of mineral oil in commercial lubricants; a survey was also conducted of commercial lubricants. Hydrocarbons in lubricants were separated from the matrix components of lubricants using a silica gel solid phase extraction (SPE) column. Normal-phase liquid chromatography (NPLC) coupled with an evaporative light-scattering detector (ELSD) was used to determine the aromatic hydrocarbon to total hydrocarbon ratio. Size exclusion chromatography (SEC) coupled with a diode array detector (DAD) and a refractive index detector (RID) was used to estimate carbon numbers and the presence of aromatic hydrocarbons, which supplemented the results obtained by NPLC/ELSD. Aromatic hydrocarbons were not detected in 12 lubricants specified for use for incidental food contact, but were detected in 13 out of 22 lubricants non-specified for incidental food contact at a ratio up to 18%. They were also detected in 10 out of 12 lubricants collected at food factories at a ratio up to 13%. The centre carbon numbers of hydrocarbons in commercial lubricants were estimated to be between C16 and C50. | Mid | [
0.623853211009174,
34,
20.5
] |
/* * Copyright (C) 2008 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.example.android.apis.app; import com.example.android.apis.R; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; public class Intents extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.intents); // Watch for button clicks. Button button = (Button)findViewById(R.id.get_music); button.setOnClickListener(mGetMusicListener); } private OnClickListener mGetMusicListener = new OnClickListener() { public void onClick(View v) { Intent intent = new Intent(Intent.ACTION_GET_CONTENT); intent.setType("audio/*"); startActivity(Intent.createChooser(intent, "Select music")); } }; } | Low | [
0.5235955056179771,
29.125,
26.5
] |
###### Summary box

- Communities are often poorly involved in the planning and implementation of interventions, yet their commitment is fundamental to control outbreaks in all the phases.
- African countries are responding to the COVID-19 pandemic with measures such as restrictions of movement of people, home confinements and states of emergency such as total or partial lockdowns.
- But structural challenges and vulnerabilities of health systems and the well-being of people challenge the acceptance of and compliance with this package of measures.
- Lessons learnt from responding to Ebola outbreaks in Africa (2014--2016 and 2018--2020) can strengthen community engagement to enhance the community ownership of the COVID-19 pandemic response.
- We present 10 lessons learnt from responding to Ebola that African countries should quickly adapt in their response to the COVID-19 pandemic, namely:
  - involve social scientists early in the response;
  - mobilise family leaders for surveillance, case detection, contact identification and follow-up and quarantine;
  - treat contacts with dignity and the empathy they deserve;
  - communicate laboratory results promptly;
  - care for the severely ill, while maintaining family connections;
  - prevent stigmatisation of people and the families of those who recover;
  - recruit local staff in the response and involve local people to build response structures;
  - mobilise and involve resistant communities in the response to overcome dissent;
  - involve grass-roots leaders in the preparation and implementation of response measures;
  - mobilise media players, including social media networks.

###### Summary box

- Health actors, community leaders and communities must co-construct options for COVID-19 response that are acceptable and feasible, and that foster commitment of affected communities.
- This approach calls for an urgent paradigm shift from a predominantly biomedical approach to outbreak response to one that balances biomedical and social science approaches. Introduction {#s1} ============ During public health emergencies, such as the current COVID-19 Public Health Emergency of International Concern, communities are often poorly involved in the planning and implementation of interventions, yet their commitment is fundamental to control outbreaks. African countries are responding to the COVID-19 pandemic with restrictive public health measures such as states of emergency and either total or partial lockdowns. All the countries share similar structural challenges and vulnerabilities, including and not limited to weak health systems, an informal economy, with more than half the population 'making do' or 'getting by day by day' and living from hand to mouth. These vulnerabilities challenge the acceptance and compliance of the package of restrictive health measures. The structural weakness of health systems in Africa means that few critically ill patients will have access to medical care in intensive care units and the kind of medical technology available in these facilities. Preventing spread of infection is essential. As a result, reduced social interactions and increased physical distancing are a central part of many public health strategies and this requires co-constructing of solutions that are acceptable and feasible, and that foster commitment of affected communities. Lessons learnt from Ebola outbreak response in West Africa and most recently in the Democratic Republic of Congo have demonstrated that the co-construction of sociocultural solutions has fostered commitment of affected communities and has succeeded in enhancing community engagement and ownership of the response. 
Community engagement and co-construction are two complementary notions: the first being the end of a process, and the second being the method or steps to achieve a desired goal. Experiences of community engagement and co-construction during Ebola response have shown that when communities were involved in problem analysis and co-construction of solutions, they took ownership of the response interventions and committed to efforts to curb the epidemic. We summarise here, under 10 successful lessons learnt from Ebola, responses that can strengthen community engagement in the fight against the COVID-19 pandemic, and specifically with respect to compliance with state of emergency measures, including partial or total lockdowns. Lesson 1: involve social scientists early in the response {#s2} ========================================================= During emergency response, social science experts bring specific expertise in analysing the dynamics of actors and communities engaged in the response in their social, cultural, historical, political and economic contexts.[@R1] In this way, social scientists can build bridges or facilitate dialogue in challenging situations. Further, social scientists can facilitate the co-construction of culturally and epidemiologically appropriate solutions and redefine interventions for increased community ownership. In this way, response measures can account more fully for the human experience, and reduce potential for unintended additional suffering to communities, some of which may be destabilised by fear of disease, death and conflict prevention. There is often a misconception about the homogeneity of communities. Community engagement starts from the premise that community groups are heterogeneous and that the diversity of opinions and sociocultural perspectives must account for acceptable solutions to be developed. Epidemics often reawaken old resentments and conflicts within and between communities. 
These conflicts can negatively affect the success of public health interventions and hamper their ownership by communities. To find mutually acceptable solutions, responders must account for the unique and varied perspectives of affected communities and be open to finding unarticulated and, at times, unexpected solutions. Lesson 2: mobilise family leaders for surveillance, early case detection, contact identification and follow-up, and quarantine {#s3} ============================================================================================================================== Early case detection, contact tracing, as well as contact quarantine require the commitment of families and community leaders; these interventions can themselves be 'violent and destabilizing' and reminiscent of police house arrest. Involving the head of the family, for example, who is the provider and responsible for protecting the family, ensures a quality interlocutor who has the power to mobilise family members. During Ebola, even in situations of extreme reluctance to follow up contacts, it was useful to mobilise a family leader to take on this task with his family. By using his duty to protect, he was able to follow up his family's contacts properly and with the trust of the surveillance teams. Lesson 3: treat contact persons with dignity and the empathy they deserve {#s4} ========================================================================= Contacts must be treated with dignity and not as 'contaminating subjects'. Regardless of their place in the social hierarchy, their change of status due to suspicion of disease puts their status or place in the family and/or community at risk. It is important to set up a mechanism to facilitate communication between the contacts in quarantine and their family, as well as access to quality psychosocial care provided by experts who speak their language. 
Quarantine facilities should be pleasant, ventilated and with play areas to account for small children, if possible. Moreover, it is important to ensure that meals for people in quarantine are better than those provided by their families, thus alleviating the traumatic experience of quarantine. Experience from previous epidemics highlights how attending to these aspects is critical to prevent escapes and promote acceptance of quarantine. If resources permit, it is advisable to provide some additional 'treats' such as drinks, chocolates, cookies and balloons for the children of those in quarantine. Lesson 4: communicate laboratory results promptly to the patient {#s5} ================================================================ The diagnosis of COVID-19, like that of Ebola, requires confirmation by a biological test---RT-PCR (a method of molecular biology)---which takes at least 4 hours to complete. Added to this is the time needed to transmit the results to experts, the authorities and finally to the patient. As a result, patients may only know the result of the test after 24 hours in urban areas and sometimes longer in rural areas. For the patient and family this waiting period is filled with uncertainty, causing disruption and anxiety. It is strongly recommended to establish a rapid process for communicating the results to doctors in the field to relieve the anxiety of the patients and their families and to initiate the protective public health actions very quickly. Lesson 5: care for the severely ill and maintain family connections {#s6} =================================================================== COVID-19 gives rise to a spectrum of illness with around 80% of patients experiencing mild to moderate illness. Those who become severely ill and who have access may receive hospital care. 
Hospitalisation of patients means transferring them from a familiar environment to a stressful environment; medical and paramedical personnel who provide care are strangers and wear personal protective equipment, such as goggles and surgical masks, and this can reinforce disorientation, anxiety and fear. There are multiple uncertainties facing both patients and their families, not least of which involves uncertainty regarding the progression of disease. Patients and their families need proactive, clear information about the hospital setting and what to expect. The way in which the physical environment is structured communicates a lot to patients and families. Ensuring a toilet is easily available, having dedicated waiting rooms with provision for young children and paying attention to privacy needs are small but important aspects. At an interpersonal level, patient-centred communication can help reduce anxiety and isolation. Getting updates from the patient beyond their clinical condition, encouraging them to get well, smiling behind the protective mask and speaking in the patient's language all contribute to providing reassurance and quality humane care for the hospitalised person. It is also helpful to keep the patient connected with relatives by allowing phone calls and safe visits of a selected family member where feasible. Lesson 6: prevent stigmatisation of people who recover and their families {#s7} ========================================================================= Fear of the disease often leads to stigmatisation and 'scapegoating' of patients and their families. Preventing stigma and acting to counter it helps reduce the negative effects of the epidemic on social cohesion. The mobilisation of psychologists at the beginning of the epidemic is an effective means of mitigation. The involvement of local authorities and leaders helps protect and support victims of stigma and reassure the community. 
In addition, there are endogenous reintegration mechanisms that are important to explore; these mechanisms are very useful outside crises to resolve community disputes, and restore peace and forgiveness. People who have recovered from COVID-19 also need the acceptance of their communities to prevent stigma. Lesson 7: recruit local staff in the response, including local people to build the structures of the response {#s8} ============================================================================================================= The management of a response is very resource intensive. For a population and especially for young people who are facing unemployment and whose socioeconomic demands are not always met, the response can be an opportunity to find jobs and relieve their suffering. During Ebola outbreak response, partners often recruit young people and women into the response services; for example, youth and women employed in the neighbourhoods where response structures (treatment units, points of control/points of entry) have been built. This has helped facilitate community acceptance of these new structures, preventing reluctance, vandalism and violence against the health teams. Lesson 8: mobilise the most resistant people in the response to overcome dissent {#s9} ================================================================================ Fear and frustration can provoke popular uprisings. However, as in any social movement, there are leaders who direct the hostilities. During Ebola, many uprisings, reticence and resistance were defused by recruiting these leaders into the response. They were thus able to control their own groups, ensure the security of teams and facilitate access to communities for public health activities. Young people can be involved in monitoring and securing their areas of residence. This would prevent risk taking, recklessness and vandalism. 
Lesson 9: involve grass-roots leaders in the preparation and implementation of response measures, including containment and emergency preparedness {#s10} ================================================================================================================================================== It is essential to be able to discuss the conditions and operationalisation of restrictive measures with community leaders, so that solutions can be co-constructed with the communities. Involving religious leaders may strengthen the spiritual tranquillity and to some extent, the predisposition to fight the disease as a spiritual battle. This tranquillity is very often sought among the supporters of socioreligious institutions, in localities considered sacred, depositories of mystical powers that can change the course of events, based on prayers and ritual sacrifices. Failing this, health measures such as a state of emergency and lockdowns can be considered to be in the sole interest of the authorities and political leaders. Some credible and influential community leaders are also very useful in managing rumours, misinformation and accountability in the face of unfulfilled promises by certain actors that can undermine community engagement. Lesson 10: mobilise media players and take social networks into account {#s11} ======================================================================= African populations in general remain closely linked to the traditional media (radio and television to a lesser extent). Treating media actors as partners in tackling pandemic challenges allows response actors to properly engage them with messages disseminated through their channels and appreciated by the communities. Involving the media as partners also provides access to their own social networks, because most people involved in the media are also heavy users of social networks. 
Finally, associating media actors and considering social networks enables the activation of the media communication monitoring function, which remains a challenge during public health emergencies. Conclusion {#s12} ========== Given the experience of responding to Ebola epidemics in Africa, it is imperative that communities take ownership of the response to COVID-19. Health actors and authorities must co-construct solutions to address COVID-19 with community leaders and communities. However, a 'one size fits all' approach to community engagement is likely to fail. Each community is unique, and engagement must be contextualised to the affected communities of each country. This engagement and cooperation with communities calls for an urgent change in the approach to health emergency response. All member states, health authorities and humanitarian actors are urgently called on to quickly move from a dominant biomedical design of public health emergency response to a public health design that balances biomedical paradigms with those of social sciences. The authors thank Dr Nina Gobat and Ms Maria Caterina Ciampi for reviewing the manuscript. **Handling editor:** Seye Abimbola **Twitter:** @AnokoJulienne, @MR_Belizaire **Contributors:** JNA, BRB, ABD and BD compiled the Ebola response lessons learnt. ABD, MRB, MK and MHD reviewed the concept. JNA, MYN, ZY, ISF and AT reviewed the concept and tailored it to the COVID-19 response. JNA wrote the first draft and AT extensively reviewed the draft. All authors have reviewed and approved the final manuscript. **Funding:** The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. **Competing interests:** None declared. **Patient consent for publication:** Not required. **Provenance and peer review:** Not commissioned; internally peer reviewed. **Data availability statement:** No additional data are available. | Mid | [
0.631313131313131,
31.25,
18.25
] |
// // Generated by class-dump 3.5 (64 bit) (Debug version compiled Oct 15 2018 10:31:50). // // class-dump is Copyright (C) 1997-1998, 2000-2001, 2004-2015 by Steve Nygard. // #import <objc/NSObject.h> #import <CoreLocation/NSCopying-Protocol.h> #import <CoreLocation/NSSecureCoding-Protocol.h> @class NSArray; @interface _CLLSLHeadingEstimation : NSObject <NSCopying, NSSecureCoding> { int _status; NSArray *_headings; } + (BOOL)supportsSecureCoding; @property(copy, nonatomic) NSArray *headings; // @synthesize headings=_headings; @property(nonatomic) int status; // @synthesize status=_status; - (id)descriptionWithMemberIndent:(id)arg1 endIndent:(id)arg2; - (id)description; - (void)encodeWithCoder:(id)arg1; - (id)initWithCoder:(id)arg1; - (id)copyWithZone:(struct _NSZone *)arg1; - (void)dealloc; @end | Low | [
0.505446623093681,
29,
28.375
] |
Month: August 2012 What an inspirational and energizing Republican National Convention! While I was watching, I was wondering how many surprising connections I might find in the Family Forest®. Mitt Romney and many of his ancestors have been extensively networked in the Family Forest® for years, so I wasn’t particularly surprised to see his connections to Clint Eastwood (they are ninth cousins, through their Lathrop/Lothrop ancestors) or to a local topic of interest (Commodore Plant and the Save-The-Biltmore project). I was surprised to see how closely Ann Romney is connected through family ties to the most unexpected presence at the convention, the American Red Cross. When I queried the Family Forest®, within seconds I discovered that the woman who stole the show at the convention, the woman who our great nation desperately needs to be our next First Lady, is a fifth cousin four times removed of The Angel of the Battlefield, Clara Barton, the founder of the American Red Cross. Maybe there’s some great nationally-beneficial synergism to be tapped into from this connection? | Mid | [
0.560747663551401,
30,
23.5
] |
Bodies of girl, youth recovered in Yamuna river Five days after they went missing from their houses, a teenage girl and a 20-year-old youth were found dead with their hands tied in the Yamuna river in New Delhi on Sunday, police said. The bodies of the youth and the 16-year-old girl, both residents of Inderpuri in west Delhi, were fished out of the river in the Sunlight Colony area on Sunday morning, they said. The victims had been missing since April 28. Meanwhile, a suicide note has been recovered from the girl's house, stating that the victims, who considered themselves brother and sister, took the extreme step after their families did not approve of their relationship, police claimed. | Low | [
0.41247484909456705,
25.625,
36.5
] |
Activated carbon adsorption of PAHs from vegetable oil used in soil remediation. Vegetable oil has been proven to be advantageous as a non-toxic, cost-effective and biodegradable solvent to extract polycyclic aromatic hydrocarbons (PAHs) from contaminated soils for remediation purposes. The resulting vegetable oil contained PAHs and therefore required a method for subsequent removal of extracted PAHs and reuse of the oil in remediation processes. In this paper, activated carbon adsorption of PAHs from vegetable oil used in soil remediation was assessed to ascertain PAH contaminated oil regeneration. Vegetable oils, originating from lab scale remediation, with different PAH concentrations were examined to study the adsorption of PAHs on activated carbon. Batch adsorption tests were performed by shaking oil-activated carbon mixtures in flasks. Equilibrium data were fitted with the Langmuir and Freundlich isothermal models. Studies were also carried out using columns packed with activated carbon. In addition, the effects of initial PAH concentration and activated carbon dosage on sorption capacities were investigated. Results clearly revealed the effectiveness of using activated carbon as an adsorbent to remove PAHs from the vegetable oil. Adsorption equilibrium of PAHs on activated carbon from the vegetable oil was successfully evaluated by the Langmuir and Freundlich isotherms. The initial PAH concentrations and carbon dosage affected adsorption significantly. The results indicate that the reuse of vegetable oil was feasible. | High | [
0.6778042959427201,
35.5,
16.875
] |
Understanding how to reproduce crashes with Firebase Crashlytics Logs Hunt those bugs faster — Enhancing crash analysis Typical android crash stacktrace Today I discovered something very very useful. I'm not sure if I missed some Android Developers blogpost or tweet from Firebase, Google, etc., but the feature I'm going to show you is awesome. Long story short, it's all about getting the user navigation trace when an app was force closed because of a FATAL EXCEPTION (aka a crash). Great news to all Mobile Developers! Especially #android-dev Firebase Crashlytics is now allowing you to see all the screens the user went through right before a crash happened. Detailed log of where the user navigation went before the crash This has a bunch of useful information:
- the step number (#) from beginning to the end (crash)
- the timestamp, with seconds!
- the class name in your code that represents a particular screen

How to see this information on the Firebase Console (QA)
1. Open your project from the Console.
2. Select Crashlytics on the left panel (Stability category).
3. On the bottom half of the screen you'll see the list of issues. Click on one of the issues and open the detail.
4. You'll be presented with the STACK TRACE tab opened. Switch to the LOGS tab.
5. Click on the black arrow near every screen_view log entry to enlarge it and see which screen that is.

Done! Now you can easily catch the bug! How to make this work in code? (DEVELOPER) As I understand, you'll need to have the latest Firebase Analytics and Crashlytics in your app. That's all you need to do. After you integrate these two SDKs in the project, the screen_views automatically collected by Analytics are reported, again automatically, to Crashlytics. 
Here's a minimum Android project configuration. Project/build.gradle:

    task wrapper(type: org.gradle.api.tasks.wrapper.Wrapper) {
        gradleVersion = '4.1'
    }
    buildscript {
        repositories {
            maven { url 'https://maven.fabric.io/public' }
            google()
        }
        dependencies {
            classpath 'com.android.tools.build:gradle:3.0.1'
            classpath 'com.google.gms:google-services:3.1.2'
            classpath 'io.fabric.tools:gradle:1.25.1'
        }
    }

app/build.gradle:

    apply plugin: 'com.android.application'
    apply plugin: 'io.fabric'
    android {
        compileSdkVersion 27
        buildToolsVersion '27.0.3'
        defaultConfig {
            applicationId "com.your.app"
            versionCode 1
            versionName "1"
            minSdkVersion 16
            targetSdkVersion 27
        }
    }
    dependencies {
        compile 'com.google.firebase:firebase-core:11.8.0'
        compile('com.crashlytics.sdk.android:crashlytics:2.9.1@aar') {
            transitive = true
        }
    }
    apply plugin: 'com.google.gms.google-services'

IMPORTANT! Use the latest versions. Otherwise you may not see those logs. I know for sure that when Firebase Crashlytics was first announced there was no such feature. And when I updated the versions, I started seeing those logs. Enhance logs even more I haven't tried, but you may also have the ability to track user events, not only screen_views. With this you can have the full story. Just imagine: SplashScreen -> Opened 3rd element from main grid -> DetailScreen -> Rotate event -> Popup shown -> Click on rate me -> CRASH. UPDATE #1 - Fragments: In order to make Fragments appear in those Logs, add this line to your BaseFragment#onResume(), for example:

    firebaseAnalytics.setCurrentScreen(activity, this.getClass().getSimpleName(), this.getClass().getSimpleName());

How this was solved before Previously I had to implement a homemade solution to this problem. So I had a static array of strings in Application. Then the BaseActivity was adding some lifecycle events to that array. And all this was reported to Crashlytics via a log call. And the result is this: Pretty ugly, huh? 
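That homemade breadcrumb approach can be sketched in plain Java roughly like this; the Breadcrumbs class and its method names are my own illustration, not part of any Firebase or Android API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical breadcrumb buffer: a bounded trail of screen/lifecycle events
// that can be flushed into a single Crashlytics log entry. Names are
// illustrative only.
class Breadcrumbs {
    private static final int MAX_ENTRIES = 20;  // cap memory use
    private static final Deque<String> trail = new ArrayDeque<>();

    // Record one navigation/lifecycle event, e.g. "MainActivity#onResume".
    static synchronized void record(String event) {
        if (trail.size() == MAX_ENTRIES) {
            trail.removeFirst();  // drop the oldest entry
        }
        trail.addLast(event);
    }

    // Render the trail as one string, oldest to newest.
    static synchronized String dump() {
        return String.join(" -> ", trail);
    }
}
```

A BaseActivity would call Breadcrumbs.record(...) from its lifecycle callbacks, and a crash handler would forward Breadcrumbs.dump() to Crashlytics via its log call before the report is sent.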
Now, there are more sophisticated solutions, like this one using Model View Intent architecture, to solve the problem. But now there's no need to change your architecture or implement a custom solution. Firebase is awesome! | High | [
0.7081081081081081,
32.75,
13.5
] |
-----Original Message----- [Mark Hall] From: Solberg, Geir Sent: Thursday, May 17, 2001 4:20 AM To: '[email protected]' Subject: Model FOR 5/16 | Low | [
0.48949579831932705,
29.125,
30.375
] |
dining for EVERYONE It's hard to see all there is to do in just one trip! ***Be sure to ask about our frequent stay program!*** Hip and historic, our boutique hotel in Jeffersonville is within walking distance of 20+ award-winning restaurants and craft beers loved by locals. No wonder the National Trust of Historic Preservation recognizes most of the homes & buildings in the Rose Hill Neighborhood as 'Century Homes', having been built 100 or more years ago! Just 1 mile to downtown Louisville, the Market Street Inn is a prime location for dining in Louisville and its surrounding area. 700 W Riverside Dr #300, Jeffersonville (1 block from the inn) Welcome to Bridge & Barrel, a loved-by-locals hideaway with a soft spot for barrel-aged cocktails and local flavors. The mood (and menu) is easy-does-it Southern, with an emphasis on craft brews and sippable elixirs in true Kentucky style. 707 West Riverside Drive, Jeffersonville (2 blocks from the inn) From classics like pot pie and meatloaf, to black angus steaks and burgers, Buckhead Mountain Grill is the go-to for scratch-made meals and riverside views. 131 West Chestnut, Jeffersonville (1 block from the inn) PEOPLE. PIZZA. PINTS. We serve 10 different New York-style pizzas. Parlour fits the bill for both, and tops it off with friendly, attentive service! 253 Spring St, Jeffersonville (2 blocks from the inn) Ramiro's Cantina explores the authentic blend of mexican traditions with the traditional local flavors. Our team has one thing in common – we are passionate about food! That is why our dishes are made with great care and attention to details. And we want to share this passion with you! Just come and try our food! 256 Spring Street, Jeffersonville (2 blocks from the inn) The Red Yeti opened in 2014. Our driving force is preparing hand crafted food and beer, and providing the best service in the industry, not just the area. 
Our executive chef, Michael Bowe, has designed every menu item to feature locally grown and sourced ingredients, and although presented as a fine dining item, are approachable as recognizable southern comfort food. 347 Spring St, Jeffersonville (2 blocks from the inn) G.A. Schimpff’s Confectionery is one of the oldest, continuously operated, family-owned candy businesses in the United States. It was opened in its present location in 1891 by Gustav Schimpff Sr. and Jr. Lunch, Soda Fountain, Candy store & Museum! | Mid | [
0.651629072681704,
32.5,
17.375
] |
2010 World Cup - interactive guide to the groups All the fixtures in all the groups, with profiles of every team and host city | High | [
0.6949384404924761,
31.75,
13.9375
] |
Tim Guldimann

MOSCOW, March 27. /ITAR-TASS/. Russian and OSCE diplomats have discussed measures to organize a special monitoring mission to Ukraine, the Russian Foreign Ministry says. During the meeting, “Russian Deputy Foreign Minister Alexei Meshkov and Ambassador Tim Guldimann, personal envoy of the Swiss OSCE Chairperson-in-Office for Ukraine, considered the OSCE role in stopping Ukrainian radicalism and nationalism, contributing to dialogue, normalizing the situation in the country and carrying out a constitutional reform to meet the interests of all regions in Ukraine”, the ministry says.

Earlier, the Organization for Security and Cooperation in Europe (OSCE) agreed to send an observer mission to Ukraine after two weeks of consultations. Russian President Vladimir Putin, who met UN Secretary-General Ban Ki-moon in Moscow last Thursday, March 20, said the UN chief asked about the possibility of sending OSCE and UN observer missions to Ukraine’s eastern and southeastern provinces.

“I would like you to continue discussing this matter with our partners and find a solution,” Putin told Russian Foreign Minister Sergey Lavrov.

“We have almost reached an agreement on the draft document with the OSCE Standing Council. We ensured that alongside the eastern and south-eastern parts of Ukraine, the list of regions to be covered by the OSCE observer missions would also include western and central regions which have seen very unpleasant incidents in recent months,” Lavrov said.

The Russian foreign minister added that the number of observers and the regions where they are going to be sent had been fixed. “Any change in these agreements concerning an increase in the number of observers or regions should be authorized by the OSCE Standing Council. We are going to proceed from your instructions,” Lavrov stressed.
Cytochrome b5 and a recombinant protein containing the cytochrome b5 hydrophobic domain spontaneously associate with the plasma membranes of cells.

Both cytochrome b5, isolated from rabbit liver microsomes, and LacZ:HP, a recombinant protein consisting of enzymatically active Escherichia coli beta-galactosidase coupled to the C-terminal membrane-anchoring hydrophobic domain of cytochrome b5, were shown to spontaneously associate with the plasma membranes of erythrocytes and 3T3 cells. Association was promoted by low pH values, but proceeded satisfactorily over several hours at physiological pH and temperature. About 150,000 cytochrome b5 molecules or 100,000 LacZ:HP molecules could be associated per erythrocyte. These proteins were not removed from the membrane by extensive washing, even at high ionic strength. After incubation with fluorescently labeled cytochrome b5 or LacZ:HP, cells displayed fluorescent membranes. The lateral mobility of fluorescently labeled cytochrome b5 and LacZ:HP was measured by photo-bleaching techniques. In the plasma membrane of erythrocytes and 3T3 cells, the apparent lateral diffusion coefficient D ranged from 1.0×10⁻⁹ to 8×10⁻⁹ cm² s⁻¹, with a mobile fraction M between 0.4 and 0.6. The lateral mobility of these proteins closely resembled that reported for lipid-anchored proteins and was much higher than that reported for Band 3, an erythrocyte membrane-spanning protein with a large cytoplasmic domain. These results suggest that the hydrophobic domain of cytochrome b5 could be employed as a universal, laterally mobile membrane anchor to associate a variety of diagnostically and therapeutically useful recombinant proteins with cells.
A simple method for determining the ligand affinity toward a zinc-enzyme model by using a TAMRA/TAMRA interaction.

Thiolate coordination to zinc(II) ions occurs widely in such functional biomolecules as zinc enzymes or zinc finger proteins. Here, we introduce a simple method for determining the affinity of ligands toward the zinc-enzyme active-center model tetramethylrhodamine (TAMRA)-labeled 1,4,7,10-tetraazacyclododecane (cyclen)-zinc(II) complex (TAMRA-ZnL). The 1:1 complexation of TAMRA-labeled cysteine (TAMRA-Cys) with TAMRA-ZnL (each at 2.5 μM), in which the TAMRA moieties approach one another closely, induces remarkable changes in the visible absorption and fluorescence spectra at pH 7.4 and 25 °C. The 1:1 complex formation constant (K = [thiolate-bound zinc(II) complex]/[uncomplexed TAMRA-ZnL][uncomplexed TAMRA-Cys], M⁻¹) was determined to be 10^6.7 M⁻¹ from a Job's plot of the absorbances at 552 nm. By a ligand-competition method with the 1:1 complexation equilibrium, analogous K values for thiol-containing ligands, such as N-acetyl-L-cysteine, L-glutathione, and N-acetyl-L-cysteinamide, were evaluated to have similar values of about 10^4 M⁻¹. As a result of the ligand affinities to TAMRA-ZnL, nonlabeled zinc(II)-cyclen induced remarkable stabilization of the reduced form of L-glutathione and a cysteine-containing enolase peptide to aerial oxidation in aqueous solution at pH 7.4 and 25 °C.
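To make the reported numbers concrete: for 1:1 binding with both partners at the same total concentration C0, the formation constant satisfies K = x / (C0 - x)^2, a quadratic in the complex concentration x. A small sketch of solving it, assuming K = 10^6.7 M⁻¹ and C0 = 2.5 μM as reported in the abstract (the solver itself is just the quadratic formula):

```python
import math

def fraction_complexed(K, C0):
    """Solve K = x / (C0 - x)^2 for x (1:1 binding, equal total
    concentrations C0 of both partners); return the bound fraction x/C0."""
    # Rearranged: K*x^2 - (2*K*C0 + 1)*x + K*C0^2 = 0; take the physical
    # root (the one with x <= C0).
    a, b, c = K, -(2 * K * C0 + 1), K * C0 ** 2
    x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return x / C0

K = 10 ** 6.7   # M^-1, formation constant reported in the abstract
C0 = 2.5e-6     # M, TAMRA-ZnL and TAMRA-Cys total concentrations
print(f"fraction complexed: {fraction_complexed(K, C0):.2f}")  # ~0.75
```

So at the stated concentrations roughly three quarters of each partner is in the complex, which is consistent with the "remarkable" spectral changes the abstract describes.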
Thursday, July 10, 2014

I have an idea that could greatly increase the production of electricity. But I do not have the tools to test it. I was thinking that maybe others with the necessary resources could test it for me. So, I was looking up how a fusion reactor works. I knew what fusion was, but I wanted to know specifically how you could put together a machine to artificially cause it to occur. I got something called the tokamak fusion reactor. There are many designs of course, and this design, like all others, seemed interesting, new, advanced, futuristic, and best of all, innovative. What of course baffled me was how were you going to transfer fusion power into electricity? Then I saw that the design of the reactor would make it so that the heat generated by the fusion process would turn water to steam, to then turn a turbine. But what's even more interesting is that EVERY other type of electrical generation eventually leads to turning a turbine.

Wind power = turbine
Hydroelectric dam = turbine
Nuclear fission = turbine
Burning coal = turbine
Nuclear fusion = turbine

I eventually learned how and why turning a turbine produces electricity, and that's because the turbine is connected to a magnet, which turns around a conductive metal rod. This produces electricity, I guess because it moves electrons?? Anyway my point here is that we have all of these innovative ways to produce electricity. Yet our energy transfer method has stayed the same since... well... the FIRST electrical generator. We are talking Edison and Tesla here. Nearly 200 years, it's ridiculous. The ONLY other method is very new technology that's still trying to be advanced, and that is of course, solar panels. I've learned over my short life (also due to my intuition), that when you see a lack of advancement in one area. Especially if the area is extremely vital. Yet everything else is advancing greatly around it (nuclear fission/fusion).
That usually means that one, people don't understand it. And two, there are possibly great advancements to be made there. So I started racking my brain as to a way to produce electricity in a new way. I didn't get very far... but I did get an idea. The idea still revolves around using a magnet that spins around a metal rod, but it goes a bit further. I realized that the magnet stays in one place, spinning in that exact same spot. I of course asked myself: Why? Does a magnet spinning around a piece of metal produce electricity? Well, what is electricity? Moving electrons. So, the magnet causes electrons to move? If so, I have come to the conclusion that instead of having the magnet spin in the exact same spot, wouldn't it be more effective to move it forwards as well? (Or whatever direction you want the electricity to go towards.) I mean when transferring heat to electricity via turbine, what you are really doing is transferring heat, to kinetic energy, to electrical energy. Which is basically just the kinetic energy of electrons. So my intuition instantly told me, wouldn't it work A LOT better if you actually moved the magnet as well? I tried thinking of multiple designs where these magnets move around. One design I came up with is to completely surround the metal rod in a hollow cylindrical magnet (how it's done now). Then figure out a way to have this metal rod be constantly surrounded by a forward-moving, possibly spinning, cylindrical magnet/magnets. My intuition tells me that you would exert greater directional kinetic force on the electrons, and thus produce more electricity this way. In the picture it shows a turbine generator, which as I pointed out above, is used for EVERY form of electricity generation besides solar. It's also nearly 200 year old technology. In this design, the magnet is stationary, and the metal rod spins in place. The one I described above instead has a magnet spinning around the metal rod.
But it doesn't matter, both methods produce electricity. Disregard the turbine while imagining this. My idea is to move the magnet along the metal rod in the direction you want the electricity to travel (example, forward); while the magnet is moving along the metal rod in that direction, the magnet or rod also spins, exerting greater directional kinetic energy onto the electrons, thus converting more kinetic energy into electric energy, and thus increasing the efficiency of the whole system. I would like to run a test, or have someone else run one, where you input the same amount of energy into these structurally different electricity-generating turbines, then measure the amount of electricity produced by each setup. You would of course have the control, which would be the standard magnet circling a metal rod, and then you would have my setup idea. The main thing is, if this idea succeeds, I think it could greatly increase our production of electricity. This could further help solve our energy problems. It also could be highly profitable, but I'm kinda giving up that possibility by posting this online.
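For readers who want the textbook version of why a spinning magnet makes electrons move: in the standard alternator model, Faraday's law says the induced EMF is the rate of change of magnetic flux, so a coil of N turns and area A rotating at angular speed ω in a uniform field B produces a peak voltage of N·B·A·ω. A quick sketch with made-up illustrative numbers (none of these values come from the post):

```python
import math

def peak_emf(turns, field_tesla, area_m2, rpm):
    """Peak EMF of a coil rotating in a uniform magnetic field.

    Faraday's law for flux Phi(t) = B*A*cos(omega*t) gives
    EMF_peak = N * B * A * omega.
    """
    omega = 2 * math.pi * rpm / 60.0  # convert rpm to rad/s
    return turns * field_tesla * area_m2 * omega

# Illustrative values only: 100 turns, 0.5 T field, 0.01 m^2 coil, 3600 rpm
print(f"peak EMF: {peak_emf(100, 0.5, 0.01, 3600):.1f} V")  # ~188.5 V
```

Note that only the relative motion between field and conductor enters the formula, which is why the conventional design keeps the whole assembly spinning in one place.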
Q: How to remove rows from a DataFrame where some columns only have zero values

I have the following Pandas DataFrame in Python:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[1, 2, 3, 4, 5, 6],
                            [11, 22, 33, 44, 55, 66],
                            [111, 222, 0, 0, 0, 0],
                            [1111, 0, 0, 0, 0, 0]]),
                  columns=['a', 'b', 'c', 'd', 'e', 'f'])

The DataFrame looks as the following in a table:

      a    b   c   d   e   f
0     1    2   3   4   5   6
1    11   22  33  44  55  66
2   111  222   0   0   0   0
3  1111    0   0   0   0   0

The original DataFrame is much bigger than this. As seen, some rows have zero values in some columns (c, d, e, f). I need to remove these rows from the DataFrame so that my new DataFrame will look as the following (after removing rows where the given columns are zeros only):

    a   b   c   d   e   f
0   1   2   3   4   5   6
1  11  22  33  44  55  66

And I only need to remove the rows where all of these columns (c, d, e, and f) are zeros. If, for example, 2 of them are 0, then I will not remove such rows. Is there a good way of doing this operation without looping through the DataFrame?

A: Try this:

df[~df[list('cdef')].eq(0).all(axis=1)]

    a   b   c   d   e   f
0   1   2   3   4   5   6
1  11  22  33  44  55  66
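As a sanity check, the accepted one-liner can be run end-to-end on the question's data, alongside an equivalent formulation (by De Morgan's laws, "not all of c..f are zero" is the same as "at least one of c..f is nonzero"):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([[1, 2, 3, 4, 5, 6],
                            [11, 22, 33, 44, 55, 66],
                            [111, 222, 0, 0, 0, 0],
                            [1111, 0, 0, 0, 0, 0]]),
                  columns=list('abcdef'))

# Accepted answer: drop rows where ALL of columns c..f equal zero
kept_a = df[~df[list('cdef')].eq(0).all(axis=1)]

# Equivalent: keep rows where ANY of columns c..f is nonzero
kept_b = df[df[list('cdef')].ne(0).any(axis=1)]

assert kept_a.equals(kept_b)
print(kept_a.index.tolist())  # [0, 1]
```

Both forms are vectorized, so there is no Python-level loop over rows either way.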
Beautiful Candied Lemon Slices are perfect for topping on pastries, cupcakes, cocktails & more spring treats! Easy recipe & a great way to use those lemons. Plus you can use all the leftover lemon simple syrup to add to all your favorite cocktails later.
At the heart of international relations and development lies the transformative issue of gender equality. It is our shared responsibility to apply it in everything we do.

I support the IGC Panel Parity Pledge.

Implement a hiring process that, when candidates are equally qualified, gives preference to those with expertise in gender studies.

Take the next steps in developing a gender equality and inclusion plan for the Graduate Institute (a) by mapping existing internal programs and policies to advance gender equality and inclusion, and (b) communicating them broadly on a dedicated webpage to improve knowledge about them and enhance their effectiveness.
"Before I go any further, let me state emphatically that I am not out to dissuade anyone from wearing a bike helmet. Although I am about to express my perception that the facts about helmets often are misinterpreted, I believe that helmets confer some obvious safety benefits and that there’s a certain wisdom to wearing one." This does not sound like someone telling you not to wear a helmet, around town or otherwise. Quote Folks who don't know better may read his opinion (a former Bicycling editor even) and presume that a helmet is unnecessary because of a few isolated, unproven studies. They might... if they have poor reading comprehension skills or only read the title. Quote Frankly I found his argument mainly being that: bicycle safety is really outside the control of cyclists and because drivers are the root cause, there's no reason to wear a helmet. That was not my takeaway at all. He discussed how important it is for cyclists to be aware of their surroundings because their own safety is very much within their control; primarily in the form of avoiding accidents through awareness and visibility. I thought the main argument was that the available data does not provide the definitive answers that some people think it does and that the statistical risk of not wearing a helmet put into perspective with the risks we face everyday is not as terrifying as some people think. Quote His 'statistics' are amazingly isolated. I'm shocked folks would take single casual 'studies' as any kind of proof, particularly with zero causation established. I mean one guy riding a bike with no helmet, helmet, and a wig is a controlled experiment?!? Agreed. I don't put a lot of weight into the some of the studies cited. Abe did a great job of analyzing some of these studies and I wish the author had gone into some of that detail. But in the author's defense, he never presented these studies as "proof" of anything. 
Quote I don't like the article because it's trying to conflate disparate considerations/challenges all the while presuming causation. Yes, infrastructure often needs improvement. So does driver education, behavior, and awareness. Neither of those issues conclude that one should or shouldn't wear a helmet. Unfortunately, a young reader might take away the point (intended or not) that a helmet is useless...or even worse for you. That's a huge disservice. Regardless of drivers, defects, obstructions and debris occur on all roads at random times. Not sure why that 'holds no water'. I'm sure you could look up what the statistics of these occurrences are. And I would wager they are not an exceedingly rare occurrence. Again, ignoring driver impact, a helmet can affect how much damage is transferred to your skull should you crash as a result of NOT making contact with a driver. Why would you choose to not wear one? Because a driver will now hit you?? It holds no water because you took an argument based on statistics and refuted it with "surprises happen". Regardless of whether the data you're making assumptions about here exists and regardless of whether or not it says what you would wager it to say, "surprises happen" is not helpful. Unfortunately, it is how our brains make decisions when it comes to things like safety.

So the article was based on statistics and yet you admit the statistics were questionable. Which is it? What is your opinion? Are cyclists safer without helmets or with? Why?

Well written article, but damn, having a family and kids and doing an experiment like this to prove a point? Dedicated but kind of nuts. Quote While I used to rip it at 45 miles per hour, now I’m far more cautious—anything over 30 feels a bit dicey. A bike helmet can deceive riders into thinking they have a cloak of invulnerability that isn’t actually there, and at least one study has confirmed how riders change their behavior when the hat comes off. I never thought of myself as a big risk-taking cyclist, but without a helmet I handle certain situations differently. I've never experienced going 45 on a bike, and 30 without a helmet, feck, I would have a hard time wanting to go above 20 mph! With regards to safety studies, how many accidents where the cyclist picks himself up, dusts himself off and rides away make the "studies" showing helmet use doesn't confer benefit one way or the other? How many regular cyclists here with brutal close calls have been contacted and participated in some pollster's questionnaire?
The writer is relying on his personal anecdotes here and yet there are untold thousands of personal anecdotes where the rider was obviously better off with the helmet. I'm not sure the helmet is the actual reason we have crappy bike infrastructure, low respect for cyclists, and low cycling participation, and worse safety outcomes. Or that getting rid of helmet laws and culture will affect any of these things. In the cities he mentions cycling is a natural and normal part of the culture, whereas here in many parts of the US, drivers are exposed to cyclists in annoying and awkward ways. If American cities decided to increase population density and make work a rideable distance from home and taxed the shit out of cars, you might end up with a good biking culture. While anecdotes are not the same as data, here is my anecdote and the lessons I learned. I was cycling downtown through a major intersection on a green light. Driver plows on through intersection and T-bones me at 50+ km/hr. My head smashes through the windshield, I bounce off and land 30+ feet away. Serious traumatic brain injury. Six weeks in hospital. One month memory loss. Had to relearn how to control one leg. Recovery better than anyone predicted but some effects remain. Wearing a helmet was the only reason I was not killed or suffer a non-recoverable injury. Lessons: The legal system takes into account helmet use when determining fault and insurance compensation. Even if the driver was 100% at fault for the collision, you may be partly responsible for your injuries if you were not all taking appropriate precautions (i.e. wearing personal protective equipment). This is particularly important where helmet use is mandatory. The argument of helmets vs cycle infrastructure is misleading as they are not the same approach to risk management. 
Like safety in the construction industry or professional motor sports, helmets (personal protective equipment) are the last line of defense if all other risk mitigation measures fail. Larger risk reductions are made by physically removing the interactions between cyclists and hazards (i.e. cars and trucks). In places where these hazards have been almost completely removed (Holland etc), the resultant risk to cyclists may be so small that riding without a helmet is reasonable. I don't believe this is the case in North America (yet?). I am not a proponent of mandatory helmet laws, as it possibly reduces the number of people cycling. It also gives ammunition to drivers who are looking to cast all cyclists as law breakers/ irresponsible/ justified in being run over etc. I think it is noble if helmet-less cyclists want to sacrifice themselves to prove that cycling is safe. Dead people can lead to better cycling infrastructure. However, because not all hazards are foreseeable and avoidable, it is prudent to wear your personal protective equipment, just like PPE on construction sites or a seat belt while driving. A helmet only has to save your life once to make it worth wearing every time you cycle.
Involvement of phospholipids in the mechanism of insulin action in HEPG2 cells.

The mechanism of action by which insulin increases phosphatidic acid (PA) and diacylglycerol (DAG) levels was investigated in cultured hepatoma cells (HEPG2). Insulin stimulated phosphatidylcholine (PC) and phosphatidyl-inositol (PI) degradation through the activation of specific phospholipases C (PLC). The DAG increase appears to be biphasic. The early DAG production seems to be due to PI breakdown, probably through phosphatidyl-inositol-3-kinase (PI3K) involvement, whereas the delayed DAG increase is derived directly from the PC-PLC activity. The absence of phospholipase D (PLD) involvement was confirmed by the lack of PC-derived phosphatidylethanol production. Experiments performed in the presence of R59022, an inhibitor of DAG-kinase, indicated that PA release is the result of the DAG-kinase activity on the DAG produced in the early phase of insulin action.
Q: Asp.net chat application using database for message queue

I have developed a chat web application which uses a SqlServer database for exchanging messages. All clients poll every x seconds to check for new messages. It is obvious that this approach consumes many resources, and I was wondering if there is a "cheaper" way of doing that. I use the same approach for "presence": checking who is on.

A: For something like a real-time chat app, I'd recommend a distributed cache with a SQL backing. I happen to like memcached with the Enyim .NET provider, so I'd do something like the following:

1. User posts message
2. System writes message to database
3. System writes message to cache
4. All users poll cache periodically for new messages

The database backing allows you to preload the cache in the event the cache is cleared or the application restarts, but the functional bits rely on in-memory cache, rather than polling the database.
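The four-step flow in the answer can be sketched language-agnostically; here is a minimal in-process stand-in in Python (a plain list standing in for memcached and another for the SQL table — all class and method names are illustrative, not the Enyim memcached API):

```python
import time

class ChatBackend:
    """Write-through message store: every post hits the durable 'database'
    and a fast cache; readers poll only the cache."""

    def __init__(self):
        self.database = []  # stand-in for the SQL table (durable)
        self.cache = []     # stand-in for memcached (fast, volatile)

    def post(self, user, text):
        msg = {"user": user, "text": text, "ts": time.time()}
        self.database.append(msg)  # step 2: write to database
        self.cache.append(msg)     # step 3: write to cache
        return msg

    def poll(self, since_ts):
        # step 4: clients poll the cache, not the database
        return [m for m in self.cache if m["ts"] > since_ts]

    def warm_cache(self):
        # on restart or cache flush, preload the cache from the database
        self.cache = list(self.database)

backend = ChatBackend()
backend.post("alice", "hi")
backend.post("bob", "hello")
print(len(backend.poll(0)))  # 2
```

In a real deployment the cache would be keyed per room/channel with an expiry, and presence could be handled the same way (each client refreshing a short-lived cache entry on every poll).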
Trade Mark (3)

Often plays characters who derive humor from awkward situations. Often plays characters who are oblivious or have a lack of self awareness.

Trivia (30)

Attended and graduated from Denison University in Granville, Ohio (1984). Has two children: Elisabeth Anne Carell (b. May 2001) and John Carell (b. June 2004). Married to actress/writer Nancy Carell, whom he met while both were writer/performers with the famed Second City comedy troupe in Chicago, Illinois. When he attended the premiere for Bruce Allmächtig (2003), he came to the screening with the impression that his scenes were left on the cutting-room floor. However, his scenes were in the film, and he was pleasantly surprised. His paternal grandfather, Ernest Caroselli, was an Italian emigrant, from Bari, Italy, and his paternal grandmother, Marie G. Egle, was of German ancestry. Steve's maternal grandparents Zigmund Koch and Frances Victoria Tolosky were of Polish origin. Steve's father was born under the surname "Caroselli", which he changed to "Carell" before Steve was born. Was once a reporter for The Daily Show (1996). Provides the voice of Gary on "The Ambiguously Gay Duo" cartoons on Saturday Night Live (1975). Originally wanted to be a lawyer, but he reached a question on an application form that said, "Why do you want to be a lawyer?". He could not think of anything. Has the rare distinction of being in two movies that opened on the same day in the United States - Der Anchorman (2004) and Plötzlich verliebt (2004) (July 9, 2004). Was on three failed sitcoms before he starred on NBC's version of the sitcom Das Büro (2005). Worked the overnight shift in a Store 24 in Maynard, Massachusetts, and takes many of his characters from this experience. Grew up in Newton, Massachusetts. Editor-in-Chief of his high school newspaper, Newton South's "The Lion's Roar". Attended Middlesex School in Concord, Massachusetts. The scene in Jungfrau (40), männlich, sucht...
(2005), where Andy has his chest hair removed, required five cameras set up for the shot. It was Carell's real chest hair which was ripped out in the scene. Carell told director Judd Apatow just before shooting the scene: "It has to be real. It won't be as funny if it's mocked up or if it's special effect. You have to see that this is really happening." The scene had to be done in one shot. Was a member of Burpee's Seedy Theatrical Company, Denison University's improv-comedy group and the oldest collegiate improv group in the country. Is one of 115 people invited to join the Academy of Motion Picture Arts and Sciences (AMPAS) in 2007. He and Jim Carrey were both ice hockey goalies in their childhood. Worked for a brief period at a post office in Massachusetts where he delivered mail using his own car since the post office did not have mail carrier vehicles. When he resigned from the position to move to Chicago, for months afterward he continued to find undelivered mail under his car seats. He was nominated for a 1993 Joseph Jefferson Award for Actor in a Revue for "Truth, Justice, or the American Way", at the Second City Theatre in Chicago, Illinois. He was nominated for a 1994 Joseph Jefferson Award for Actor in a Revue for "Are You Now or Have You Ever Been Mellow?", at the Second City Theatre in Chicago, Illinois. Owns and operates the Marshfield Hills General Store in Marshfield, Massachusetts, where he has a summer home. Received a star on the Hollywood Walk of Fame at 6708 Hollywood Boulevard in Hollywood, California on January 6, 2016. First rock concert he ever attended featured Jethro Tull. Steve Carell references the same quote by Abraham Lincoln in two films. In Dinner for Schmucks "Our countries are not enemies, they are friends" from Lincoln's "We are not enemies, but friends," and in Irresistible "...and appeal to our better angels" which he attributes to Lincoln yet misquotes from Lincoln's "by the better angels of our nature."
Personal Quotes (15) I have no idea where my pathetic nature comes from. If I thought about it too long, it would depress me. I think a character in a comedy should not know they're in a comedy. I don't think of myself as funny - I don't fill up a room with my humor... I would fail miserably as a stand-up comedian. You can't seem to have any sort of inhibition. Or shame. Or absolute horror at your own physical presence. I know I'm not a woman's fantasy man; I don't have to uphold this image of male beauty, so that's kind of a relief in a way. When they approached me about who I would want writing Get Smart (2008), I suggested B.J. The episodes that he's written walk the line between intensely funny and slightly offensive. But they always fall on the side of being funny. I also suggested him because I think he's going to be someone I'll be working for someday, and I want to get on his good side now - on his Das Büro (2005) co-star and co-writer B.J. Novak [on life since Jungfrau (40), männlich, sucht... (2005) made his a movie star.] I have a helluva lot more money than I used to! That's the only perceivable difference. I will definitely be able to send my kids to college now, which was a question before. (2007) [on playing Maxwell Smart in the upcoming Get Smart (2008)] I am sort of billing it as a comedic "Bourne Identity". [referring to Die Bourne Identität (2002)] (2007) [on being a father] I'm already seeing my daughter's cynical sense of humor and she's six! I bought these shoes, and I'm thinking I'm a cool dad, I'm going to show her my new half-boot shoes. So I said, "What do you think of these?" And she's like, "Mmm no, not liking them." (2007) (2005, on a pre-acting job) I worked the third shift at a convenience store for a few months. At four in the morning most people are looking for cigarettes, porn or one of those shriveled, angry-looking hot dogs from the rotating grill. One night, though, a woman came in during the wee hours. 
She looked a bit distraught as she paid at the counter. She paused for a moment, looked up at me and asked, "Do you think I'm pretty?" As it turned out, she had just walked in on her boyfriend with another woman. We proceeded to have a lengthy conversation about a person's self-worth, fidelity, trust and relationships. And then I treated her to a slushy blue frozen drink. (2005, on originally wanting to be a lawyer) Being a lawyer just sounded good to me. Kind of like how being a doctor or being an astrophysicist or a microbiologist sounds good. But it took a complete turn when I was filling out my law-school application. I couldn't answer the essay question, which was, Why do you want to be an attorney? I had absolutely no idea. Uh, to make a lot of money and sue people? To be hated based solely on my job title? I couldn't come up with one good reason. That ended my law career rather quickly. (2005, on performing announcing duties for the video games, Outlaw Golf and Outlaw Volleyball) Who wouldn't want to get paid for spending a couple of hours in a sound booth? I went in thinking, Yeah, free money! But it was so much harder than I thought it'd be. There are thousands of possible scenarios in a video game, and you have to do lines for all of them. It was pretty taxing. Then again, it's not like I was chopping down trees or anything. That sounds pretty whiny, doesn't it? "I had to say so many words. It was haaaard! Waaaah!" [on his character from The Daily Show (1996)] In my mind, he was a guy who had done national news reporting but had fallen from grace somehow and was now relegated to this terrible cable news show and was very bitter about it and thought he was better, but he wasn't. [on whether he feared being typecast in comedy roles] I've done big commercial movies and little independent movies, and I've played jerks and suicidal Proust scholars, and I feel like I've been really lucky to play all the different types of characters. 
So, no, I don't worry about that. If I do get pigeonholed, it's nothing I can really control. [on his surprise at hearing so much laughter in Foxcatcher (2014)] The way Bennett [Miller] describes the humor is that it's funny until it's not anymore, and if this story didn't have the outcome that it does, it could just be an absurd, ridiculous story. But the fact it ends up where it does, and that there's this pall that hangs over the entire narrative, changes everything. But some of it so absurd you can't help but laugh because it seems too strange to be true. [on male bonding in Foxcatcher (2014)] It's about offering up yourself to vulnerability. I think Bennett presents all this things in a very open way and allows the viewer to draw their own conclusion. He was finding it, as we were finding it, and I think that's an extremely exciting aspect of working like this.
Rutgers's Sosa Wins Inaugural "Rescher Prize" from University of Pittsburgh Professor Rescher has kindly forwarded to me the announcement, which does not yet appear to be on-line: The University of Pittsburgh has named Ernest Sosa as the inaugural awardee of the recently established Nicholas Rescher Prize for contributions to systematic philosophy. Named in honor of a distinguished philosopher who has been on Pitt’s faculty since 1961, the prize consists of a gold medal together with a sum of $25,000. Born in Cuba in 1940, Ernest Sosa earned his doctorate at the University of Pittsburgh in 1964. From that time until 2007, Sosa taught at Brown University. He then joined the Philosophy Department at Rutgers University, which he had visited as a distinguished professor for a decade before that. At Rutgers he is now Board of Governors Professor of Philosophy. Sosa has served as a president of the American Philosophical Association (Eastern Division) and as editor of the prestigious journals Nous and Philosophy and Phenomenological Research. Elected to the American Academy of Arts and Sciences in 2001, he delivered the John Locke Lectures at Oxford in 2005, and the Paul Carus Lectures at the American Philosophical Association in 2010. His work is the subject of John Greco (ed.), Ernest Sosa and his Critics (2004). His contributions to epistemology--and to virtue epistemology in particular--are widely appreciated as a groundbreaking unification of ideas from epistemology, value theory, and ethics. The prize is named in honor of Nicholas Rescher who has served on the Philosophy faculty since 1951 and has served as a President of the American Philosophical Association, of the American Catholic Philosophy Association, of the American G. W. Leibniz Society, of the C. S. Peirce Society, and of the American Metaphysical Society as well as Secretary General of the International Union of History and Philosophy of Sciences. 
Author of some hundred books ranging over many areas of philosophy, he is the recipient of eight honorary degrees from universities on three continents. He was awarded the Alexander von Humboldt prize for Humanistic Scholarship in 1984, the Belgian Prix Mercier in 2005, and the Aquinas Medal of the American Catholic Philosophical Association in 2007. | High | [
0.6876513317191281,
35.5,
16.125
] |
Adolphus, Kentucky Adolphus is an unincorporated community in southern Allen County, Kentucky, United States. The community is due south of Scottsville. The community is primarily a rural area on farmland. History A post office called Adolphus has been in operation since 1888. The community has the name of Adolphus Alexander, a railroad attorney. Climate The climate in this area is characterized by hot, humid summers and generally mild to cool winters. According to the Köppen Climate Classification system, Adolphus has a humid subtropical climate, abbreviated "Cfa" on climate maps. References Category:Unincorporated communities in Allen County, Kentucky Category:Unincorporated communities in Kentucky | Mid | [
0.560636182902584,
35.25,
27.625
] |
569 P.2d 575 (1977) 279 Or. 595 Albert TROUTMAN and Ogden Farms, Inc., a Corporation, Respondents, v. Ralf ERLANDSON, Appellant. Supreme Court of Oregon, Department 1. Argued and Submitted July 7, 1977. Decided September 27, 1977. *576 Robert J. Morgan, Milwaukie, argued the cause for appellant. Ralf H. Erlandson, Milwaukie, filed briefs in pro per. Gerald R. Pullen, Portland, argued the cause and filed the brief for respondents. Before DENECKE, C.J., and HOLMAN,[*] TONGUE and LENT, JJ. TONGUE, Justice. This was an action for contribution. Plaintiffs' complaint alleges that plaintiffs had paid a $44,000 obligation owed jointly by plaintiffs and defendant and that defendant was obligated to "make contribution of one-third of said debt * * * or the sum of $16,500."[1] Defendant's answer included, in addition to a general denial and three affirmative defenses, a counterclaim for $50,000 in damages alleging, among other things, that plaintiffs "know full well that this defendant was not to be responsible for any part" of the $44,000 obligation and were "attempting to use this litigation as a form of coercion" to "cause defendant to be unable to pursue his rights and remedies in protecting his property rights * * *." The case was tried before a jury, which returned a verdict in favor of plaintiffs.[2] Defendant appeals from the resulting judgment. Defendant's principal assignment of error is that the trial court erred in failing to grant defendant's motion for mistrial based upon alleged misconduct by plaintiffs' attorney in asking an improper and prejudicial question during his cross-examination of the defendant. In order to properly decide this contention it is necessary to consider the context in which that question was asked. *577 It appears that the sum of $16,500 demanded by plaintiff as a "contribution" from defendant arose from two promissory notes representing loans to a partnership between defendant, an attorney, and plaintiff Troutman. 
That partnership apparently owed over $1,000,000 in debts and was the subject of a suit filed by plaintiff Troutman against defendant for dissolution and an accounting. Defendant testified on direct examination, in support of his counterclaim for damages, that he had told plaintiff Troutman that he was negotiating with one Dale Fackrell to "come up with $140,000" to pay on the partnership indebtedness; that at that time creditors of the partnership were threatening foreclosure and that if he had been able to obtain the $140,000 he would have been able to "remove" the threat of foreclosure and then "acquire a percentage ownership" in the partnership. Defendant then testified that "[b]y filing this action * * * what Mr. Troutman did was to wipe out my opportunity to find an investor who would come up with $130,000" and that this "business opportunity" was "of value" to him "in excess" of $150,000. In the cross-examination of defendant on his claim that "filing this lawsuit caused you to be unable to secure $140,000 from Mr. Fackrell," plaintiffs' attorney asked the following question: "Now, in truth and fact, sir, is it not true that your own client, Mrs. Castor, sued you in this very courthouse in this last year for fraud, defrauding her, and let me finish my question, sir, if you allow me, and secured $30,000 in punitive damages and $9,000 in general damages against you for defrauding her?" Defendant objected to that question and moved for a mistrial. That objection and motion were then argued in chambers. Plaintiffs' attorney contended that to impeach defendant's claim that the filing of this action "wiped out his opportunity to find an investor for $140,000 * * * we would show that this is not the real truth; that there would be other lawsuits that could affect that ability;" that "Mr. 
Troutman wasn't the only person with lawsuits against him," and that "[i]f I can't bring that in, they [the jury] are going to think that it was only Troutman that prevented you from getting a loan." In response, defendant Erlandson contended that: "He's misstated the facts. Of course, the Castors were never my clients. That lawsuit would take a good deal of explanation and is entirely collateral to this. He injected it strictly to prejudice the jury against me, to bring up a false issue and to deny me the right to a fair trial. He deliberately misstated the facts, saying weren't Castors my clients, and he knows better than that or should know better than that." (Emphasis added). and that: "He's going to force me, your Honor, to go into completely the Castor thing and there's no way I can keep from going into it without further prejudicing myself." The trial court then ruled: "That's a matter of choice for you. I am denying the motion." Plaintiffs' attorney then said: "All right. I will leave it then." Upon resumption of the cross-examination before the jury, plaintiffs' attorney did not repeat the question objected to, but proceeded to ask questions on other matters. Upon completion of the cross-examination defendant Erlandson did not "go into" the "Castor thing," but offered no re-direct testimony and then "rested." He did not call Mr. Fackrell as a witness. In his briefs on this appeal defendant Erlandson charges that: "* * * plaintiffs' counsel knew his statement was erroneous as stated, and pursued the question solely for its highly misleading and prejudicial effect." and that: "He intentionally tainted the jury * * *." Thus, according to defendant, "* * * appellant's first assignment of error concerns two basic and closely related questions: *578 "(1) Whether an attorney may with impunity ask suggestive and highly prejudicial questions, known by him to be erroneous as worded. 
"(2) Whether an attorney may examine a witness as to matters normally relevant, but known by the examining witness [attorney?] to be in fact irrelevant. "As stated in appellant's brief, counsel for appellee was fully aware of the Castor case; that the Castors were not clients of appellant, and that recovery was rendered on the theory of failure to disclose, not active fraud. Counsel was also fully aware that Mr. Fackrell, appellant's primary hope for raising $140,000.00, was fully aware of the Castor litigation and was unaffected thereby. "* * * His own explanation of his purpose in asking the question was to suggest other causes for appellant's inability to borrow $140,000.00 (TR 112). The purpose is laudable on the surface, but in view of counsel's knowledge of its actual irrelevancy, as opposed to an abstract situation where counsel has a reasonable belief in a question's relevancy, the question here complained of was asked without justification or excuse, was known to be inconsistent with the trust [truth?], and was designedly misleading."[3] (Emphasis added) Thus, defendant appears to concede that the question asked by plaintiffs' attorney would not have required a mistrial if he had a "reasonable belief" in the "relevancy of the question," but contends that in this case the question was not only improper, but required a mistrial because it was "known" to be "inconsistent with the truth" and was "designedly misleading." These are strong charges to be leveled by one attorney against another, unless supported by the record. The difficulty, however, is that defendant's charges (which are denied by plaintiffs' attorney) are not supported by the record in this case. It may be that the Castors were not clients of defendant Erlandson, but nothing in the record supports his charge that plaintiffs' attorney knew that fact. 
Neither is there anything in the record to support the charge that plaintiffs' attorney knew that the Castor case was not for "active fraud," but for a "failure to adequately disclose." On the contrary, this court may take judicial notice of its recent decision in Castor v. Erlandson, 277 Or. 147, 152-53, 560 P.2d 267 (1977). It appears from that decision that the complaint in that case alleged "defendant [Erlandson] represented [to Castor] that the Jacksons could convey good title but that this was false and defendant knew it was false," as well as an allegation that "defendant had a duty to disclose the full extent of the indebtedness" and that a jury verdict against defendant Erlandson for general and punitive damages totalling $38,500 was affirmed by this court. In State v. Bateham, 94 Or. 524, 186 P. 5 (1919), this court considered a similar problem. In that case defendant called character witnesses who testified that his reputation as a moral, law-abiding man was good. On cross-examination the prosecuting attorney, over defendant's objection, was permitted to ask each witness in substance if he had ever heard that the defendant had taken "improper liberties" similar to that described in the indictment with another little girl, named in the question. Each witness answered in the negative. On appeal defendant contended that this was error because the prosecuting attorney informed the jury by innuendo that defendant was guilty of, or at least charged with, other like crimes. In rejecting that contention, in the absence of some showing that the prosecuting attorney acted in bad faith in asking those questions, this court said (at 530-32, 186 P. at 8): "* * *. Here the moral character of the accused was drawn directly in question. He himself invited inquiry about it *579 by putting in testimony in general terms about his good character. 
Certainly the prosecution legitimately could ask the general cross-interrogatory if the witness had ever heard of the defendant's doing acts of the same kind as that charged. "* * *. "It is quite impossible definitely to fix the boundary between pettifoggery on one hand and proper cross-examination on the other, so as to govern all cases with exactness. It must be left to the discretion of the presiding judge, acting in the light of the circumstances of the case before him, subject to reversal if an abuse of discretion appears. "* * *. "No abuse of the court's prerogative appears. It is urged that the district attorney did not expect an affirmative answer to any such question, but there is nothing in the record by which we can determine that matter. If, in truth, he asked the questions solely for the purpose of intimating to the jury that the defendant was guilty on other charges of like nature, which he could not prove directly and which had no foundation within his knowledge or information, he was guilty of a most contemptible, unprofessional piece of pettifoggery. It would be beneath the dignity of any practicing lawyer, much more of a public prosecutor, and should lead to a reversal. But that situation is not made to appear and the assignment of error on that point must be disregarded." The rule of State v. Bateham, supra, permitting such cross-examination has been subsequently reaffirmed in State v. Harvey, 117 Or. 466, 472, 242 P. 440 (1926); State v. Matson, 120 Or. 666, 671, 253 P. 527 (1927); State v. Shull, 131 Or. 224, 229, 282 P. 237 (1929); State v. Frohnhofer, 134 Or. 378, 383, 293 P. 921 (1930); and State v. Linn, 179 Or. 499, 514, 173 P.2d 305 (1946). It is also the majority rule in other jurisdictions which have considered the matter when such questions are asked in good faith. See Annot., 71 A.L.R. 1504, 1521, 1541-43 (1931); Annot., 47 A.L.R.2d 1258, 1280, 1316-20 (1956). See also McCormick on Evidence 456-58, § 191 (2d Ed. 1972). 
Although the asking of such questions in bad faith may be ground for reversal, it is generally held that the good faith of the cross-examiner is, in the first instance, to be presumed, i.e., that there is a presumption of good faith in such cases. See Annot., 47 A.L.R.2d supra, at 1319. The inherent danger of prejudice in permitting such cross-examination of character witnesses in criminal cases would appear to be at least as great, if not greater, than the danger of prejudice from the asking of the question on cross-examination in this case. In addition, the possible relevance of the question asked in this case would appear at least as great, if not greater, than the relevance of such cross-examination in many criminal cases. Here, to paraphrase Bateham, the question whether the filing of this lawsuit prevented defendant from securing $140,000 from Mr. Fackrell was put "directly in question" by defendant's testimony on direct examination. It follows that plaintiffs' attorney "legitimately could ask" if another lawsuit, and one based on fraud, had also been filed against defendant resulting in a judgment for an even larger sum of money. In such event, the jury could properly infer that such a lawsuit, rather than this lawsuit, was the reason that defendant was unable to get Mr. Fackrell to "put up" the $140,000. As for the possibility of "pettifoggery," as also discussed in Bateham, such as in the possible event that plaintiffs' attorney knew that no such lawsuit had been filed and asked that question in bad faith, it would appear that in this case, as in Bateham "there is nothing in the record by which we can determine that matter." As previously noted, however, it does appear that there was another lawsuit against defendant for fraud which resulted in a judgment against defendant for $38,500. 
Yet defendant, in statements to this court in his briefs, says that the other lawsuit did not involve "active fraud" and that plaintiffs' attorney was "fully aware" of that *580 alleged fact. Defendant also states in his briefs that plaintiffs' attorney was also "fully aware" that Mr. Fackrell was "fully aware of the Castor litigation and was unaffected thereby," despite the fact such statements go outside the record and that no such contentions were made in the trial court. Under these circumstances, it would appear that defendant is not in the best of positions to accuse a fellow attorney of bad faith. In the trial court defendant's primary contention was that "the Castors were never my clients" and that plaintiffs' attorney "knows better than that or should know better than that." If the prejudice to defendant arose from the fact that the plaintiff in the pending action against him was not a client, but was someone other than a client, defendant might well have removed any such prejudice by offering evidence to that effect. When, however, defendant stated to the trial court that "[h]e's going to force me * * * to go completely into the Castor thing," the plaintiffs' attorney stated that he would "leave it there," and did not demand an answer to the question which was the subject of defendant's objection. Defendant then chose not to testify that "the Castors were never my clients" or to attempt any explanation of the pending action for fraud. According to plaintiffs, what defendant was trying to do in the trial court was "to keep from the jury that another lawsuit for fraud was actually pending against him," for the reason that "a judgment of $38,500 for fraud is much more harmful to one's credit than law actions which may never result in judgment." Whether or not that was defendant's actual purpose, the trial judge could reasonably have drawn such an inference under the record in this case. 
Under these circumstances, we think it proper to hold, as held in Bateham, that the question of whether plaintiffs' attorney was guilty of the serious charge of bad faith was a matter to be "left to the discretion of the presiding judge, acting in the light of the circumstances before him," and that there was "no abuse of the court's prerogative" in this case. This result is also consistent with the established rule in appeals from the denial of motions for mistrial based upon alleged improper arguments or other statements by counsel in jury cases. That rule, as stated in Kuehl v. Hamilton, 136 Or. 240, 244, 297 P. 1043, 1044 (1931) is that: "Control over the argument of counsel is intrusted largely to the discretion of the trial judge. In Huber v. Miller, 41 Or. 103, 68 P. 400, Mr. Justice Wolverton said: `It is usually however, within the discretion of the trial judge to determine whether counsel transcend the limits of professional duty and propriety in this particular, and the exercise of such discretion is not the subject of review, except where they are permitted to travel out of the record, or to persist in disregarding the admonitions of the trial judge, or to indulge in remarks of a material character so grossly unwarranted and improper as to be clearly injurious to the rights of the party assailed.' "It is unnecessary to add further citations to the numerous ones assembled by Judge Wolverton in support of his above statement. Obviously the judge who presides over the trial, and who becomes familiar with its atmosphere, is best able to determine whether an excursion into a forbidden field is prejudicial, the extent of the injury, if any, and what remedies must be applied to undo the harm." Our decision in Walker v. Penner, 190 Or. 542, 554, 227 P.2d 316 (1951), the only case cited by defendant on this assignment of error, states substantially the same rule, citing Kuehl v. Hamilton, supra, with approval. 
This result is also consistent with the general rule that the right of cross-examination extends not only to any matter stated in the direct examination of a witness, but also to any matter "connected therewith," and that "great latitude" should be allowed in cross-examination to *581 include other matters which tend to limit, explain or qualify them or to rebut or modify any inference arising from facts or matters stated on direct examination.[4] Also, we have held that the scope of permissible cross-examination "rests largely in the discretion of the trial judge."[5] We recognize the danger that the rule of "great latitude" in cross-examination may, on occasion, be abused by lawyers who, out of "pettifoggery" and in bad faith, may ask questions designed to suggest or intimate to the jury matters that are not properly admissible and which may be prejudicial.[6] It may be in such a case that "pettifoggery" and bad faith in the asking of such a question appears on the face of the record or is otherwise obvious, so as to require the granting of a mistrial. This is not such a case, however, in our opinion. It may also be that in some such cases the trial judge should, on motion for mistrial and out of the presence of the jury, question the attorney as to whether he has credible grounds to ask such a question. See McCormick on Evidence 458 n. 79, § 191 (2d ed. 1972). In this case, however, no such request was made by defendant in the trial court and no such contention was made by him in this court. Under all of the circumstances of this case, and for the reasons previously stated, we hold that the trial court did not abuse its discretion in denying defendant's motion for a mistrial.[7] NOTES [*] Holman J., did not participate in this decision. [1] By a second cause of action plaintiffs also sought to recover $1,200 as the reasonable value of furniture "had and received" by defendant from plaintiffs and for which defendant had refused to pay. 
[2] That verdict was for the full amount of the prayer of the complaint and included $16,500 as "damages" and $1,200 for "furniture." [3] Similarly, defendant charges in his brief that: "Mr. Pullen knew * * * that Mr. D. Fackrell knew about the Castor litigation, and that it was the institution of suits by plaintiffs which caused Mr. Fackrell to discontinue negotiations for the $140,000 investment." [4] See ORS 45.570, Ah Doon v. Smith, 25 Or. 89, 93-94, 34 P. 1093 (1893); and Miller v. Lillard, 228 Or. 202, 216, 364 P.2d 776 (1961). [5] Garrett v. Eugene Medical Center, 190 Or. 117, 132, 224 P.2d 563, 569 (1950). See also State v. Sullivan, 230 Or. 136, 142, 368 P.2d 81 (1962). [6] See 3A Wigmore on Evidence (Chadbourn rev. 1970) 920-21, § 988, and 6 Wigmore on Evidence (Chadbourn rev. 1976) 371-75, § 1808. [7] We have also considered defendant's remaining two assignments of error, which were submitted on briefs and without oral arguments. We hold that the trial court did not err in either of those matters. | Low | [
0.504464285714285,
28.25,
27.75
] |
Q: Model values not carrying into partial view from main view- C# MVC I've been stumped for days. I have an index page that contains a renderpartial view. A viewmodel is passed to the index page from its controller, then passed from inside the index.cshtml to the renderpartial view as an extension. The renderpartial view is automatically updated every 10 seconds (via jquery function to the controller from the index page) to update its content, which works fine. The index page contains several checkboxfor's that filter out the contents of the renderpartial view. The problem arises when the initial renderpartial view is called when the time period has elapsed, the controller for the renderpartial view does not have the correct model data the controller for the index had prior. Boolean values in the model which were set to true while in the index controller now are false when we get to the renderpartial view. Lets begin... My Index View: @model SelfServe_Test2.Models.NGTransCertViewModel ... <div class="Services_StatusTable" id="refreshme"> @{ Html.RenderPartial("_Data", Model); } </div> ... @Html.CheckBoxFor(m => m.NGTransServicesModel.filter_NJDVSVR24, new { onclick = "test(id)" }) @Html.Label("NJDVSVR24", new { }) ... <script src="~/Scripts/jquery-1.12.4.js"></script> <script type="text/javascript"> $(function () { setInterval(function () { $('#refreshme').load('/NGTransCertServices/Data'); }, 10000); // every 10 seconds function test(filter) { alert(filter); var serviceChecked = document.getElementById(filter).checked; $.ajax({ type: "POST", url: "/NGTransCertServices/ToggleVisibleService", data: { 'filterOnService': filter, 'serviceChecked': serviceChecked, 'model': @Model }, //success: function (result) { // if (result === "True") // alert("yup"); // else // alert("nope"); //} }); } </script> The PartialView _Data.cshtml: @model SelfServe_Test2.Models.NGTransCertViewModel ... 
<table> foreach (var item in Model.NGTransServicesList) { if (Model.NGTransServicesModel.filter_EBT == true) { if (item.Description.Contains("EBT")) { } } } </table> My ViewModel: namespace SelfServe_Test2.Models { public class NGTransCertViewModel { public NGTransCertViewModel() { NGTransServicesModel = new NGTransCertServicesModel(); NGTransServicesList = new List<NGTransCertServicesList>(); NGTransServices = new NGTransCertServices(); } public NGTransCertServicesModel NGTransServicesModel { get; set; } public List<NGTransCertServicesList> NGTransServicesList { get; set; } public NGTransCertServices NGTransServices { get; set; } } } The Controller: public class NGTransCertServicesController : Controller { NGTransCertViewModel NGT_VM = new NGTransCertViewModel(); NGTransCertServicesModel certServicesModel = new NGTransCertServicesModel(); public ActionResult Index() { NGTransCertServices certServices = new NGTransCertServices(); NGT_VM.NGTransServicesModel = certServices.InitServiceTypeCheckBoxes(certServicesModel); // sets all checkboxes to true initially. return View(NGT_VM); } [OutputCache(NoStore = true, Location = System.Web.UI.OutputCacheLocation.Client, Duration = 10)] // in seconds public ActionResult Data() { NGTransCertDBHandle certDBHandle = new NGTransCertDBHandle(); List<NGTransCertServicesList> List_certServices = certDBHandle.GetService(); return PartialView("_Data", NGT_VM); } } Finally, the model where the values are lost: public class NGTransCertServicesModel { ... public bool filter_NJDVSVR24 { get; set; } ... } Now then, when the Index.cshtml page is called, i run the InitServiceTypeCheckBoxes method that sets the checkbox values to true, pass the viewmodel to the index page and pass that same model to the renderpartial. All is happy until the 10s timeout is reached and _Data.cshtml is rendered. The checkbox values are now all false. Let me add a visual element. 
Below is the model when returning from the controller to the index view with the Boolean set to true as desired. (stepping through) Below is the model when the index view Again, in the _Data.cshtml partial view Now with a breakpoint in the Data action in the controller, that same bool value is now false. The bool does not have the true value even before the first line of code in the Data action. NGTransCertDBHandle certDBHandle = new NGTransCertDBHandle(); A: I think the issue is that you're not populating your view model correctly in the Data method of your controller. In both methods you're sending the NGT_VM property to the view, but you only populate some of the data in the Index method - this data will not be persisted or created by default when you call the Data method. Each time a request hits a controller method, that controller is created afresh, and only the constructor and requested method are called. In the case of a request to Data the controller is created, the NGT_VM property is set back to the default NGTransCertViewModel object, with a default NGTransCertServicesModel object (the boolean property filter_NJDVSVR24 will default to false). You then create and ignore a variable List_certServices, but at no point have you updated the NGTransServicesModel property on the view model to match the values you had from the Index method. You should probably assign the NGTransServicesList variable to the NGT_VM.NGTransServicesList after you populate it: [OutputCache(NoStore = true, Location = System.Web.UI.OutputCacheLocation.Client, Duration = 10)] public ActionResult Data() { NGTransCertDBHandle certDBHandle = new NGTransCertDBHandle(); List<NGTransCertServicesList> List_certServices = certDBHandle.GetService(); NGT_VM.NGTransServicesList = List_certServices; return PartialView("_Data", NGT_VM); } You could either call the same methods to update the NGTransServicesModel as required in the Data method, but I'm not sure that's really the behaviour you're after? 
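The per-request lifecycle described in this answer can be sketched outside of ASP.NET. The following Python snippet is a conceptual analogy only, not ASP.NET MVC itself, and the class and field names are invented for illustration; it shows why state assigned to an instance field during one action is gone by the time a later request arrives:

```python
# Conceptual analogy of an MVC framework's per-request controller
# lifecycle: a fresh controller object is constructed for every
# incoming request, so instance state set in one action method does
# not survive into the next request.

class Controller:
    def __init__(self):
        # Runs on *every* request, resetting instance state to defaults.
        self.view_model = {"filter_NJDVSVR24": False}

    def index(self):
        # Analogous to initialising the checkbox filters to true in Index().
        self.view_model["filter_NJDVSVR24"] = True
        return self.view_model

    def data(self):
        # Analogous to the later AJAX request hitting Data(): this runs
        # on a *new* controller instance, so the flag set in index()
        # has reverted to its default.
        return self.view_model


def handle_request(action):
    # The framework instantiates the controller once per request.
    controller = Controller()
    return getattr(controller, action)()


first = handle_request("index")   # {'filter_NJDVSVR24': True}
second = handle_request("data")   # {'filter_NJDVSVR24': False}
```

Mapping back to the question: the browser's initial page load corresponds to `index()` here, and the timed AJAX call to `/NGTransCertServices/Data` corresponds to `data()`; each arrives as a separate request and therefore gets a freshly constructed view model, which is why the booleans are false again.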
| Low | [
0.48832271762208,
28.75,
30.125
] |
Q: valueForKeyPath returning nil unexpectedly This is a fairly short question, but I'm a bit confused on how to fix it. for item in filteredAndSortedDates { print(item.datesSectionHeader()) // Returns a value print(item.value(forKeyPath: "datesSectionHeader") as Any) // Returns nil // The "as Any" part up above is just to keep the compiler quiet. It doesn't have any meaning as this is just for testing purposes. } I'm a bit confused on why this is happening. How come valueForKeyPath is returning nil when the above is returning a value? I'm calling this on an NSDictionary. This is the log I'm getting: HAPPENING THIS WEEK nil HAPPENING THIS WEEK nil HAPPENING THIS WEEK nil HAPPENING WITHIN A YEAR nil Here's how I'm declaring datesSectionHeader: extension NSDictionary { // FIXME func datesSectionHeader() -> String { // Doing some work in here. } } A: NSDictionary modifies the standard behavior of Key-Value Coding so that it accesses the dictionary's contents rather than its properties. It does this by overriding value(forKey:) (which is, in turn, used by value(forKeyPath:)). As documented, its override of value(forKey:) checks if the key is prefixed by "@". If it's not, it returns the result of object(forKey:), accessing the dictionary contents. If it is prefixed with "@", it strips the "@" and returns the result from the superclass's implementation, which accesses the dictionary's properties. So, in this particular case, you can access the results from your datesSectionHeader() getter method using the following: item.value(forKeyPath: "@datesSectionHeader") | Mid | [
0.646766169154228,
32.5,
17.75
] |
Rocket Deposit Tokens Once again, life doesn't always go the way we plan and sometimes we just need to get some money quickly for unexpected expenses or a myriad of other reasons. When you can have Ether staking in Rocket Pool for up to a year, this situation is inevitable for some users, so we've planned ahead :) Need to delay that trip to the moon for a moment to move to a new house? No problem! When you have a deposit staking with Rocket Pool, you can now withdraw ERC20 tokens called Rocket Deposit Tokens (RPD) that back the Ether you have deposited. Users can then sell these tokens to other users on the free market, and the tokens can then be redeemed for Ether back at the Rocket Pool Deposit Token contract. When a single RPD token is withdrawn, it represents 1 Ether. So why would people bother buying other people's RPD tokens when they could just buy Ether, I hear you ask? Great question! To answer that we've devised the following system: Bob has 200 Ether locked up depositing and, looking in the mirror on his 40th birthday, he finally decides he can't live without that hair piece he's had his eye on since he noticed his hair line marching south at an unusually rapid rate. He decides to withdraw 50% of his deposit as RPD tokens, which leaves his available balance staking at 100 Ether. When withdrawing these tokens, Bob is charged a 5% withdrawal fee, so he ends up with 95 RPD tokens which he can then sell on the free market, or if there are available funds in the Rocket Deposit Token contract, he can trade them in there for Ether immediately. Now as you're sharpening your pitchfork over the 5% withdrawal fee, let me just say quickly, that doesn't go to us! That fee serves a dual purpose which I'll explain shortly. But first let me quickly explain how the Rocket Deposit Token contract where tokens are traded in for Ether is funded. 
If you’re familiar with Rocket Pool, you’ll know we use a system for staking called Minipools: these are small groups of users pooling their Ether together, spread out over the Rocket Pool network in a decentralised and load-balanced manner. They can be staking for various time lengths, from two months up to a year. When a Minipool returns to Rocket Pool from staking with Casper and users within that pool have withdrawn RPD tokens, the Ether backing those tokens is sent to the Rocket Deposit Token contract; this means the contract will have a variable amount of Ether available for users to trade tokens in for at any given time, depending of course on how often Minipools return with these token debts. Now about that fee! The first reason is that it prevents Bob from cheating the system when his deposit first begins staking: he can’t simply withdraw his tokens and trade them into the Rocket Deposit Token contract for Ether immediately, then do the same again and again and again. He could essentially keep this contract drained of Ether without the fee, and other token holders wouldn’t be able to trade their tokens for Ether. Bad Bob! Secondly, that fee is then used as an incentive for buyers to help RPD token sellers like Bob out. If 1 RPD token equalled 1 Ether when buying, why would buyers not just buy Ether instead? That 5% fee is given as a bonus to users who trade tokens in to the Rocket Deposit Token contract for Ether. So essentially users get an extra 5% Ether when trading tokens, which would help any seller out who needed funds quickly. These tokens are also fully enabled on the free market, so if the Rocket Deposit Token contract does not have enough Ether in it for you to trade in your tokens, you can always price them at a discount and sell them quickly on the free market.
Patient buyers of these tokens who don’t mind waiting for the Rocket Deposit Token contract to fill up with Ether again are rewarded with a 5% bonus when trading their bought tokens in for Ether when the time comes.
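The worked example with Bob reduces to two formulas: tokens minted = Ether withdrawn × (1 − fee), and Ether paid out on redemption = tokens × (1 + bonus). A quick illustrative sketch (the 5% figures come from the post above; the function names are ours):

```python
WITHDRAWAL_FEE = 0.05    # charged when minting RPD from a staking deposit
REDEMPTION_BONUS = 0.05  # paid to whoever trades RPD back in for Ether

def withdraw_rpd(deposit_ether, fraction):
    """Return (RPD tokens minted, Ether left staking) for a partial withdrawal."""
    withdrawn = deposit_ether * fraction
    tokens = withdrawn * (1 - WITHDRAWAL_FEE)  # 5% fee charged on the way out
    return tokens, deposit_ether - withdrawn

def redeem_rpd(tokens):
    """Ether paid out when tokens are traded in at the deposit token contract."""
    return tokens * (1 + REDEMPTION_BONUS)  # 5% bonus funds the buyer's incentive

# Bob: 200 Ether staking, withdraws 50% as RPD
tokens, still_staking = withdraw_rpd(200, 0.5)
print(tokens, still_staking)  # 95.0 100.0 -- matches the article's numbers
```

Note that a round trip still loses Ether (95 × 1.05 = 99.75 < 100), which is exactly the property that stops Bob from repeatedly draining the contract.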
Regional differences in striatal dopamine uptake and release associated with recovery from MPTP-induced parkinsonism: an in vivo electrochemical study. This study directly assessed striatal dopamine (DA) uptake rates and peak release in response to KCl in normal, symptomatic, and recovered 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-treated cats using in vivo electrochemistry. DA uptake rates measured after direct application of known concentrations of DA to the striatum were slowed significantly in both dorsal and ventral striatum in symptomatic cats compared with rates recorded in normal animals. DA uptake rates remained significantly slowed in recovered cats and were not significantly different from the rates recorded in symptomatic animals. In symptomatic cats, both DA uptake rates and the signal recorded in response to KCl stimulation were significantly decreased from normal in all dorsal and ventral striatal regions sampled. Reduction/oxidation (redox) ratios recorded in response to KCl stimulation suggested DA to be the predominant electroactive species. In spontaneously recovered MPTP-treated cats, recordings in the ventral striatum subsequent to KCl stimulation again suggested DA to be the predominant electroactive species released, and peak levels were significantly higher than those recorded in symptomatic animals. In the dorsal striatum of recovered cats, redox ratios recorded subsequent to KCl stimulation suggested serotonin rather than DA to be the predominant electroactive species released. Peak levels of release in the dorsal striatum were not significantly greater than those recorded in symptomatic animals. These results suggest that in spontaneously recovered MPTP-treated cats, there is partial recovery of ventral striatal DAergic terminals, persistent loss of dorsal striatal DAergic terminals, and a down-regulation of DA transporter number/function throughout the striatum. 
These processes may contribute to volume transmission of DA in the striatum and promote functional recovery.
Hong Kong banks warned over harsh vetting for foreign investors

Hong Kong Monetary Authority plans spot checks on banks that fail to comply with guidelines if they refuse accounts

Hong Kong’s de facto central bank yesterday warned local lenders not to overdo it when vetting investors’ applications in the name of reducing risk, as spot checks would be conducted to ensure they followed official banking guidelines. To make it easier for foreign investors to open accounts in the city, the Hong Kong Monetary Authority also put up a list of banks willing to offer services to foreign small and medium-sized enterprises (SMEs) and start-ups. The list of more than 20 banks – minus HSBC and Standard Chartered Bank – will be provided to investors through the government’s business promotion arm, InvestHK. The warning came as the HKMA issued a circular yesterday to all financial institutions reminding them to strike a balance between exercising anti-money laundering and counter-terrorism financing controls and providing business-friendly banking services to foreign investors. The Post previously reported concerns raised by the city’s 29 chambers of commerce over foreign investors encountering trouble when opening as well as retaining company bank accounts in Hong Kong. Banks say they are enforcing stricter international anti-fraud regulations to protect the city’s status as a global business centre. HKMA deputy chief executive Arthur Yuen said the circular should serve as a reminder for banks to follow guidelines on proper procedures. He insisted that according to their investigation, only two global banks – understood to be HSBC and Standard Chartered – showed a rejection rate much higher than other banks in Hong Kong of applications for company accounts for foreign investors. “We issued this circular to help the banking industry have a good grasp of our regulatory requirements and also clarify which practices are unfair to customers,” he said.
The main problems flagged in the vetting process included harsher requirements for foreign companies, especially start-ups and offshore firms, and unreasonable requirements such as very high turnover thresholds and requiring beneficial owners and directors to appear together for interviews in Hong Kong. “It is inappropriate for authorised institutions to adopt a one-size-fits-all approach,” the HKMA’s circular said. The regulator will give banks a grace period to enhance training and adjust their vetting process. After that, HKMA staff will pose as customers to conduct checks to see if local banks have adhered to its guidelines. “If we discover that they are unfair to customers and in breach of the basic requirements for providing banking services, we will exercise our statutory powers to demand them to rectify their practices,” Yuen said. A spokesman for HSBC said: “We welcome the announcement of HKMA’s guidelines on account opening for businesses in Hong Kong and will study these further. We share the Authority’s commitment to ensuring that SMEs in Hong Kong have ready access to banking services.”
Demonstration of the catalytic roles and evidence for the physical association of type I fatty acid synthases and a polyketide synthase in the biosynthesis of aflatoxin B1. Aflatoxin B1 (compound 5) is a potent environmental carcinogen produced by certain Aspergillus species. Its first stable biosynthetic precursor is the anthraquinone norsolorinic acid (compound 3), which accumulates in the Aspergillus mutant strain NOR-1. Biochemical and genetic evidence suggest that this metabolite is synthesized in vivo by a specialized pair of fatty acid synthases (FAS-1 and FAS-2) and a separately transcribed polyketide synthase (PKS-A). The N-acetylcysteamine (NAC) thioester of hexanoic acid was shown to efficiently support the biosynthesis of norsolorinic acid (compound 3) in the NOR-1 strain. In contrast, the mutants Dis-1 and Dis-2, which are derived from NOR-1 by insertional inactivation of fas-1, produced unexpectedly low amounts of norsolorinic acid in the presence of hexanoylNAC. Controls eliminated defects in the parent strain or enhancement of degradative beta-oxidation activity as an explanation for the low level of production. Southern blots and restriction mapping of Dis-1 and Dis-2 suggested normal levels of expression of the PKS-A and FAS-2 proteins should be observed because the genes encoding these proteins are not physically altered by disruption of fas-1. The impaired ability of Dis-1 and Dis-2, harboring modified FAS-1 enzymes, to carry out norsolorinic acid synthesis implies the need for FAS-1 (and possibly also FAS-2) to physically associate with the PKS before biosynthesis can begin. The failure of the unaffected PKS alone to be efficiently primed by hexanoylNAC, and the presumed requirement for at least one of the FAS proteins to bind and transfer the C6 unit to the PKS, is in contrast to behavior widely believed to occur for type I PKSs.
'Apprentice' starts whiskey biz

Posted on September 2, 2010 |

RAJ PETER BHAKTA has launched WhistlePig rye whiskey, a spirit that he plans to begin distilling on the former Norris Farm in Shoreham as soon as next year. He is currently bottling Canadian-made whiskey under his WhistlePig label until he produces his own product. Independent photo/Trent Campbell

SHOREHAM — Raj Peter Bhakta was fired by Donald Trump during week nine of the second season (2004) of the NBC reality television show “The Apprentice.” Bhakta no longer has to worry about “The Donald.”

The full text of this article is only available to online subscribers.
Q: django-social-auth - Missing HTTPSConnection

I am pretty new to Django and trying to use django-social-auth in my Django project. I followed the README, which is very clear.

my settings.py

    #...
    AUTHENTICATION_BACKENDS = (
        # 'social_auth.backends.twitter.TwitterBackend',
        'django.contrib.auth.backends.ModelBackend',
    )

    TWITTER_CONSUMER_KEY = 'xxxxxxxxx'
    TWITTER_CONSUMER_SECRET = 'xxxxxxxxxxxxxxxxxxxx'

    SOCIAL_AUTH_DEFAULT_USERNAME = 'user'
    SOCIAL_AUTH_ASSOCIATE_BY_MAIL = True

    INSTALLED_APPS = (
        #...
        'social_auth',
    )
    #...

If I activate the twitter backend and try to login the standard way, I get:

    'module' object has no attribute 'HTTPSConnection'

3 questions:
- Do I have to install SSL with Python?
- How can I check if it's installed?
- What is the best way to install it?

A: To solve the HTTPSConnection error you need to install the openssl-devel package (libssl-dev on Ubuntu) and recompile Python.
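To answer the poster's second question ("How can I check if it's installed?"), you can test for SSL support from the interpreter itself. A sketch (module names below are for modern Python 3; the original error came from Python 2's httplib, where the same conditional definition applied):

```python
# Check whether this Python build has SSL support compiled in.
try:
    import ssl
    print("SSL available:", ssl.OPENSSL_VERSION)
except ImportError:
    print("SSL missing -- install the OpenSSL headers and rebuild Python")

# http.client only defines HTTPSConnection when the ssl module imports
# successfully, which is exactly the attribute the traceback complains about.
import http.client
print(hasattr(http.client, "HTTPSConnection"))  # False on a build without SSL
```

If the first import fails, rebuilding Python after installing the OpenSSL development headers (as in the answer) is the fix.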
Characterization of Lactobacillus spp. isolated from the feces of breast-feeding piglets. Lactobacillus spp., referred to as IJ-1 and IJ-2, were isolated from the feces of breast-feeding piglets and analyzed for probiotic properties. According to the analyses of 16S rDNA sequence, Lactobacillus sp. IJ-1 showed greater than 99% homology with Lactobacillus reuteri DSM 20016(T), and Lactobacillus sp. IJ-2 had greater than 99% homology with the L. gasseri ATCC 33323(T) and L. johnsonii ATCC 33200(T). The pH changes in the culture media of Lactobacillus sp. IJ-1 and Lactobacillus sp. IJ-2 were from 6.5 to 4.2 and 4.6, respectively. Their respective resistance against artificial gastric acid and artificial bile acid led to survival rates of nearly 186+/-44% and 13+/-5%. Neither strain produced the carcinogenic enzyme beta-glucuronidase. Both strains inhibited the growth of pathogenic microorganisms, such as Listeria monocytogenes ATCC 19111, Salmonella enterica KCTC 12401, Salmonella enteritidis ATCC 13076, Staphylococcus aureus KCTC 3881, and Bacillus cereus 3711, within 24 h of growth.
Q: Removing duplicate rows in MySQL by merging info

I have a large table with person info. Every record has an ID and is referenced by other tables. I noticed that a lot of records have duplicate keys, but they vary in the amount of information in the other fields. I'd like to merge the info in various fields into one and make that the 'master' record, and all references to the other records need to be replaced with the master record.

An example

    | id | key1 | key2 | name | city | dob      |
    | -- | ---- | ---- | ---- | ---- | -------- |
    | 1  | 1    | 2    | John |      |          |
    | 2  | 1    | 2    |      | Town |          |
    | 3  | 1    | 2    | John |      | 70/09/12 |

I need to end up with a single record (id is either 1, 2 or 3) with values key1 = 1, key2 = 2, name = John, city = Town, dob = 70/09/12. Is there a clever way to merge these records without testing for every field (my actual table has a lot of fields)?

A: You can use MAX() to get the non-empty values for each key.

    SELECT key1, key2,
        MAX(id) AS id,
        MAX(name) AS name,
        MAX(city) AS city,
        MAX(dob) AS dob
    FROM yourTable
    GROUP BY key1, key2

If there can be different values between rows, and you don't want to include them, you can add:

    HAVING COUNT(DISTINCT NULLIF(name, ''), NULLIF(city, ''), NULLIF(dob, '')) = 1
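The MAX() trick works because an empty string sorts below any non-empty value, so the aggregate keeps the filled-in field from whichever duplicate has it. A quick way to convince yourself, using Python's built-in sqlite3 (SQL dialect differs slightly from MySQL, but the aggregate behaves the same way for this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person (id INTEGER, key1 INTEGER, key2 INTEGER,"
    " name TEXT, city TEXT, dob TEXT)"
)
# The three duplicate rows from the question, with empty strings for blanks.
conn.executemany(
    "INSERT INTO person VALUES (?, ?, ?, ?, ?, ?)",
    [
        (1, 1, 2, "John", "", ""),
        (2, 1, 2, "", "Town", ""),
        (3, 1, 2, "John", "", "70/09/12"),
    ],
)
row = conn.execute(
    "SELECT key1, key2, MAX(id), MAX(name), MAX(city), MAX(dob) "
    "FROM person GROUP BY key1, key2"
).fetchone()
print(row)  # (1, 2, 3, 'John', 'Town', '70/09/12')
```

The caveat is the one the answer's HAVING clause guards against: if two duplicates disagree on a field (say, two different cities), MAX() silently keeps the later-sorting value rather than flagging the conflict.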
Spectral analysis of field potential recordings by deep brain stimulation electrode for localization of subthalamic nucleus in patients with Parkinson's disease. Spectral analysis of local field potential (LFP) recorded by deep brain stimulation (DBS) electrode around the subthalamic nucleus (STN) in patients with Parkinson's disease was performed. The borders of the STN were determined by microelectrode recording. The most eligible trajectory for the sensorimotor area of the STN was used for LFP recording while advancing the DBS electrode. The low-frequency LFP power (theta- to beta-band) increased from a few millimeters above the dorsal border of the STN defined by microelectrode recording; however, the low-frequency power kept the same level beyond the ventral border of the STN. Only high beta-power showed close correlation to the dorsal and ventral borders of the STN. A spectral power analysis of LFP recording by DBS electrode helps with the final confirmation of the dorsal and ventral borders of the STN of Parkinson's disease in DBS implantation surgery.
The Scottish Labour Party is not the only headache for Ed Miliband this morning. The Telegraph’s front page doesn’t make for the best reading either, running with the news that Tony Blair predicts a Tory victory next year: However, the story is not all it seems. The only quote The Telegraph supplies is from an anonymous source who claims that the former Labour PM made the prediction in a private meeting with them: “The Conservatives will be the next government because Labour has failed to make a good case for itself. That is what Tony thinks. He does not think that Miliband can beat Cameron.” Not quite so, according to Tony Blair’s office. Possibly trying to quash the story by refusing to give it much oxygen, his office simply confirmed that Blair believes Labour “can indeed win”. To make these two quotes into a front page splash feels like a bit of a stretch, and the story seems to say more about the views of the anonymous source and The Telegraph’s editorial stance than Blair’s own opinions. His alleged prediction is certainly a far cry from Blair’s own rule that “progressives should be optimists” that he made in a speech earlier this year. LabourList understands that Blair is willing to publicly campaign for the Labour Party in the coming months, but has so far not been approached by the central Party. If there are no advances forthcoming from Brewers’ Green, he will campaign on a more local scale with friendly MPs and candidates. Ultimately, while this is not the kind of front page Miliband would like to see, he probably recognises that this story is a bit light on substance and there are more pressing concerns today. UPDATE: “Ed Miliband and the Labour Party can and will win the next election.” Pretty unequivocal.
Q: get value from the list when check box is clicked

I have a list with checkboxes; when I check a checkbox I need the selected item to be displayed on click of the done button.

    <ion-list *ngFor="let item of options" style="margin-bottom: 0px">
      <ion-grid>
        <ion-row>
          <ion-checkbox></ion-checkbox>
          <ion-col>
            <ion-label style="margin-bottom: 0px; margin-top: 0px;">{{item.val}}</ion-label>
          </ion-col>
        </ion-row>
      </ion-grid>
    </ion-list>
    <button (click)="done()">done</button>

    done(){
      console.log("done");
      /* here i need to get the values of the items that are selected */
    }

    public options = [
      { "val" : "United States" },
      { "val" : "Afghanistan" },
      { "val" : "Albania" },
      { "val" : "Algeria" },
      { "val" : "American Samoa" }
    ]

"options" is an array of items. Could someone help me to get only the selected values?

A:

    checkboxes: boolean[];

    constructor(){
      this.checkboxes = this.options.map(v => false);
    }

    done(){
      console.log("done");
      var result = [];
      this.options.forEach((val, idx) => result.push({item: val.val, checked: this.checkboxes[idx]}));
      console.log(result);
    }

    <ion-list *ngFor="let item of options; let i=index" style="margin-bottom: 0px">
      ...
      <ion-checkbox [(ngModel)]="checkboxes[i]"></ion-checkbox>
The present invention generally relates to data processing. The invention relates more specifically to speech recognition systems. Speech recognition systems are specialized computer systems that are configured to process and recognize spoken human speech, and take action or carry out further processing according to the speech that is recognized. Such systems are now widely used in a variety of applications including airline reservations, auto attendants, order entry, etc. Generally the systems comprise either computer hardware or computer software, or a combination. Speech recognition systems typically operate by receiving an acoustic signal, which is an electronic signal or set of data that represents the acoustic energy received at a transducer from a spoken utterance. The systems then try to find a sequence of text characters ("word string") which maximizes the following probability: P(A|W)*P(W) where A means the acoustic signal and W means a given word string. The P(A|W) component is called the acoustic model and P(W) is called the language model. A speech recognizer may be improved by changing the acoustic model or the language model, or by changing both. The language model may be word-based or may have a "semantic model," which is a particular way to derive P(W). Typically, language models are trained by obtaining a large number of utterances from the particular application under development, and providing these utterances to a language model training program which produces a word-based language model that can estimate P(W) for any given word string. Examples of these include bigram models, trigram language models, or more generally, n-gram language models. In a sequence of words in an utterance, W0-Wm, an n-gram language model estimates the probability that the utterance is word j given the previous n-1 words. Thus, in a trigram, P(Wj|utterance) is estimated by P(Wj|Wj-1, Wj-2).
The n-gram type of language model may be viewed as relatively static with respect to the application environment. For example, static n-gram language models cannot change their behavior based upon the particular application in which the speech recognizer is being used or external factual information about the application. Thus, in this field there is an acute need for an improved speech recognizer that can adapt to the particular application in which it is used. An n-gram language model, and other word-based language models work well in applications that have a large amount of training utterances and the language model does not change over time. Thus, for applications in which large amounts of training data are not available, or where the underlying language model does change over time, there is a need for an improved speech recognizer that can produce more accurate results by taking into account application-specific information. Other needs and objects will become apparent from the following detailed description. The foregoing needs, and other needs and objects that will become apparent from the following description, are achieved by the present invention, which comprises, in one aspect, a method of dynamically modifying one or more probability values associated with word strings recognized by a speech recognizer based on semantic values represented by keyword-value pairs derived from the word strings, comprising the steps of creating and storing one or more rules that define a change in one or more of the probability values when a semantic value matches a pre-determined semantic tag, in which the rules are based on one or more external conditions about the context in which the speech recognizer is used; determining whether one of the conditions currently is true, and if so, modifying one or more of the probability values that match the tag that is associated with the condition that is true. 
According to one feature, the speech recognizer delivers the word strings to an application program. The determining step involves determining, in the application program, whether one of the conditions currently is true, and if so, instructing the speech recognizer to modify one or more of the probability values of a word string associated with a semantic value that matches the tag that is associated with the condition that is true. Another feature involves representing the semantic values as one or more keyword-value pairs that are associated with the word strings recognized by the speech recognizer; delivering the keyword-value pairs to an application program; and determining, in the application program, whether one of the conditions currently is true, and if so, instructing the speech recognizer to modify the probability value of the word strings that are associated with the keyword-value pairs that match the tag that is associated with the condition that is true. Yet another feature involves delivering the words and semantic values to an application program that is logically coupled to the speech recognizer; creating and storing, in association with the speech recognizer, a function callable by the application program that can modify one or more of the probability values of the word strings associated with semantic values that match the tag that is associated with the condition that is true; determining, in the application program, whether one of the conditions currently is true, and if so, calling the function with parameter values that identify how to modify one or more of the semantic values. A related feature involves re-ordering the word strings after modifying one or more of the probability values. Another feature is modifying the probability values by multiplying one or more of the probability values by a scaling factor that is associated with the condition that is true. 
In another feature, the method involves delivering one or more word-value pairs that include the semantic values to an application program that is logically coupled to the speech recognizer. A function is created and stored, in association with the speech recognizer, which can modify one or more of the probability values of word strings associated with words of word-value pairs that match the tag word that is associated with the condition that is true. It is determined, in the application program, whether one of the conditions currently is true, and if so, calling the function with parameter values that identify how to modify a probability value of a word string associated with the semantic values, including a scaling factor that is associated with the condition that is true. The function may modify a probability value by multiplying the probability value by the scaling factor. The invention also encompasses a computer-readable medium and apparatus that may be configured to carry out the foregoing steps.
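The rescoring idea running through these claims can be sketched in a few lines: each recognizer hypothesis carries a probability value and some keyword-value pairs, the application multiplies the probability by a rule's scaling factor whenever the rule's external condition holds and its semantic tag matches, and the hypothesis list is then re-ordered. The following is a hypothetical illustration of that mechanism, not code from the patent; the rule and hypothesis data are invented for the example.

```python
def rescore(hypotheses, rules, conditions):
    """hypotheses: list of (word_string, probability, semantics), where
    semantics is a dict of keyword-value pairs from the recognizer.
    rules: list of (condition_name, tag_key, tag_value, scale).
    conditions: set of condition names that are currently true."""
    rescored = []
    for words, prob, semantics in hypotheses:
        for cond, key, value, scale in rules:
            # Apply the rule only if its external condition is true and the
            # hypothesis's semantic value matches the rule's tag.
            if cond in conditions and semantics.get(key) == value:
                prob *= scale
        rescored.append((words, prob, semantics))
    # Re-order the n-best list after modifying the probability values.
    rescored.sort(key=lambda h: h[1], reverse=True)
    return rescored


hyps = [
    ("fly to boston", 0.40, {"city": "boston"}),
    ("fly to austin", 0.35, {"city": "austin"}),
]
# Rule: if the caller's area code suggests Texas, boost "austin" hypotheses 2x.
rules = [("caller_in_texas", "city", "austin", 2.0)]
best = rescore(hyps, rules, {"caller_in_texas"})
print(best[0][0])  # fly to austin -- 0.35 * 2.0 = 0.70 now outranks 0.40
```

When no condition is true, the probabilities pass through unchanged, which corresponds to the static n-gram behavior the patent contrasts itself with.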
Founded in 1993 by brothers Tom and David Gardner, The Motley Fool helps millions of people attain financial freedom through our website, podcasts, books, newspaper column, radio show, and premium investing services.

Don't Let a Market Crash Destroy Your Retirement Dreams

Panic is never the right answer. Take a considered approach to your long-term retirement finances.

The past week has left millions of Americans scared about their financial future, as stock market volatility has reared its head, with triple-digit moves in the Dow Jones Industrials (DJINDICES:^DJI) becoming an everyday event. As stocks have fallen sharply, you'll find all sorts of experts claiming to have the analysis that you need to weather the short-term bumps in the market. Yet while those so-called experts make their guesses about how far the Dow could fall, most of them have a gaping hole in their analysis that makes ordinary retirees and near-retirees ask one question: what does a potential market crash really mean for me? The answer is simpler than you might think. If you're in or nearing retirement, then you need to ensure that you're properly insulated from too much market volatility, especially if you're counting on taking money out of your investment portfolio on a regular basis to cover income needs. As all too many people discovered in 2008, big drops in your portfolio can crush your plans for a happy retirement lifestyle. But that doesn't mean you should panic now. All it requires is the sort of periodic reality check that smart investors make regularly no matter what the market's doing. By setting up some mental warning signs -- much like the way your car will warn you about potential problems -- you can make sure you're on top of the risks involved in your investments and stay in the driver's seat to keep your retirement planning on course rather than being a helpless bystander as your dreams go up in smoke.

Things change

If you want to succeed in investing, you need to have a plan.
But good plans have to be flexible to adapt to changing conditions. A good retirement plan is like a roadmap to guide you on a long trip. The map lets you choose the path you'd like to follow, and for long stretches of open road, you can put yourself on cruise control and keep making progress toward your destination. But you have to rely on the accuracy of the map. If it leaves out a road closure, you'll have to change course to find a detour that will get you where you want to go. Moreover, if heavy traffic or bad weather make your preferred route suddenly look less attractive, diverting to a better route is the smart thing to do before you get stuck in a situation you'd much rather avoid. Your plan is your investment roadmap, and it has to accommodate all sorts of changes, both in your personal life and in events happening throughout the world. Smart investors see these changes and make appropriate updates to their course on their investment roadmap.

3 times to do regular maintenance on your investments

The challenge for many people, though, is how to know when to make changes to their retirement planning. Smart retirees have learned through experience that doing maintenance checks on their planning makes sense in three different situations:

- Doing periodic maintenance. Just like changing the oil in your car every few months, checking up on your investments at regular intervals will help keep your retirement planning running smoothly. You don't want to obsess over moves on a daily or weekly basis, but a once-a-year look will keep you focused on long-term goals. Consider: even with last week's plunge, the Dow is still up almost 9% over the past year.
- Addressing goal changes. Everyone faces new challenges in their lives, and they can have a big impact on your finances. For those nearing retirement, an unexpected layoff can throw your finances into turmoil unless you've made contingency plans for how to cope. Similarly, needing to help put your grandkids through school or dealing with a chronic illness can require some changes to your investments to make sure you have the money you need.
- Dealing with big market moves. From time to time, you'll see your portfolio thrown out of balance as your stocks, bonds, cash, and other assets move in different directions. Doing a special rebalancing at those times can help keep your risk levels stable while potentially taking advantage when future conditions improve.

The road to a happy and financially secure retirement isn't always easy to follow. But keeping your eyes on the road ahead and making sure you're still on course will make your ride toward retirement as smooth as possible.

Author Dan Caplinger has been a contract writer for the Motley Fool since 2006. As the Fool's Director of Investment Planning, Dan oversees much of the personal-finance and investment-planning content published daily on Fool.com. With a background as an estate-planning attorney and independent financial consultant, Dan's articles are based on more than 20 years of experience from all angles of the financial world. Follow @DanCaplinger