INTEGRATED TRUTH <511> – BIBLICAL ENTREPRENEURSHIP (6) – The attitude of an entrepreneur is critical because it determines the altitude. As a man thinks in his heart, so is he. (Pr 23:7) The mindset of an entrepreneur will shape the type of enterprise he leads. Apple lured John Sculley (who was recognized worldwide as an expert in marketing with an institutional mindset) from Pepsi to apply his marketing skills to the personal computer market. Steve Jobs (an inventor with an entrepreneurial mindset) successfully sealed the deal after he made his legendary pitch to Sculley: “Do you want to peddle sugared water for the rest of your life or do you want to change the world?” After Sculley accepted the offer to take over as president of Apple, the result was a culture shock, devastating for both Sculley and Apple’s ingrained entrepreneurial approach to doing business. The clash between the two mindsets resulted in Jobs resigning from Apple and founding NeXT Inc in 1985. Jobs eventually took back the reins of running Apple in 1997 after Apple purchased NeXT Inc for $427m. This experience awakened Sculley to the operating dynamics of a corporate culture and way of thinking that was entrepreneurial rather than institutional. An amazing insight into the entrepreneurial approach to business was outlined by Sculley in his book, “Odyssey: Pepsi to Apple”. Let me summarize his insights from his experience: the main focus of the institutional mindset is on the organization, while the entrepreneurial mindset focuses more on individual creativity. For the most part, innovation in the institution serves to reduce risk in order to maintain and improve existing products. Every invention is an innovation, but not every innovation is an invention. The ability sought in an institution is the ability to manage the status quo, whereas the entrepreneurial ability is to embrace and adapt to change. Since biblical entrepreneurship is about managing risk and opportunity, it fits more closely with the interaction of faith and risk in a Kingdom setting. The expected output of the institutional mindset is market share, whereas the entrepreneurial mindset is about market creation. The leadership focus of the institution tends to orient towards micro-management, whereas the entrepreneurial focus is more about motivating and nurturing talent. For the institutional mindset, the product is an artefact or a service, whereas the entrepreneurial product is a dream. The primary motivation for an institutional thinker is to make money, whereas for an entrepreneurial thinker, it is to make history through transformation. Prayers for today: Lord, let our mindset be transformed by your Words into a biblical entrepreneurial mindset to bring the Kingdom culture into all our enterprises. In Jesus’ name. Amen.
High
[ 0.6967741935483871, 33.75, 14.6875 ]
This was generally supported by Banerjee (1974) who found that two injections of 200 ng 6-OHDA intraventricularly failed to reduce muricide in rats [ … ]
Low
[ 0.521212121212121, 32.25, 29.625 ]
Q: Derivative of implicit function with exponential functions of each other We have the equation: $$ x^y = y^x + y $$ which defines an implicit function $y(x)$ at the point $(2,1)$. I'm asked to find the derivative $y'(2)$. I saw the answer in Wolfram: $$ y'(x) = \frac{y (y x^y-x y^x \ln{y})}{x (-y x^y \ln{x}+x y^x+y)} $$ which gives $y'(2)=\frac{-1}{\ln{4}-3}$. I don't understand how to get there. When I try to derive it, after taking $\ln$ of both sides I get: $$ y'\ln{x} + \frac{y}{x} = \frac{y^x\ln{x} + 1}{y^x+y}y' $$ $$ y'\left[\frac{1 - y\ln{x}}{y^x+y}\right] = \frac{y}{x} $$ $$ y' = \frac{y^{x+1}+y^2}{x - xy\ln{x}} $$ A: By linearity of the derivative, we can differentiate each term individually and then carry out the implicit differentiation. Let $x^y=t$. Then $y\log{x}=\log t$, so $t(\frac{dy}{dx}\log x+\frac yx)=\frac{dt}{dx}$. For $y^x=u$: $x\log y=\log u$, so $u(\log y+\frac{x}{y}\frac{dy}{dx})=\frac{du}{dx}$. Since $x^y = y^x + y$, we have $\frac{dt}{dx}=\frac{du}{dx}+\frac{dy}{dx}$, i.e. $x^y(\frac{dy}{dx}\log x+\frac yx)=y^x(\log y+\frac{x}{y}\frac{dy}{dx})+\frac{dy}{dx}$. Substituting the point $(2,1)$: $2(\frac{dy}{dx}\log2+\frac12)=\log1+2\frac{dy}{dx}+\frac{dy}{dx}$, so $2\log2\frac{dy}{dx}+1=3\frac{dy}{dx}$, hence $\frac{dy}{dx}(3-\log4)=1$ and $\frac{-1}{\log4-3}=\frac{dy}{dx}$, which is the required result.
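As a quick cross-check, SymPy's idiff can differentiate the relation implicitly (a minimal sketch, assuming SymPy is available; F below encodes the original equation as F(x, y) = 0):

# Verify y'(2) = -1/(ln 4 - 3) for x**y = y**x + y at the point (2, 1).
from sympy import symbols, idiff, log, simplify

x, y = symbols('x y', positive=True)
F = x**y - y**x - y           # the relation F(x, y) = 0

dydx = idiff(F, y, x)         # dy/dx as an expression in x and y
val = dydx.subs({x: 2, y: 1})
print(simplify(val + 1/(log(4) - 3)))  # prints 0, confirming the answer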
Mid
[ 0.583333333333333, 28.875, 20.625 ]
jest.useFakeTimers()

beforeEach(() => {
  document.body.innerHTML = ''
})

test('sets the status', () => {
  const setA11yStatus = setup()
  setA11yStatus('hello')
  expect(document.body.firstChild).toMatchSnapshot()
})

test('replaces the status with a different one', () => {
  const setA11yStatus = setup()
  setA11yStatus('hello')
  setA11yStatus('goodbye')
  expect(document.body.firstChild).toMatchSnapshot()
})

test('does not add anything for an empty string', () => {
  const setA11yStatus = setup()
  setA11yStatus('')
  expect(document.body.firstChild).toMatchSnapshot()
})

test('escapes HTML', () => {
  const setA11yStatus = setup()
  setA11yStatus('<script>alert("!!!")</script>')
  expect(document.body.firstChild).toMatchSnapshot()
})

test('performs cleanup after a timeout', () => {
  const setA11yStatus = setup()
  setA11yStatus('hello')
  jest.runAllTimers()
  expect(document.body.firstChild).toMatchSnapshot()
})

// Re-require the module fresh for each test so module-level state is reset.
function setup() {
  jest.resetModules()
  return require('../set-a11y-status').default
}
Mid
[ 0.599531615925058, 32, 21.375 ]
This blog is not for the light-hearted or easily offended. If either one of those descriptions applies to you, i would suggest you start drinking before you read this blog. A sense of humor is suggested. If you don't have one, that sucks for you … find one and get a life! In which i discuss blogging competition I’ve never been a girl’s girl. Of course, I know women and hang out with them, but big groups of women have never been my thing. Instead, I’ve always had a best friend or, at the very least, a few really good friends. I’ve also never mixed and matched my friends, so I never hung out with one friend group. When I lived in New York I had different friends that I did different things with. For example, one friend might be someone I partied with, but I might have just hung out and had wine with another friend. So, yeah, this whole “sisterhood” thing isn’t something that I’m really familiar with. They started a sorority when I was in college, but by then I was entering my senior year and I had no desire to join it. I was on the swim team but, once again, there weren’t that many girls on the team and I mostly hung out with the boys. I’ve always been more comfortable with men. You knew where you stood with them. There weren’t any games or stabbing people in the back going on. With men, the competition is obvious because they’re either competing or beating the shit out of each other. If they don’t like you, it’s fairly obvious. Women are different. They can smile to your face and stab you in the back. They can pretend to be supportive and have an entirely different agenda. Which brings me to blogging. While there doesn’t seem to me to be much money in the blogosphere, there sure is a hell of a lot of competition to get what little there is. At least, that’s the only way that I can explain why some people are willing to climb all over other people to get more readership. When you first start blogging, one of the things you learn is that you should “read” other blogs and leave comments, and then they’ll go to your blog and read your posts; tit for tat, basically. I tried this for a short time, found it incredibly boring, and quit. It’s not that many “posts” weren’t interesting; it’s just that I didn’t care. It’s not a lack of time as it is for some people but very frankly just an incredible lack of desire. At this point in my life, I read WHAT I want and what’s interesting. Has that helped my blogging? No, it hasn’t. I discovered when I quit reading other blogs that they quit reading mine too. That’s ok. I don’t really have an agenda, so whatever. However, many women are working the blogging world in a major way. Guest posts, link ups, there are many ways to expand your readership. When you do a link up you are supposed to go look at the other links; it’s just common courtesy. If you go to the other links and leave a comment, how can that be interpreted as anything other than being supportive? Well, I know of at least one case where the person was “scolded” because it was felt she was trying to poach the readership of said link up. Poaching? In blogging? If it weren’t so ridiculous it would be hilarious. What have we come to when we start looking at comments as poaching and worrying about this person or that person “stealing” our readers? First, can’t people read more than one blog? And second, what about support? What about sharing?
What about hoping that somebody else is successful? Why does one person’s success only mean something if they “beat” someone else? Does it really matter if you have more followers than me? Followers can be bought and very frankly, it’s all a bunch of shit. Long ago I espoused that I wasn’t going to follow people just because they followed me. Just because you read my stuff doesn’t mean I want to read your stuff. It has nothing to do with whether I “Like” you or not. I can like you and still not want to read about your kids, or your life, or your views. I once pointed out to my children that society looked at winning and losing in the wrong way. If there are 10 people in a race and 1 person wins, are the other 9 losers? Not in my book, because sometimes it’s about a personal best. Sometimes it’s about personal ACHIEVEMENT! So, ladies out there, why don’t you just try to set your own personal goals and be happy and successful within that framework? If your goal is to be “bigger” or “better” than another blogger, you’re selling yourself short. Only by being yourself and trying to achieve within your own framework can you make yourself happy. The rest is bullshit. BULLSHIT I tell you. This is my opinion and I’m sticking to it. I can do that because my only goal is to be AUTHENTIC. “I can like you and still not want to read about your kids” <– that right there is my sentiment, EXACTLY. It's a crazy world, this blogosphere. Totally effing insane. Lady Estrogen recently posted..‘Cos I’m a model, ya know what I mean I’m a big fan of “following,” but it’s not so people will follow me back. It’s because I like seeing my face all over the blogosphere…I’m vain like that. Whatever. But I so get this! This is my pet peeve: follow, don’t follow, I don’t care. However, don’t UNFOLLOW me when you see that I’m not at your effin blog every time you post something new. To reiterate what Lady E quoted from you: “I can like you and still not want to read about your kids”! Great post and good for you for having the balls to write it. So many wouldn’t for fear of upsetting the blogger status quo…I don’t even know what that means, I just felt like using “status quo” in a sentence. I knew this was you just from the comment :-) Thanks for pointing me to this blog post – I completely agree – when I actively tried to get readers by going to other sites, I just ended up feeling inadequate about my own site. Now I just read the ones that make me happy… :-D I don’t always follow people who follow me, and I also follow people who don’t follow me. I certainly don’t beat myself up about followers or about commenting. I read what I like when I like, and often go through phases of preferring some blogs and then moving on to others. I think some people get a wee bit obsessed about their blogs. If that’s their thing, great, but not everyone is like that or cares what shit others do. Live and let live, as they say. I’m also with you on the women thing. I hate big groups of women, they can be so unpleasant and bitchy to each other and I don’t want to be involved in their tight-arse crap. Sarah recently posted..Looking back at my first ever post On the upside, the chances of our lives crossing had it not been for the blogosphere are slim to none … so thank you, blogosphere, for bringing me my sweet, yes, sweet but snarky friend.
As for the other bs, I read a variety of blogs because I really like them, no other reason … there is even a cloth diapering Mama I read because she is funny, and really the next person to need diapering in this house is probably my husband ;) By Word of Mouth Musings recently posted..The Perfect Crime Rock on, sista! I feel exactly the same way. I read what I want when I have the desire to do so. I really don’t care if you read my shit…that’s not why I write it. I write it for ME! And I know you do, too. That’s why you are a SuperGal! I hate women too, I mean bloggers, well not all bloggers, I love you….LOL But I’m with you, I’d love to have success blogging but my vision of success doesn’t include loads of money and “stalking” or a “following” This is a great post Lynn, I shared it everywhere so your readership will grow…lol Great post and I couldn’t agree more. I just want to keep it real and I know that some will not even look at my blog but I am okay with that!! I blog mostly for me but also for my readers who need to know that they are not alone. Charlene recently posted..Just a Thought for Pour Your Heart Out Followed you here from a link on Twitter. All I can really say is that just as in life, there are pockets of good in the blogosphere and pockets of negativity. It sounds as though you at one point experienced the negative part. It is a lot of work to read and follow and comment, and it’s especially a chore when no relationship exists between two bloggers except for that. But when there is a meaningful relationship based on concepts similar to a friendship: similar interests, trust, mutual respect, the two bloggers perfectly understand when one of them can’t stop by every day to comment. This also comes with a certain maturity level in dealing with folk on the Interwebs. Sometimes, we just gotta walk away from someone else’s drama. actually, i have never really experienced that stuff. This happened to a friend of mine. For me personally, i get plenty of hate but i truly don’t give a shit. After all, i’ve been pissing people off for 52 years and i really don’t see myself changing now. great post. i agree. life is too short to be hating on each other because of blogs. people just need to learn to laugh more and not take the little things so seriously. alaina recently posted..Wish List. I’m so glad you’ve said this. In my short time blogging again, I’ve witnessed some of the kinds of things you’ve mentioned and it just boggles my mind. I happen to read your blog not because I want you to read mine, but because I find yours interesting. I comment so that you know I’m here and I can give you a piece of my mind about what you’ve written. If you happen to like my response and then happen to find my blog, and happen to start reading it, great — but no expectations here. Love. Love. Love this post…and I more than 100% agree with you on every single point you make. Women terrify me, they always have. I have met some really awesome women in the blog world as well though, just like real life you just have to filter through the crap. i’ve met some great people too! I just hate all the stupid games to be honest. This didn’t actually happen to me but to someone i know. Melanie December 9, 2011 Great post. I am extremely blessed in that I let go of all the competitive, catty women and surround myself only with amazing women who like themselves. It has made my world a much better place to be.
I’m sure the blogosphere, much like everywhere else, has its share of “Hey, I need ALL the attention” folks. Don’t let it get you down. I think of that and think it’s the reason why, even though I want to start a blog, I haven’t. I have so much to share that even if it helps one person, it will be worth it. So you know what? Tomorrow I’m starting my blog, darnit. I got rid of Facebook, Twitter, and all of that stuff when I realized it caused me more stress than it was bringing me joy. If the blog does that, I’ll do the same with IT. Don’t let the icky folks get you down. Focus on the good ones. And make a ton of fun of the icky ones! :) well, the story wasn’t actually about me…i have a strict non-compete policy in place in regards to others. I was just commenting on the bullshit attitude of OTHER women. Thanks for reading and commenting… Well, I don’t come over here all the time, but I do read all your posts, I’m pretty sure. This one really hit home with me because I agree. I don’t compete for people’s following. If you follow, yeah. If you don’t, I don’t think about it much at all. I follow, read and comment when and where I want to and that’s probably pissed some people off. Oh well. I love blogging and seriously love many of the blogs and bloggers I’ve come across….but quite frankly, I’m just not interested in some of the subject matter. Probably because I’m 58 and my kids are grown and I’m a long-distance grandma so not much hands-on. I’m selfish and I’m gay (that’s why I don’t get many visitors, but if that’s why, they can kiss my ass). What I like is humor and intelligence. Keep me laughing and keep me informed and I’ll come back. That’s what I try to do over at my place, but with school and this effing job hunt, I can’t always do it. OK I’ll shut up now. I concur. That is all. Lynn, I’m a lot like you, not a girl’s girl (minus the humor, lol). I can’t handle the back-stabbing shit; the drama; the conniving, etc., and so would much rather deal with men, because at least you know where you stand! I’m competitive with myself and have goals for my blog but I don’t compare myself with others because I’ve made it my life’s ambition to be WHO I AM and people will like me (and what I write) or not. Pamela D Hart recently posted..An Organized Pack Rat & An Early Old Gift I started out with the tit for tat mentality. I followed/commented on their stuff and they would follow/comment on mine. 3 years later, I don’t do it and I can tell who my “real” online friends are. I’ve got a Twitter account I can barely use because it’s clogged with so much junk, bloggers promoting other people’s posts/blogs just because the other person tweeted theirs. What I want to see is stuff they are truly interested in, not some constant stream of never-ending marketing tweets. I’m all about sharing with my followers but I only share good stuff I find interesting now. It gives my readers insight into my interests and hopefully they’ll find some of it useful, interesting or funny like I do. But it seems to get them the paid gigs so I guess it works for them. Guess I’ll have to be satisfied with making beans on my blog but oh well! It is what it is! yeah…i really like when something INTERESTING goes by on Twitter. I like reading interesting posts…but i agree, people who are just “i’ll RT you if you RT me” aren’t for me. This blog isn’t a big blog and probably never will be but as long as it’s fun i’ll keep doing it. Jody here, Kiwi living in Oakland (yes!) Came through Sarah in France.
I’ve never read a post that addresses all these issues which I’ve thought about for so long. Good on you! One thing is that a lot of bloggers in the US are not terribly honest, it’s all baking cookies with their kids kind of stuff. I’ve thought about linking and having guest posts but then I feel quite exhausted. I see you have 51 comments though so you’re totally on the right track – how very refreshing! I am fairly new to blogging. Have had mine up for three months and love writing my crazy-ass stories. I have struggled with the whole concept of gaining readership. I don’t want to step on anyone’s toes, but at the same time, I feel like I put so much time and effort into my blog, I’d love to have more than 3 people reading it (who all happen to be illiterate and from a remote island off the coast of Fiji.) I love your message about just being genuine and authentic. And I do agree that people shouldn’t get all “you are cheating on me with another blog.” It’s nice for a change to read a post ABOUT blogging and all the shit it brings up. Gwendolyn Francis recently posted..Tits for Tots what are the ODDS of having 3 illiterate people from Fiji read your blog? I mean really? All i can say is have patience and …. well that’s all. I started out by harassing my friends. All that stat counting shit will drive you up the wall and .. if you think you’re funny, send in a guest post for “go ahead, amuse me” Sarah :) December 9, 2011 According to her most recent post, she’s pretty darn funny. I’ve even offered to do volunteer work for her fabulous charity. Check it out…it’s one of the best reasons I’ve found to fill your bra with money! i wanted to but all the ponies i have available are unridable for some reason. I don’t even understand it myself. and why is this not an email so i can copy and paste this into a post. Jesus…you’re screwing me up here. Ooops…forgot it was your birthday. One pony in the mail Sarah :) December 9, 2011 Wow, there are a lot of comments today!! It really was a good post…even if it wasn’t all about how you were wishing me a fabulous birthday! (Was that an attempt at poaching birthday greetings? Yes, yes it was.) The internet has its uses, and I’m glad that it has produced some hilarious emails between us. I’m also glad that it brought me to my next stalking victim: Sandra. She is mother truckin’ funny! However, drama for drama’s sake is simply ridiculi (that’s the official plural tense of “ridiculous”…when more than one ridiculous thing is happening at a time.) I’m feeling all ADD trying to acknowledge all of the witty comments you’re getting about this post, so I’m going to stop here. After, of course, I wish you as much success as the most successful person that you consider successful. And as far as being authentic…you are already more successful than most Tex-Mex restaurants…so congratulations! wait!! Hold the presses!! it’s your birthday? the ONLY reason i put up this post today was because i didn’t know it was anybody’s birthday. Boy did i fuck up!!! the emails are great…you are supplying me with blog posts even though you’re not a “blogger” sweet…and HAPPY BIRTHDAY Sarah :) December 9, 2011 You’re SO forgiven! Especially since by ignoring my birthday, you’ve managed to bring such a good message and get all of this awesome feedback. It’s a sacrifice I’m willing to make, as long as I get props for being so selfless and thoughtful of others. I’ll expect my trophy to arrive in the next 7-10 business days.
I’m one of the laid-back bloggers who doesn’t worry about whether people follow me or not. It is fun to have people to comment to, but I don’t stress that more people aren’t “discovering” me, because I write for myself. I, too, only read and comment on blogs that interest me or where I feel that I can make a contribution or a funny to a post. I don’t care if the person I follow comes to my blog, that’s not why I do it. But I do see other people who do this, and I think it’s a shame that they let themselves get stressed over it. If it’s stressful, why do it? I also find other blogs that I like through comments, so I usually visit blogs that are new to me, and if I like two or more of their posts, I keep coming back. Sometimes I weed out the ones who don’t blog more than once every few months, simply because it doesn’t behoove me to go back there every day hoping for some funny. Blogging has become a form of social life for me. I like getting to know others through their thoughts, and I don’t have to go to their house to do so. Some people are more honest this way, because they aren’t speaking to someone in person and can say what they really mean. I find the whole process rather fascinating. I haven’t checked in on your blog in a while. There. Guilty. I’m OK with it. You are OK with it. It’s life and it gets in the way. I used to be on top of it all…I walked away. Screw it. When things became ugly in my personal life…the bloggy world held my hand…when things leveled out and I refused to talk about the ugly…the bloggy world walked away. I’m not here to provide a trainwreck…so I walked away. I love what you say. I love how you say it. I will still visit you. I probably won’t comment. I will still read you though. I’m so glad you’re back…I saw your Twitter handle the other day and I thought about you. You DID have a sucky year. Some people like to be there to help people and others dump people when they’re down. Then, some people help you so they can feel better than you. It’s all pretty fucked up actually…so, I’m glad you survived. I’m glad you’re doing better and THANKS for taking the time to comment. Hope all goes better moving forward :) I’ve not seen a lot of reader-poaching (and am happy to have avoided it. who has time for that?) but what I do see, and this *does* piss me off, is IDEA poaching. It’s one thing to be inspired to write about something based on a post or comment you’ve read, or a thread going by on your Twitter stream, and then write about it. Totally cool. It happens all the time. But what’s uncool is to a) take a fellow blogger’s post and basically write the same post yourself, in your “voice”, or b) write about something and not just have the common courtesy to say, “hey, I was reading comments on this post about sheepherding on X blog, and it got me thinking about…” But I can only feel sorry for people that pull those types of shenanigans, because obviously, they can’t generate creative ideas of their own. And yeah, once you stop reading blogs, it’s amazing how many people stop reading yours. :) Thankfully, I feel like my close blogging friends and I have an unspoken thing – it’s all good if you don’t read my blog anymore, dude – our relationship is much deeper than being defined by leaving a comment. :) Hey! I totally agree with you about poaching ideas. I don’t actually think that readers are poachable either which is why I thought it was so ridiculous. I read your blog when I see a link that goes by that looks interesting.
For me, it’s all about the mood I’m in and what I see go by. I don’t even read my own stuff once I write it and often have to check back and see what’s posted that day (yeah, I’m THAT far ahead with posts) Our relationship is SO MUCH DEEPER…hahahaha…thanks for reading, commenting and tweeting this post…I’m always shocked when people respond :))) Lindsay December 9, 2011 And with that, I would like to start reading your blog. Because I like this. A lot. I am tired and perhaps a little loopy, because I am trying to figure out the “poaching readers” mentality. I would hope that the (extremely limited amount of) people who read my blog couldn’t be “poached”. Does it involve dart guns? I totally agree. I always see people with hundreds of followers and I’m jealous (kinda) but I do blog hops and visit others. If I visit and they catch my attention, I subscribe via email. I don’t understand how something that started out personal and for fun ended up being dog eat dog. Oh well, that’s life. I am happy with those who follow me and I am writing for myself. Your last line…”You should try it some time” is the only thing that sort of ticked me off. I HAVE tried to be authentic, and much like you, I don’t put up with the crap that goes on in the blogosphere. I write because I like to. I have a blog because it’s a great way to improve my writing. That’s about as big as it gets. Name * recently posted..Walking the Mayan Straight Line That’s okay, Lynn. I understand. I guess I didn’t feel like being thrown under a fast moving bus with the inauthentic bunch. I totally understand your feelings about women in groups…picking friends for different reasons…and finding men easier to commune with. I also find there are a lot of games being played in blogging. I once stopped following a blogger and immediately received an email wondering what he’d done wrong and why I’d cancelled my subscription to his blog. I didn’t like that one bit. I also have received emails wondering why I haven’t commented on their posts lately. Extremely annoying. I only read posts (and comment) if I think the writer has something to say. Sorry if I came off too strong. Would hate for you to incur the wrath of a rabid Annie Off Leash who is late on her distemper shot! Lynn, thanks for your feedback/advice to me a million comments ago re: being a new blogger and thanks for checking out my blog. If I get up the courage *nervous cough* I will do as you suggest and send you something as a potential guest post *anxious jaw clenching.* Anywho, sorry I was a po-hoe w/ Sarah and I hope you will still consider joining our great cause for the children Tits for Tots. All the best, GF I really like your non-BS style, Lynn, which is why I do read your blog…. occasionally, I admit (and I know it doesn’t bother you, yay). Life caught up with me and the tit for tat mentality that used to possess me has finally left town. Now, I just read and comment on blogs I like, and the rest, I read if I have the time. I don’t care about my stats at all and I don’t really care to expand my readership enough to poach readers or ideas. I know who my real bloggy friends/readers are, and I know that even if they don’t read/comment on every single post, it’s cool. And vice versa. I only do link ups/writing prompts if it interests me. And yes, I did host a huge link up recently with another blogger, but only because I loved the idea, not because I thought I’d get sponsors or more readers (I did not).
So all that is to say, I blog for me, I do the stuff I do online just because I like it, not because I’m trying to impress anyone. I do hate the cattiness I see, the one-upmanship, etc. It sucks, and I stay well out of it. Why can’t we all just get along? Alison@Mama Wants This recently posted..Memories Captured Recap and Winner of Canvas Press Photo Print! Hmm..I’m a fairly new blogger, and it’s interesting how many unwritten rules there are to blogging etiquette. I follow most people who follow me (or even just leave a comment, regardless of following), as long as I think I can relate to their blog a little. Then when they post new stuff, it shows up on my reader and I’ll know whether or not I want to click on it and read the whole post. To me, it’s not a matter of “I’ll only go to your blog if you comment on my post” tit-for-tat, but that if I don’t follow your blog, I will most definitely lose track of it in the blogosphere. I figure that if someone is cool enough to leave me a comment, I can at least check their stuff out. And if they like my humor, there’s a good chance I’ll like theirs. That being said, if someone tries to lay pressure on me to check their stuff out or else they’ll stop reading my blog, well they can go eff themselves. The important aspect of it to me is that there are no EXPECTATIONS that i MUST do something (comment back, follow back, etc). Mayor Gia recently posted..Christmas Polar Bear Well, I occasionally check out people who comment depending on my mood. I just figure that reading blogs is a choice. I try to respond to comments but beyond that, no guarantees. I just don’t feel like reading about a lot of stuff. I caught this link from Absolutely Narcissism and I love the post. I did all of that too for a while until blogging as a creative outlet started feeling like a JOB. An unpaid one! I am picking up what you’re putting down, sister. Still, I think we all use CommentLuv and post our links hoping that someone will happen upon us, like happened today. :-) Valid point on CommentLuv. I got it because I read that people like you to have it on your blog, and when I do read an interesting comment I do like to check further. Thanks for the comment, ms. Onion! Lisa Victoria December 13, 2011 Well said. As are many of your meanderings. Don’t bother looking at my blog, I like you anyway. Not really trying to “gain readership” WTF is that, I mean really?? It sure doesn’t line my pockets with cash…I only want to make beautiful things even if nobody cares. Thanks for making your beautiful things too, and for putting them out there. :) LV
Low
[ 0.48846960167714804, 29.125, 30.5 ]
487 F.2d 1394 Alexander v. Saks 72-2006 UNITED STATES COURT OF APPEALS Third Circuit 11/9/73 1 M.D.Pa. AFFIRMED
Low
[ 0.43764172335600904, 24.125, 31 ]
//
// Licensed under the terms in License.txt
//
// Copyright 2010 Allen Ding. All rights reserved.
//

#import "KiwiConfiguration.h"

@interface KWDeviceInfo : NSObject

#pragma mark - Getting the Device Type

+ (BOOL)isSimulator;
+ (BOOL)isPhysical;

@end
Low
[ 0.50632911392405, 30, 29.25 ]
When technology became language: the origins of the linguistic conception of computer programming, 1950-1960. Language is one of the central metaphors around which the discipline of computer science has been built. The language metaphor entered modern computing as part of a cybernetic discourse, but during the second half of the 1950s acquired a more abstract meaning, closely related to the formal languages of logic and linguistics. The article argues that this transformation was related to the appearance of the commercial computer in the mid-1950s. Managers of computing installations and specialists on computer programming in academic computer centers, confronted with an increasing variety of machines, called for the creation of "common" or "universal languages" to enable the migration of computer code from machine to machine. Finally, the article shows how the idea of a universal language was a decisive step in the emergence of programming languages, in the recognition of computer programming as a proper field of knowledge, and eventually in the way we think of the computer.
High
[ 0.674008810572687, 38.25, 18.5 ]
Advice Sought for a High Performance New Construction in Houston My wife and I are in the early phases of designing a new house in Houston. We have started working with an architect, and we hope to engage a builder here soon. I've spent about three years reading and getting educated on building practices and energy efficiency (e.g., I have read a lot of what Joe Lstiburek has written), and I would appreciate getting some advice from the pros here before we get too much farther along with the design, and to help educate me about how to know when I've got a great HVAC contractor vs. when I've got someone who knows enough to put the right words on the bid, but whose team does the same old thing when it comes down to installation. Lots of glass, and, unfortunately, a lot of it west-facing (SHGC will be <0.25 for these, hopefully <0.20) due to lot orientation and building setbacks. ~3,000 sf on 2 levels - public spaces downstairs, private spaces upstairs. We will most likely want at least 4 zones. We will probably have a high reflectivity and high emissivity roof, but that is probably for a different discussion (e.g., energy savings vs. damage resistance/durability vs. lifecycle costs). DHW will most likely be gas - either tankless or something like an AO Smith Vertex 100. Man J and Man D will be a must. We will have a big challenge with heat gain from the western afternoon summer sun (overhangs only go so far and no trees on that side yet), but the rest of the day should be fairly moderate. As I understand it, we will have two basic paths to consider for HVAC, each with advantages and disadvantages: Air Conditioner and Gas Furnace, or Heat Pump (little need for 'emergency' heat given the market and our climate). The Air Conditioner + Furnace option will be standard construction for a lot of Houston homes, so it should be relatively cheap. However, the gas furnace isn't free, so the heat pump might have all-in costs that are lower, plus a heat pump saves me a roof/wall penetration. Further complicating the picture is the need for dehumidification and fresh air intake. We can go with an ERV or a damper for the air exchange, but the more interesting question (to me) is whether a VRF or other continuously variable system will eliminate the need for a dedicated dehumidifier, or whether we would still want a dehumidifier even with a system like a ducted/split Mitsu VRF, Carrier Greenspeed or Lennox XP25. For the t-stat, I'd like to be able to change the temperature setpoints from a phone/tablet while we are in the house, rather than have to get out of bed, but I also hate the idea of having to pay a subscription charge to do this. I'm willing to pay for performance (comfort, control), but I don't want to overpay, and I am especially concerned about paying for anything oversized. Since some of these options are manufacturer-specific, that will need to inform which contractors we ultimately look at when we get to that point.
However, for now, I want to be sure that we are asking the right questions in the right ways. My initial thoughts, and I may have more to say in detail later: work with your architect right now to do something about those west-facing windows. You can't help what direction they face given your lot and setback constraints, but you can do something about the quantity and quality of each opening. I would highly recommend some form of external shading for each window. There are even electrically operable models that can be raised and lowered without needing to go outside to do it. And opaque enough to not kill all daylighting benefits. Drawbacks are first cost and potential multiple points of failure over the entire west-facing facade. If you design the house to incorporate ducts within the conditioned space, cool roof choices are still beneficial, but not pertaining to duct heat gain. That said, if you plan to foam the roof deck inside the attic, DEFINITELY go with a cool roof. You will get the best of both worlds in one fell swoop regarding incredibly reduced heat transfer into the home from the attic. I have a cool (reflective) roof on my 53 year old house with a conventionally ventilated attic. I can't say enough how much it alone has improved our indoor comfort levels and HVAC performance since it was installed last year. Our thermostat sets the temp up to 78 at 8 AM and drops it to 75 around 5 PM. Recent monitoring of our smart meter's historic data shows that the a/c is hardly running at all during that time, yet the house never feels warm or stuffy during that time. And when the a/c does run, it's a two stage unit that uses much less juice to pull down and keep the house at 75 in the evening than our old system did, even with the cool roof. You could go dual fuel and use the Vertex for hot water heat with a hydronic coil. Heating requirements are minimal where you're at. Almost any furnace will be oversized on a high performance home. Seal and insulate it well and you could heat 3,000 sqft with <20k BTUs. Why bother with a furnace? Efficiency ratings are higher with an air handler instead of a furnace. Greenspeed is also a nice option and would be pretty slick with a hot water coil so you can get the most out of it. Another solution could be using radiant floor heat downstairs, and having 2 systems: upstairs a heat pump with either electric heat or, again, hot water. Just some options to toss out there. We just installed a medium-sized commercial 200k BTU 100 gallon Vertex at my work. Pretty nice unit overall. Great info for troubleshooting that shows you all the interlocks, unit status and tank temperatures. My initial thoughts, and I may have more to say in detail later: work with your architect right now to do something about those west-facing windows. You can't help what direction they face given your lot and setback constraints, but you can do something about the quantity and quality of each opening. Quality will be as high as I can afford, but the quantity is largely fixed and is coming from us (my wife wants as much natural daylight as we can get). I would highly recommend some form of external shading for each window. There are even electrically operable models that can be raised and lowered without needing to go outside to do it. And opaque enough to not kill all daylighting benefits. Drawbacks are first cost and potential multiple points of failure over the entire west-facing facade. We are working on the shading aspects.
There is still going to be a big heat gain (comparatively) between about 4pm and 7pm during the summer, which will end up as a light load during the day followed by a fast ramp up and then a fast fall-off once the sun goes down. The average load will be minimal. The peak load will be high. This is what is driving the interest in the more advanced compressors. If you design the house to incorporate ducts within the conditioned space, cool roof choices are still beneficial, but not pertaining to duct heat gain. That said, if you plan to foam the roof deck inside the attic, DEFINITELY go with a cool roof. You will get the best of both worlds in one fell swoop regarding incredibly reduced heat transfer into the home from the attic. I have a cool (reflective) roof on my 53 year old house with a conventionally ventilated attic. I can't say enough how much it alone has improved our indoor comfort levels and HVAC performance since it was installed last year. Our thermostat sets the temp up to 78 at 8 AM and drops it to 75 around 5 PM. Recent monitoring of our smart meter's historic data shows that the a/c is hardly running at all during that time, yet the house never feels warm or stuffy during that time. And when the a/c does run, it's a two stage unit that uses much less juice to pull down and keep the house at 75 in the evening than our old system did, even with the cool roof. All good stuff. We will minimize spray foam, but the house is going to be largely as air tight as we can get it. Ducts in the conditioned space are a must, as is using as much sheet metal for the ductwork as practical. Given the high expected peak load, are we going to kill a conventional A/C unit with short-cycling? You could go dual fuel and use the Vertex for hot water heat with a hydronic coil. Heating requirements are minimal where you're at. Almost any furnace will be oversized on a high performance home. Seal and insulate it well and you could heat 3,000 sqft with <20k BTUs. Why bother with a furnace? Efficiency ratings are higher with an air handler instead of a furnace. Greenspeed is also a nice option and would be pretty slick with a hot water coil so you can get the most out of it. Another solution could be using radiant floor heat downstairs, and having 2 systems: upstairs a heat pump with either electric heat or, again, hot water. Just some options to toss out there. We just installed a medium-sized commercial 200k BTU 100 gallon Vertex at my work. Pretty nice unit overall. Great info for troubleshooting that shows you all the interlocks, unit status and tank temperatures. Hydronic radiant is out. No one in Houston seems to do it, and the heat transfer is too slow (probably why no one does it). The idea of using the water heater tied to the air handler is a good one. I assume that I'd need a device of some sort that would activate the hot water loop? Also, would the air handler be a standard one or some add-on module? FIRST, APPROXIMATE MANUAL J Originally Posted by Bear_in_HOU .... my wife wants as much natural daylight as we can get. We are working on the shading aspects. There is still going to be a big heat gain (comparatively) between about 4pm and 7pm during the summer, which will end up as a light load during the day followed by a fast ramp up and then a fast fall-off once the sun goes down. We will minimize spray foam, but the house is going to be largely as air tight as we can get it. Given the high expected peak load, are we going to kill a conventional A/C unit with short-cycling?
High peak HEAT GAIN may not even be a significant issue. MAYTAG or Carrier Greenspeed obviously deserve consideration (although that may lead you to one unit with several zones). There is a point of diminishing return via increasing glazing surface area to increase natural daylighting. I would advise working with your architect to model interior light levels as window sizes are tweaked. Computer models exist that can do this; your architect may have them or have access to it. I have lived north of you in DFW most of my life. Cooling season in Texas coincides with glary, hot sunshine. I do not like being inside a house awash in glary sunlight on a hot Texas afternoon. Might be nice in winter, as it's not so glary then due to the lower sun angle, but forget it in summer. Give me light from a shaded window any day over that. Cooling and dehumidification are your primary design criteria for Houston. They should figure highly not only in HVAC design choices, but also in the building itself. You are already way ahead of most people in Houston who contemplate building a home from scratch. The devil is in the details. As for heating domestic water with the heat pump, I would weigh the first cost of the equipment required to do that vs. a natural gas water heater. First cost and operating cost of the latter may be lower. Any water tank heated by a heat pump will require auxiliary heat when the heat pump does not run much, unless it can run just for heating water. I have tons of daylight. You can offset it by adding mass. Get tile, stone and as much concrete as you can into the design. Stucco on exterior. Insulate the concrete floor slab. I always thought of using concrete up to the bottom of the window sills at least... Then you don't have to form openings. Direct sun is your enemy. Add natural shading with deep overhangs. That adds to roof cost, but a huge payback. As near as I can tell from the modeling that I've done, it should be my only major design issue. The rest will be a simple matter of execution... MAYTAG or Carrier Greenspeed obviously deserve consideration (although that may lead you to one unit with several zones). Houston ~3,000 SF on 2 levels House dimensions: 52 x 30? Closer to 20x80, at least on the ground floor. The upper floor will be a bit wider (cantilevered) to create both a capillary break and to create additional overhang for the ground floor where a wall of windows faces west. T-stat set Winter 72'F ___ Summer 76'F __? We are typically 72-ish, but we live in a 1930 bungalow currently. I have no clue what it will be like when we have actual humidity control. There is a point of diminishing return via increasing glazing surface area to increase natural daylighting. I would advise working with your architect to model interior light levels as window sizes are tweaked. Computer models exist that can do this; your architect may have them or have access to it. SWMBO disagrees. I'm mostly down to the mechanicals in trying to understand what the extent of the trade-offs are. I have lived north of you in DFW most of my life. Cooling season in Texas coincides with glary, hot sunshine. I do not like being inside a house awash in glary sunlight on a hot Texas afternoon. Might be nice in winter, as it's not so glary then due to the lower sun angle, but forget it in summer. Give me light from a shaded window any day over that. I'll have 3 - 4 hours a day during three months of the year that will be problematic until the shade trees get big enough.
Cooling and dehumidification are your primary design criteria for Houston. They should figure highly not only in HVAC design choices, but also in the building itself. Agree highly, and it is. I am mostly down into the weeds of comparing specific parts so that I can ensure that we ask the right questions for the HVAC contractor. I don't want to get locked in to a Trane guy when what we really wanted was a Mitsu VRF (or a Lennox, or a Carrier). I'd also like to know, going into the point where we begin bidding, what the relative tradeoffs are so that I'm not spending a ton of extra money for a bad solution. You are already way ahead of most people in Houston who contemplate building a home from scratch. The devil is in the details. As for heating domestic water with the heat pump, I would weigh the first cost of the equipment required to do that vs. a natural gas water heater. First cost and operating cost of the latter may be lower. Any water tank heated by a heat pump will require auxiliary heat when the heat pump does not run much, unless it can run just for heating water. Thanks! I think the solution proposed was to use a gas-fired water heater as a supplementary heat source attached to the air handler, rather than using a heat pump water heater. Here's how the west window matter appears to me, and perhaps Designer Dan...you are putting forth huge effort to make the house energy efficient and comfortable, perhaps even "green" to some extent, but then shooting a hole in your foot with what amounts to a giant solar collector disguised as low-e/high-SHGC glazing, all because someone is worried there won't be enough light in the house. I know, I know...the SWMBO factor. If you can't find mitigation there, I would not then wait for shade trees to mature. Do something structurally to shade the west facade, such as a pergola or trellis. Get it incorporated into the design so maybe it can get financed with the construction.
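To put rough numbers on the west-glazing spike discussed above, here is a back-of-envelope Python sketch (every value below is an assumption for illustration, not a Manual J result) of the peak solar gain q = A x SHGC x I through west glass:

# Back-of-envelope peak solar gain through west-facing glazing.
# All inputs are illustrative assumptions, not design values.
WEST_GLASS_AREA_FT2 = 300        # assumed west-facing glass area
SHGC = 0.25                      # upper end of the OP's stated target
PEAK_IRRADIANCE_BTUH_FT2 = 200   # rough late-afternoon sun on a west wall

peak_gain_btuh = WEST_GLASS_AREA_FT2 * SHGC * PEAK_IRRADIANCE_BTUH_FT2
print(f"Peak solar gain: {peak_gain_btuh:,.0f} BTU/h "
      f"(~{peak_gain_btuh / 12000:.1f} tons of cooling)")

At these assumed numbers the glass alone adds roughly 15,000 BTU/h (about 1.25 tons) for a few late-afternoon hours, which is exactly the light-load-then-spike profile that favors a modulating compressor over a single-stage unit.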
Mid
[ 0.6311881188118811, 31.875, 18.625 ]
Q: Getting the Nth instance of an element I have a column filled with data that has a path. I'd like to get the last element in the path, the second last element, and the first element. For example, for the following data: \Product\Release\Iteration \Product\Folder1\Folder2\Anotherfolder\Release2\Iteration5 \Product \Product\Somefolder\Release3\Iteration5 I'd like to get the following in cells In cell B1: "Product", cell C1: "Release", cell D1: "Iteration" In cell B2: "Product", cell C2: "Release2", cell D2: "Iteration5" In cell B3: "Product", cell C3: blank, cell D3: blank In cell B4: "Product", cell C4: "Release3", cell D4: "Iteration5" Getting the first and the last component is easy. I'm mostly just struggling with getting the second to last component (column C in the example above). A: In B1 and copied down: =TRIM(MID(SUBSTITUTE(A1,"\",REPT(" ",99)),99,99)) In C1 and copied down: =IF(LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))=2,TRIM(RIGHT(SUBSTITUTE(A1,"\",REPT(" ",99)),99)),IF(LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))>2,TRIM(LEFT(RIGHT(SUBSTITUTE(A1,"\",REPT(" ",99)),198),99)),"")) In D1 and copied down: =IF(OR(LEN(A1)-LEN(SUBSTITUTE(A1,"\",""))={1,2}),"",TRIM(RIGHT(SUBSTITUTE(A1,"\",REPT(" ",99)),99)))
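For comparison, here is a small Python sketch of the same split-and-pick logic (a hypothetical helper, not part of the accepted answer; note that it handles a path with exactly two components differently from the formulas above, a case the example data does not cover):

# Split each path on "\" and pick the first, second-to-last, and last parts.
paths = [
    r"\Product\Release\Iteration",
    r"\Product\Folder1\Folder2\Anotherfolder\Release2\Iteration5",
    r"\Product",
    r"\Product\Somefolder\Release3\Iteration5",
]

for p in paths:
    parts = p.strip("\\").split("\\")
    first = parts[0]                                    # column B
    second_last = parts[-2] if len(parts) > 2 else ""   # column C
    last = parts[-1] if len(parts) > 1 else ""          # column D
    print(first, second_last, last)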
High
[ 0.6804123711340201, 33, 15.5 ]
((c**(-1)*c)**19*(c*c**(1/3))/c**(-2/21))/((c*c**(-1))**(-2/61)*((c*c*c**1*c)/c)/(c/(c/((c*c*c**(-1/6)*c)/c)))) assuming c is positive. c**(11/42) Simplify ((w/(w*w**(-1/10)*w)*w)/(w/w**(-2/43)*w))**(-1/7)/((w**2*w**(-4/5)/w)/(w/w**(-2/7))**(-1/28)) assuming w is positive. w**(271/8428) Simplify ((f/(f**(-27)/f)*(f*f*f**(-6/11)*f)/f)**(4/11))**(-6/7) assuming f is positive. f**(-8040/847) Simplify (((a/(a*a*a**(8/11))*a*a)/(((a**(-8)*a)/a)/a))/(a**(2/41)/a*a)**(1/45))**(25/4) assuming a is positive. a**(235210/4059) Simplify (((y**(-2))**(-4/7)/(((y/(y*y**(-4/7))*y)/y)/(y*y**(-5)*y)))**(-36))**(-26) assuming y is positive. y**(-15912/7) Simplify ((a/(a/(a*(a*a**(-2/5))/a)*a))**26*(a**2)**8)/(a**(-8)*(a**(-2/29)/a)/a*(a**(-2/7)*a)**35) assuming a is positive. a**(-1353/145) Simplify (((j**(-46)/j*j*j)/(j*j**33*j*j))/(j**9*j/((j*j**(9/4))/j)))**(-2/3) assuming j is positive. j**(355/6) Simplify ((g/(g**(-3/4)/g))/(g**7*g))**(-25/4)/((g**(2/7))**(1/17)/(g**(-3/2)/g**3)) assuming g is positive. g**(53875/1904) Simplify ((n*n*n**(1/5))/(n*n**(1/4)*n))/(n**(3/7)*n**6)*(n**(-2/5)*n)**(-4/13)/(n**1)**(-4) assuming n is positive. n**(-4847/1820) Simplify ((m**(2/3)*m)/m)**(-2/3)*m/(((m*m**(-2/7)/m)/m)/m)*m*m*m/m**(4/7)*m*(m**(-3)/m**(-2/15))/(m**(-7)/(m*m**(-2/3))) assuming m is positive. m**(3382/315) Simplify ((p**(2/3))**(-27))**21*((p*(p/(p/(p/((p*p*p**(6/5))/p))))/p)/(p/(p/p**1)))**(-1/25) assuming p is positive. p**(-47239/125) Simplify (((s**4/s)/s**1)/(s**4*s*s**(2/15)*s))/((s*s**3)/s**8)**(2/23) assuming s is positive. s**(-1306/345) Simplify (f*f**(-12/11))**(-19)*(f/(f/((f**(-16)*f*f)/f)))**38 assuming f is positive. f**(-6251/11) Simplify ((d**(2/23)*d**(1/4))/(d**(-3)*d*d**(-4)))/(((d*d**(-1/4)*d)/d*d*d*d/(d/d**(-2/3))*d)/(d**(1/2)*d**(2/17)/d)) assuming d is positive. d**(3368/1173) Simplify (c**(-1/4)*c)/c**(1/4)*(c**(-1/4))**(-5/17)*(c**(-1))**(-6)*(c/c**6*c)/c**(-2) assuming c is positive. c**(311/68) Simplify ((u**1/u)**24/(u**1*u/((u*(u/(u/(u/(u*u/(u**(-3)*u))))*u)/u)/u)))/(u**(1/2)/(u/(u**3/u)))**44 assuming u is positive. u**(-71) Simplify (((q*q**2)**(-1/81)*(q**(-1))**(1/11))**(25/6))**(-43) assuming q is positive. q**(20425/891) Simplify (((j*j**(-4))/(j/(j**12/j*j))*(j*j**(1/4))/(j/((j/j**(-2/13)*j)/j*j*j)))**(4/27))**37 assuming j is positive. j**(21941/351) Simplify ((((i/(i/(i**(-2/7)*i)))/i)/i*i*i)**31)**(4/3)/(i**(-2/9)/(i*i**(-5/3)/i)*i**(-3/8)/i**(3/5)) assuming i is positive. i**(73217/2520) Simplify (((q**1/q*q*q)**(-47)*(q/(q/(q*q**1)))**(-22))**43)**(-6) assuming q is positive. q**35604 Simplify (x**(-2/3)*x)**50*(x/x**(2/7))/(x/(x*x/x**(1/10)))*(x/x**2)**(1/25)/((x*(x*(x*x**(-5/2))/x)/x)/(x**(2/5)*x)) assuming x is positive. x**(11099/525) Simplify v**5*v*v*v/(v*v**(-8))*(v/v**(-6))/(v*v/(v/(v/((v*v/v**(-5))/v)))*v)*(v*(v**(-1/21)*v*v)/v*v**(-1/7))**15 assuming v is positive. v**(365/7) Simplify m/(m**2*m)*m*m**7/m*m**(-5)*m**(1/4)/m*(m**(-2/3)*m**(-6)*m)**(-14/13) assuming m is positive. m**(835/156) Simplify (d**(-1/5)/(d*d**(-3/2)))**(-45)/((d*d**5*d*d*d**4)/((d/d**8)/d*d**(2/5))) assuming d is positive. d**(-331/10) Simplify ((p/(p*(p**(1/5)*p)/p))/p**(4/3))**(-32)*(p/p**3)/(((p**(1/7)/p)/p)/p)*(p/(p*p**(-1/11)))/(p*p/(p/(p*p**8/p*p))) assuming p is positive. p**(46217/1155) Simplify (((l**10/l*l*l)/l**(2/9)*l**14/(l**(1/3)/l))**(-18))**(-17) assuming l is positive. l**7786 Simplify ((g**(-3/4))**14)**0*((g/g**(-6))/((g/g**(-3))/g))**29 assuming g is positive. 
g**116 Simplify ((w*w*w/((w**(-1)*w)/w))**15/(w/(w*w**(1/3))*w*w/w**16*w*w))**(-35) assuming w is positive. w**(-7595/3) Simplify ((q*q/q**(-2/15))/(q*q**(2/23)*q)*(q/(q*(q**0*q)/q))**8)/((q**0)**(-2/9)/(q/((q/q**(-7))/q)*q/(q/(q*q**(-5))))) assuming q is positive. q**(-3434/345) Simplify (((t**1*t*t*t*t**(-2/9))**(-40))**(-4/3))**(-4/7) assuming t is positive. t**(-21760/189) Simplify (z**(-11/5)*z**17*(z**(-18)*z)/(z*(z**23*z*z)/z))**24 assuming z is positive. z**(-3264/5) Simplify u/u**(-2)*u**7*u*u*u/u**(3/11)*u/(u**(-4/3)/u)*(u**3*u**(-7))**(-1/6) assuming u is positive. u**(184/11) Simplify ((h*h/((h*h**(-2/9))/h))**(19/4))**11*(h*h**0)**6*h**(-2/5)*h**(-1/5)/h assuming h is positive. h**(5423/45) Simplify ((v/(v*v/v**(-1/4)*v*v))/v*v)**48/(v**(3/8)*v**8)*((v**(1/5)*v)/v)/v**4*((v**(-2)*v)/v)**(-22) assuming v is positive. v**(-4967/40) Simplify f/f**(-4/3)*f/(f*f**(-6/5)*f)*f*f**(-8)*f*f*f*f**(2/17)*((f*f*f**(-1/2))**(1/12))**(-19) assuming f is positive. f**(-7597/2040) Simplify (((x*x/(x*x*x/x**(3/2))*x)**(10/11)/(x/(x*x**2*x*x*x*x)*x**(-2/5)))**(7/3))**(2/7) assuming x is positive. x**(854/165) Simplify (((x**(16/5)*x*x**(-2/33))**(2/31))**46)**(-28) assuming x is positive. x**(-1759408/5115) Simplify ((((a*a**(-3/4)*a)/a)/a*a)**(-4/21)*a**15/(a/a**(-20)))**(-9) assuming a is positive. a**(381/7) Simplify (f**4/(((f*f*f**(-6/13))/f)/f))**(36/5)/((f**(-9)/(f*f**(2/15)))/(f**3*f/(f/f**6))) assuming f is positive. f**(1999/39) Simplify (p**(-2)/(p*p/(p*(p**(2/7)/p*p)/p))*p*p**(-5/8)*p**(2/23))**13 assuming p is positive. p**(-54457/1288) Simplify ((g**5*g)/g**(-2/3)*(((g**(2/7)*g)/g)/g)**15)/(g**14/(g*g/((g/(g/(g*g**(-2/3))))/g)*g*g))**(-1/18) assuming g is positive. g**(-667/189) Simplify (f*f**(1/9))/f**(-3/8)*(f**2/f*f)**32*(f/(f**(-5)*f))/(f*f*(f/((f*f**2*f)/f))/f*f*f)*f**8*f/f**3*f assuming f is positive. f**(5507/72) Simplify (((d**(-2/19)*d/d**(1/6))/((d**(-3)*d)/d*(d**(-1/3)*d)/d*d))**(-11))**(5/6) assuming d is positive. d**(-19195/684) Simplify (s/(s*s*(s/(s/(s*(s/s**34)/s)))/s)*s)/s*s**(-22)/s*s**(1/9)*s**12 assuming s is positive. s**(199/9) Simplify ((n**3*n**(-3/2)*n)/(n/n**2)**(-12))/((n**(-3)/n)**(-38))**(5/6) assuming n is positive. n**(-817/6) Simplify d**(1/25)/d*d**(1/4)*(d**5)**22 assuming d is positive. d**(10929/100) Simplify (y**(-1/3)/y**7)**(2/53)/((y**(2/3))**(-3/8)*(y**(-1/2))**(-42)) assuming y is positive. y**(-13373/636) Simplify (g**0/g*g/(g/(g/(g*g**(-2/19)/g)*g))*g)/(g**(-2)*g**(-2/3))*(g**(-2/3))**7*(g*g*g*g/(g**(-3)*g))/((g*g/(g*g/g**(-4)))/g) assuming g is positive. g**(211/19) Simplify (m/m**(-2/3))**(12/19)/(m*(m**(1/3)/m)/m)**(-1/33)*((m*m**1*m)/m)**(-3)*(m/(m/m**(6/5)))/m**6 assuming m is positive. m**(-91864/9405) Simplify (v*v**(3/2))/(v*v**(-7))*(v/(v/(v*v**(-4/3))))/(v/(v*(v/(v/v**(-8)*v))/v))*((v/(v**2/v*v))**(2/19))**(15/7) assuming v is positive. v**(-1643/798) Simplify (j*(((j*j*j**(-5)/j)/j)/j)/j*j*j*j/(j/(j*j**(-5)))*(j*j*j**3/j)/(j*j**(-1)))/(j**(-2/7)/(j*j**(2/13)/j))**(-5/13) assuming j is positive. j**(-4932/1183) Simplify (m**(26/7)*m)/(m/(m**(-2/49)/m))*m*m**(-39)*m**(-2/145)/m assuming m is positive. m**(-258198/7105) Simplify (((q/((q/q**13)/q))/(q/q**(-6)))/(q*q**5*q*q)**(-45))**11 assuming q is positive. q**4037 Simplify ((o**(2/5))**(-2))**10/((o**(2/3)/(o*o**7*o))/(o/o**(-2)*o**(3/4))) assuming o is positive. o**(49/12) Simplify ((m**2)**(21/8)/(m*m*m/(m*m/((m/((m/(m*m/((m*m**4)/m)))/m))/m*m)))**18)**(4/9) assuming m is positive. 
m**(7/3) Simplify (m/(m*m**(-1)*m*m))**(-6)*m**6*m**(-3)*m*((m/(m/(m/(m/m**(-2)*m*m))))**43)**(5/13) assuming m is positive. m**(-730/13) Simplify (p**(-3/10)/p**(-23))/(p**(16/11)/((p/(p*p**(-39)))/p)) assuming p is positive. p**(6517/110) Simplify ((u**(-2/19)*u**(8/7))/(((u**3*u)/u)/u*u*u**(-4/9)*u))**44 assuming u is positive. u**(-132616/1197) Simplify ((m**46/m)/m*m/(m*m/((m/m**12)/m)))/(m/(m**(-2/11)/m))**14 assuming m is positive. m**(5/11) Simplify ((t/(t*t/(t/(t**(-1/4)/t)*t)))**(2/23))**25*(t**(-1/5)*t)/t**5*(t*((t/t**(2/23))/t)/t)/(((t*t/(t*t**6))/t)/t) assuming t is positive. t**(1749/230) Simplify (s*(s**(-10)*s)/s*s*s/s**(1/4)*s**(-2)*s**(-1/3)*s)**24 assuming s is positive. s**(-206) Simplify (q/(q**(-11/4)/q))**(2/87)/(q**(-13/5)*q*q/(q/q**(-2/11))) assuming q is positive. q**(18097/9570) Simplify (b**(-2/9)*b*b/(b/b**(2/7)))**(-28)/((b**0*b)**(-42)*((b**0*b)/b)**(-31/3)) assuming b is positive. b**(110/9) Simplify ((d*d**(-2/13)*d*d*d)/d**(-3/4))**(-34)/((((d*d*d**(1/3))/d*d)/d)**(-3/2))**(-39) assuming d is positive. d**(-6091/26) Simplify ((r**(-1/2))**(-48)*(r**1/r)**(-5/8))/((r**(3/2)/r**(-2))/(r**5/(r**(-1/5)*r))) assuming r is positive. r**(247/10) Simplify (w/w**(-1/6)*w**(-2/9))**(-12)*((w/(w*w**(2/21)))/w*w**2)/(w*w/((w*w/w**(-1/3
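One way to sanity-check answers of this kind is with symbolic algebra. A minimal sketch, assuming SymPy is available, verifying one of the simpler items above:

```python
# Check one simplification from above: for positive d,
# d**(1/25)/d * d**(1/4) * (d**5)**22 should equal d**(10929/100).
from sympy import symbols, Rational

d = symbols("d", positive=True)
expr = d**Rational(1, 25) / d * d**Rational(1, 4) * (d**5)**22
print(expr)  # -> d**(10929/100), matching the stated answer
```

Declaring the symbol positive mirrors the "assuming d is positive" condition in the exercises, under which all the power rules used apply unconditionally.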
Low
[ 0.49771689497716803, 27.25, 27.5 ]
Kaunisto KM, Roslin T, Sääksjärvi IE, Vesterinen EJ. Pellets of proof: First glimpse of the dietary composition of adult odonates as revealed by metabarcoding of feces. Ecol Evol. 2017;7:8588--8598. <https://doi.org/10.1002/ece3.3404>

1. INTRODUCTION {#ece33404-sec-0001}
===============

Recent advances in molecular techniques have opened up new ways for identifying prey species from fecal samples. In particular, they allow us to detect trophic links involving taxa whose habits prevent efficient observation of direct feeding events (Clare, [2014](#ece33404-bib-0013){ref-type="ref"}; Roslin & Majaneva, [2016](#ece33404-bib-0050){ref-type="ref"}). Over the last few decades, ecologists have increasingly applied molecular tools to describing the diet of insectivore and even sanguinivore (blood‐feeding) mammals, spiders, birds, and many more taxa (Bobrowiec, Lemes, & Gribel, [2015](#ece33404-bib-0008){ref-type="ref"}; Deagle et al., [2005](#ece33404-bib-0021){ref-type="ref"}; Pompanon et al., [2012](#ece33404-bib-0049){ref-type="ref"}; Roslin & Majaneva, [2016](#ece33404-bib-0050){ref-type="ref"}; Symondson, [2002](#ece33404-bib-0058){ref-type="ref"}). In this context, a specific methodological challenge emerges for generalist insectivores, where individual gut contents may contain many different prey items---as in such cases, the information content to be extracted from the assemblage of partly degraded DNA is much more complex than for a specialist predator (with solutions offered by e.g., Kruger, Clare, Symondson, Keiss, & Petersons, [2014](#ece33404-bib-0036){ref-type="ref"}; Paula et al., [2015](#ece33404-bib-0045){ref-type="ref"}; Pinol, San Andres, Clare, Mir, & Symondson, [2014](#ece33404-bib-0047){ref-type="ref"}; Vesterinen, Lilley, Laine, & Wahlberg, [2013](#ece33404-bib-0059){ref-type="ref"}; Vesterinen et al., [2016](#ece33404-bib-0060){ref-type="ref"}). Among air‐borne insectivores, bats have recently emerged as a particularly well‐studied group (Clare, [2014](#ece33404-bib-0013){ref-type="ref"}; Clare, Symondson, & Fenton, [2014](#ece33404-bib-0015){ref-type="ref"}; Clare, Symondson, & Broders et al., [2014](#ece33404-bib-0014){ref-type="ref"}; Emrich, Clare, Symondson, Koenig, & Fenton, [2014](#ece33404-bib-0028){ref-type="ref"}; Vesterinen et al., [2013](#ece33404-bib-0059){ref-type="ref"}, [2016](#ece33404-bib-0060){ref-type="ref"}), whereas the diet and ecological role of flying insect predators are next to unknown (but see Seifert & Scheu, [2012](#ece33404-bib-0052){ref-type="ref"}). Insects in the order Odonata, including both dragonflies and damselflies, are a globally diverse group of insects with around 5,900 species described to date. Odonates are important top predators in many aquatic and riparian ecosystems, representing both the aquatic and aerial environments, and they are fairly well known thanks to decades of research (Corbet, [1999](#ece33404-bib-0017){ref-type="ref"}). Indeed, odonates have long been model organisms for ecological research, and form a highly promising taxon for future genomics‐focused research (e.g., Bybee et al., [2016](#ece33404-bib-0012){ref-type="ref"}; Córdoba‐Aguilar, [2009](#ece33404-bib-0018){ref-type="ref"}). Their role in global ecosystems is likely large, as odonates are common and large, and they remain predators throughout their life cycle (Askew, [2004](#ece33404-bib-0005){ref-type="ref"}).
The extended larval stage is spent in water, and both diversity and biomass can be very high (e.g., Corbet, [1999](#ece33404-bib-0017){ref-type="ref"}; McCauley et al., [2008](#ece33404-bib-0039){ref-type="ref"}). Upon hatching, the adult odonate transfers to the terrestrial realm, thus moving biomass and energy from one habitat to another, and contributing to the predation pressure in a new environment. However, although the role of odonates in both aquatic and terrestrial ecosystems is likely to be large, their prey use is poorly documented, especially at the adult stage of the life cycle. This is due to multiple constraints: for example, their prey species are usually small and thus hard to identify by observing them in mid‐air. Furthermore, visual prey observations of hunting odonates are likely to be biased toward large prey species. After successful hunting, odonates chew their prey thoroughly, which makes morphological identification of prey remnants from feces practically impossible. Fortunately, molecular tools---involving DNA extraction from predator remains, amplification of prey DNA by PCR, sequencing and identification through comparison to reference sequences (Clarke, Czechowski, Soubrier, Stevens, & Cooper, [2014](#ece33404-bib-0016){ref-type="ref"}; King, Read, Traugott, & Symondson, [2008](#ece33404-bib-0035){ref-type="ref"}; Pompanon et al., [2012](#ece33404-bib-0049){ref-type="ref"}; Roslin & Majaneva, [2016](#ece33404-bib-0050){ref-type="ref"})---are not restricted by these obstacles, and for the first time we are able to shed light on the precise dietary composition of odonates. Understanding the prey use of adult odonates is particularly important, as environmental conditions, such as food shortage during the adult stage, can reduce life span and fecundity, thereby reducing lifetime egg production and leading to numerical effects at the egg and larval stages (Stoks & Cordoba‐Aguilar, [2012](#ece33404-bib-0055){ref-type="ref"}). The majority of recent theory about predator--prey interactions is based on the assumption that size is a key factor structuring these interactions (Brose, [2010](#ece33404-bib-0010){ref-type="ref"}; Schneider, Scheu, & Brose, [2012](#ece33404-bib-0051){ref-type="ref"}). In fact, a number of studies have shown that a significant portion of structural information within food webs can be predicted from body size alone (Stouffer, Rezende, & Amaral, [2011](#ece33404-bib-0056){ref-type="ref"}; Williams & Martinez, [2000](#ece33404-bib-0062){ref-type="ref"}). This is particularly prominent in aquatic systems where predation is largely limited by the size of the predator's gape---such that the larger a consumer is, the larger its gape and the larger its prey (Brose et al., [2006](#ece33404-bib-0011){ref-type="ref"}; Morgan, [1989](#ece33404-bib-0040){ref-type="ref"}). While large prey may require too much energy to capture, handle and consume, prey that are too small are not worth the energy invested to capture them (Svanback, Quevedo, Olsson, & Eklov, [2015](#ece33404-bib-0057){ref-type="ref"}).
This should result in a unimodal relationship between predator and prey body size (Brose, [2010](#ece33404-bib-0010){ref-type="ref"}; Woodward, Ebenman, Emmerson et al., [2005](#ece33404-bib-0064){ref-type="ref"}; Woodward, Speirs, & Hildrew, [2005](#ece33404-bib-0065){ref-type="ref"}); in other words, predators of different sizes are expected to target prey within different ranges, the mode of which should increase with predator size (Williams & Martinez, [2000](#ece33404-bib-0062){ref-type="ref"}). Diet generality may also increase with body size, allowing larger predators to exploit a wider range of prey (Gilljam et al., [2011](#ece33404-bib-0029){ref-type="ref"}). To evaluate the potential for molecular techniques to describe the diet of odonates, we target three sympatric odonate species. Drawing on a comprehensive DNA barcode library of potential prey (the Finnish Barcode of Life, FinBOL; [www.finbol.org](http://www.finbol.org)), we use next‐generation sequencing techniques (DNA extraction followed by PCR and Illumina MiSeq sequencing) to answer the following questions: (1) How do methodological choices (DNA extraction techniques and choice of markers) affect our perception of prey use? (2) What prey taxa do these adult odonate predators feed on? and (3) Do co‐occurring odonate species and sexes of varying size differ in their prey use? We predict, firstly, that methodological choices, especially the selection of PCR primers, will affect the results, with more variable gene regions resolving more prey taxa. Secondly, we expect odonates of different species and sex to differ in size, and this size variation to be reflected in prey choice, with larger odonate predators consuming larger prey.

2. MATERIAL AND METHODS {#ece33404-sec-0002}
=======================

2.1. Study species {#ece33404-sec-0003}
------------------

To evaluate the potential for molecular, DNA‐based techniques based on locus‐specific amplification of gene regions to describe the diet of odonates, we include three odonate species in this study: the northern bluet *Enallagma cyathigerum* (Charpentier, 1840) (Coenagrionidae), the common spreadwing *Lestes sponsa* (Hansemann, 1823) (Lestidae), and the black darter *Sympetrum danae* (Sulzer, 1776) (Libellulidae) (images of the species in Fig. [4](#ece33404-fig-0004){ref-type="fig"}). The target species were chosen to represent locally common dragonfly and damselfly species that are phylogenetically divergent, and have different life history strategies while sharing the same habitat and overlapping phenology (Corbet, [1999](#ece33404-bib-0017){ref-type="ref"}; Dijkstra, [2006](#ece33404-bib-0023){ref-type="ref"}; Dijkstra & Kalkman, [2012](#ece33404-bib-0024){ref-type="ref"}). *Enallagma cyathigerum*, a Coenagrionidae species, overwinters as a larva and develops into the adult stage later than most damselfly species in Finland. In contrast to the majority of damselfly species in Finland, *E. cyathigerum* commonly forages in open areas, including above water bodies. The second focal species, *L. sponsa*, belonging to Lestidae, overwinters at the egg stage in Finland, and develops rather fast through the larval stage during the summer. The adults are among the most common damselfly species flying in July---August. *Lestes* species commonly hunt near or inside dense vegetation. The third target species, *S. danae*, belonging to Libellulidae, overwinters as an egg, develops quickly in the spring, and then hatches mainly in July.
*Sympetrum danae* is by far the largest and strongest flyer of the focal species, foraging in open areas mainly by chasing its prey in mid‐air.

2.2. Study site and sample collection {#ece33404-sec-0004}
-------------------------------------

Odonate samples were collected with aerial sweep nets. To remove variation between different foraging habitats and available prey, all our study samples were collected from one location in South West Finland (ETRS‐TM35FIN N: 671180; E: 24600): a freshwater wetland, surrounded by a mosaic of arable land and cultural landscape. Altogether 25 odonate species have been observed around the study site since 2014, indicating rather high species richness for this area (around typical water bodies in southern Finland, fewer than 20 odonate species would be expected; K. M. Kaunisto, personal communication). To maximize the comparability between samples and to reduce the effect of changes in the prey pool available, all samples were collected within 6 days (August 10--14, 2015) at a constant distance from the water body (5--8 m). The age of focal odonate individuals was determined by the stiffness of their wings and the coloration of their bodies (as described for the genus *Calopteryx* in Plaistow & Siva‐Jothy, [1996](#ece33404-bib-0048){ref-type="ref"}). Only sexually mature individuals were included in this study. After being caught, sample individuals were placed into individual plastic Sarstedt 10‐ml tubes with a piece of moist paper towel added to prevent dehydration of the animals. Individuals were kept in the containers for 24 hr to allow complete defecation. All the fecal material (typically some 1--4 individual fragments of irregular size and shape) produced during this time was regarded as one sample. As a proxy for body size, we measured the length of the hind wings and calculated the average hind wing length for each individual. This metric has previously been shown to correlate well with body size (but see Schneider et al., [2012](#ece33404-bib-0051){ref-type="ref"}), and it is fairly easy to measure precisely. Thereafter, the feces were collected into Eppendorf tubes and frozen at −20°C until further analysis.

2.3. Molecular analysis {#ece33404-sec-0005}
-----------------------

### 2.3.1. Procedures for prevention of contamination {#ece33404-sec-0006}

To minimize the risk of contamination, we adhered to the principles of ancient DNA processing as far as possible in our laboratory. All the extraction steps were carried out in carefully cleaned laboratory space, using purified pipettes with filter tips. All PCRs were carried out in a separate room, and no amplified DNA was transferred back to the pre‐PCR facilities. Negative controls containing all but template DNA were included in each PCR assay.

### 2.3.2. DNA extraction using three different methods {#ece33404-sec-0007}

Our dataset consisted of a total of 72 samples: 24 fecal samples for each of the three study species, equally distributed between females and males. The fecal material was not pretreated in any specific way prior to extraction, and the amount was so minimal (approximately 1 × 1 mm) that it was not practical to weigh the samples. The total set of samples was divided into three subgroups, each consisting of 24 samples with equal representation of females and males of each study species (each subgroup contained four males and four females per species).
One group was processed using the ZR Fecal DNA MicroPrep (hereafter abbreviated as ZR; product nr D6012, Zymo Research, Irvine, California, U.S.A.), the second group using the NucleoSpin^®^ Tissue XS Kit (abbreviation NS; product nr 740901, Macherey‐Nagel, Düren, Germany), and the third group with a traditional salt extraction method (abbreviation SE) (see Appendix [S1](#ece33404-sup-0002){ref-type="supplementary-material"} for the detailed salt extraction protocol applied: Aljanabi & Martinez, [1997](#ece33404-bib-0002){ref-type="ref"}; Pilipenko, Salmela, & Vesterinen, [2012](#ece33404-bib-0046){ref-type="ref"}). We did not measure DNA concentrations, but expected them to be rather low due to the small amount of sample. Moreover, as we amplified both predator and prey DNA, the total DNA concentration as such would not be informative in any case. Thus, we used 1 μl of template DNA regardless of potential differences in the DNA concentrations.

### 2.3.3. PCR and Illumina library construction {#ece33404-sec-0008}

PCRs were prepared using the protocol of Clarke et al. ([2014](#ece33404-bib-0016){ref-type="ref"}), with slight modifications related to the different indexing scheme, as described in Vesterinen et al. ([2016](#ece33404-bib-0060){ref-type="ref"}). For this study, we used dual indexing designed for the Illumina sequencing platform, and thus both forward and reverse primers were tagged with different linkers, unique barcodes and Illumina‐compatible adapters (Shokralla et al., [2015](#ece33404-bib-0054){ref-type="ref"}). All the individual samples were tagged with a unique index combination. We chose to include the most common mitochondrial markers used for molecular identification of animals: *cytochrome oxidase subunit I* (hereafter abbreviated as COI) and 16S ribosomal RNA (16S) (Hebert, Cywinska, Ball, & DeWaard, [2003](#ece33404-bib-0030){ref-type="ref"}; Yang et al., [2014](#ece33404-bib-0066){ref-type="ref"}). The choice of the COI region is natural, as most DNA barcoding to date has been carried out using this gene, resulting in millions of reference sequences available in the BOLD database. The 16S region is the second most commonly employed gene region in DNA metabarcoding, and as it is more conserved than COI, 16S primers usually amplify a larger set of taxa (Clarke et al., [2014](#ece33404-bib-0016){ref-type="ref"}). To amplify suitable fragments of approximately the same length, we applied two primer sets---COI: primers ZBJ‐ArtF1c and ZBJ‐ArtR2c after Zeale, Butlin, Barker, Lees, and Jones ([2011](#ece33404-bib-0067){ref-type="ref"}), and 16S: primers Ins16S‐1F and Ins16S‐1Rshort after Clarke et al. ([2014](#ece33404-bib-0016){ref-type="ref"}). For this study, the PCR setup was further optimized as follows: for a reaction volume of 10 μl, we mixed 3.4 μl distilled water, 5 μl KAPA2G Fast MPX MasterMix (product nr KK5802, KAPA Biosystems, Wilmington, Massachusetts, USA), 0.3 μmol/L forward primer, 0.3 μmol/L reverse primer, and 1 μl DNA template. The PCR cycling conditions for COI were 3 min at 95°C, then 16 cycles of 30 s at 95°C, 30 s at 61°C (with the annealing temperature decreased by 0.5°C for each cycle) and 30 s at 72°C, then an additional 24 cycles of 30 s at 95°C, 30 s at 53°C and 30 s at 72°C, ending with 3 min at 72°C. For 16S, the cycling conditions were 3 min at 95°C, then 5 cycles of 15 s at 95°C, 30 s at 46°C and 15 s at 72°C, then an additional 25 cycles of 15 s at 95°C, 30 s at 56°C and 15 s at 72°C.
Then, 2.5 μl of the PCR products from the two primer sets were first pooled for each sample and then cleaned using the A'SAP clean kit (product nr 80350, ArcticZymes, Tromsø, Norway). All samples were used regardless of whether they produced a visible band on the gel used for checking. The second PCR, used to attach adapters, was implemented as in Vesterinen et al. ([2016](#ece33404-bib-0060){ref-type="ref"}), with minor modifications as follows: for a reaction volume of 12.5 μl, we mixed 6.25 μl KAPA HiFi HotStart MasterMix (product nr KK2602, KAPA Biosystems, Wilmington, Massachusetts, USA), 0.3 μmol/L forward primer, 0.3 μmol/L reverse primer, and 1.75 μl purified locus‐specific PCR product. The PCR cycling conditions were 4 min at 95°C, then 15 cycles of 20 s at 98°C, 15 s at 60°C and 30 s at 72°C, ending with 3 min at 72°C. Negative controls did not amplify in any assay. After tagging, 2 μl of each indexed sample was pooled together and purified using SPRI beads. Sequencing was performed on the Illumina MiSeq platform (Illumina Inc., San Diego, California, USA) by the Turku Centre for Biotechnology, Turku, Finland, using v2 chemistry with 300 cycles and 2 × 150 bp paired‐end read length. The pooled library was run together with other libraries, using a unique dual index combination for each sample.

### 2.3.4. Sequencing output analysis and OTU identification {#ece33404-sec-0009}

The sequencing run yielded 637,087 quality‐controlled paired‐end reads. The reads, separated by original sample, were uploaded to CSC servers (IT Center for Science, [www.csc.fi](http://www.csc.fi)) for trimming and further analysis. Trimming and quality control of the sequences were carried out as follows. Paired‐end reads were merged and trimmed for quality using USEARCH version 9 (Edgar, [2010](#ece33404-bib-0026){ref-type="ref"}). Primers were removed using cutadapt version 1.11 (Martin, [2017](#ece33404-bib-0038){ref-type="ref"}). The reads were then collapsed into unique sequences (singletons removed), chimeras were removed, and reads were clustered into OTUs and mapped back to the original trimmed reads to establish the total number of reads in each sample using USEARCH version 9. Zero‐radius OTUs do not differ much in practice from traditionally clustered OTUs, but the UNOISE algorithm performs better in removing chimeras, PhiX sequences and Illumina artifacts (Edgar & Flyvbjerg, [2015](#ece33404-bib-0027){ref-type="ref"}). Finally, our dataset consisted of 11,793 (COI) and 48,985 (16S) reads that were assigned to species. The OTUs were identified to species or---when species‐level determination could not be achieved---to higher taxa using BLAST (Altschul, Gish, Miller, Myers, & Lipman, [1990](#ece33404-bib-0003){ref-type="ref"}) and the Python script package "bold‐retriever," version 1.0.0 (Vesterinen et al., [2016](#ece33404-bib-0060){ref-type="ref"}). Nearly all reads could be identified to at least order level and were thus retained for further analyses. The rest of the reads were discarded: about 1% of COI reads were identified as bacterial or plant sequences, and less than 1% of 16S reads matched human and other mammalian DNA. A detailed description of the bioinformatics applied is available from the authors upon request. Data on taxon‐specific size (body length of the prey taxa) were then extracted from the literature or from pictures in the BOLD database.
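As a toy illustration of the dereplication step just described (collapsing reads into unique sequences and discarding singletons), the following Python sketch shows the idea; the file name and the simple FASTA parser are assumptions, and the actual pipeline used USEARCH rather than code like this:

```python
# Collapse merged reads into unique sequences, drop singletons, report counts.
from collections import Counter

def read_fasta(path):
    """Yield sequences from a simple (single- or multi-line) FASTA file."""
    seq = []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                    seq = []
            elif line:
                seq.append(line)
    if seq:
        yield "".join(seq)

# Hypothetical input file produced by the merge/trim steps
counts = Counter(read_fasta("merged_trimmed_reads.fasta"))

# Remove singletons, as in the pipeline, before clustering/denoising
unique = {s: n for s, n in counts.items() if n > 1}
print(f"{len(counts)} unique sequences, {len(unique)} after removing singletons")
```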
The prey taxa thus assigned were used to estimate the body size range of the prey consumed, allowing the later testing of our explicit predictions regarding relationships between predator and prey size. From these tests, prey identified as "Hemiptera sp." was omitted, as the size range within this compound taxon is too large to be informative.

2.4. Data analysis {#ece33404-sec-0010}
------------------

To compare the size of odonates (hind wing length), we used ANOVA to model body size (average predator hind wing length) as a function of predator species, sex, and their interaction (predator × sex). An equivalent model was fitted to data on prey size (prey body length) and to the number of prey taxa detected (count of prey items in each sample). In terms of prey use, we characterized the frequency of each prey taxon by its presence/absence at the level of individual odonate droppings. This approach was chosen because, with PCR‐based methods, the number of reads has been shown to carry little information about the original quantity of template DNA (Deagle & Tollit, [2007](#ece33404-bib-0020){ref-type="ref"}; Pompanon et al., [2012](#ece33404-bib-0049){ref-type="ref"}). Frequencies were calculated for each odonate species, for males and females, and for different extraction methods. To compare the effect of the gene region amplified, we calculated the number of prey items found in each sample separately for COI and 16S primers. We then used a Kruskal--Wallis analysis of variance to compare the performance of each primer set in terms of the frequency distribution of prey items detected (Kruskal & Wallis, [1952](#ece33404-bib-0037){ref-type="ref"}). Likewise, we compared the total number of predator and prey reads among individual extraction methods. To further evaluate the performance of each DNA extraction method, we calculated the performance rate (as a percentage) by dividing the number of samples that produced at least some (prey or predator) reads by the number of samples used in the study. For this performance analysis of each extraction method, we used the read numbers remaining after quality control (see above). These performance metrics were calculated separately for each DNA extraction method and odonate predator species, as well as for males and females within species. We used ANOVA to model the number of samples that produced taxonomically assignable reads (as explained above) as a function of the different DNA extraction methods. For each extraction method, we also compared the ratio of predator versus prey reads. To visualize the trophic interaction structures resolved by the molecular data, we used the package bipartite (Dormann, Fründ, Blüthgen, & Gruber, [2009](#ece33404-bib-0025){ref-type="ref"}) implemented in program R (R Core Team [2012](#ece33404-bib-0200){ref-type="ref"}). Semi‐quantitative webs were constructed for each odonate predator species, using proportional frequencies as explained above. To study the effects of body size, sex and predator species on variation in prey species composition, we conducted a permutational multivariate analysis of variance (PERMANOVA; Anderson, [2001](#ece33404-bib-0004){ref-type="ref"}) on presence/absence data, using 999 random permutations to assess statistical significance. The PERMANOVA analysis was carried out using software R with the package "vegan" (Oksanen et al., [2007](#ece33404-bib-0041){ref-type="ref"}).
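A minimal sketch of the two key tests on made-up data may clarify the analysis design; the real analysis was run in R (vegan and bipartite), so the Python libraries (SciPy, scikit-bio) and all numbers here are illustrative assumptions only:

```python
import numpy as np
from scipy.stats import kruskal
from scipy.spatial.distance import pdist, squareform
from skbio.stats.distance import DistanceMatrix, permanova

# Prey items detected per sample, by extraction method (made-up counts)
nucleospin = [1, 2, 1, 0, 3, 1]
salt       = [2, 1, 4, 2, 1, 3]
zymo       = [0, 1, 0, 1, 2, 0]
print(kruskal(nucleospin, salt, zymo))   # H statistic and p-value

# Presence/absence matrix: rows = samples, columns = prey taxa (toy data)
rng = np.random.default_rng(42)
pa = rng.integers(0, 2, size=(18, 10))
pa[:, 0] = 1                             # ensure no all-zero rows
species = ["EC"] * 6 + ["LS"] * 6 + ["SD"] * 6

# Sorensen dissimilarity equals Bray-Curtis computed on 0/1 data
dm = DistanceMatrix(squareform(pdist(pa, metric="braycurtis")),
                    ids=[f"s{i}" for i in range(18)])
print(permanova(dm, grouping=species, permutations=999))
```

The `braycurtis` metric is used because Sørensen dissimilarity on presence/absence data is equivalent to Bray–Curtis computed on the binary matrix.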
To visually compare odonate predator diets at the prey family level, we measured the proportion of the most frequent families (Diptera: Chironomidae, Sciaridae) consumed by each predator species.

3. RESULTS {#ece33404-sec-0011}
==========

3.1. How does the choice of DNA extraction method affect the results? {#ece33404-sec-0012}
---------------------------------------------------------------------

Different extraction methods returned proportionally similar amounts of predator and prey reads (Fig. [1](#ece33404-fig-0001){ref-type="fig"}), although very different absolute numbers of target reads. The salt extraction method produced the highest number of reads (32,947), and also the highest number of prey taxa (27 distinct prey taxa). Although the commercial kits did not produce as many reads as salt extraction did, NucleoSpin extraction resulted in almost as many prey taxa (26 prey species) as the salt extraction method. Zymo Research performed the worst in both metrics, allowing the identification of only 11 different prey taxa. Success rates for each extraction method varied from 50% to 96% (Table [1](#ece33404-tbl-0001){ref-type="table-wrap"}). In terms of prey item detection rate per sample, we found no significant differences between different DNA extraction methods (Fig. [2](#ece33404-fig-0002){ref-type="fig"}a). Overall, COI primers produced a much lower number of predator reads than did 16S primers, but generally the pattern of prey detection was similar between these two markers, as most of the samples contained only one prey item, while just a few contained more than four distinct prey species (Fig. [2](#ece33404-fig-0002){ref-type="fig"}b). The highest success rate was found when using salt extraction and 16S primers (95.8%), and the lowest success rate was observed with the Zymo Research kit and COI primers (50%; Table [1](#ece33404-tbl-0001){ref-type="table-wrap"}). On average, samples from females and males yielded similar success rates for both markers (COI: 61.1% vs. 66.7%; 16S: 94.4% vs. 97.2%; Table [1](#ece33404-tbl-0001){ref-type="table-wrap"}). For COI, most OTUs (33/48) were successfully identified to the species level or to an unequivocal higher taxonomic level (genus, family or order), whereas for 16S, less than half (15/37) were. In terms of reads (not OTUs), 87% of the trimmed COI reads and 98% of trimmed 16S reads offered a match in a database (BOLD and GenBank for COI; GenBank for 16S) and were retained for subsequent analysis. We found statistical differences in the number of prey items retrieved per sample both between the DNA extraction methods (Fig. [2](#ece33404-fig-0002){ref-type="fig"}a; Kruskal--Wallis 15.17, *p* = .0004) and between the genetic markers (Fig. [2](#ece33404-fig-0002){ref-type="fig"}b; Kruskal--Wallis 4.43, *p* = .035). Labeled raw reads, read counts, and OTU data are available in the Dryad Digital Repository: <https://doi.org/10.5061/dryad.5n92p>.

![Efficiency of different extraction methods, reflected by the proportion of reads identified to prey taxa as retrieved by extraction with (a) the Macherey‐Nagel NucleoSpin XS kit, (b) the salt extraction method, or (c) the Zymo Research Fecal Microprep kit](ECE3-7-8588-g001){#ece33404-fig-0001}

###### Success rate (%) in different strata of the data---that is, the number of samples producing sequence data (after processing by the bioinformatic pipeline) divided by the total number of samples in each group. COI and 16S refer to data retrieved by each primer pair.
We found no significant difference between the number of successful samples for COI and 16S or between different DNA extraction methods.

|                                   | Nr of samples | COI % | 16S % |
|-----------------------------------|---------------|-------|-------|
| **Extraction method**             |               |       |       |
| NucleoSpin Tissue XS Kit          | 24            | 66.7  | 95.8  |
| Salt extraction method            | 24            | 75.0  | 100   |
| Zymo Research Fecal DNA Micro Kit | 24            | 50.0  | 91.7  |
| **Predator species**              |               |       |       |
| *Enallagma cyathigerum*           | 24            | 79.2  | 100   |
| *Lestes sponsa*                   | 24            | 62.5  | 95.8  |
| *Sympetrum danae*                 | 24            | 50.0  | 91.7  |
| **Sex**                           |               |       |       |
| Females                           | 36            | 61.1  | 94.4  |
| Males                             | 36            | 66.7  | 97.2  |

![Prey identification success with different extraction methods or genetic markers. (a) Shown is the number of prey items identified per sample using each of the three DNA extraction methods, with (b) an equivalent graph for the two markers used: COI versus 16S. A value of zero prey items means that the sample did not produce any sequences after the bioinformatic pipeline. There was no significant difference in the frequency of prey detection between methods](ECE3-7-8588-g002){#ece33404-fig-0002}

3.2. What prey species do odonates feed on and is there variation in the diet between odonate species and sexes? {#ece33404-sec-0013}
----------------------------------------------------------------------------------------------------------------

Altogether, we found DNA from 41 different prey taxa, representing 25 different families and seven orders ([S2](#ece33404-sup-0002){ref-type="supplementary-material"}). Of these, 10 could only be assigned to family or higher taxa. Overall, the three odonate species differed in size, *S. danae* being significantly larger than the other two species (Fig. [3](#ece33404-fig-0003){ref-type="fig"}a; *F*~2,69~ = 227.7, *p* < .0001), with no difference between the two sexes (Fig. [3](#ece33404-fig-0003){ref-type="fig"}a; Sex *F*~1,70~ = 1.36, *p* = .25; Sex × Predator *F*~2,69~ = 1.02, *p* = .36). Nonetheless, the size of the prey consumed did not differ among odonate species (Fig. [3](#ece33404-fig-0003){ref-type="fig"}b; *F*~2,135~ = 0.05, *p* = .95) or between sexes (Fig. [3](#ece33404-fig-0003){ref-type="fig"}b; Sex *F*~1,139~ = 1.33, *p* = .25; Sex × Predator *F*~2,135~ = 2.32, *p* = .10). Likewise, no significant differences in the number of prey items detected per pellet were found between predator species (Fig. [3](#ece33404-fig-0003){ref-type="fig"}c; *F*~2,139~ = 1.09, *p* = .34) or sexes (Fig. [3](#ece33404-fig-0003){ref-type="fig"}c; Sex *F*~1,139~ = 0.07, *p* = .79; Sex × Predator *F*~2,135~ = 0.19, *p* = .83).

![(a) Size of adult odonates (measured by the average of the length of the two hind wings), (b) size of prey taxa (measured by the body length of taxa), and (c) number of prey items per fecal sample, as resolved by predator species and sex](ECE3-7-8588-g003){#ece33404-fig-0003}

In terms of exact prey composition, different odonate species shared many prey species (Fig. [4](#ece33404-fig-0004){ref-type="fig"}). This dietary similarity was further confirmed by the multivariate analysis: after accounting for the effect of body size, no significant differences remained between predators or sexes (ADONIS: *R*^2^ = 0.04, *p* = .08; Table [2](#ece33404-tbl-0002){ref-type="table-wrap"}). The most common prey taxa were found in the dipteran families Chironomidae (midges) and Sciaridae (dark‐winged fungus gnats), and when diet was compared at the family level, the diet of the three predator species was indeed strikingly similar (Fig. [5](#ece33404-fig-0005){ref-type="fig"}).
![A semi‐quantitative food web of the odonate predator species and their prey, combining data from all extraction methods and both markers. The blocks in the upper row represent predators in each web and the blocks in the lower row the prey species. A line connecting a predator with a prey represents a detected predation event, and the thickness of the line represents the proportional frequency of each predation event. The web was drawn using method "cca", which minimizes the cross‐links between predators, in the R package "bipartite" (Dormann et al., [2009](#ece33404-bib-0025){ref-type="ref"}). Only the male pictures are shown, although the web is constructed from the diets of both sexes. Pictures of odonates adapted from the Norske Art databank under Creative Commons License (CC BY 4.0)](ECE3-7-8588-g004){#ece33404-fig-0004}

###### Permutational multivariate analysis of variance using a Sørensen dissimilarity matrix of presence or absence of prey species in each sample. Terms were added sequentially to the model, meaning that the significance of each term is evaluated against the background of the terms above it. Predator body size was measured as the average hind wing length of each individual, with the factor Predator referring to the species of dragonfly (three levels)

| Predictor          | SS   | *F*  | *R*^2^ | *p*  |
|--------------------|------|------|--------|------|
| Predator body size | 0.61 | 1.33 | 0.02   | .146 |
| Predator species   | 1.27 | 1.39 | 0.04   | .076 |
| Sex                | 0.53 | 1.19 | 0.02   | .289 |
| Predator × Sex     | 0.75 | 0.82 | 0.02   | .781 |

![Prey use at the family level. Shown are the frequencies of the two most common families (Diptera: Chironomidae, Sciaridae) and of other families combined in the diet of each odonate species. EC = *Enallagma cyathigerum*, LS = *Lestes sponsa*, SD = *Sympetrum danae*](ECE3-7-8588-g005){#ece33404-fig-0005}

4. DISCUSSION {#ece33404-sec-0014}
=============

To our knowledge, this is the first study to shed light on the complete species‐level diet of adult odonates. With the help of an extensive national barcode library, we were able to identify over forty prey taxa from the fecal samples. Of the 41 distinct prey identified, 28 were assigned to at least the genus level, and the rest (13) to family level (with one taxon only to order level).

4.1. Prey use by odonates {#ece33404-sec-0015}
-------------------------

### 4.1.1. Prey taxa consumed {#ece33404-sec-0016}

Among the three odonate species studied here (*E. cyathigerum* (Coenagrionidae), *L. sponsa* (Lestidae), and *S. danae* (Libellulidae)), the most commonly consumed prey order was Diptera. This group is undoubtedly one of the most abundant prey types available in the wet habitat where the study was conducted. It is also reported to be the most common prey taxon in previous odonate studies based on visual prey identification and sticky traps (e.g., Baird & May, [1997](#ece33404-bib-0006){ref-type="ref"}). Within Diptera, the most frequently observed taxa were the families Chironomidae and Sciaridae. Indeed, these are some of the most abundant and diverse insect families in Finland, with approximately 700 and 340 species, respectively (Paasivirta, [2012](#ece33404-bib-0042){ref-type="ref"}, [2014](#ece33404-bib-0043){ref-type="ref"}; Vilkamaa, [2014](#ece33404-bib-0061){ref-type="ref"}). This also concurs with a previous study conducted with sticky traps in Japan, which found that at least 80% of individual prey items available for the dragonfly *Mnais pruinosa* were small Diptera (Higashi, Nomakuchi, Maeda, & Yasuda, [1979](#ece33404-bib-0033){ref-type="ref"}).
We did not find statistical differences in the diet of the three focal odonates. Although *S. danae* was significantly larger than the other predator species, no significant difference was found in terms of prey size, detected prey items per sample, or prey assemblage. The diet of *S. danae* largely overlapped with that of the other predators, and included only two prey items unique to this species. In contrast, the other two predators (*L. sponsa* and *E. cyathigerum*) had nine and 11 unique prey species, respectively. At the level of prey families, the dietary patterns were highly similar between all three predators. Thus, all three species likely exhibit opportunistic hunting behavior, with slight differences in the exact prey species consumed reflecting chance events associated with the prey taxa encountered.

### 4.1.2. Similarities to other air‐borne insectivores {#ece33404-sec-0017}

Interestingly, the prey assortment detected in odonates was similar to that of bats living in similar habitats in the same region (Vesterinen et al., [2013](#ece33404-bib-0059){ref-type="ref"}, [2016](#ece33404-bib-0060){ref-type="ref"}). This similarity likely reflects joint features in habitat selection and foraging strategies, but also the high general availability of the main prey taxa (Diptera). Of the other diurnal species living in the same areas, the diet of birds is actually less well explored by comparable techniques. Yet, results to date suggest that although birds undoubtedly consume large quantities of dipteran insects, they also catch more butterflies and moths (Lepidoptera), most probably at the larval stage (E. Vesterinen, unpublished data). Thus, our methods offer a promising tool for assessing ecological similarities and dissimilarities among predator groups for which data have previously been hard to come by, and allow us to finally start mapping out the ecological significance of odonates. While the three focal odonate species differ in size and foraging tactics, there were no differences in the size of the prey they consumed. These results are rather surprising, as we a priori expected larger odonates to hunt larger prey species.

4.2. Methodological considerations {#ece33404-sec-0018}
----------------------------------

### 4.2.1. Source of DNA {#ece33404-sec-0019}

One of the issues complicating the interpretation of molecular information derived from food web studies is the source of DNA: whether it originates directly from the prey of the focal predators or derives from lower levels of the food chain, a phenomenon commonly referred to as secondary predation (Boyer, Cruickshank, & Wratten, [2015](#ece33404-bib-0009){ref-type="ref"}; Sheppard et al., [2005](#ece33404-bib-0053){ref-type="ref"}). As odonates are among the top predators of the insect world, they might consume many other predatory species, resulting in secondary predation. Furthermore, in the case of parasitism or parasitoidism, it is possible that remnants of the host species' DNA could end up in these parasites and, further, in their predators. In this study, nonetheless, practically all of the prey items seemed to be species which are either herbivorous or do not feed as adults. Thus, the risk of false positives in the prey species list seems low. One exception in our results could be Trombidiformes, which include water mite species that commonly parasitize aquatic insects (e.g., Di Sabatino, Martin, Gerecke, & Cicolani, [2002](#ece33404-bib-0022){ref-type="ref"}).
As our odonate species hunt close to water bodies, it is highly likely that they have consumed other insects parasitized by members of the order Trombidiformes, and that this DNA is consequently represented in our results mainly via secondary predation. Another question is whether the prey was caught by active hunting or by scavenging, for example from spider webs, a behavior reported for helicopter damselflies by Ingley, Bybee, Tennessen, Whiting, and Branham ([2012](#ece33404-bib-0034){ref-type="ref"}). However, even in such a case, it is not to be taken as a fault in the results or methods, but rather as a challenge to be tackled by other means, such as complementary direct observations. Needless to say, contamination is also a real risk in any study dealing with tiny amounts of degraded DNA. Especially when DNA is amplified, unwanted DNA originating from contamination will amplify alongside the target, resulting in false positives. In this study, we followed the procedures from our earlier works to cut these risks to a minimum (Vesterinen et al., [2013](#ece33404-bib-0059){ref-type="ref"}, [2016](#ece33404-bib-0060){ref-type="ref"}; Wirta et al., [2015](#ece33404-bib-0063){ref-type="ref"}). The greatest caution must be taken not to introduce any sources of contaminating material to the laboratory while handling the samples. The amplification has to be done in a separate room (post‐PCR), and no amplified DNA should be taken back to the pre‐PCR facilities. The inclusion of negative control samples is standard nowadays, and positive "mock community" samples are increasingly used as well (Beng et al., [2016](#ece33404-bib-0007){ref-type="ref"}). The idea of mock communities is to add a sample containing a known mixture of potentially expected DNA and to use the information from the final data to assess the quality of the molecular data. This is something to be built on in the future, although there is no easy way of standardizing the mock community approach between different studies. Despite the lack of positive controls with known DNA mixtures, we trust that we have succeeded in preventing contamination and that what was found was actually eaten.

### 4.2.2. Performance of different DNA extraction methods {#ece33404-sec-0020}

Three different extraction methods were utilized in this study (Macherey‐Nagel NucleoSpin XS Kit, salt extraction and Zymo Research Fecal Micro Kit). All these methods produced thousands of reads, which were subsequently assigned to various taxa. Salt extraction and NucleoSpin retrieved more reads than the Zymo Research kit, so in that sense they performed better. More importantly, salt extraction and NucleoSpin enabled more prey taxa identifications than Zymo Research. Based on this, it can be concluded that of these three options, NucleoSpin and salt extraction are recommended for molecular studies using odonate feces as starting material. The main technical differences between these methods are (a) price (salt extraction is by far the cheapest, around 0.1 euros per sample), (b) the manpower required (NucleoSpin is the least time consuming), and (c) the level of experience needed (commercial kits, such as NucleoSpin, do not require as much prior laboratory experience).

### 4.2.3. Performance of different gene regions as markers {#ece33404-sec-0021}

Both sets of primers chosen for our study targeted mitochondrial DNA, but amplified different gene regions.
The COI region is the most common region applied in many different DNA barcoding studies as well as food web analyses (Alberdi, Garin, Aizpurua, & Aihartza, [2012](#ece33404-bib-0001){ref-type="ref"}; Clare, Symondson, & Broders et al., [2014](#ece33404-bib-0014){ref-type="ref"}; Pastor‐Bevia, Ibanez, Garcia‐Mudarra, & Juste, [2014](#ece33404-bib-0044){ref-type="ref"}; Vesterinen et al., [2013](#ece33404-bib-0059){ref-type="ref"}, [2016](#ece33404-bib-0060){ref-type="ref"}; Wirta et al., [2015](#ece33404-bib-0063){ref-type="ref"}). COI typically offers high resolution in identifying the target taxa all the way to the species level (Hebert, Penton, Burns, Janzen, & Hallwachs, [2004](#ece33404-bib-0031){ref-type="ref"}; Hebert, Ratnasingham, & deWaard, [2003](#ece33404-bib-0032){ref-type="ref"}; Hebert, Cywinska et al., [2003](#ece33404-bib-0030){ref-type="ref"}), so it is a natural choice for any ecological research. However, some doubt has been cast over the use of the COI region in general, and in particular over the use of so‐called mini‐barcode primers (especially ZBJ‐Art1c and ZBJ‐Art2c). The main criticism offered is that the primers may be biased, not amplifying arthropod taxa equally across the phylum (Clarke et al., [2014](#ece33404-bib-0016){ref-type="ref"}; Deagle, Jarman, Coissac, Pompanon, & Taberlet, [2014](#ece33404-bib-0019){ref-type="ref"}). The mitochondrial 16S rRNA region is more highly conserved than COI, offering a more suitable platform for generating widely applicable generic primers. The complication is naturally that higher generality comes at the price of lower resolution of identification: 16S sequences usually cannot be attributed to species‐level taxonomy, due both to fewer variable sites and to less populated reference libraries. In this study, we noticed a difference between identifications based on COI and 16S information: COI primers seemed to amplify only one of the odonate predators, *Enallagma cyathigerum*. This species was only identified from the stool of *E. cyathigerum*, suggesting either cannibalism or---more likely---a DNA origin in the cells lining the gut. On the other hand, 16S primers seemed to amplify all the odonate species examined, with every odonate predator species detected in the diet of all three predators. Taken at face value, this pattern may seem to suggest that the odonates are consuming each other. However, if these odonates were truly foraging on each other, the same pattern should have been visible in the COI reads, too, at least for *E. cyathigerum*, which was well amplified by the current primers; the 16S detections of predator DNA thus more likely reflect the predators' own DNA than genuine mutual predation. To add further resolution to future studies, we suggest that additional gene regions or multiple overlapping fragments from COI and 16S (allowing the reconstruction of longer sequences) could be amplified from the same samples. With current cost‐effective library construction protocols, several complementary primers can be added without significantly increasing the total costs. PCR‐free methods offer another alternative (see Roslin & Majaneva, [2016](#ece33404-bib-0050){ref-type="ref"}), but present additional problems and will be challenging for diet studies dealing with such tiny amounts of degraded prey DNA (Paula et al., [2015](#ece33404-bib-0045){ref-type="ref"}).

5. CONCLUSIONS {#ece33404-sec-0022}
==============

To our knowledge, this is the first study to shed light on the species‐level diet of adult odonates.
Drawing on molecular, DNA‐based tools, we find that Odonata diet shows extensive overlap with previous records of bat diet and tentative records of bird diet, thus revealing major overlap in prey choice between dominant invertebrate and vertebrate insectivores. Different odonate species appear to overlap in diet, with no significant differences between individuals of different size and/or sex, reflecting the opportunistic foraging of adult odonates. Based on the current study, we recommend using a traditional salt‐based method for the extraction of prey DNA from odonate fecal material. From an ecological perspective, the current findings are partly conditional on a specific site and time. Thus, future studies are needed to evaluate the level of spatial and temporal variation in the dietary composition of odonates more generally. Our work identifies the tools needed for resolving such patterns. Equipped with adequate methods and aware of the methodological caveats, ecologists are now better prepared to establish the general role of odonates in terrestrial food webs as vehicles transporting subsidies between the aquatic and terrestrial realms.

CONFLICT OF INTEREST {#ece33404-sec-0024}
====================

None declared.

AUTHOR CONTRIBUTIONS {#ece33404-sec-0025}
====================

Kari M. Kaunisto: original idea, field work, and writing the manuscript. Tomas Roslin: writing the manuscript and statistics. Ilari E. Sääksjärvi: writing the manuscript. Eero J. Vesterinen: laboratory analysis, statistics, and writing the manuscript.

Supporting information
======================

Additional supporting information (Appendices S1--S2) is available as separate data files.

We wish to thank the Zoological Museum of the University of Turku for allowing the use of the molecular laboratory. This study was supported by the Finnish Functional Genomics Centre, University of Turku and Åbo Akademi, and Biocenter Finland. We acknowledge CSC---IT Center for Science Ltd., Espoo, Finland, for the allocation of computational resources. The study was financially supported by the Academy of Finland (KMK and IES), the Ella and Georg Ehrnrooth foundation (KMK), Societas Entomologica Helsingforsiensis (KMK), the Emil Aaltonen foundation (EJV), and Turun yliopistosäätiö (EJV).
High
[ 0.669856459330143, 35, 17.25 ]
Lajos Fischer Lajos Fischer (1 January 1902 – 1 January 1978) was a Hungarian footballer who played for VAC and Hakoah Vienna, and made appearances for the Hungarian national team. Fischer played as a goalkeeper for American Soccer League sides Brooklyn Wanderers and Hakoah All-Stars. References Category:1902 births Category:1978 deaths Category:Hungarian footballers Category:Hungary international footballers Category:Brooklyn Wanderers players Category:Hakoah All-Stars players Category:American Soccer League (1921–1933) players Category:Place of birth missing Category:Association football goalkeepers
High
[ 0.6739130434782601, 27.125, 13.125 ]
Discover the treasures of the picturesque medieval city of Bruges as you walk along its historic city center! Take in the beauty and charm of Bruges by visiting its beautiful sites such as the Church of Our Lady, the Burg Square and many more.

Highlights
Enjoy a tour around the medieval city of Bruges and discover its beautiful treasures
Admire one of Michelangelo's masterpieces, the Madonna, at the Church of Our Lady
Walk along the Burg Square and see one of the oldest city halls in the region
Get a chance to glimpse the city's wonders, including the Church of the Holy Blood, the Bruges Béguinage and many more

Specifications
Type: Tours around Brussels
Departs From: Brussels
Meeting Point: Grasmarkt 82 or Hotel pick-up
Duration: 9 hours
Availability: Saturday at 10:00 AM
Product Code: 14570
Voucher info: Paper voucher printout not required for this activity. You may show the e-voucher from your mobile device.

Description
Known as the Venice of the North, the medieval city of Bruges awaits your tour! Take a step back into the past as you walk along the city's historic center, a UNESCO World Heritage Site, and admire its many remarkable monuments. With the entire city protected by UNESCO as a cultural heritage of mankind, a local guide will take you on a walking tour along the city's most beautiful sites. See the only preserved béguinage in Bruges, the Princely Béguinage Ten Wijngaerde. Admire one of Michelangelo's masterpieces, the Madonna and Child, at the Church of Our Lady. Stop by the Markt (Market Square) to catch a glimpse of Bruges' most prominent landmark, the Belfry (Belfort). A medieval bell tower, the Belfry is also known as the Halletoren (Tower of the Halls). Marvel at the Bruges City Hall, one of the oldest city halls in the Netherlands region, situated at Burg Square, the Church of the Holy Blood, which houses a venerated relic of the Holy Blood, and many more. There will be time for lunch and free time in the city center. An optional boat tour along Bruges' picturesque canals can be taken during your free time.
High
[ 0.7154471544715441, 33, 13.125 ]
PCB Making: 2. Get Started

This is the step of looking for an idea, or the purpose of what a board should be doing. You will likely know what you need and then build a suitable board. I needed to build something useful so I could look forward to using it once it was done: I wanted to be able to interact with it, but also to use parts that I already had at home. In my parts bin, I found a fairly standard LCD display that I got with an Arduino board. It can display 16 characters in 2 rows. This should be a good display device to use. A couple of push-buttons are definitely a must for any interactivity, as well as a few LEDs: can't do without mandatory blinking lights! Digging through my parts, I found a DS18B20 temperature sensor, a simple 1-Wire device, so I added it in.

Bins with various parts

I wanted to use the Atmel SAM3N1 CPU, which I had already ordered from DigiKey. This quite powerful MCU needs a JTAG interface to program it. The MCU has a number of built-in peripherals, including a UART ("serial port"), so I also wanted to use those pins to communicate through the serial interface. After adding the needed parts for the power supply circuitry, pin headers, test pins and so on, I felt that was complex enough for the scope of this project.

Bins and bags of parts

The final rough idea for the board quickly took shape: it should have a couple of buttons for user interaction, it should display the temperature read from a sensor and, to make it more interesting, display a random fortune (do you remember that old 'fortune' Unix application?).
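The DS18B20 reports temperature over a single 1-Wire pin. The SAM3N1 firmware would have to implement that protocol itself, but as a rough sketch of what a read-out involves, here is the equivalent in Python on a Linux box with the w1-gpio/w1-therm drivers loaded (the sensor ID below is a made-up placeholder):

```python
# Read a DS18B20 via the Linux 1-Wire sysfs interface (w1-gpio/w1-therm).
# The sensor ID is a placeholder; real IDs start with the 0x28 family code,
# e.g. /sys/bus/w1/devices/28-.../w1_slave.
def read_ds18b20(device_id: str) -> float:
    path = f"/sys/bus/w1/devices/{device_id}/w1_slave"
    with open(path) as f:
        crc_line, data_line = f.read().splitlines()
    if not crc_line.endswith("YES"):          # kernel-side CRC check failed
        raise IOError("DS18B20 CRC check failed")
    millidegrees = int(data_line.split("t=")[1])
    return millidegrees / 1000.0              # degrees Celsius

print(f"{read_ds18b20('28-0000075a1b2c'):.1f} C")
```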
High
[ 0.6616915422885571, 33.25, 17 ]
Association between nicotine replacement therapy use in pregnancy and smoking cessation. There is an urgent need to find better ways of helping pregnant smokers to stop. Randomized controlled trials (RCTs) have not detected an effect of nicotine replacement therapy (NRT) for smoking cessation in pregnancy. This may be because of inadequate dosing because of faster nicotine metabolism in this group. In England, many pregnant smokers use single form and combination NRT (patch plus a faster acting form). This correlational study examined whether the latter is associated with higher quit rates. Routinely collected data from 3880 pregnant smokers attempting to stop in one of 44 Stop Smoking Services in England. The outcome measure was 4-week quit rates, verified by expired-air carbon monoxide level < 10 ppm. Outcome was compared between those not using medication versus using single form NRT (patch or one of the faster acting forms), or combination NRT. Potential confounders were intervention setting (specialist clinic, home visit, primary care, other), intervention type (one-to-one, group, drop-in, other), months pregnant, age, ethnicity and occupational group in multi-level logistic regressions. After adjustment, combination NRT was associated with higher odds of quitting compared with no medication (OR=1.93, 95% CI=1.13-3.29, p=0.016), whereas single NRT showed no benefit (OR=1.06, 95% CI=0.60-1.86, p=0.84). Use of a combination of nicotine patch and a faster acting form may confer a benefit in terms of promoting smoking cessation during pregnancy. While this conclusion is based on correlational data, it lends support to continuing this treatment option pending confirmation by an RCT.
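For readers unfamiliar with the reporting, the arithmetic behind an (unadjusted) odds ratio can be sketched as follows; the counts are hypothetical and chosen only to reproduce the adjusted point estimate of 1.93, which in the study itself came from multi-level logistic regression:

```python
# Toy illustration of odds-ratio arithmetic with a Wald confidence interval.
import math

# quit / not-quit counts, hypothetical numbers
quit_combo, fail_combo = 60, 140      # combination NRT
quit_none,  fail_none  = 300, 1350    # no medication

or_ = (quit_combo / fail_combo) / (quit_none / fail_none)
se = math.sqrt(1/quit_combo + 1/fail_combo + 1/quit_none + 1/fail_none)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 1.93
```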
High
[ 0.7123287671232871, 32.5, 13.125 ]
608 So.2d 139 (1992) FORTUNE INSURANCE COMPANY, Appellant, v. Renal EXILUS, Appellee. No. 91-3318. District Court of Appeal of Florida, Fourth District. November 12, 1992. Matt Hellman of Matt Hellman, P.A., Plantation and Diane H. Tutt of Diane Tutt, P.A., Fort Lauderdale, for appellant. Eric L. Ansel of Ansel & Simon, P.A., and Paul J. Ansel of Law Office of Paul J. Ansel, Hollywood, for appellee. PER CURIAM. Fortune Insurance Company ("Fortune") appeals from a final summary judgment finding that appellee, Renal Exilus, was entitled to automobile insurance personal injury protection (PIP) benefits as a result of injuries he sustained in a shooting incident. We reverse. FACTS On October 5, 1989 at 11:30 P.M., Exilus was driving a 1985 Mazda automobile owned by Athyl Liveral, a friend. Liveral was a passenger in the vehicle and the two were driving from Exilus' home to Liveral's home in Fort Lauderdale. Liveral had driven his car to Exilus' home, picked him up, and they were on the way back to Liveral's home. Exilus was going to drop Liveral at his home and return home with the car because the next day Exilus was going to take Liveral's car to a body shop for repairs as a favor to Liveral. Driving the vehicle in a westerly direction, Exilus stopped at a stop sign, looked both ways and did not see any other vehicles coming. As he proceeded slowly into the intersection, another car pulled up along side of his. An individual in the passenger seat of the other car got out of the car and asked Exilus if he knew a Haitian guy named Jean that lived in the area. Exilus stopped the vehicle he was driving and looked at the man, but before Exilus had a chance to say anything, the other man pulled open Exilus' car door. Exilus then drove away. As he was doing so, he heard gunshots. He did not close the door before he drove away, because he was afraid of the man. The front and rear windows of the vehicle Exilus was driving were shattered, and a bullet struck Exilus' *140 left leg. Exilus drove to the hospital, where he was admitted. Exilus did not know whether the police ever found the occupants of the other car. Thereafter, Exilus sought a declaratory judgment determining that he was entitled to personal injury protection coverage under an automobile insurance policy issued by Fortune. Fortune's answer admitted the existence of an insurance policy but asserted that the shooting in which Exilus was injured did not "arise out of the ownership, maintenance or use of a motor vehicle," as required by the policy. Both Fortune and Exilus moved for summary judgment, and at the hearing, counsel for both sides agreed that there were no material facts in dispute and that the issue presented was one of law. The trial court granted Exilus' motion for summary judgment and held that there was PIP coverage. LAW The parties agree that the controlling statute is section 627.736(1), which requires that automobile insurance policies, such as the one involved herein, provide PIP benefits for any "loss sustained ... as a result of bodily injury, sickness, disease, or death arising out of the ownership, maintenance, or use of a motor vehicle." Numerous cases have construed the meaning of "arising out of the ownership, maintenance, or use of a motor vehicle" in factual situations involving criminal attacks on individuals in, on, or near an insured vehicle. The cases make it clear that some connection or nexus between the injury and the use of the vehicle is required. In Government Employees Insurance Co. v.
Novak, 453 So.2d 1116 (Fla. 1984), the Florida Supreme Court stated that the term "arising out of the use of a motor vehicle," as used in section 627.736(1) should be construed liberally because its function is to extend coverage broadly. Id. at 1119. In Novak, the supreme court, agreeing with our decision in Novak v. Government Employees Insurance Co., 424 So.2d 178 (Fla. 4th DCA 1983), held that a sufficient nexus between the vehicle and the injury existed in that case because the assailant sought the use of the vehicle in question. The court stated: Construction of the clause "arising out of the use of a motor vehicle" is an easier matter. It is well settled that "arising out of" does not mean "proximately caused by," but has a much broader meaning. All that is required is some nexus between the motor vehicle and the injury. E.g., Government Employees Insurance Co. v. Batchelder, 421 So.2d 59 (Fla. 1st DCA 1982); Indiana Insurance Co. v. Winston, 377 So.2d 718 (Fla. 4th DCA 1979), cert. denied, 388 So.2d 1120 (Fla. 1980); Auto-Owners Insurance Co. v. Pridgen, 339 So.2d 1164 (Fla. 2d DCA 1976); National Indemnity Co. v. Corbo, 248 So.2d 238 (Fla. 3d DCA 1971). It is clear that in the present case, as the district court correctly concluded, there was a highly substantial connection between Ms. Novak's use of the motor vehicle and the event causing her fatal injury. Obtaining a ride in or possession of the motor vehicle was what motivated the deranged Endicott to approach and attack the deceased. Id. at 1119 (emphasis added). Subsequently, in Hernandez v. Protective Casualty Insurance Co., 473 So.2d 1241 (Fla. 1985), the supreme court held that coverage would apply to a motorist stopped for a traffic violation and injured by the police in removing him from his vehicle. The supreme court reiterated that some connection between the use of the vehicle and the injury was required: We do agree with the proposition reiterated in Reynolds [v. Allstate Ins. Co., 400 So.2d 496 (Fla. 5th DCA 1981)] that "it is not enough that an automobile be the physical situs of an injury or that the injury occur incidentally to the use of an automobile, but that there must be a causal connection or relation between the two for liability to exist." Id. at 497 (citation omitted). The automobile here was, however, more than just the physical situs of petitioner's injury. Petitioner was using the vehicle for the purpose *141 of transportation, which use was interrupted by his apprehension by police officers. It was the manner of petitioner's use of his vehicle which prompted the actions causing his injury. While the force exercised by the police may have been the direct cause of injury, under the circumstances of this case it was not such an intervening event so as to break the link between petitioner's use of the vehicle and his resultant injury. We find these facts sufficient to support the requisite nexus between petitioner's use of his automobile and his injury, thereby allowing him to recover P.I.P. benefits. Id. at 1243 (emphasis added). The court also noted that ingress and egress from a vehicle were actions connected to its use. Id.[1] In Reynolds v. Allstate Insurance Co., 400 So.2d 496 (Fla. 5th DCA 1981), a case discussed in Novak and Hernandez, an assailant hiding in the back seat of the insured's vehicle struck and injured the insured, rendering him unconscious. The assailant then drove the vehicle for several miles, and the insured was thrown from the vehicle, causing him further injury. 
Noting that insurance does not cover every incident or accident that happens in a car, the Fifth District affirmed a judgment for the insurance company. The court stated: In the absence of effects caused by its movement or ability to move, and circumstances arising from the necessity that its use requires normal ingress and egress to and from it, Padron v. Long Island Insurance Company, 356 So.2d 1337 (Fla. 3d DCA 1978), a vehicle is inherently no different from any other place or object and its existence or use becomes no more than the situs of injuries caused by accidents or intentional acts bearing no causal relationship to its nature as a vehicle... . Id. at 497. In Novak, the supreme court did not disapprove, but rather distinguished Reynolds on the basis that Reynolds did not involve a sufficient nexus between the use of the automobile and the injury: We do not believe that our holding necessarily implies disapproval of Reynolds v. Allstate Insurance Co., the case cited by the petitioner as being in conflict. We believe the facts of that case make it distinguishable from this one. The decision there turned on the plaintiff's failure to allege facts sufficient to show the nexus between the use of the car and the injuries. Novak, 453 So.2d at 1119. The Hernandez opinion also distinguished Reynolds, but did not disapprove of its holding. A review of other cases reflects the manner in which courts have determined whether a sufficient "nexus" exists to provide various forms of automobile insurance coverage. In General Accident Fire & Life Assurance Corp., Ltd. v. Appleton, 355 So.2d 1261 (Fla. 4th DCA), cert. denied, 361 So.2d 830 (Fla. 1978), the insured had car trouble and accepted a ride from the driver of another vehicle wherein he was attacked and robbed by two passengers in the vehicle. He sought uninsured motorist coverage because the driver of the vehicle in which he was riding when attacked, was uninsured. We held that there was no uninsured motorist insurance coverage on those facts because Appleton's injuries did not arise out of the ownership, maintenance or use of an uninsured automobile. We recognized that "arising out of" does not require a showing of proximate cause between the injury and the use of the automobile, but that there must be a connection or relation between the two for liability to exist: We recognize that bodily injury resulting from a criminal assault, under the terms of an uninsured motorist policy, may be caused by accident and arise "out of the ownership, maintenance or use of an uninsured automobile,"... . However, *142 the risks of bodily injury from a criminal assault are not normally contemplated by the parties to an automobile liability insurance policy. For there to be coverage there must be a causal connection between the use of the automobile and the bodily injury resulting from the criminal assault. This may be established by showing that the automobile itself was used to inflict the bodily injury, ... or that the automobile was used in some manner that contributed or added to the bodily injury... . Id. at 1263. In Florida Farm Bureau Insurance Co. v. Shaffer, 391 So.2d 216 (Fla. 4th DCA 1980), rev. denied, 402 So.2d 613 (Fla. 1981), an individual riding in one car shot and injured an individual riding in another car in response to a tangerine being thrown in the general direction of the vehicle in which the individual with the gun was riding. 
Relying on Appleton, we held: But, just as was true of the victim in Appleton, Shaffer's injury did not result from any incident of use of the vehicle. The fact that the tortfeasor was occupying the car at the time of the shooting was no more than incidental and did not make the injury one resulting from the use of the vehicle. To hold such a relationship alone sufficient to constitute a causal connection would logically lead to absurd consequences, such as allowing recovery under an automobile liability policy when a vehicle is simply used as the means of transporting an assailant to the location where an assault is committed. The injury was not caused by the automobile but by the gunshot. From the standpoint of causation, the injury could have occurred in the woods, in a house or anywhere else. As we stated in Appleton, supra, a criminal assault is not the usual risk anticipated under an automobile policy and for coverage to apply there must be a showing that the automobile itself was used in some manner to cause or produce the injury. Id. at 218. In Allstate Insurance Co. v. Famigletti, 459 So.2d 1149 (Fla. 4th DCA 1984), we reversed a judgment in favor of the insured. In that case, a neighborhood feud resulted in one neighbor shooting into the car of another neighbor and injuring a husband and wife. We held that the fact that the victims were in their car at the time of the shooting was merely fortuitous: Mr. Famigletti intended to murder the Burches. The mere fact that he chose the site of their automobile for his attempted slaughter does not provide a sufficient nexus between the assault and the use of the car to warrant the imposition of PIP coverage. Id. at 1150. The Third District in Doyle v. State Farm Mutual Automobile Insurance Co., 464 So.2d 1277 (Fla. 3d DCA 1985), held that there was no PIP coverage where the insured was shot during a robbery attempt as he exited his vehicle after parking in his driveway. The court found the facts of that case to be similar to the line of cases finding no coverage where the automobile was merely the situs of an injury without a causal connection to the injury. Id. at 1279. Two recent cases from the Fifth District held that there was no PIP coverage available for an injury caused by a criminal act which had no causal connection to the subject vehicle. In Jones v. State Farm Mutual Automobile Insurance Co., 589 So.2d 333 (Fla. 5th DCA 1991), an estranged and distraught husband abducted his wife from her place of business and transported her in a vehicle owned by the parties. While in the vehicle, the husband shot and killed the wife. PIP benefits were denied on the ground of no causal connection. In Allstate Insurance Co. v. Furo, 588 So.2d 61 (Fla. 5th DCA 1991), the insured was shot while riding as a passenger in a vehicle driven by his stepdaughter. An ex-boyfriend of the stepdaughter fired a gun at her while she was driving by his house, but the bullet struck the insured. The Fifth District held that no coverage was available: But no case yet has found a sufficient nexus between the use of the vehicle and the injury when it has not been shown *143 that the assailant either desired possession (Novak) or use [State Farm Mut. Auto. Ins. Co. v.] (Barth) [579 So.2d 154 (Fla. 5th DCA 1991)] of the victim's automobile. In both Novak and Barth the possession or use of the vehicle was the focus of the encounter and the motivation for the attack. In the present case, York wanted to do injury to Pagel — any place, any time. 
When she drove by his residence, she presented an opportunity he could not resist. He shot at her and hit Furo, not because they were in the vehicle, but because they were in the vicinity. The vehicle was merely the situs of the injury and not the cause of it. Id. at 62 (emphasis in original text). In Stonewall Insurance Co. v. Wolfe, 372 So.2d 1147 (Fla. 4th DCA 1979), cert. denied, 385 So.2d 762 (Fla. 1980), the victim was sitting on the vehicle when injured and this court held that no coverage was available. In the present case, we must determine whether there was a sufficient connection, as defined in Novak and Hernandez, between the use of the vehicle Exilus was driving, and the shooting injury he sustained. Do the facts of this case fit in the Reynolds category as merely the situs of a criminal assault or are they closer to those involved in Novak and Hernandez? Here, there is no claim that the assailant was attempting to seize the vehicle, or that the vehicle itself was the source of the motivation or focus of the assailant. It appears that the vehicle was merely the situs of the injury. Did the assailant mistake Exilus for someone else? The known facts offer no real explanation for the shooting other than the obvious effort to shoot at Exilus as he was driving away. Under the case law, we conclude that these facts are insufficient to establish that the shooting arose from the use of the vehicle. We concede that we are somewhat concerned about the supreme court's holding in Novak, that PIP provisions should be broadly construed to provide coverage. Obviously, there was some connection between the vehicle and the injury in the sense that Exilus was driving the vehicle when he was shot. However, we construe the case law to require more of a connection than the insured's simple use of, or presence in, the vehicle at the time of injury. We are particularly influenced by the supreme court's statement in Hernandez approving the Reynolds holding that, in order to be compensable, an injury must be more than incidentally related to the use of an automobile. If we have misconstrued the holdings in Novak and Hernandez, review is, of course, available to Exilus in the supreme court. Accordingly, we reverse the summary judgment and remand for further proceedings consistent herewith. ANSTEAD and HERSEY, JJ., concur. POLEN, J., dissents with opinion. POLEN, Judge, dissenting. I respectfully dissent. I would affirm the trial court's finding that insurance coverage was available to Exilus because his injury arose out of his use of the insured vehicle. First, I disagree with the majority's implicit conclusion that Reynolds v. Allstate Insurance Co., 400 So.2d 496 (Fla. 5th DCA 1981), remains viable. The same court that decided Reynolds has decided the more recent case of State Farm Mutual Automobile Insurance Co. v. Barth, 579 So.2d 154 (Fla. 5th DCA 1991). Barth allows for a more liberal interpretation of the court's holding in Novak v. Government Employees Insurance Co., 453 So.2d 1116 (Fla. 1984), than did Reynolds. In Novak the Florida Supreme Court stated that "arising out of" as used in section 627.736(1) did not mean "proximately caused by," but rather had a much broader meaning: "All that is required is some nexus between the motor vehicle and the injury." Novak, 453 So.2d at 1119 (emphasis added). 
The supreme court stated that "arising out of" was framed in such a way as to express an intent to effect broad insurance coverage, and that this language should be liberally construed to extend coverage broadly. Id. While the language *144 utilized by the court in Novak may be difficult to quantify,[2] I believe the language expresses an intent that "close" cases be decided in favor of finding coverage. The instant case is just such a case. I interpret the court's statements in Novak to mean that if the given facts of any particular case support a reasonable inference that the assault was in some way connected with the insured vehicle, it should be found that the injury arose out of the use of the insured vehicle. In the instant case, Exilus was initially approached by the assailant while driving the insured vehicle. He was encouraged to stop his vehicle by either the assailant, or the assailant's companion. Once stopped, the assailant quickly ran to Exilus' vehicle, took hold of the driver's door, and opened it. The assailant did not display a gun at this point. Rather, it was only after Exilus, frightened by the actions of his assailant, began to accelerate and drive away from the scene, that the assailant fired several shots at Exilus who was still seated inside the vehicle. Exilus was injured by this gunfire. I believe these facts support a reasonable inference that the assailant's action of firing upon Exilus flowed from Exilus' use of the insured vehicle. Under these circumstances, I would hold that a sufficient nexus existed between Exilus' injury and his use of the insured vehicle such that his injury arose out of his use of the vehicle. Simply put, there is no justification for the disparate treatment afforded the plaintiffs in Barth and the instant case, when the only significant distinction between the two cases is that in Barth the assailant uttered the words, "Drive, bitch" before assaulting his victim, and "Okay, bitch, if that's the way you want it," after the assault. The subjective intent of Exilus' assailant should not control the issue of PIP coverage. Therefore, I would affirm. NOTES [1] Exilus also cites Tuerk v. Allstate Insurance Co., 469 So.2d 815 (Fla. 3d DCA 1985), rev. denied, 482 So.2d 347 (Fla.), and rev. denied, 482 So.2d 350 (Fla. 1986), as a case allowing PIP coverage. In that case, the court found a sufficient connection between the vehicle and the injury based on evidence that the plaintiffs were shot because of the type of vehicle driven. As the court there stated, "an unknown gunman searched for the occupants of a particular vehicle. ..." 469 So.2d at 816 (emphasis in original text). [2] See State Farm Mutual Automobile Ins. Co. v. Barth, 579 So.2d 154, 156-57 (Fla. 5th DCA 1991) (Cowart, J., dissenting).
Low
[ 0.522058823529411, 35.5, 32.5 ]
NBU strengthens hryvnia official rate to UAH 23.77 to dollar The National Bank of Ukraine on Wednesday continued to strengthen the hryvnia official rate, bringing the value of Ukraine’s ailing currency to UAH 23.77 to the dollar. As of 1400 the central bank set the following official exchange rates of the hryvnia against leading foreign currencies: $100 – UAH 2,377.1263 (as of 1400 on March 3 it was UAH 2,482.0658); EUR 100 – UAH 2,654.7747 (as of 1400 on March 3 it was UAH 2,786.6153); 10 Russian rubles – UAH 3.8116 (as of 1400 on March 3 it was UAH 3.9889). As UNIAN reported earlier, according to NBU Governor Valeria Gontareva, the fundamental level of the exchange rate on the program with the International Monetary Fund is the rate of UAH 20-22 to the dollar. According to Gontareva, today there is every reason to predict the quick return of the exchange rate to the fundamental level.
Mid
[ 0.596412556053811, 33.25, 22.5 ]
Wianda E, Ross B. The roles of alpha oscillation in working memory retention. Brain Behav. 2019;9:e01263. doi: 10.1002/brb3.1263 1. [introduction]{.smallcaps} {#brb31263-sec-0005} ============================= Working memory (WM), defined as the ability to maintain and manipulate information in memory over a short period of time, is essential for a wide range of cognitive functions such as language, learning, and general intelligence (Baddeley, [2012](#brb31263-bib-0003){ref-type="ref"}). Therefore, understanding the neural mechanisms underlying WM is of great interest. A promising concept of the neural mechanism of WM is that the operational stages of encoding, retention, and retrieval are associated with neural oscillations in various frequency bands. Previous studies supported the functional relevance of oscillations by showing that task demand modulated the magnitude of neural oscillations (Klimesch, [1996](#brb31263-bib-0045){ref-type="ref"}). Temporal and spatial properties of such modulations have been studied using event‐related modulation of the signal power in EEG or MEG, which are termed event‐related desynchronization (ERD) in the case of a power decrease and event‐related synchronization (ERS) in the case of an increase (Babiloni et al., [2005](#brb31263-bib-0002){ref-type="ref"}; Pfurtscheller & Lopes Da Silva, [1999](#brb31263-bib-0076){ref-type="ref"}). Several studies showed ERD of alpha oscillations (8--14 Hz) related to memory function (Bonnefond & Jensen, [2012](#brb31263-bib-0007){ref-type="ref"}; Gevins, Smith, McEvoy, & Yu, [1997](#brb31263-bib-0024){ref-type="ref"}; Hanslmayr, Spitzer, & Bäuml, [2009](#brb31263-bib-0030){ref-type="ref"}; Klimesch et al., [1996](#brb31263-bib-0053){ref-type="ref"}; Krause, Lang, Laine, Kuusisto, & Pörn, [1996](#brb31263-bib-0058){ref-type="ref"}; Weiss & Rappelsberger, [2000](#brb31263-bib-0111){ref-type="ref"}). The decrease in alpha power indicates a state of desynchronization in which local neural assemblies become increasingly independent in preparation for a subsequent active process (Pfurtscheller, [1992](#brb31263-bib-0075){ref-type="ref"}). Following this interpretation, the reverse effect of alpha ERS has been suggested as reflecting a state of cortical inactivation (Pfurtscheller, Stancák Jr, & Neuper, [1996](#brb31263-bib-0079){ref-type="ref"}). However, findings about the direction of alpha power change were not consistent across experimental studies, and the role of alpha oscillations during WM needs further clarification. Alpha power decreased during encoding in a visual WM task and the magnitude of ERD was correlated with memory load (Fukuda, Mance, & Vogel, [2015](#brb31263-bib-0022){ref-type="ref"}). However, a load‐dependent alpha increase was reported during the retention interval of WM (Jensen, Gelfand, Kounios, & Lisman, [2002](#brb31263-bib-0039){ref-type="ref"}). The latter two studies show that it is important to consider distinct effects on alpha oscillations during the different functional intervals of a WM task. The first report of alpha ERS during memory retention came from a WM study in which two different memory sets were used that either remained consistent across trials, thus involving long‐term memory, or varied between trials and relied on short‐term memory (Klimesch, Doppelmayr, Schwaiger, Auinger, & Winkler, [1999](#brb31263-bib-0049){ref-type="ref"}). 
ERS in the upper alpha band was observed only in the latter condition, which maximized short‐term memory demands. The authors interpreted the alpha ERS as indicating inhibition of a potential interference from the previous trials in the variable memory set condition. Further evidence supporting this explanation came from a study of a visually cued motor task, in which participants had to perform a finger movement or inhibit such a response depending on the cue (Hummel, Andres, Altenmüller, Dichgans, & Gerloff, [2002](#brb31263-bib-0036){ref-type="ref"}). EEG recording in their study showed alpha ERS over the sensorimotor areas during the inhibition of the response and ERD during the actual response. Those results suggested that the increased alpha activity reflects inhibition of retrieving the stored motor memory traces in the somatosensory cortex, which is consistent with the concept that alpha ERS helps to block the retrieval of information from the previously stored trials. Thus, a current interpretation of alpha ERS is that alpha oscillations protect the new memory by inhibiting further sensory processing that could interfere with the stored information (Bonnefond & Jensen, [2012](#brb31263-bib-0007){ref-type="ref"}). As an alternative to the idling hypothesis of alpha, the authors suggested that the alpha increase plays an active functional role in preventing the flow of distracting information into areas which retain the memory items (Mazaheri et al., [2014](#brb31263-bib-0066){ref-type="ref"}). For instance, the inhibition or disengagement of occipital‐parietal areas could serve to suppress input from the visual stream, which would otherwise disturb the maintenance of WM in frontal areas. Consistent with studies in the visual modality, an auditory study predicted right hemispheric dominance for processing memory of pitch, and the authors interpreted a left‐lateralized increase in 5--12 Hz activity as functionally disengaging left temporal regions (Van Dijk, Nieuwenhuis, & Jensen, [2010](#brb31263-bib-0105){ref-type="ref"}). Given the limited capacity of WM, protecting the memory from interference seems crucial for successful WM performance. However, recent studies testing whether the increased alpha activity could serve such an inhibitory function did not support the inhibition hypothesis (Poch, Valdivia, Capilla, Hinojosa, & Campo, [2018](#brb31263-bib-0080){ref-type="ref"}; Schroeder, Ball, & Busch, [2018](#brb31263-bib-0089){ref-type="ref"}). Thus, current research focuses more on the role of alpha oscillations in controlling the timing within neural networks (Klimesch, [2012](#brb31263-bib-0046){ref-type="ref"}). Findings of oscillatory activity in the gamma frequency range (30--120 Hz) during WM maintenance (Howard et al., [2003](#brb31263-bib-0034){ref-type="ref"}) suggested that a coordinated interplay between alpha and gamma oscillations supports WM function (Roux & Uhlhaas, [2014](#brb31263-bib-0082){ref-type="ref"}). The role of gamma oscillations for cognition and memory has been established more thoroughly than for alpha. For example, gamma band activity was observed during the delay interval in a delayed matching‐to‐sample task and was absent in the control task (Tallon‐Baudry, Bertrand, Peronnet, & Pernier, [1998](#brb31263-bib-0100){ref-type="ref"}). 
This finding supported the hypothesis that visual objects are represented by distributed cell assemblies, synchronized in their gamma band activity (Tallon‐Baudry, Bertrand, Delpuech, & Pernier, [1997](#brb31263-bib-0099){ref-type="ref"}). Similarly, load‐dependent gamma band activity was found in a visuospatial WM task, in which participants were required to memorize the positions of red disks only and to ignore the positions of the blue disks (Roux & Uhlhaas, [2014](#brb31263-bib-0082){ref-type="ref"}). A consistent relationship between the amplitude of gamma oscillation and the number of target items suggested that gamma oscillations are implicated in the maintenance of relevant WM information (Daume, Gruber, Engel, & Friese, [2017](#brb31263-bib-0014){ref-type="ref"}). This role of gamma oscillation seems universal across sensory modalities because a gamma increase had been reported for secondary somatosensory areas during retention in a somatosensory WM task (Haegens, Osipova, Oostenveld, & Jensen, [2010](#brb31263-bib-0027){ref-type="ref"}). In a study of auditory pattern memory, induced gamma activity was enhanced over left inferior‐frontal and anterior‐temporal regions during retention, while this was not the case in a control condition (Kaiser, Ripper, Birbaumer, & Lutzenberger, [2003](#brb31263-bib-0043){ref-type="ref"}). The authors interpreted their findings as showing that gamma activity is a correlate of cortical networks involved in the mental representation of sensory information. Recent findings that slow alpha waves and fast gamma activity occurred simultaneously during WM maintenance were discussed as an interaction of alpha and gamma serving as a mechanism of neural communication (Canolty & Knight, [2010](#brb31263-bib-0012){ref-type="ref"}). It was thought that alpha waves modulate the excitability of neural networks that produce high‐frequency oscillations. In this mechanism of cross‐frequency coupling, alpha oscillations seem to play the leading role in controlling gamma. The aim of the current study was to provide further support for emerging concepts about the roles of oscillatory activity underlying WM. We hypothesized that multiple functional roles of alpha could be observed simultaneously in the same experiment. Differences in the experimental paradigms in previous studies may have contributed to differences in the observed alpha effects. We implemented a modified Sternberg paradigm (Sternberg, [1966](#brb31263-bib-0097){ref-type="ref"}) because the encoding, retention, and retrieval intervals are well separated compared to other WM experiments. This allowed for separate analyses of oscillatory MEG activities for the subsequent WM intervals. First, we analyzed the temporal dynamics of alpha activity during the different stages of WM. Specifically, we expected alpha during the retention interval to be involved in the timing of neural activity. Therefore, time intervals of increased alpha activity would be more phasic compared to intervals of decreased alpha activity. We identified underlying cortical sources with beamformer source analysis, measured alpha coherence and alpha-gamma coupling between the cortical sources, and compared those connectivity measures with alpha ERD and ERS. 2. [materials and methods]{.smallcaps} {#brb31263-sec-0006} ====================================== 2.1. 
Participants {#brb31263-sec-0007} ----------------- Twenty‐five adults (10 female, 15 male) between 21 and 43 years of age were recruited for the study. Participants reported good health and no history of neurological or psychiatric disorders, and they had normal vision without requiring correction. They provided written consent after receiving a full explanation of the study, which was approved by the Research Ethics Board at Baycrest Centre. 2.2. Experimental task {#brb31263-sec-0008} ---------------------- The sequence of visual stimuli for the modified Sternberg paradigm started with a cue symbol (+), followed by a sequential list of five capital letters, a blank screen retention interval, and a probe. The study list was a unique combination of randomly chosen consonant letters. Vowels were excluded to make it less likely that participants chunked the list into a word. The probe was a pair of two letters, which had been presented next to each other in the study list. Participants had to decide whether the probe items had been presented in the same or reversed order in the study list. Cue and list items were presented for the duration of 700 ms with an inter‐onset interval of 1,000 ms, which resulted in a total duration of 5,700 ms of visual stimulation (Figure [1](#brb31263-fig-0001){ref-type="fig"}a). Participants held the studied list in memory during the retention interval of 2,300 ms between the offset of the last list item and probe onset. The retention interval was chosen to be longer than 1,500 ms to reduce the effect of the most recent list item being easily remembered (Olton & Samuelson, [1976](#brb31263-bib-0071){ref-type="ref"}). The probe was presented for 1,000 ms, and participants responded with a right‐hand button press within 2,000 ms after the probe onset. A shorter reaction time was expected when the letters of the probe pair were in the same order as in the list than for the more difficult task of finding the reversely ordered probe pair. No feedback was given on whether the response was correct or not. The next sequence was initiated 4,000 ms after the button press. Thirty WM sequences were performed within an experimental block of 7.5 min duration. Each participant completed six blocks within a session. Stimulus presentation was controlled by Presentation software (Neurobehavioural Systems, Inc., Berkeley, CA). The stimuli were projected onto a back‐projection screen with a 50 cm diagonal at a distance of 60 cm in front of the participant. The letters were black on a gray background and had a height of 80 mm, corresponding to a visual angle of 7.6 degrees. For precise timing of the stimulus events, picture onsets were detected with a photodiode. ![Working memory experiment. (a) Time course of the experimental paradigm. After a start cue (+), the five letter list items were presented sequentially for 700 ms duration and 1,000 ms inter‐onset interval. The retention interval between the offset of the last list item and the probe onset lasted for 2,300 ms. The cue onset served as zero‐time for the data analysis. (b) Group mean reaction time in relation to the serial position and order of the probe pair. The error bars denote the 95% confidence intervals of the group mean](BRB3-9-e01263-g001){#brb31263-fig-0001}
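For illustration, a minimal MATLAB sketch of the trial construction described above is given below. The consonant pool and the function name are hypothetical, and any randomization detail beyond those stated in the text (unique consonants, an adjacent probe pair, same or reversed order) is an assumption rather than the authors' implementation.

```matlab
function trial = makeTrial()
% Sketch of one trial of the modified Sternberg task: a study list of
% five unique consonants and a probe pair of two adjacent list items,
% presented in the same or reversed order.
consonants = 'BCDFGHJKLMNPQRSTVWXZ';          % one possible pool; vowels excluded
idx        = randperm(numel(consonants), 5);  % five unique letters
trial.list = consonants(idx);

pairStart       = randi(4);                   % a pair can start at positions 1-4
pair            = trial.list(pairStart : pairStart + 1);
trial.sameOrder = rand < 0.5;                 % assumed 50/50 split of probe orders
if trial.sameOrder
    trial.probe = pair;                       % same order as in the list
else
    trial.probe = fliplr(pair);               % reversed order
end
end
```

2.3. MEG recording {#brb31263-sec-0009} ------------------ Magnetoencephalography was recorded in a quiet magnetically shielded room using a 151‐channel whole‐head axial gradiometer‐type MEG system (CTF‐MEG, Port Coquitlam, BC, Canada) at the Rotman Research Institute. 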
Two MEG channels were disabled for technical reasons. Participants were comfortably seated in an upright position with the head resting inside the helmet‐shaped MEG device. The magnetic field data were low‐pass filtered at 200 Hz, sampled at 625 Hz, and stored continuously. The participant\'s head position relative to the MEG sensors was registered at the beginning and end of each recording block using three small electromagnetic coils, attached to fiducial points at the nasion and the left and right pre‐auricular points. For an experimental block in which the fiducial positions were different in any direction by more than 5 mm from the mean, large head movements were assumed, and the block was repeated. The mean of both fiducial positions also defined the head‐based coordinate system with the origin at the midpoint between the bilateral pre‐auricular points. The posterior‐anterior x‐axis runs from the origin to the nasion, the y‐axis runs from the right to the left ear, perpendicular to x in the plane of the three fiducials, and the inferior‐superior z‐axis runs perpendicular to the x‐y plane toward the vertex. Trigger signals, indicating time points and types of stimulus events, were recorded simultaneously with the MEG. 2.4. Data analysis {#brb31263-sec-0010} ------------------ The data analysis was aimed at showing how changes in the magnitude of alpha oscillation relate to the different stages of WM processes, how these alpha power changes manifest across the brain, whether alpha coherence between sensors would indicate functional connectivity, and whether temporal coupling between alpha and gamma rhythms would indicate a role of alpha for precise neural timing. The MEG data were preprocessed to remove eyeblink and heartbeat artifacts. First, the time points of the artifacts were identified using the independent component analysis function **fastica** from the EEGLAB toolbox (Delorme & Makeig, [2004](#brb31263-bib-0016){ref-type="ref"}). Spatiotemporal templates were constructed as the first principal components of averaged artifacts and were used to eliminate artifacts in the continuous data (Kobayashi & Kuriki, [1999](#brb31263-bib-0054){ref-type="ref"}). The preprocessed MEG data were then parsed into epochs of 16 s duration, equivalent to 10,000 samples. The cue onset defined the zero‐time. Each epoch contained 2.0 s of pre‐stimulus time, the encoding interval of 5.7 s of visual stimulation, the 2.3 s retention interval, and a 6.0 s interval, consisting of probe presentation, memory recall, decision‐making, response, and post‐response times. 2.5. Time‐frequency analysis {#brb31263-sec-0011} ---------------------------- Time‐frequency analysis was applied to all epochs of the MEG sensor data to study the temporal dynamics of oscillatory brain activity and its spatial variation across the sensor domain. The time‐frequency representation was calculated at 64 frequencies, logarithmically spaced between 2 Hz and 60 Hz, using a complex Morlet wavelet (Kronland‐Martinet, Morlet, & Grossmann, [1987](#brb31263-bib-0059){ref-type="ref"}; Samar, Bopardikar, Rao, & Swartz, [1999](#brb31263-bib-0084){ref-type="ref"}). The full width of the wavelet at half of its maximum was equivalent to two cycles at 2 Hz and six cycles at 60 Hz. This approach of varying the wavelet width across frequencies was suitable to account for the trade‐off between time and frequency resolution across the frequency range of interest (Bruns, [2004](#brb31263-bib-0009){ref-type="ref"}). 
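To make the wavelet parameters concrete, the following MATLAB sketch constructs such a wavelet family. Two details are assumptions rather than the authors' stated implementation: the cycle count is interpolated log-linearly between the two given endpoints, and "width at half of its maximum" is read as the half-intensity (power) width. Under these assumptions, the 10-Hz wavelet reproduces the 389 ms width quoted below.

```matlab
% Sketch: complex Morlet wavelet family with frequency-dependent width.
fs     = 625;                                 % sampling rate in Hz
freqs  = logspace(log10(2), log10(60), 64);   % 64 log-spaced frequencies
cycles = 2 + 4 * log(freqs ./ 2) ./ log(30);  % 2 cycles at 2 Hz ... 6 cycles at 60 Hz
                                              % (assumed log-linear interpolation)
wavelets = cell(1, numel(freqs));
for k = 1:numel(freqs)
    f     = freqs(k);
    fwhm  = cycles(k) / f;                    % half-intensity width in seconds
    sigma = fwhm / (2 * sqrt(log(2)));        % Gaussian SD of the amplitude envelope
    t     = (-4*sigma : 1/fs : 4*sigma).';    % kernel time axis (column vector)
    w     = exp(2i * pi * f * t) .* exp(-t.^2 / (2 * sigma^2));
    wavelets{k} = w / sum(abs(w));            % amplitude normalization
end
```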
The 10‐Hz wavelet had a half‐intensity width of 389 ms, corresponding to a bandwidth of 2.57 Hz. Wavelet coefficients *w(t,f)* were calculated at each 8th sample in time by convolving the time series with the wavelet. Before applying the wavelet transform to each trial of an experimental block, the averaged signal was subtracted to reduce the effect of evoked responses on the power measures. The resulting 1,250 × 64 time‐frequency coefficients for each trial, sensor, and participant were stored for further analyses. All data analyses were performed with in‐house developed Matlab functions. 2.6. Time courses and topographic maps of ERD/ERS {#brb31263-sec-0012} ------------------------------------------------- ERD and ERS were computed as signal power changes relative to the signal power in the baseline interval for each time‐frequency bin and each MEG sensor (Graimann & Pfurtscheller, [2006](#brb31263-bib-0026){ref-type="ref"}). The baseline was the 2‐s interval preceding the onset of the visual cue. Signal power *P(t,f)* was calculated as the product of each wavelet coefficient and its complex conjugate. For each frequency bin, the signal power was normalized relative to the mean power $P_{B}$ in the baseline interval and expressed in percent: $ERS = 100 \times \left( {P\left( {t,f} \right)/P_{B}{(f)} - 1} \right)$. Negative values were termed ERD. ERD/ERS values were averaged across trials, repeated blocks, and participants. Alternatively, ERD/ERS is sometimes expressed as the logarithm of the signal power ratio and scaled in decibels (Makeig, [1993](#brb31263-bib-0064){ref-type="ref"}). The percent and the logarithmic measures are closely similar for small changes, for example, ±10%, because of the approximation $\ln\left( {1 + x} \right) \approx x$ for $x \ll 1$. For larger signal power changes, the logarithmic measure numerically emphasizes ERD, whereas the percent measure numerically emphasizes ERS. We analyzed alpha ERD/ERS both in the sensor domain and, after applying a MEG beamformer analysis, in the source domain. The source domain analysis allowed for separating the activity in multiple cortical sources for studying connectivity properties. We employed the sensor domain analysis for studying the global properties of alpha oscillations. First, we aimed at providing a spatial map of consistent alpha ERD/ERS across the head. Therefore, we applied a principal component analysis (PCA) to the multivariate ERD/ERS data after averaging across all participants. The PCA decomposed the 149 time series of ERD/ERS in the 8 Hz to 14 Hz frequency band into principal components (PC) consisting of a single time series. The corresponding topographic map of factor loads indicated how strongly the temporal pattern of ERD/ERS was represented at each sensor. A correlation between the individual time series and each PC measured the similarity between individual ERD/ERS and the group mean PC. We applied a *t* test to the similarity index for testing whether the individual participants contributed consistently to the group mean ERD/ERS map. We corrected the p‐values for the false discovery rate using the **mafdr** Matlab function.
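As a concrete illustration of the power and ERD/ERS measures described in this section, the following MATLAB sketch processes the epochs of one sensor. Here `data`, `wavelets`, and `freqs` are the hypothetical variables from the previous sketch, averaging across blocks and participants is omitted, and implicit expansion (Matlab R2016b or later) is assumed for the element-wise division.

```matlab
% Sketch: trial-mean wavelet power and baseline-normalized ERD/ERS.
% data: [nSamples x nTrials] epoched sensor signal (16 s at 625 Hz,
%       epoch starting 2.0 s before the cue onset).
fs      = 625;
nTrials = size(data, 2);
data    = data - mean(data, 2);              % subtract the trial average (evoked part)

tIdx = 1:8:size(data, 1);                    % every 8th sample -> 1,250 time bins
pw   = zeros(numel(tIdx), numel(freqs));
for k = 1:numel(freqs)
    for tr = 1:nTrials
        w = conv(data(:, tr), wavelets{k}, 'same');  % wavelet coefficients w(t,f)
        pw(:, k) = pw(:, k) + abs(w(tIdx)).^2;       % power = w .* conj(w)
    end
end
pw = pw / nTrials;                           % mean power across trials

bl  = (tIdx - 1) / fs < 2.0;                 % baseline: 2 s preceding the cue
Pb  = mean(pw(bl, :), 1);                    % baseline power per frequency
ers = 100 * (pw ./ Pb - 1);                  % percent change; negative values are ERD
```

2.7. 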
Distinct frequency bands for alpha ERD and ERS {#brb31263-sec-0013} --------------------------------------------------- We measured the peak frequencies of alpha ERD and ERS separately for the encoding and retention intervals because of previous reports that the upper and lower alpha frequencies are differently related to the memory process (Klimesch, Doppelmayr, & Hanslmayr, [2006](#brb31263-bib-0047){ref-type="ref"}; Petsche, Kaplan, Stein, & Filz, [1997](#brb31263-bib-0074){ref-type="ref"}). Also, different peak frequencies for ERD and ERS could indicate that different alpha processes are involved in ERD and ERS. One caveat for the interpretation of ERD/ERS is that the numerical values of percent changes may be large in the case of low signal power in the baseline interval while absolute power changes are small. Therefore, we analyzed absolute spectral power changes and compared those to the center frequencies of ERD and ERS effects. We measured the individual alpha frequencies as the center of gravity in the 7 Hz to 14 Hz alpha band in the averaged signal power in six regions of interest of frontal, central, and occipital sensors above the left and right hemispheres. The alpha frequencies were calculated for the signal power in the baseline interval (--2.0 s to 0), memory encoding (1.0 s to 6.0 s), and retention (6.0 s to 8.0 s) and for ERD/ERS in the encoding and retention intervals. A two‐way repeated measures ANOVA was applied to the signal power data with the factors "region of interest" (six levels) and "time interval" (three levels). The ANOVA for the ERD/ERS had only two levels for the factor "time interval." We performed post hoc *t* tests and calculated confidence intervals for the ERD/ERS frequencies using bootstrap resampling. 2.8. Effect of memory load on alpha ERD/ERS {#brb31263-sec-0014} ------------------------------------------- Previous research showed increased alpha ERS with increasing memory load (Gomarus, Althaus, Wijers, & Minderaa, [2006](#brb31263-bib-0025){ref-type="ref"}). In our study, the memory load increased sequentially with the increasing number of letters in the study list. We analyzed the ERD/ERS peak amplitudes during the encoding interval to study the effect of the memory load. For the first four visually presented letters in the list, we measured the individual peak amplitudes of ERD/ERS in clusters of occipital sensors in the left and right hemispheres. We performed a three‐way ANOVA with the factors "hemisphere" (left, right), "response type" (peak, trough), and "letter position" (1st to 4th). 2.9. SAM source analysis {#brb31263-sec-0015} ------------------------ Source activity was reconstructed with synthetic aperture magnetometry (SAM) (Robinson & Vrba, [1999](#brb31263-bib-0081){ref-type="ref"}). SAM is based on a linearly constrained minimum variance beamformer (Van Veen, Van Drongelen, Yuchtman, & Suzuki, [1997](#brb31263-bib-0107){ref-type="ref"}). Participants\' head shapes were obtained with a 3‐D digitization device (Polhemus Fastrak, Polhemus, Colchester, VT). Individual head models for the beamformer were constructed by locally approximating spheres for each MEG sensor to the digitized head shape. 
A validated procedure of using standard brain and individual head models (Steinstraeter et al., [2009](#brb31263-bib-0094){ref-type="ref"}) was used to co‐register the source images with a standard anatomical MR (colin27) (Holmes et al., [1998](#brb31263-bib-0033){ref-type="ref"}). The standard MRI was warped into the individual head shapes using the Brainstorm software (Tadel, Baillet, Mosher, Pantazis, & Leahy, [2011](#brb31263-bib-0098){ref-type="ref"}). A set of weighting coefficients was determined for 72 regions of interest (ROIs) (Bezgin, Vakorin, Opstal, McIntosh, & Bakker, [2012](#brb31263-bib-0006){ref-type="ref"}; Kötter & Wanke, [2005](#brb31263-bib-0056){ref-type="ref"}). The linear combination of the weighting coefficients with the MEG data resulted in virtual sensor waveforms of the source activity at each ROI. We applied the same time‐frequency analysis that had been used in the sensor domain to the source waveforms. 2.10. Weighted phase‐lagging index (wPLI) {#brb31263-sec-0016} ----------------------------------------- For testing the hypothesis that alpha oscillations are involved in functional connectivity between brain areas, the coherence of alpha oscillations between the brain source signals was calculated over the time course of the WM task. Alpha coherence was measured using the weighted phase‐lagging index (wPLI) (Vinck, Oostenveld, Wingerden, Battaglia, & Pennartz, [2011](#brb31263-bib-0108){ref-type="ref"}), which is an extension of the phase‐lagging index (PLI) (Stam, Nolte, & Daffertshofer, [2007](#brb31263-bib-0092){ref-type="ref"}). PLI measures the asymmetry of the distribution of phase differences between two signals and describes the consistency with which the phase of one signal is leading or lagging relative to the phase of the other signal. By weighting each phase difference according to the magnitude of the lag, phase differences around zero contribute minimally to the calculation of the wPLI. This procedure reduces the probability of detecting false positive connectivity in the case of volume conducted noise sources with near‐zero phase lag and increases the sensitivity in detecting phase synchronization (Vinck et al., [2011](#brb31263-bib-0108){ref-type="ref"}). The wPLI approach showed the best performance in the presence of noise compared to other phase statistics (Wianda & Ross, [2016](#brb31263-bib-0112){ref-type="ref"}). The weighting factor is the magnitude of the imaginary cross‐spectrum. The complex cross‐spectrum *C(t,f)* between two sources with complex wavelet coefficients *X(t,f)* and *Y(t,f)* was computed as $C(t,f) = X(t,f) \bullet Y^{\ast}(t,f)$, where \* indicates the complex conjugate. The wPLI was computed as$$wPLI = \frac{\left| {E\left\{ {\left| {imag\left( C \right)} \right|sgn\left( {imag\left( C \right)} \right)} \right\}} \right|}{E\left\{ \left| {imag\left( C \right)} \right| \right\}}$$ The wPLI was calculated for every pair of the 72 sources for the 12‐Hz time‐frequency coefficients and the 1,250 samples in time. This analysis resulted in a stack of 1,250 connectivity matrices with a dimension of 72 × 72 for each participant. For comparing connectivity during the retention interval, encoding, and pre‐stimulus baseline, wPLI values were averaged across the time intervals of 6.0 s to 8.0 s, 1.0 s to 6.0 s, and −2.0 s to 0, respectively, reducing the data to three 72 × 72 connectivity matrices for each participant. 
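Read with the expectation taken across trials, the wPLI formula can be sketched in MATLAB as follows; `X` and `Y` are assumed to hold the 12-Hz wavelet coefficients of two source signals.

```matlab
% Sketch: weighted phase-lagging index between two sources at one frequency.
% X, Y: [nTimeBins x nTrials] complex wavelet coefficients at 12 Hz.
C    = X .* conj(Y);                          % cross-spectrum per time bin and trial
imC  = imag(C);
num  = abs(mean(abs(imC) .* sign(imC), 2));   % |E{ |imag(C)| * sgn(imag(C)) }|
den  = mean(abs(imC), 2);                     %  E{ |imag(C)| }
wpli = num ./ den;                            % one value in [0, 1] per time bin
```

Repeating this over all source pairs and averaging `wpli` within the retention, encoding, and baseline windows then yields the three 72 × 72 condition matrices.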
For estimating the effect size of the connectivity measures, we compared the group mean wPLI for all elements of the connectivity matrix against a maximum obtained from randomized surrogate data. For each trial, we added a random phase in the range of ‐π to π to the data, calculated the wPLI as for the original data, and identified across all 72 × 72 source pairs the maximum of the group mean surrogate wPLI for the retention time interval. We estimated the confidence interval for the mean of the original wPLI by bootstrap randomization across participants and compared the lower 95% bound against the maximum in the surrogate data. The networks of the strongest connections were visualized using the BrainNet toolbox (Xia, Wang, & He, [2013](#brb31263-bib-0113){ref-type="ref"}). For testing the overall change in connectivity between the three time intervals and differences between hemispheres, we obtained a univariate connectivity measure from a PCA applied to the connectivity matrix and correlation with individual connectivity matrices. Differences between the time intervals for the univariate measure were assessed with permutation tests (*n* = 1,000) across participants. A simple measure of connectedness was obtained for each ROI as the mean across the corresponding row of the connectivity matrix. For testing the hypothesis that connectedness depended on the level of ERS, we performed a linear regression of the ERD/ERS data on the connectedness for each ROI. 2.11. Alpha‐gamma phase‐amplitude coupling (PAC) {#brb31263-sec-0017} ------------------------------------------------ The relationship between the magnitude of gamma activity and the phase of alpha oscillation was analyzed with the cross‐frequency coupling method, which estimates the strength of pairwise interactions between two signals at different frequencies, both between and within sources (Buzsáki, [2010](#brb31263-bib-0010){ref-type="ref"}; Buzsáki, Logothetis, & Singer, [2013](#brb31263-bib-0011){ref-type="ref"}; Canolty & Knight, [2010](#brb31263-bib-0012){ref-type="ref"}). Cross‐frequency coupling between the phase of a low‐frequency signal and the amplitude of a higher frequency signal is termed PAC and has been applied most successfully (Cohen et al., [2009](#brb31263-bib-0013){ref-type="ref"}; Osipova, Hermes, & Jensen, [2008](#brb31263-bib-0072){ref-type="ref"}; Voytek et al., [2010](#brb31263-bib-0110){ref-type="ref"}). Specifically, PAC tests whether the amplitude of gamma oscillation in a signal *y(t,f~γ~)* depends on the alpha phase in *x(t,f~α~)*. We employed a PAC algorithm in the time domain. The beamformer source signals were band‐pass filtered in the alpha and gamma frequency bands by convolution with FIR filters, designed with the **fir1** Matlab function. For exploring the properties of the cross‐frequency coupling, the frequency for phase (alpha) was varied between 5 Hz and 20 Hz in 1‐Hz steps, and the frequency for the amplitude (gamma) was varied between 30 Hz and 150 Hz in 5‐Hz steps. While the alpha band‐pass filter was in the narrow range between 0.85 and 1.18 times the alpha frequency, the gamma band‐pass filter was defined by the gamma frequency ±1.2 times the alpha frequency. Thus, the gamma bandpass included the upper and lower sidebands of the amplitude‐modulation spectrum (Aru et al., [2015](#brb31263-bib-0001){ref-type="ref"}). The Hilbert transform was applied to obtain complex signals. 
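The filtering step just described may be sketched as follows for one pair of analysis frequencies. The filter order is an assumption, since the text specifies only the use of **fir1** and the pass-band definitions, and `src` stands for one beamformer source waveform.

```matlab
% Sketch: band-pass filtering and analytic signals for the PAC analysis.
% src: source waveform (column vector), sampled at fs = 625 Hz.
fs  = 625;
fa  = 12;                                     % frequency for phase (alpha), Hz
fg  = 45;                                     % frequency for amplitude (gamma), Hz
ord = 400;                                    % FIR filter order (assumed)

bA = fir1(ord, [0.85*fa, 1.18*fa] / (fs/2));          % narrow alpha band
bG = fir1(ord, [fg - 1.2*fa, fg + 1.2*fa] / (fs/2));  % gamma band incl. AM sidebands

alphaSig = hilbert(filtfilt(bA, 1, src));      % zero-phase filtering, analytic signal
gammaEnv = abs(hilbert(filtfilt(bG, 1, src))); % gamma amplitude envelope
```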
The time points of the peak maxima of the band‐pass filtered alpha signal were taken as phase references. The gamma signal was parsed into short epochs with the duration equal to four cycles of the alpha frequency. Each epoch was centered at the alpha maximum and was fitted with a complex wavelet at the alpha frequency by calculating the dot product between the gamma amplitude and the wavelet. The outcome measure of this procedure was the phase of the gamma envelope relative to the alpha peak reference. We used circular statistics to reject the null hypothesis of a uniform phase distribution, indicating no phase relation between the gamma amplitude and the alpha phase. Mapping the outcome measure of the circular z‐score (Fisher, [1995](#brb31263-bib-0019){ref-type="ref"}, p 70) across the frequencies for phase and amplitude resulted in the comodulogram. Cross‐frequency comodulograms were calculated for all pairs of 72 × 72 sources for all individual participants. Comodulograms under the null hypothesis were obtained from surrogate data by adding, trial by trial, a random delay in the range between plus and minus a cycle of the alpha frequency to the gamma data. This technique removed the phase relation between the alpha and gamma data without disturbing the temporal dynamics of the spectral properties of the signal (Scheffer‐Teixeira & Tort, [2016](#brb31263-bib-0087){ref-type="ref"}). Several authors cautioned that cross‐frequency measures are subject to interpretation and may result from nonlinearity or other events in the signals under consideration but not from neural interaction (Gerber, Sadeh, Ward, Knight, & Deouell, [2016](#brb31263-bib-0023){ref-type="ref"}; Hyafil, [2015](#brb31263-bib-0037){ref-type="ref"}; Kramer, Tort, & Kopell, [2008](#brb31263-bib-0057){ref-type="ref"}). Therefore, we calculated the bicoherence as an alternative method for cross‐frequency analysis and compared the different methods. The bicoherence was calculated from multiple Fourier transforms over 50% overlapping intervals of 500 ms duration across the WM maintenance interval between 6.0 and 8.0 s. The bispectrum was defined as $B\left( {f_{1},f_{2}} \right) = X{(f_{1})} \bullet X{(f_{2})} \bullet X^{\ast}{(f_{1} + f_{2})}$ with the complex Fourier transform *X* and its conjugate *X\** (Sigl & Chamoun, [1994](#brb31263-bib-0090){ref-type="ref"}) and the bicoherence $BiCoh = \sum B{(f_{1},f_{2})}/\sum{|B{(f_{1},f_{2})}|}$ (Hayashi, Tsuda, Sawa, & Hagihira, [2007](#brb31263-bib-0032){ref-type="ref"}). Bicoherence was calculated for f~1~ = 5...20 Hz and f~2~ = 20...150 Hz. For the group analysis, we considered two gamma frequency bands. First, we calculated PAC between 12‐Hz alpha and 45‐Hz gamma. Second, we averaged the individual comodulograms between 12 Hz and 14 Hz for alpha and between 60 Hz and 100 Hz for gamma. 2.12. Asymmetry of cross‐frequency coupling {#brb31263-sec-0018} ------------------------------------------- Given the hypothesis that the alpha phase controls the gamma amplitude (Fries, [2015](#brb31263-bib-0020){ref-type="ref"}), an asymmetry of PAC could be related to the directionality of neural communication. Specifically, we would interpret the asymmetry $PAC\left( {x_{\alpha},y_{\gamma}} \right) > PAC{(y_{\alpha},x_{\gamma})}$ as indicating that alpha in area *x* controls gamma in area *y* more than vice versa. 
To test the asymmetry, we performed a Student\'s *t* test between *PAC*(x,y) and *PAC*(y,x) and corrected p‐values for the false discovery rate using the Matlab function mafdr. 3. [results]{.smallcaps} {#brb31263-sec-0019} ======================== 3.1. Behavioral performance {#brb31263-sec-0020} --------------------------- The group mean reaction time (RT) increased with increasing serial position of the probe in the study list, with the exception that the RT for the last position was shorter again. RT was generally longer for the probe in reverse order (Figure [1](#brb31263-fig-0001){ref-type="fig"}b). A repeated measures ANOVA with the factors "probe order" (two levels) and "serial position" (four levels) revealed main effects of "probe order" (*F*(1,24) = 61.0, *p* \< 0.0001) and of "serial position" (*F*(3,72) = 8.85, *p* \< 0.0001) but no interaction between both factors (*F*(3,72) = 0.7, *p* = 0.6, n.s.). Pairwise comparisons showed significance for the RT increase when the serial position of the probe was shifted from (1--2) to (2--3) (*t*(23) = 2.47, *p* = 0.021) and from (2--3) to (3--4) (*t*(23) = 2.21, *p* = 0.037) and for the RT decrease between the positions (3--4) and (4--5) (*t*(23) = 3.60, *p* = 0.0014). RTs for the first and last positions were not different (*t*(23) = 0.18, *p* = 0.9, n.s.). The RT was on average 366 ms longer for the reversed probe (*t*(23) = 7.8, *p* \< 0.0001). The behavioral data, showing the effect of serial position and the recency effect, indicate that the participants performed the WM task. The longer RT for the reversed probe order suggests that participants performed an additional task of mentally manipulating the probe. 3.2. Time courses and topographic maps of ERD/ERS {#brb31263-sec-0021} ------------------------------------------------- The PCA applied to grand averaged time series of ERD/ERS provided a global overview of how the temporal modulation of alpha power during the different stages of WM was represented in spatial patterns around the head. The time course of the first PC, accounting for 86% of the variance, is shown in Figure [2](#brb31263-fig-0002){ref-type="fig"}a. During the encoding interval, alpha power decreased after the presentation of each visual stimulus, reaching its minimum on average at 260 ms (95% CI = ±27.0 ms) after stimulus onset, which was immediately followed by a partial rebound with a maximum at 750 ms (95% CI = ±26.5 ms) latency. In contrast to the prominent ERD during the encoding interval, alpha ERS occurred during memory retention. Then again, alpha ERD was prominent during and after the probe presentation. The topographic map corresponding to the first PC depicted that ERD/ERS from occipital‐parietal areas, as well as from frontal areas, contributed to the first PC. Circle symbols in Figure [2](#brb31263-fig-0002){ref-type="fig"}a indicate sensors at which the first PC was consistently represented in the individual data at *p* \< 0.001. The topographic map of the factor load of the second PC, accounting for 10% of the variance, showed major contributions from sensors above left central areas, corresponding to the sensorimotor cortices, contralateral to the responding right hand (Figure [2](#brb31263-fig-0002){ref-type="fig"}b). The time course of the second PC showed only minor variations during the memory encoding and retention intervals. However, a strong alpha ERD occurred before and after probe presentation, suggesting an involvement in response preparation and execution. 
Further PCs, accounting in total for less than 4% of the variance, were not consistently represented in the individual ERD/ERS data. ![Overview of the ERD/ERS time‐frequency analysis. (a) Time course and topographical map of the first principal component (PC) of a principal component analysis (PCA) applied to the grand averaged ERD/ERS data. Filled circle symbols indicate the sensors at which the first PC was consistently represented in the individual data at *p* \< 0.001. Occipital and parietal sensors contributed predominantly to the first PC. The maximum occurred at the sensor MLO21. (b) The second PC was predominant at left central sensors above the sensorimotor cortex. Its time course was less modulated by the visual stimuli compared to the first PC. (c) Time‐frequency representation of grand mean ERD/ERS observed with the occipital sensor MLO21. The contour lines indicate the *p* = 0.001 level of a *t* test, applied across participants. The time course of visual stimuli is shown as reference on top. (d) Time‐frequency map of grand mean ERD/ERS at the frontotemporal sensor MRT31, and (e) the sensor MLC42 above left central areas](BRB3-9-e01263-g002){#brb31263-fig-0002} Visualization of the grand averaged time‐frequency map of ERD/ERS obtained from the left occipital sensor MLO21, which showed the largest factor load for the first PC, revealed six intervals of ERD at alpha frequencies, which followed the onsets of the visual cue and the five list items, as well as a strong ERD after probe presentation (Figure [2](#brb31263-fig-0002){ref-type="fig"}c). The initial phase of those ERD intervals showed a spectral spread into the beta range. In contrast to alpha ERD during encoding, the retention interval was characterized by ERS at alpha frequencies, also extending into the beta range. Informed by previous literature, one focus of data analysis was on the alpha ERS during memory retention. The time‐frequency map provided a first hint that the center frequency of ERS might be higher than that of ERD during memory encoding. Similar observations of alpha ERD during encoding and ERS during retention were made for the frontotemporal sensor MRT31 (Figure [2](#brb31263-fig-0002){ref-type="fig"}d). However, the magnitudes of ERD/ERS were generally smaller. Alpha ERD was concentrated in the time intervals immediately following the visual stimulus presentation. Moreover, the frontotemporal sensor showed distinct intervals of theta ERS following the visual stimuli, which was also strongly expressed after the presentation of the last list item and continued during memory retrieval after the probe occurred. A different time course of alpha ERD/ERS was observed from the sensor MLC42, which had the largest factor load for the second PC above left central areas (Figure [2](#brb31263-fig-0002){ref-type="fig"}e). The modulation of ERD by the visual stimuli was less expressed in this sensor. Instead, ERD developed during the time interval of stimulus presentation, continued during the retention interval, and was strongest after the probe presentation. The most noticeable difference to the occipital ERD/ERS map was the absence of ERS during the maintenance interval. Moreover, ERD extended into the beta range. 3.3. Alpha ERS during memory retention {#brb31263-sec-0022} -------------------------------------- For quantitative analysis of ERD/ERS in specific time intervals of the WM task, nonparametric bootstrap resampling with replacement was applied to the ERD/ERS data across the *n* = 25 participants. 
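A minimal sketch of this resampling procedure, assuming `erds` holds one ERD/ERS value per participant for a given sensor and time window and that 2,000 resamples are sufficient:

```matlab
% Sketch: bootstrap 95% confidence interval for the group mean ERD/ERS.
% erds: [nSubjects x 1] individual values (n = 25).
nBoot = 2000;                                 % number of resamples (assumed)
n     = numel(erds);
mboot = zeros(nBoot, 1);
for b = 1:nBoot
    pick     = randi(n, n, 1);                % draw n participants with replacement
    mboot(b) = mean(erds(pick));
end
ci = prctile(mboot, [2.5, 97.5]);             % 95% confidence interval of the mean
```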
Figure [3](#brb31263-fig-0003){ref-type="fig"}a illustrates the time courses of alpha ERD/ERS in selected frontotemporal, central, and occipital sensors. Shaded areas indicate the 95% confidence intervals (CIs) for the group mean. The 95% CI for frontotemporal and occipital sensors did not include zero during the stimulus presentation and memory retention, indicating significant effects of alpha ERD and ERS. During memory retention, the CIs of frontotemporal and occipital sensors were nonoverlapping with each other, indicating significantly larger effect sizes in occipital than in frontotemporal sensors. In contrast, the left central sensors showed no effect of ERS.

![Alpha ERD during memory encoding and ERS during retention. (a) Group mean time series of ERD/ERS in the alpha band (8--12 Hz), obtained at the occipital sensor MLO21, the frontotemporal sensor MRT31, and the left central sensor MLC42. The shaded areas indicate the 95% confidence intervals for the group mean. (b) The topographic map of alpha ERD/ERS, averaged across the encoding interval (1.0 s--6.0 s), is dominated by cool colors, indicating ERD. Sensors at which ERD reached significance at *p* \< 0.005 are indicated with circle symbols. (c) Alpha ERD/ERS during the retention interval (6.0 s--8.0 s). Warm colors in occipital and frontal sensors indicate predominant alpha ERS. Sensors with ERS at *p* \< 0.005 are indicated with circle symbols. Significant alpha power increase during the retention interval was observed at occipital and frontal sensors](BRB3-9-e01263-g003){#brb31263-fig-0003}

The *t* tests for alpha ERD during the encoding interval and ERS during memory retention revealed how consistently individual participants contributed to ERD and ERS at the various MEG sensors. ERD during encoding was maximally expressed in left occipital sensors but was prevalent across the whole sensor array, except above right central areas (Figure [3](#brb31263-fig-0003){ref-type="fig"}b). Alpha ERS during the retention interval was maximal in right occipital sensors. In addition, a cluster of sensors above right frontotemporal areas showed ERS effect sizes at *p* \< 0.005 (Figure [3](#brb31263-fig-0003){ref-type="fig"}c).

3.4. Distinct frequency bands for alpha ERD and ERS {#brb31263-sec-0023}
---------------------------------------------------

We examined whether ERD and ERS occurred at different center frequencies within the alpha band in various brain regions. For example, visual inspection of the time‐frequency map in Figure [2](#brb31263-fig-0002){ref-type="fig"}c revealed that the center frequency of ERS during memory retention was higher than the peak frequency of ERD during the encoding interval. Moreover, ERS during retention extended into the beta range, indicating the involvement of oscillations beyond the alpha band. Power spectra with the alpha peaks during the baseline, encoding, and retention intervals are illustrated in Figure [4](#brb31263-fig-0004){ref-type="fig"}a. The ANOVA for the peak alpha frequency revealed an effect of the time interval (*F*(2,48) = 6.27, *p* = 0.0038). The alpha frequency during encoding was higher than during baseline (*t*(149) = 5.32, *p* \< 0.0001). Similarly, the alpha peak was at a higher frequency during retention compared with baseline (*t*(149) = 6.55, *p* \< 0.0001). However, the alpha frequency was not different between the encoding and retention intervals (*t*(149) = 0.08, *p* = 0.94, n.s.). Moreover, the ANOVA for the peak frequency in alpha power showed an effect of the sensor positions (*F*(5,120) = 14.2, *p* \< 0.0001).
The alpha frequency increased between frontal and central sensors (*t*(74) = 3.77, *p* \< 0.0001) and between central and occipital sensors (*t*(74) = 4.78, *p* \< 0.0001). There was also a "time interval" by "sensor group" interaction (*F*(10,240) = 3.93, *p* = 0.0001), because of effects of "time interval" in occipital sensors (*F*(2,48) = 9.98, *p* = 0.0002) and frontal sensors (*F*(2,48) = 3.88, *p* = 0.028) but not in central sensors (*F*(2,48) = 1.74, *p* = 0.19, n.s.).

![Alpha peak frequency in different time intervals of the working memory (WM) task at an example occipital sensor MRO21. (a) Alpha power in the baseline, encoding, and retention intervals. The error bars indicate the 95% confidence limits of the mean peak alpha frequency. (b) Alpha ERD and ERS in the encoding and retention intervals. Nonoverlapping confidence intervals indicate a consistently higher frequency for the ERS peak in the retention interval compared with the ERD trough during encoding. The inset at the top indicates the time intervals for the spectrum analysis](BRB3-9-e01263-g004){#brb31263-fig-0004}

The ERD/ERS spectra for encoding and retention are shown in Figure [4](#brb31263-fig-0004){ref-type="fig"}b. The ANOVA for the ERD/ERS peak frequency revealed an effect of the "time interval" (*F*(1,24) = 7.93, *p* = 0.0096). Averaged across sensors, the peak alpha frequency was 9.5 Hz during encoding and 12.9 Hz during retention. An interaction of "time interval" and "sensor group" was significant (*F*(5,120) = 2.97, *p* = 0.015) because the alpha frequency was higher during retention compared with encoding at right occipital (*t*(24) = 3.69, *p* = 0.0011) and left occipital (*t*(24) = 2.49, *p* = 0.020) sensors and, as a tendency, at right frontal sensors (*t*(24) = 1.78, *p* = 0.063), but was not different at left frontal and central sensors.

3.5. Effect of memory load on alpha ERD/ERS {#brb31263-sec-0024}
-------------------------------------------

Each visual stimulus elicited a brief period of alpha ERD followed by an immediate rebound. We tested whether the alpha ERD/ERS in sensors above the visual cortex depended on the stimulus sequence and thus could indicate involvement in encoding the increasing memory load. A repeated measures three‐way ANOVA for the peak ERD/ERS magnitudes with the factors "hemisphere" (left, right), "response type" (ERD trough, ERS peak), and "letter position" (1st to 4th) revealed an effect of "response type" (*F*(1,24) = 66.2, *p* \< 0.0001), which is trivial by definition of the peak types. More importantly, the ANOVA revealed a "letter position" by "response type" interaction (*F*(3,72) = 4.02, *p* = 0.011) and a "letter position" by "hemisphere" interaction (*F*(3,72) = 8.97, *p* \< 0.0001). To unveil the causes of the interactions, separate two‐way ANOVAs with the factors "letter position" and "response type" were performed for the ERD/ERS magnitudes in the right and left hemispheres, respectively. The ANOVA for the right hemisphere revealed a "letter position" by "response type" interaction (*F*(3,72) = 4.52, *p* = 0.0058) because the ERS peak magnitudes increased monotonically with the increasing number of letters in the study list, while the magnitudes of the ERD troughs remained at a steady level. The ANOVA for the left hemisphere showed only a tendency for a "letter position" by "response type" interaction (*F*(3,72) = 2.51, *p* = 0.065).
Pairwise comparisons found that subsequent peak amplitudes in the right hemisphere at 12 Hz were significantly larger than the first peak (positions 1--2: *t*(24) = 2.29, *p* = 0.031; positions 1--3: *t*(24) = 2.79, *p* = 0.010; positions 1--4: *t*(24) = 2.67, *p* = 0.013). The linear regression was significant for the right hemispheric peaks (*R*^2^ = 0.13, *F*(1,99) = 14.6, *p* = 0.0002) but not for the troughs or the left hemispheric responses (Figure [5](#brb31263-fig-0005){ref-type="fig"}). Moreover, the analysis of the ERD magnitudes revealed that the cue stimulus elicited a smaller ERD than the subsequent letter stimuli (*t*(24) = 3.15, *p* = 0.0022).

![Time courses of alpha ERD/ERS in occipital sensors above the left and right visual cortices. The inset at the top depicts the selected clusters of MEG sensors. Amplitudes of peaks and troughs, indicated by open circles, were analyzed across participants. Specifically, the peak amplitudes in the right hemisphere increased with the increasing number of letters in the study list. In contrast, no significant change in the magnitude of the ERD troughs was observed over the time course of presentation of the study list. The ERD induced by the list items was significantly larger than the ERD after the cue](BRB3-9-e01263-g005){#brb31263-fig-0005}

3.6. Alpha connectivity during the retention interval {#brb31263-sec-0025}
-----------------------------------------------------

The connectivity measure wPLI was computed for all 72 sources and resulted in a 72 × 72 matrix of group mean connectivity (Figure [6](#brb31263-fig-0006){ref-type="fig"}a). The elements along the main diagonal are zero by definition. Moreover, the upper and lower triangles of the matrix mirror each other along the main diagonal because wPLI(x,y) = wPLI(y,x). A permutation test showed that overall connectivity was stronger during retention compared with the baseline interval (*p* \< 0.0001). Visual inspection of the connectivity matrix revealed similar patterns within hemispheres (i.e., the quadrants along the main diagonal) and between homologue areas across hemispheres. Correlations between the quadrants of the connectivity matrix, excluding the midline connections, showed a correlation between connectivity within the left and right hemispheres (*R*^2^ = 0.467, *F*(1,560) = 1,012, *p* \< 0.0001), as well as correlations between connectivity within the left hemisphere and interhemispheric connectivity (*R*^2^ = 0.146, *F*(1,560) = 196, *p* \< 0.0001) and between right hemispheric and interhemispheric connectivity (*R*^2^ = 0.114, *F*(1,560) = 148, *p* \< 0.0001). Permutation tests showed that the overall connectivity within the left hemisphere was larger than within the right hemisphere (*p* \< 0.0001).

![Connectivity during working memory (WM) retention. (a) Matrix of connectivity between 72 brain sources, measured with the weighted phase‐lag index (wPLI). An arrow at the color bar indicates the maximum wPLI observed in surrogate data. The red error bar shows an example of the 95% confidence interval of the mean across participants.
Abbreviations of the brain sources: CCA, anterior cingulate cortex; CCP, posterior cingulate cortex; CCR, retrosplenial cingulate cortex; CCS, subgenual cingulate cortex; A1, primary auditory cortex; A2, secondary auditory cortex; FEF, frontal eye field; IA, anterior insula; IP, posterior insula; M1, primary motor cortex; PCI, inferior parietal cortex; PCIP, cortex of the intraparietal sulcus; PCM, medial parietal cortex; PCS, superior parietal cortex; PFCCL, centrolateral prefrontal cortex; PFCDL, dorsolateral prefrontal cortex; PFCDM, dorsomedial prefrontal cortex; PFCM, medial prefrontal cortex; PFCORB, orbital prefrontal cortex; PFCPOL, polar prefrontal cortex; PFCVL, ventrolateral prefrontal cortex; PHC, parahippocampal cortex; PMCDL, dorsolateral premotor cortex; PMCM, medial (supplementary) premotor cortex; PMCVL, ventrolateral premotor cortex; S1, primary somatosensory cortex; S2, secondary somatosensory cortex; TCC, central temporal cortex; TCI, inferior temporal cortex; TCPOL, polar temporal cortex; TCS, superior temporal cortex; TCV, ventral temporal cortex; ThalAM, thalamus; V1, primary visual cortex; V2, secondary visual cortex; VACD, anterior visual cortex, dorsal; VACV, anterior visual cortex, ventral. (b) Networks of the 66 strongest connections. For the connections shown, the lower bound of wPLI was larger than the maximum obtained from the surrogate data. The networks are shown in a top view (middle panel) as well as lateral (top) and medial (bottom) sections in the left and right hemispheres. (c) Connectedness, defined as the mean across each row of the wPLI matrix, represented by the size of the circles. The color code relates to brain regions as explained in Panel d. (d) Correlation between connectedness and ERD/ERS for the 72 ROIs. Color code represents the different brain areas](BRB3-9-e01263-g006){#brb31263-fig-0006}

The group mean wPLI values were compared against the maximum value of 0.114 observed from randomized surrogate data. The network of the strongest connections is visualized in Figure [6](#brb31263-fig-0006){ref-type="fig"}b. The strongest connections (wPLI \> 0.18) involved occipital, frontal, and mid‐brain sources. Local connectivity was prominent in the visual cortex, with a strong connection (wPLI = 0.199) between the right primary visual cortex (V1) and the left secondary visual cortex (V2). Longer range connections were observed between visual cortices and the retrosplenial cingulate cortex as well as thalamic sources. However, connections between visual cortices and prefrontal cortices were numerically smaller (wPLI \< 0.16). Prefrontal cortices were mostly involved in longer range connections with mid‐brain sources and temporal cortices, whereas their local connections were relatively weaker (wPLI \< 0.15). An interesting feature of the network was that even though thalamic sources were strongly connected with both anterior and posterior brain areas, direct connections between these anterior and posterior cortical areas were relatively weaker. This suggests that the thalamic structures might serve as a connection hub.

Visual inspection of the connectivity matrix in Figure [6](#brb31263-fig-0006){ref-type="fig"}a reveals that certain rows and columns of the matrix have larger values than others. This means that some individual sources are connected to many other sources while other sources are less involved in connectivity. Therefore, we calculated the mean across each row as a measure of connectedness.
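For illustration, a minimal Python sketch of this step is given below. It computes the wPLI from its definition, $\mathrm{wPLI}(x,y) = |E[\mathrm{Im}\, S_{xy}]| / E[|\mathrm{Im}\, S_{xy}|]$, with the imaginary part of the cross‐spectrum $S_{xy}$ estimated from the analytic signals and averaged over trials and samples, and then derives the row‐mean connectedness. The data layout and the use of the Hilbert transform on band‐pass filtered data are assumptions for illustration; the published pipeline may have estimated the cross‐spectrum differently.

```python
import numpy as np
from scipy.signal import hilbert

def wpli_matrix(data):
    """Weighted phase-lag index between all source pairs.

    data : array, shape (n_trials, n_sources, n_times)
        Source time series, already band-pass filtered in the alpha band.
    Returns a symmetric (n_sources, n_sources) wPLI matrix, zero diagonal.
    """
    analytic = hilbert(data, axis=-1)         # complex analytic signal
    n_src = data.shape[1]
    wpli = np.zeros((n_src, n_src))
    for i in range(n_src):
        for j in range(i + 1, n_src):
            # Imaginary part of the cross-spectrum per trial and sample
            im = np.imag(analytic[:, i] * np.conj(analytic[:, j]))
            denom = np.abs(im).mean()
            if denom > 0:
                wpli[i, j] = wpli[j, i] = np.abs(im.mean()) / denom
    return wpli

def connectedness(wpli):
    """Mean connection strength of each source to all others (row mean)."""
    n = wpli.shape[0]
    return wpli.sum(axis=1) / (n - 1)         # diagonal is zero by definition
```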
The strength of the connectedness is visualized for the 72 ROIs in Figure [6](#brb31263-fig-0006){ref-type="fig"}c. Large values were observed in occipital and frontal sources. Considering that alpha ERS in occipital and frontal areas was a prominent characteristic of the retention interval, one question was whether such a signal power increase could indicate a timing mechanism that helps to synchronize brain areas involved in holding information during the retention interval. The regression analysis between ERD/ERS and connectedness across all 72 ROIs (Figure [6](#brb31263-fig-0006){ref-type="fig"}d) revealed that connectedness was positively correlated with ERS (*R*^2^ = 0.32, *F*(1,71) = 33.7, *p* \< 0.0001).

3.7. Cross‐frequency coupling between alpha phase and gamma amplitude {#brb31263-sec-0026}
---------------------------------------------------------------------

The increase of alpha power during memory retention could indicate that alpha is involved in a timing mechanism of synchronizing gamma oscillations in different brain areas, which was tested with a cross‐frequency analysis. Figure [7](#brb31263-fig-0007){ref-type="fig"} provides an overview of the flow of the analysis and the characteristics of the outcome measures; a simplified implementation sketch is given below in Section 3.8. One trial of the band‐pass filtered data at low and high frequencies is shown in Figure [7](#brb31263-fig-0007){ref-type="fig"}a. The peaks of the alpha oscillations in time, marked by square symbols, served as the phase reference for the low‐frequency alpha signal. The amplitude of the high‐frequency gamma signal, time locked to the alpha peak, was approximated by a wavelet at the alpha frequency (Figure [7](#brb31263-fig-0007){ref-type="fig"}b). Across the alpha intervals and experimental trials, the phase of the wavelet approximation was uniformly distributed between −π and π. The phase statistics (Figure [7](#brb31263-fig-0007){ref-type="fig"}c) resulted in a measure of phase coherence, which was Rayleigh distributed under the null hypothesis (Figure [7](#brb31263-fig-0007){ref-type="fig"}d). The phase coherence was dependent on the number of trials, which was accounted for by transformation into circular z‐scores (Figure [7](#brb31263-fig-0007){ref-type="fig"}g). The comodulograms for one individual participant demonstrated large z‐scores for gamma frequencies around 40 Hz, between 80 Hz and 100 Hz, and above 120 Hz (Figure [7](#brb31263-fig-0007){ref-type="fig"}f). In contrast, the comodulogram resulting from surrogate data, simulating the null hypothesis, showed an even distribution of small z‐scores across the plane of gamma and alpha frequencies. For comparison, the comodulogram calculated for the same data with the bicoherence method (Figure [7](#brb31263-fig-0007){ref-type="fig"}h) showed large z‐scores for gamma frequencies below and above 40 Hz, the region of the predominant maximum in Figure [7](#brb31263-fig-0007){ref-type="fig"}f. The bicoherence method, which assumes coherent high‐frequency oscillations and phase coherence between the sidebands of the amplitude‐modulation spectrum, showed no significant effects for higher gamma frequencies, in contrast to our time‐domain analysis. The differences between the PAC measures obtained with the different algorithms allow for a discussion of the underlying mechanisms.

![Cross‐frequency phase‐amplitude coupling. (a) Band‐pass filtered signals in the alpha (lower) and gamma (upper) frequency bands.
(b) The gamma signal in short epochs, referenced to the alpha peak, was approximated by a complex wavelet at the alpha frequency. The phase of the wavelet served as a measure of the phase relation between the gamma amplitude and the alpha peak. (c) Circular statistics for testing the phase relation between the gamma amplitude and the alpha phase. (d) Distribution of the vector length R, compared to the Rayleigh distribution (gray lines). The phase statistics depend on the number of trials. Alpha oscillations at 12 Hz contain a larger number of peaks than at 6 Hz. The larger number of samples results in smaller values for the phase statistics. (e) Comodulogram under the null hypothesis H~0~ obtained from surrogate data. (f) The comodulogram under H~1~ shows phase‐amplitude coupling between alpha and gamma frequencies. (g) The circular z‐scores are not dependent on the number of samples and allow defining a global threshold for the comodulograms. (h) Bicoherence, calculated for the same data](BRB3-9-e01263-g007){#brb31263-fig-0007}

3.8. Phase‐amplitude coupling (PAC) in the lower gamma band {#brb31263-sec-0027}
-----------------------------------------------------------

The alpha‐gamma PAC at 13 Hz and 45 Hz, corresponding to the maximum in the comodulogram in Figure [7](#brb31263-fig-0007){ref-type="fig"}f, was calculated between all pairs of the 72 beamformer sources during the WM retention interval, resulting in a 72 × 72 connectivity matrix (Figure [8](#brb31263-fig-0008){ref-type="fig"}a). To illustrate the effect sizes, the full range of z‐scores between 0 and 14 is shown without truncating the data at a certain significance level. For comparison, the maximum z‐score of 5.09 was observed for the frequencies of interest across all 72 × 72 PAC values in randomized surrogate data and is indicated with an arrow at the color bar in Figure [8](#brb31263-fig-0008){ref-type="fig"}a.

![Alpha‐gamma phase‐amplitude coupling (PAC) during the retention interval. (a) Connectivity matrix of PAC for the 72 ROIs. Color coded are the normal z‐scores for the phase coherence between the alpha phase and the gamma amplitude. The arrow at the color bar indicates the maximum PAC value which was observed from randomized surrogate data. In contrast to the wPLI matrix in Figure [6](#brb31263-fig-0006){ref-type="fig"}, the PAC matrix is not symmetric with respect to the main diagonal, that is, generally PAC(A;B) ≠ PAC(B;A). (b) Directed network of PAC, indicating how the gamma amplitude in one source (arrow head) is coupled with the alpha phase in a second source (arrow tail). (c) Directed network of the reverse coupling of alpha phase (arrow head) with gamma amplitude (arrow tail)](BRB3-9-e01263-g008){#brb31263-fig-0008}

The properties of the PAC matrix in Figure [8](#brb31263-fig-0008){ref-type="fig"}a were different from those of the wPLI matrix in Figure [6](#brb31263-fig-0006){ref-type="fig"}a. Specifically, the PAC matrix was not mirror‐symmetric with respect to its main diagonal because in general it holds that $PAC(a,b) \neq PAC(b,a)$, while the wPLI is symmetric in its arguments. The main diagonal of this matrix represented local coupling between the alpha phase and gamma amplitude of the same brain signal. Matrix rows indicated how the gamma amplitude of a given signal was coupled to the alpha phases of the other signals. The columns indicated how the alpha phase of a given signal was coupled to the gamma amplitudes of the other signals. The matrix was arranged such that the second and fourth quadrants corresponded to intrahemispheric coupling, whereas the first and third quadrants represented interhemispheric PAC.
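As referenced in Section 3.7, the time‐domain PAC estimate can be sketched in Python as follows for a single pair of signals: alpha peaks in the phase‐providing signal serve as the reference, the gamma amplitude envelope around each peak is projected onto a complex exponential at the alpha frequency, and the resulting phases are summarized by the resultant vector length with the Rayleigh statistic z = nR². Filter orders, bandwidths, and the window length are illustrative assumptions, and the further normalization into the circular z‐scores used for the comodulograms is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def pac_rayleigh_z(x_phase, x_amp, fs, f_alpha=13.0, f_gamma=(40.0, 50.0)):
    """Time-domain alpha-gamma PAC between two signals (sketch).

    x_phase supplies the alpha phase reference, x_amp the gamma
    amplitude; pass the same signal twice for local (within-source) PAC.
    Returns the Rayleigh z-score; a large z means the gamma amplitude
    is locked to a preferred alpha phase.
    """
    # Alpha-band filter and peak detection on the phase-providing signal
    b, a = butter(4, [f_alpha - 2.0, f_alpha + 2.0], btype="band", fs=fs)
    alpha = filtfilt(b, a, x_phase)
    peaks, _ = find_peaks(alpha, distance=int(0.7 * fs / f_alpha))

    # Gamma amplitude envelope of the amplitude-providing signal
    b, a = butter(4, list(f_gamma), btype="band", fs=fs)
    gamma_env = np.abs(hilbert(filtfilt(b, a, x_amp)))

    # Project the envelope around each alpha peak onto a complex
    # exponential at the alpha frequency (wavelet approximation)
    half = int(fs / f_alpha / 2)
    t = np.arange(-half, half + 1) / fs
    kernel = np.exp(-2j * np.pi * f_alpha * t)
    phases = [
        np.angle(np.sum(gamma_env[p - half:p + half + 1] * kernel))
        for p in peaks if half <= p < len(gamma_env) - half
    ]

    # Circular statistics: resultant vector length R and Rayleigh z
    R = np.abs(np.mean(np.exp(1j * np.asarray(phases))))
    return len(phases) * R**2
```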
Directional networks were visualized for the largest PAC values, as a network of gamma‐alpha coupling (Figure [8](#brb31263-fig-0008){ref-type="fig"}b) and of alpha‐gamma coupling (Figure [8](#brb31263-fig-0008){ref-type="fig"}c). Figure [8](#brb31263-fig-0008){ref-type="fig"}b revealed predominant coupling between gamma amplitudes in the visual cortices and the alpha phase in distributed brain areas. This pattern of gamma‐alpha coupling corresponds to the rows of large PAC values for the bilateral visual cortices in Figure [8](#brb31263-fig-0008){ref-type="fig"}a. The directed networks of alpha‐gamma coupling (Figure [8](#brb31263-fig-0008){ref-type="fig"}c) did not show a concentration around a certain source area and could be described as short‐range connections between distributed brain areas. Correspondingly, the PAC matrix did not show a pronounced columnar structure.

3.9. Phase‐amplitude coupling (PAC) in the higher gamma band {#brb31263-sec-0028}
------------------------------------------------------------

The group mean connectivity matrix for PAC between the alpha phase and gamma activity in the 60 Hz to 100 Hz range is shown in Figure [9](#brb31263-fig-0009){ref-type="fig"} for the sources with the strongest effect sizes. PAC was calculated for all 72 × 72 pairs of sources. The covariance of the connectivity matrix was calculated, and the sources with the largest values along the diagonal were selected for Figure [9](#brb31263-fig-0009){ref-type="fig"}. The strongest PAC was observed for sources in the thalamus, posterior cingulate cortex, and visual cortices.

![Group mean z‐scores for the phase‐amplitude coupling between 12--14 Hz alpha and 60--100 Hz gamma frequencies for selected sources showing the strongest effect sizes](BRB3-9-e01263-g009){#brb31263-fig-0009}

3.10. *Alpha‐gamma* versus *gamma‐alpha* asymmetry {#brb31263-sec-0029}
--------------------------------------------------

The asymmetry between *PAC(a,b)* and *PAC(b,a)* was tested with two‐tailed *t* tests for all pairs of nonidentical sources *(a,b)* and *(b,a)*. The outcome of the test is visualized with the matrix of FDR‐corrected *p*‐values in Figure [10](#brb31263-fig-0010){ref-type="fig"}a. Warm colors in this matrix indicate stronger gamma‐alpha coupling than alpha‐gamma coupling, and cold colors vice versa.

![Asymmetry of PAC as an indicator for directionality of communication within brain networks. (a) The *t* tests, rejecting the hypothesis of PAC(A;B) = PAC(B;A). The labels indicate source pairs for which more detail is provided in panels b--e. (b) Asymmetric PAC between the anterior insula (IA) and the dorsal anterior visual cortex (VACD) within the left hemisphere. The red bars indicate the within‐source PAC for the IA and VACD sources. The blue bars indicate PAC(IA; VACD) and PAC(VACD; IA), respectively, which were significantly different. (c) Asymmetric PAC between the alpha phase of sources in the polar prefrontal cortex (PFCPOL) and the intraparietal sulcus (PCIP) in the right hemisphere. (d) PAC between a pair of sources in the right PFCPOL and left ventral anterior visual cortex (VACV). (e) Asymmetric PAC between the right PFCDM and left VACV. (f) Directional brain networks based on differences between local and distant asymmetry in alpha‐gamma PAC.
The networks reveal predominant coupling between anterior gamma amplitude (arrow tail) and frontotemporal alpha phase (arrow head). Temporal gamma amplitude was exclusively coupled to frontal alpha phase](BRB3-9-e01263-g010){#brb31263-fig-0010}

Four examples of the asymmetry between alpha‐gamma and gamma‐alpha coupling are shown in detail in Figure [10](#brb31263-fig-0010){ref-type="fig"}b‐e. Each panel shows, for a pair of sources (A,B), the within‐source PAC(A,A) and PAC(B,B) with red bars and the between‐sources PAC(A,B) and PAC(B,A) with blue bars. For the selected examples, the difference in between‐sources PAC was significant (*p* \< 0.001 for all cases) and was accompanied by a significant difference in the within‐source PAC for the two sources. Moreover, the same relationship held in all four examples: $PAC(A,B) > PAC(B,A)$ was accompanied by $PAC(B,B) > PAC(A,A)$. A regression analysis for all pairs of sources showed that the difference in between‐source PAC and the difference in within‐source PAC were negatively correlated (*R*^2^ = 0.30, *F*(1,2554) = 1,091, *p* \< 0.0001). In other words, the alpha‐gamma cross‐frequency coupling for a pair of sources was systematically related to the strength of local within‐source coupling.

We further analyzed whether the asymmetry in PAC could indicate the flow of information during memory retention. For this analysis, we considered local coupling as an indicator of neural processing and thus selected pairs of ROIs for which both the local coupling and the inter‐ROI coupling were significantly different. Of the 43 pairs of ROIs with a difference in PAC (Figure [10](#brb31263-fig-0010){ref-type="fig"}a), 13 also showed differences in local coupling. Visualization of the resulting network in Figure [10](#brb31263-fig-0010){ref-type="fig"}f showed coupling of anterior gamma amplitude (arrow tail) and frontotemporal alpha phase (arrow head). Likewise, temporal gamma amplitude was exclusively coupled to frontal alpha phase.

4. [discussion]{.smallcaps} {#brb31263-sec-0030}
===========================

Spectral analysis of the MEG during the processes of WM revealed a decrease in alpha power during memory encoding and a subsequent rebound above the baseline level that correlated with the number of encoded items. The retention interval was characterized by increased alpha power in frontotemporal and occipital brain areas. Alpha phase synchronization identified occipital and frontotemporal brain areas as having the strongest overall connectivity to other brain areas. Importantly, during the retention interval, sensors with large alpha ERS also showed strong overall connectivity. Cross‐frequency coupling analysis between the alpha phase and gamma amplitude during the retention interval revealed networks of short and long distances across the brain. The asymmetry property of PAC was introduced as a possible method for studying directionality in neural communication.

4.1. Reaction time (RT) {#brb31263-sec-0031}
-----------------------

The behavioral performance measured with the RT showed the expected characteristic effects of primacy and recency and an increased RT when the probe was from items later in the study list. In the case of the reversed probe order, RT was significantly longer than for the same‐order probe, while the dependency of RT on the serial position was the same. Thus, the reversed probe order required an additional process of mentally rotating the probe but did not affect memory performance itself.
We are using the behavioral results here as confirmation that the participants performed the WM task. The relation between alpha oscillations and behavior will be reported elsewhere.

4.2. Alpha ERD during the encoding interval {#brb31263-sec-0032}
-------------------------------------------

The most prominent characteristic of alpha oscillations during the encoding interval was the sudden decrease in signal power compared with the baseline level, that is, an ERD, immediately following the onset of a visual stimulus. The ERD reached its deepest point at 260 ms after stimulus onset, was stronger for the list items than for the start cue, and was most prominent in occipital sensors above the visual cortices. Such sensory stimulation‐related alpha ERD has been reported for the visual system (Pfurtscheller, Neuper, & Mohl, [1994](#brb31263-bib-0078){ref-type="ref"}) and other modalities like the auditory (Fujioka, Mourad, & Trainor, [2011](#brb31263-bib-0021){ref-type="ref"}; Tiihonen et al., [1991](#brb31263-bib-0103){ref-type="ref"}) and somatosensory systems (Hari, Salmelin, Mäkelä, Salenius, & Helle, [1997](#brb31263-bib-0031){ref-type="ref"}; Stančák, [2006](#brb31263-bib-0093){ref-type="ref"}). Despite its strict temporal relation to the stimulus, the alpha ERD is not a primary sensory response. Characteristic for primary sensory responses is their spatial organization in the neocortex according to the spatial organization of the sensory organ, for example, retinotopic, tonotopic, and somatotopic organizations. However, a simultaneous recording of sensory evoked responses and alpha ERD showed that only the evoked response exhibited a somatotopic organization (Nierula, Hohlefeld, Curio, & Nikulin, [2013](#brb31263-bib-0070){ref-type="ref"}), suggesting that the stimulus‐induced ERD does not reflect a stimulus‐specific primary response but more likely supports the conditioning of the sensory cortex. Alpha ERD has been linked to controlled access to information and attention control through inhibitory filtering (Klimesch, Fellinger, & Freunberger, [2011](#brb31263-bib-0050){ref-type="ref"}). The similar time courses of alpha ERD in the various sensory modalities suggest a common alpha mechanism across sensory modalities. Generation and modulation of oscillatory activity have been studied widely on a microscopic level, which identified recurrent inhibitory thalamocortical networks as the origin of alpha oscillations (Steriade & Llinas, [1988](#brb31263-bib-0096){ref-type="ref"}). Those studies agreed that desynchronization indicates an active state of processing (Steriade, Gloor, Llinás, Lopes da Silva, & Mesulam, [1990](#brb31263-bib-0095){ref-type="ref"}). Still, a wide gap exists between the understanding of oscillatory mechanisms at small scale and the effects on mass activity as observed in EEG and MEG. One explanation of alpha desynchronization could be that local processing in primary sensory cortices results in multiple activities at specific phases, which in turn is reflected as desynchronization in the sum of the more global mass activity (Pfurtscheller et al., [1994](#brb31263-bib-0078){ref-type="ref"}). While synchrony in neural networks serves as a mechanism of communication and binding, the opposite effect of desynchronization, also termed phase reset, seems necessary for dynamic reconfiguration of connectivity (Thatcher, North, & Biver, [2009](#brb31263-bib-0101){ref-type="ref"}).
In line with those theoretical considerations, a model of ERD/ERS generation proposed a stereotypical pattern of an interval of ERD, which precedes and prepares for a subsequent active state of processing during an ERS interval (Lemm, Müller, & Curio, [2009](#brb31263-bib-0060){ref-type="ref"}). The observed stimulus‐related alpha decrease during the encoding interval corresponds to such a concept of preparation for subsequent action. Alpha ERD in response to the sensory input was also observed in frontotemporal sensors, although the magnitude was smaller than in occipital sensors. Such frontotemporal ERD could also be related to release from inhibition and preparation for specific processing. Moreover, prominent alpha ERD occurred in left central sensors above the sensorimotor cortex contralateral to the responding hand. Central ERD gradually increased over the encoding and maintenance intervals, and it became most strongly expressed after probe presentation and the actual response. Thus, the long‐lasting ERD increase may be explained by preparation for the movement required for a response, and this preparation seems to begin immediately with the start of the stimulus sequence, as much as 10 s before actual movement execution. Alpha ERD during movement preparation has been reported previously and was interpreted as preparation for a motor task that does not reflect processing of the specific task itself (Deiber et al., [2012](#brb31263-bib-0015){ref-type="ref"}). The alpha ERD in the sensorimotor system may thus support the concept of a preparatory role of alpha ERD. Finally, alpha desynchronization has been observed even during the anticipation of an event (Bastiaansen, Böcker, Cluitmans, & Brunia, [1999](#brb31263-bib-0004){ref-type="ref"}; van Ede, Jensen, & Maris, [2010](#brb31263-bib-0106){ref-type="ref"}), again emphasizing the role of preparation for further processing. In summary, we interpret the role of alpha ERD during the encoding interval as an active, stimulus‐induced state of preparation for subsequent information processing. Alpha ERD in occipital sensors lasted shorter than the actual stimulus presentation and showed a steep rebound. If alpha ERD reflects a release from inhibition and thus facilitates sensory processing, there seems to be no need to return quickly to a state of inhibition. In contrast, the steep rebound could indicate a more active process. An increase in alpha power has been shown to act as an active inhibitory process of protecting an encoded stimulus from further interference (Bonnefond & Jensen, [2012](#brb31263-bib-0007){ref-type="ref"}). A novel finding of our study was that the magnitude of the alpha rebound was related to the number of stimulus items within the encoded list, which could support the hypothesis that the rebound indicates an active process. The peak amplitude of the alpha rebound increased with increasing memory load. In contrast, the troughs of alpha ERD maintained a constant magnitude. An oscillatory model of WM proposed that cycles of gamma oscillation control the storage of items in memory and each cycle of low‐frequency theta or alpha oscillations scans the list of items for maintenance (Jensen & Lisman, [1998](#brb31263-bib-0040){ref-type="ref"}). The model accounted for RT data in the Sternberg experiment. Our finding of an increased alpha rebound could indicate that an oscillatory alpha network is increasingly involved in memory encoding and maintenance with increasing load.
This result is in general agreement with previous research that proposed a relationship between alpha power and the encoding of new information (Doppelmayr, Klimesch, Stadler, Pöllhuber, & Heine, [2002](#brb31263-bib-0018){ref-type="ref"}; Klimesch et al., [1996](#brb31263-bib-0053){ref-type="ref"}). More specifically, recent studies showed a relation between increased alpha power and memory load in intracranial recordings (Meltzer et al., [2008](#brb31263-bib-0067){ref-type="ref"}) and EEG (Hsieh, Ekstrom, & Ranganath, [2011](#brb31263-bib-0035){ref-type="ref"}). While previous studies relied on spectral analysis, the time‐frequency analysis in our study preserved the time course of alpha ERD/ERS and showed a clear dissociation of the effect of memory load between ERD and ERS intervals.

4.3. Different alpha frequencies {#brb31263-sec-0033}
--------------------------------

A further important finding was that the time course of alpha was correlated with the number of items within the study list only in the upper alpha band, centered around 12 Hz. This result corroborates a previous finding of load‐dependent phase locking of alpha that was maximal at 12 Hz (Schack, Klimesch, & Sauseng, [2005](#brb31263-bib-0086){ref-type="ref"}). Moreover, we showed that the center frequency of alpha ERS during WM maintenance was higher than the center frequency of the alpha ERD during encoding, while the absolute power increase from baseline occurred similarly for both ERD and ERS intervals. A first dissociation between the lower alpha band (8--10 Hz) and upper alpha band (10--12 Hz) had been reported as topographically widespread activity for the former and focal activity for the latter in EEG recordings of a cognitive task (Klimesch, Pfurtscheller, & Schimke, [1992](#brb31263-bib-0051){ref-type="ref"}) and a movement task (Pfurtscheller, Neuper, & Krausz, [2000](#brb31263-bib-0077){ref-type="ref"}). The interpretation was that lower alpha serves general task demands while upper alpha is task specific. It has been speculated that activity in the upper alpha band predicts performance in memory and cognition. For example, a higher peak alpha frequency was correlated with larger memory capacity (Moran et al., [2010](#brb31263-bib-0068){ref-type="ref"}). However, whether the absolute power in the upper alpha band or the amount of event‐related modulation is important seems strongly dependent on the specific task (Klimesch et al., [2006](#brb31263-bib-0047){ref-type="ref"}). Other authors labeled the power increase during memory maintenance as beta activity (Daume et al., [2017](#brb31263-bib-0014){ref-type="ref"}). The frequency band around 15 Hz, centered between the alpha and beta bands, has also been termed the beta~1~ band. Computational modeling showed that beta~1~ rhythms created cell assemblies through concatenation of cycles of beta and gamma oscillations, and such a mechanism could underlie memory formation (Kopell, Whittington, & Kramer, [2011](#brb31263-bib-0055){ref-type="ref"}). Our spectrum analysis showed ERS predominantly in the upper alpha band and informed our focus on this frequency band for the subsequent analysis of connectivity and alpha‐gamma coupling.
4.4. Alpha ERS during the WM maintenance interval {#brb31263-sec-0034}
-------------------------------------------------

An increase in alpha power during the WM maintenance interval had been reported previously (Jensen et al., [2002](#brb31263-bib-0039){ref-type="ref"}; Klimesch et al., [1999](#brb31263-bib-0049){ref-type="ref"}; Tuladhar et al., [2007](#brb31263-bib-0104){ref-type="ref"}; Van Dijk et al., [2010](#brb31263-bib-0105){ref-type="ref"}). Those reports inspired a range of new interpretations of the role of alpha oscillations beyond the control of inhibitory states. Alpha ERS has been shown to serve as an active inhibition for protecting the memory from distraction by further sensory input (Bonnefond & Jensen, [2012](#brb31263-bib-0007){ref-type="ref"}; Händel, Haarmeier, & Jensen, [2011](#brb31263-bib-0029){ref-type="ref"}). Our data are in line with this interpretation. However, our study did not include a distraction paradigm, and thus we cannot test how much the alpha ERS contributed to the inhibition of further input. In other studies, using a distracting stimulus did not increase alpha power; thus, the hypothesis of protecting the memory by inhibition of possible distraction was not supported (Poch et al., [2018](#brb31263-bib-0080){ref-type="ref"}; Schroeder et al., [2018](#brb31263-bib-0089){ref-type="ref"}). Another role of alpha ERS has been shown for the timing of WM‐related processing (Klimesch, Sauseng, & Hanslmayr, [2007](#brb31263-bib-0052){ref-type="ref"}). Specifically, nested theta/alpha and gamma oscillations have been proposed as a model for WM (Jensen & Lisman, [1998](#brb31263-bib-0040){ref-type="ref"}). Our data support those concepts, and we specifically analyzed the role of alpha oscillations for connectivity and cross‐spectral coupling with gamma oscillations.

4.5. Alpha ERS correlates with functional connectivity {#brb31263-sec-0035}
------------------------------------------------------

Alpha phase synchronization has been considered to play a role in active neuronal processing by modulation of neuronal excitability that biases neuronal and behavioral responses to sensory stimuli (Palva & Palva, [2011](#brb31263-bib-0073){ref-type="ref"}). Such modulations might be important for the inhibition of task‐irrelevant processes (Klimesch et al., [2007](#brb31263-bib-0052){ref-type="ref"}; Mazaheri & Jensen, [2010](#brb31263-bib-0065){ref-type="ref"}), executive control of behavioral responses (Klimesch et al., [2007](#brb31263-bib-0052){ref-type="ref"}), or even active task‐relevant processing (Von Stein & Sarnthein, [2000](#brb31263-bib-0109){ref-type="ref"}). If alpha ERS indeed relates to inhibitory processes, this would imply that areas exhibiting ERS are under inhibition from other brain areas or are exerting inhibition on other brain areas. Such a process would result in increased functional connectivity between the two areas. Our results showed that areas with strong alpha ERS also exhibited strong connectivity. Specifically, occipital and right frontotemporal brain areas, in which ERS was strongly expressed, were also functionally connected. Because occipital areas are responsible for processing visual sensory input, inhibition of the processing of further input would be an effective way of retaining memory.
However, it has been suggested that alpha ERS in frontotemporal brain areas can hardly be interpreted as inhibition of external input, since frontal brain areas are not known to be involved in visual stimulus processing (Sauseng et al., [2005](#brb31263-bib-0085){ref-type="ref"}). Instead, frontal alpha ERS may indicate that these areas were prevented from becoming involved in new activities while the memory task was ongoing. As such, alpha phase synchronization between frontotemporal and occipital brain areas could represent a functional network that helps to inhibit both internal and external distractions.

4.6. Cross‐frequency coupling {#brb31263-sec-0036}
-----------------------------

Cross‐frequency coupling is a relatively new approach to analyzing functional connectivity. The methods are still under development and evaluation. Several recent papers showed that significant PAC measures could result from harmonic components of non‐sinusoidal alpha (Gerber et al., [2016](#brb31263-bib-0023){ref-type="ref"}; Lozano‐Soldevilla, Huurne, & Oostenveld, [2016](#brb31263-bib-0063){ref-type="ref"}), sharp edge artifacts (Kramer et al., [2008](#brb31263-bib-0057){ref-type="ref"}), or phase‐to‐phase coupling (Hyafil, [2015](#brb31263-bib-0037){ref-type="ref"}) not related to the proposed timing relation between alpha and gamma oscillations. On the other hand, harmonics of alpha and beta might be functionally relevant (Kopell et al., [2011](#brb31263-bib-0055){ref-type="ref"}; Lozano‐Soldevilla, [2018](#brb31263-bib-0062){ref-type="ref"}). Thus, the results of cross‐frequency analysis have to be cautiously scrutinized before conclusions can be drawn. We found alpha‐gamma PAC predominantly in the lower gamma band around 45 Hz. The spectral representation of the amplitude‐modulated 45‐Hz gamma activity includes sidebands below and above 45 Hz. One caveat is that the first harmonic of alpha could be mistaken for the lower sideband and could result in apparent PAC. Our bicoherence analysis showed a predominance of the lower sideband of 45‐Hz gamma instead of symmetric contributions from the upper and lower sidebands. Thus, we are aware that further analysis is required to confirm and validate the current results. The influence of harmonics of alpha had been demonstrated for PAC between frequency bands of the same signal. However, we showed alpha‐gamma PAC across distant sources. The effects of alpha harmonics are likely stronger for within‐source PAC, yet our PAC results were not specifically stronger for within‐source coupling than for between‐source coupling. We take these findings as an argument against the hypothesis of nonlinear harmonics of alpha as the cause of the observed PAC. The bicoherence did not show any effect at higher gamma frequencies. One explanation would be that bicoherence requires coherent oscillations and phase coherence between the frequency bands in the amplitude‐modulation spectrum. However, specifically high gamma activity may not consist of coherent oscillations but of short bursts or even single periods (Jones, [2016](#brb31263-bib-0042){ref-type="ref"}). Our time‐domain analysis was more sensitive to this type of gamma activity. The effects of cross‐frequency coupling are small, and their observation is limited by the signal‐to‐noise ratio. Most current reports are based on intracranial recordings with a likely higher signal‐to‐noise ratio. Our findings of alpha‐gamma PAC in the MEG source domain contribute to the development and advancement of the cross‐frequency coupling approach.
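The surrogate-data control mentioned in the Results can be illustrated with the following sketch, which builds a null distribution of PAC by circularly time-shifting the amplitude signal. The shift range, the number of surrogates, and the reuse of the `pac_rayleigh_z` sketch from Section 3.8 are illustrative assumptions, not the exact published randomization scheme.

```python
import numpy as np

def pac_surrogate_null(x_phase, x_amp, fs, pac_fn, n_surr=200, seed=0):
    """Null distribution of PAC from time-shifted surrogates.

    Circularly shifting the amplitude signal by a random offset
    destroys any consistent phase-amplitude relation while preserving
    the power spectra of both signals. pac_fn is a PAC estimator such
    as the pac_rayleigh_z sketch above.
    """
    rng = np.random.default_rng(seed)
    null = np.empty(n_surr)
    margin = int(0.1 * fs)                    # avoid near-zero shifts
    for k in range(n_surr):
        shift = int(rng.integers(margin, len(x_amp) - margin))
        null[k] = pac_fn(x_phase, np.roll(x_amp, shift), fs)
    return null

# An observed PAC value is then compared against, for example, the
# maximum or the 95th percentile of the surrogate distribution.
```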
4.7. Alpha‐gamma cross‐frequency coupling during memory retention {#brb31263-sec-0037}
-----------------------------------------------------------------

A recent review showed that several authors currently share the view that low‐frequency theta and alpha and high‐frequency gamma oscillations play distinct active roles during the retention phase of WM (Roux & Uhlhaas, [2014](#brb31263-bib-0082){ref-type="ref"}). However, not much is known about the interactions and joint function of alpha and gamma oscillations. Interactions between the two frequency bands could exist either locally within the same brain area or between distant brain areas. Intracranial recordings from parietal brain areas revealed that local coupling between alpha phase and gamma magnitude was modulated by a behavioral task (Voytek et al., [2010](#brb31263-bib-0110){ref-type="ref"}). This was considered to reflect a mechanism for selection between communicating neuronal networks. Here, we demonstrated that alpha‐gamma PAC could be observed with MEG source analysis, and we corroborated the finding that such PAC was largest in parietal and occipital regions. This finding of local PAC in occipital and parietal brain areas during the retention phase of WM is also in line with the view that alpha activity is associated with functional inhibition during the retention of memory items, specifically by generating pulses of inhibition every 100 ms that alter ongoing activity, thus limiting the processing of incoming visual information (Bonnefond & Jensen, [2015](#brb31263-bib-0008){ref-type="ref"}). An interesting property of alpha‐gamma PAC between distant brain areas is that it is not reciprocal and thus provides information about the direction of the flow of information. Previous evidence suggested that low‐frequency oscillations may drive cortical gamma rhythms (Canolty & Knight, [2010](#brb31263-bib-0012){ref-type="ref"}; Schroeder & Lakatos, [2009](#brb31263-bib-0088){ref-type="ref"}; Spaak, Bonnefond, Maier, Leopold, & Jensen, [2012](#brb31263-bib-0091){ref-type="ref"}), implying information flow from alpha to gamma. A recent simulation and electrocorticography study supported the alternative that the gamma envelope may drive alpha oscillations (Jiang, Bahramisharif, Gerven, & Jensen, [2015](#brb31263-bib-0041){ref-type="ref"}), leaving the question of directionality open for debate. However, when considering cross‐spectral coupling as a mechanism of long‐range communication, it is more likely that brain areas exerting control will tend to modulate, or serve as the timer for, the activities of multiple brain areas. On the other hand, it is unlikely that the activity of such a controller area will be modulated or driven by other brain areas. Our results showed a hierarchy of dependency in which temporal gamma depended on frontal alpha phase and occipital gamma on temporal alpha. As such, frontal alpha phase had more dependencies compared with frontal gamma activity. This supports alpha phase as having more of a controller role. One question about the underlying mechanism was whether alpha directly controls distant gamma, or whether alpha connectivity results in synchrony between distant brain areas while PAC acts as a local mechanism. In the case of direct control of distant gamma by alpha oscillations, a higher PAC between the two areas would be observed compared with the local PAC. Our results, however, showed a higher local PAC compared with the PAC between brain areas.
Moreover, areas with PAC also showed increased synchrony. This supports a mechanism in which communication is established by phase synchrony while PAC represents local computation. This is in line with the principle of communication in WM networks in which alpha establishes the long‐range connectivity and gamma is involved in local computation (Von Stein & Sarnthein, [2000](#brb31263-bib-0109){ref-type="ref"}). Moreover, long‐range PAC has been proposed as a mechanism through which different networks can communicate by altering the extracellular membrane potential in local cortical regions such that neurons will be more likely to fire during particular phases of low‐frequency oscillations (Canolty & Knight, [2010](#brb31263-bib-0012){ref-type="ref"}; Haider & McCormick, [2009](#brb31263-bib-0028){ref-type="ref"}; Klausberger et al., [2003](#brb31263-bib-0044){ref-type="ref"}). A recent study of alpha‐gamma PAC in WM found evidence for the co‐occurrence of PAC and phase synchronization between left inferior temporal and left frontopolar cortices, suggesting that rather than establishing direct synchronization at higher frequencies, distant brain areas could indirectly coordinate high‐frequency activity by means of low‐frequency phase synchronization and local cross‐frequency coupling (Daume et al., [2017](#brb31263-bib-0014){ref-type="ref"}). Our analysis revealed similar findings of alpha phase coherence between left frontal and left temporal brain areas, with left temporal gamma amplitude depending on the frontal alpha phase. We further found a similar interaction between frontal and occipital brain areas, which supports the co‐existence of phase synchronization and PAC in facilitating long‐range communication. In summary, our PAC analysis provided support for an active role of alpha during WM maintenance through long‐range coordination of sensory processing in temporal regions and possible inhibition of distracting sensory input in occipital brain areas. Alpha‐gamma PAC at higher gamma frequencies strongly involved the anterior thalamus and the pulvinar. Given the limited resolution of the MEG beamformer analysis, the source labeled pulvinar may include the lateral geniculate nucleus (LGN), which receives the visual input. The LGN communicates the visual input to the cortex, while the pulvinar region of the thalamus controls the information flow between cortical areas by receiving input from cortical regions and influencing activity in other areas (Saalmann, Pinsk, Wang, Li, & Kastner, [2012](#brb31263-bib-0083){ref-type="ref"}; Theyel, Llano, & Sherman, [2010](#brb31263-bib-0102){ref-type="ref"}). It has been suggested that alpha oscillations are specifically involved in these thalamocortical communications (Klimesch, [2012](#brb31263-bib-0046){ref-type="ref"}). Therefore, our findings of alpha‐gamma PAC between cortical areas and the thalamus are encouraging.

5. [conclusion]{.smallcaps} {#brb31263-sec-0038}
===========================

Analysis of the spectro‐temporal and spatial properties of the MEG provided support that different types of alpha oscillations are involved in multiple neural processes underlying WM. The stimulus‐related power decrease in the lower alpha band indicated an active role in preparing sensory areas and frontotemporal memory areas for subsequent processes. The rebound oscillatory activity in the upper alpha band reflected inhibitory processes of protecting the memory from irrelevant sensory input.
Moreover, the correlation between the magnitude of the rebound and the number of items in the study list indicated involvement in the memory process. The most prominent increase of signal power in the upper alpha band, occurring during the retention interval in occipital and frontotemporal areas, was correlated with long‐range connectivity measures and suggests involvement in communication between distant brain areas based on synchronization and timing. Moreover, the cross‐frequency coupling between the phase of alpha oscillations and the amplitude of gamma oscillations supports the role of upper alpha band activity in controlling neural timing. Asymmetry in the PAC provided directionality information and suggested that frontal and temporal alpha phase controlled occipital gamma amplitudes, which in turn were interpreted as indicating local processes of the WM task.

CONFLICT OF INTEREST {#brb31263-sec-0039}
====================

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. This work was supported by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) to B.R. (Funding Reference RGPIN 341562).
Mid
[ 0.6100917431192661, 33.25, 21.25 ]
West End VW
404 1355 HARWOOD STREET, Vancouver West, BC, V6E 3W3, Canada

Description
Step Inside This Fabulously Spacious, West Facing 759 SQFT 1 Bedroom & Balcony Beauty Near Davie Village & The Seawall, in The Sought After West End. Offering A Generous Living/Dining Area For Entertaining, A Walk-Thru Closet To Your Master Ensuite Bathroom, Large Foyer With Office Desk Nook, Large West Facing Balcony, Newer Stainless Steel Appliances, Live-In Caretaker, & Manicured Fenced Back Yard Garden For Enjoying. Shared Laundry (Insuite Laundry With Approval!), Three Dogs Allowed! Maintenance Fee Includes Heat, Hot Water & HYDRO! No Rentals Allowed Sorry. A Beautiful Home in A Well-Kept Building. Two Blocks to Vancouver's Famous Seawall & Davie Village Shops & Restaurants. A Short Walk To Skytrain & Yaletown. Great Value in A Premium Location. Welcome Home.

Amenities
Bike Room, Elevator, Garden, Shared Laundry, Storage

Listing Provided By
RE/MAX Crest Realty Westside

Copyright and Disclaimer
The data relating to real estate on this web site comes in part from the MLS Reciprocity program of the Real Estate Board of Greater Vancouver. Real estate listings held by participating real estate firms are marked with the MLS® logo and detailed information about the listing includes the name of the listing agent. This representation is based in whole or part on data generated by the Real Estate Board of Greater Vancouver which assumes no responsibility for its accuracy. The materials contained on this page may not be reproduced without the express written consent of the Real Estate Board of Greater Vancouver. Copyright 2019 by the Real Estate Board of Greater Vancouver, Fraser Valley Real Estate Board, Chilliwack and District Real Estate Board, BC Northern Real Estate Board, and Kootenay Real Estate Board. All Rights Reserved.
High
[ 0.672077922077922, 25.875, 12.625 ]
package com.a1anwang.okble.server.advertise;

import android.util.SparseArray;

/**
 * Created by a1anwang.com on 2018/5/24.
 *
 * Maps BLE advertising error codes to human-readable descriptions.
 */
public class OKBLEAdvertiseFailedDescUtils {

    // Lookup table from error code to description text
    private static SparseArray<String> descArray = new SparseArray<>();

    static {
        descArray.put(OKBLEAdvertiseCallback.ADVERTISE_FAILED_DATA_TOO_LARGE, "Failed to start advertising as the advertise data to be broadcasted is larger than 31 bytes.");
        descArray.put(OKBLEAdvertiseCallback.ADVERTISE_FAILED_TOO_MANY_ADVERTISERS, "Failed to start advertising because no advertising instance is available.");
        descArray.put(OKBLEAdvertiseCallback.ADVERTISE_FAILED_ALREADY_STARTED, "Failed to start advertising as the advertising is already started.");
        descArray.put(OKBLEAdvertiseCallback.ADVERTISE_FAILED_INTERNAL_ERROR, "Operation failed due to an internal error.");
        descArray.put(OKBLEAdvertiseCallback.ADVERTISE_FAILED_FEATURE_UNSUPPORTED, "This feature is not supported on this platform.");
        descArray.put(OKBLEAdvertiseCallback.ADVERTISE_FAILED_NULL_ADVERTISER, "This advertiser is null.");
    }

    /** Returns the description for the given error code, or "unknown error" if the code is not mapped. */
    public static String getDesc(int errCode) {
        return descArray.get(errCode, "unknown error");
    }
}
High
[ 0.6640316205533591, 31.5, 15.9375 ]
Event slated for February 2-10 in Harrisburg, Pennsylvania was officially postponed today by event producer Reed Exhibitions, costing local economy $80 million in lost revenue. HARRISBURG, Pa., Jan. 24, 2013 /PRNewswire-USNewswire/ -- The producers of the Eastern Sports and Outdoor Show, a longstanding tradition at the Pennsylvania Farm Show Complex & Expo Center in Harrisburg, Pennsylvania dating back to 1951, announced today that the event was being postponed due to the controversy surrounding its decision to limit the sale or display of modern sporting rifles at the event, according to a statement posted on the show's website on January 24, 2013. Tourism Officials at the Hershey Harrisburg Regional Visitors Bureau (HHRVB) estimate the postponement of the state's largest outdoor sports show means $44 million in direct spending from vendors and attendees and $80 million in lost revenue for the local economy. The 22 hotels offering special room rate agreements for show vendors and attendees say the event accounted for approximately 12,000 room-nights over a 10-day period in a traditionally slow tourism season for a region that welcomes 10 million visitors annually. Tourism officials claim the estimated loss is conservative, factoring in only the direct and indirect spending for the 1,000 vendors and anticipated 250,000 attendees. "Reported numbers do not account for lost revenue at the event complex from parking, food and beverage, and service and rental fees," said Mary Smith, president of HHRVB. The bureau did not have details on the lost revenue at the complex but Smith said it would be in the millions considering the scale of this event compared to other shows they have secured for the complex. The estimates also do not account for lost revenue from the 5 percent hotel tax collected by Dauphin County. Officials are not commenting on the producer's postponement decision, nor are they aware at this time what Reed Exhibitions plans are for rescheduling. "Our relationship with Reed Exhibitions has continued to strengthen and grow over the years and we are hopeful that the show will return," said Sharon Altland, director of sales for HHRVB. "This is the largest privately produced show at the complex considering the PA Farm Show is a state organized event. Those two traditional events have become pillars of our January and February tourism business with many local businesses relying on them to make their first quarter numbers."
Low
[ 0.481481481481481, 29.25, 31.5 ]
[Neuropathies of septic syndrome with multiple organ failure in burnt patients: 2 cases with review of the literature]. We report two cases of axonal sensori-motor polyneuropathies complicating sepsis and multiple organ failure (MOF) among severely burned patients (total burned surface area of 35 to 40 per cent) in which no other cause of neuropathy was retrospectively identified. No steroids or neuromuscular blocking agents had been given. The date of onset was not established but the diagnosis was late, between the 30th and 45th day, upon recovery of consciousness. Regression was incomplete, with severe sequelae especially in one patient who was unable to walk 10 months after the injury. Burned patients can present with many kinds of peripheral neuropathies. Postburn polyneuropathies with nerve conduction slowing were described by Henderson. Mononeuropathies can result from nerve compression complicating unfavorable postures in comatose patients or from nerve entrapment in ischemic limbs. Polyneuropathy in postburn sepsis with MOF does not appear to have been previously reported. Postburn sepsis usually occurs in young patients, without another cause of MOF, and therefore represents a relatively "pure" sepsis syndrome.
Mid
[ 0.639024390243902, 32.75, 18.5 ]
Larry Hogan, a Republican, is the governor of Maryland. Last year, I was proud to be the first Republican governor in the United States to put forward a statewide plan to expand paid sick-leave benefits to workers. It was a common-sense proposal to strike a much-needed balance between providing benefits hard-working Marylanders deserve and not hurting our economy, killing small businesses or laying off the men and women employed by them. From that point on, I made it very clear that our administration was not drawing a line in the sand but rather extending an invitation to leaders on both sides of the aisle for open, honest dialogue. Regrettably, the legislature refused to engage in any discussions with our administration and failed to take any action on our paid sick-leave measure. It instead passed a confusing, unwieldy and deeply flawed bill on a partisan vote — hoping I would veto it because many legislators would rather have an election-year campaign sound bite than a solution we can all get behind. It was the type of finger-pointing, point-scoring politics we see in Washington. Marylanders deserve better. After careful consideration, I vetoed that inflexible and burdensome bill in the hope that we could work together this session to get it right. To further that goal, I signed an executive order forming a task force to conduct a comprehensive study to determine the realities of paid sick leave for Maryland employees and employers. After six months conducting expansive, in-person interviews with affected workers and businesspeople across the state, the task force confirmed that the vetoed bill has major policy and legal flaws and numerous unintended negative consequences. These flaws include onerous, bureaucratic provisions and mandated procedures so complicated that even the smallest mom-and-pop shops would need a human resources director to navigate them, and small-business owners trying to do the right thing would risk inadvertently incurring extreme punitive damages. Perhaps most egregiously, an employee could be obligated to reveal deeply personal and private information — including about domestic violence, sexual assault or sensitive medical procedures — to use their leave. This issue was raised by a dozen members of the legislative women's caucus in a letter to the House speaker and Senate president. To address these serious flaws and chart a path forward, I will introduce the Paid Leave Compromise Act of 2018 as emergency legislation on the first day of the upcoming legislative session. This legislation, which would take effect in January with no delay, would require businesses with 25 or more employees to offer paid sick leave by 2020. To give our small-business job creators time to prepare, these benefits would be phased in, similar to the approach used in states such as New York and Rhode Island. Our legislation would cover full-time and part-time employees and protect workers' privacy. The Paid Leave Compromise Act of 2018 would provide Marylanders with paid time off — no questions asked — and would eliminate the costly and punitive red tape in the legislature's flawed bill. To help offset the costs to small businesses for providing these benefits, we are introducing a bill in tandem that would provide $100 million in tax incentives for companies with fewer than 50 employees that offer paid leave. 
Our legislation would enable small businesses to offer paid-leave benefits to hundreds of thousands of hardworking Marylanders who would have been left behind under the bill I vetoed. Now, we need the legislative leaders to finally work with us on behalf of the people of Maryland. The issue of paid sick leave is too important and the impact is too far-reaching for us to risk getting it wrong. Nearly every day, we see the effects of partisanship and political polarization in Washington — but we have always chosen a different path. We will continue to call on our colleagues in Annapolis to avoid the divisive politics of Washington, to engage in thoughtful, civil debate and to strive for common-sense solutions to the issues we face. Together, we will continue to strive toward that middle temperament that our great state was founded on.
High
[ 0.6559633027522931, 35.75, 18.75 ]
Friends of a man who was assaulted and called "faggot" in an attack the victim says happened near E John and 10th Ave Sunday night around 8 PM say the victim lost several teeth and suffered scrapes and bruises in the unprovoked bashing. According to the SPD report on the incident, the male victim told police he did not believe he was targeted because of his sexual orientation but also said he had not had any contact with the two male suspects and was not robbed during the incident. The suspects were described only as two white males and police were not notified of the crime until around 2 hours after it occurred. Details of the attack have been spreading via social media but police are not currently investigating the incident as a hate crime. "The attacker didn't know the victim, so there's not a guarantee they would've known he was a gay man," said SPD spokesperson Detective Drew Fowler. "'Faggot,' at times, is used as a general pejorative." UPDATE (12/18): A commenter who says he was the victim in the incident said he does not believe he was targeted due to his sexual orientation, and thus does not consider the attack to be a hate crime. The incident comes at the end of a year marked by renewed concerns in Seattle about gay bashing and bias crimes against LGBTQ people after a series of high-profile and deadly crimes as well as more mundane but equally disturbing assaults.
Low
[ 0.504, 31.5, 31 ]
The decision by Aetna to withdraw from many ObamaCare exchanges was a predictable outrage that opens the door not to the demise of ObamaCare, but to its dramatic improvement, led by a grand battle by Sen. Bernie Sanders (Vt.) and progressives to enact the public option and move toward a Medicare-for-all healthcare system. Let's coin the phrase "BernieCare" to describe the kind of healthcare system that progressives believe, with some reason, voters would prefer. Sanders has long been a champion of single-payer healthcare — which I personally support — but for obvious political reasons in a lobbyist-dominated Washington, single payer is highly unlikely to happen soon. Sanders, who is more of a highly skilled political and legislative tactician than pundits understand, has responded to the Aetna withdrawal from many healthcare exchanges by publicly announcing he will wage an all-out campaign to enact the public option. The Sanders response to Aetna is perfectly timed and politically powerful. The public option, which should have been enacted with the original ObamaCare program, would guarantee that every healthcare exchange will have at least one highly affordable choice for consumers to accept. The result of including a public option on healthcare exchanges would be that one of two things would happen. Either other insurers would remain on the exchanges to compete for the consumer's dollar, which would create a downward pressure on insurance premiums that benefits consumers, or Americans would enroll in the public option en masse, which would accelerate the move toward a true single-payer system. I was a vehement supporter of the public option during the original ObamaCare debate. It was a tragedy of epic proportions that the public option was not included in the final ObamaCare law, despite the fact that President Obama supported it and Democrats then had large majorities in the House and Senate. That omission occurred, despite strong public support for the public option, because of the power of insurance lobbyists in Washington, the obstruction of Republicans in Congress and the reluctance of a small number of more conservative Democratic senators to defy the insurance lobby. Democratic nominee Hillary Clinton, Sanders and the Democratic Party are now united in support of the public option. This was one of the more important developments at the time of the Democratic National Convention when the Clinton and Sanders camps unified behind a series of platform positions that included long-held progressive policies and ideas. The BernieCare option has always been a frontal assault against the greed of certain insurance companies and the lobbying industrial complex that has dominated healthcare policy for far too long. The BernieCare strategy was dramatized during Sanders's campaign for president, where he advocated a full single-payer system, and is now advancing again with the decision of Aetna to abandon most of the ObamaCare exchanges. This strategy has taken various forms in recent years. The idea of a public option on the exchanges has always garnered strong public support. The idea of a Medicare-for-all system builds on the enormous public support for the Medicare program. 
And I would emphasize again today, as I have throughout the presidential campaign, that I believe the reason that Sanders dominated Republican nominee Donald Trump in match-up polls throughout the presidential campaign is that he embodies the kind of progressive populist reformation that voters prefer over the status quo or the conservative alternative. Many analysts believe that the Aetna decision to withdraw from most ObamaCare exchanges was a retaliation against the Obama Justice Department taking a strong position on egregious examples of mergers and acquisitions in the insurance industry, including a proposed but rejected merger sought by Aetna. I fully support the Justice Department's policy, deplore the Aetna withdrawals, and expect the Aetna move to backfire. Among the many reasons that Sanders is supporting Clinton for president, and turning his attention to electing other Democrats to regain control of the Senate and potentially the House of Representatives, is that he is poised to become one of the most powerful and important senators if Democrats regain control. If Democrats regain control of the Senate, Sanders will have fascinating options as to which Senate committee he will chair. He can take his revolution to the federal budget as chairman of the Senate Budget Committee. Even more interesting is that Sanders could have the opportunity to take his revolution to even more immediate heights as chairman of the Senate Committee on Health, Education, Labor and Pensions, if the only Democrat above him in seniority, Sen. Patty Murray (D-Wash.), chooses to chair the Senate Appropriations Committee instead. Republicans and conservatives are rejoicing at the decision of Aetna to abandon most ObamaCare exchanges, but does the GOP really want to become the party of higher insurance premiums, working as the handmaiden of insurance industry lobbyists? The stage is set for Sanders to campaign throughout the nation and in Congress for his BernieCare alternative, joined by Clinton and Democratic leaders, making the Democrats the party of lower insurance premiums, the great change agent battling lobbyists and influence peddlers who want to stick it to American families and consumers. It will be ironic — and wonderful for liberals — if BernieCare saves ObamaCare and the big winners are American consumers.
High
[ 0.664886515353805, 31.125, 15.6875 ]
The Internet Of Things Is a Security And Privacy Dumpster Fire And The Check Is About To Come Due

from the no-hyperbole-intended dept

If you're a long-standing reader of Techdirt, you know we've well documented the shitshow that is the "internet of things." It's a sector where countless companies were so excited to develop, market and sell new "smart" appliances, they couldn't be bothered to embrace even the most rudimentary security and privacy standards once these devices were brought online. The result is an endless stream of stories about refrigerators, TVs, thermostats or other "smart" devices that are busy hemorrhaging personal data, inadvertently advertising that sometimes the "smart" option is actually the dumb one.

This systemic incompetence has now fused with a cultural disdain for more modern consumer privacy protections. The end result has been an obvious uptick in concern about how much data is now being collected by even children's toys like Barbie dolls, something that last year's Vtech hack illustrated isn't just empty fear mongering. Convincing parents who already find technology alienating has proven to be difficult, as is attempting to craft intelligent regulation that protects kids' playtime babbling from being aggressively monetized, without hindering emerging sector innovation and profits.

To that end, the Family Online Safety Institute and the Future of Privacy Forum held a presentation last week (you can find the full video here) where analysts and experts argued, among other things, that privacy policies need to be significantly simplified and modernized for an era where a child's doll can profoundly impact the privacy of countless people. It has been, needless to say, an uphill climb.

And while this all is seen as kind of amusing when we're talking about not-so-smart tea kettles or talking dolls, the amusement has worn off as the conversation has shifted to territory where incompetence or a clever hack can kill you (namely, automobiles). As Bruce Schneier notes over at Motherboard, this massive introduction of privacy flaws is a pretty big problem at scale, when appliances aren't swapped out or updated often:

"As more things come under software control, they become vulnerable to all the attacks we've seen against computers. But because many of these things are both inexpensive and long-lasting, many of the patch and update systems that work with computers and smartphones won't work. Right now, the only way to patch most home routers is to throw them away and buy new ones. And the security that comes from replacing your computer and phone every few years won't work with your refrigerator and thermostat: on the average, you replace the former every 15 years, and the latter approximately never."

And while mocking the internet of things has become a running joke, Schneier notes it quickly becomes less funny when you begin to realize that the interconnected nature of all of these devices means we're introducing millions of new attack vectors daily in homes, businesses, utilities, and government agencies all over the world. Collectively these flaws will, no hyperbole intended, inevitably result in significant deaths:

"Systems are filled with externalities that affect other systems in unforeseen and potentially harmful ways. What might seem benign to the designers of a particular system becomes harmful when it's combined with some other system. Vulnerabilities on one system cascade into other systems, and the result is a vulnerability that no one saw coming and no one bears responsibility for fixing. The Internet of Things will make exploitable vulnerabilities much more common. It's simple mathematics. If 100 systems are all interacting with each other, that's about 5,000 interactions and 5,000 potential vulnerabilities resulting from those interactions. If 300 systems are all interacting with each other, that's 45,000 interactions. 1,000 systems: 12.5 million interactions. Most of them will be benign or uninteresting, but some of them will be very damaging."

At that scale, the arguments that you didn't embed useful security because "it was only a refrigerator" or that you didn't impose some basic privacy protections and guidelines because "it might hurt an emerging sector's ability to make more money" start to lose their luster. Schneier tries to argue that the only way we can truly mitigate the looming risk is the involvement of an informed public and an accountable government:

"Security engineers are working on technologies that can mitigate much of this risk, but many solutions won't be deployed without government involvement. This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organizations to understand; companies are motivated to hide the insecurity of their own systems from their customers, their users, and the public; the interconnections can make it impossible to connect data breaches with resultant harms; and the interests of the companies often don't match the interests of the people. Governments need to play a larger role: setting standards, policing compliance, and implementing solutions across companies and networks. And while the White House Cybersecurity National Action Plan says some of the right things, it doesn't nearly go far enough, because so many of us are phobic of any government-led solution to anything. The next president will probably be forced to deal with a large-scale internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can't, and the political will to make it happen."

This is of course the part of the story where the author is supposed to inform you that with good intentions and hard work, government, the public and industry will come together and quickly nip this problem in the bud. Of course this particular post's readership is painfully aware that the same government Schneier hopes will come to the rescue is too busy trying to embed its own problematic backdoors in everything under the sun while a large portion of it rushes to gut the funding and authority of any regulator capable of imposing basic privacy and security protections.

Said readers are also probably painfully aware that neither looming major Presidential candidate has shown the remotest competence in regards to technology or genuine cyber-security. That means it's more than likely these unfortunate outcomes Schneier predicts will need to arrive before we're collectively even willing to begin to take serious steps to address them. At that point the only certain outcome is that all of the players involved will be sure to shirk their own personal responsibility for the security and privacy nightmare they helped build. Still, for whatever it winds up being worth, we can't say we weren't warned.

Filed Under: bruce schneier, internet of things, iot, privacy, security
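Schneier's "simple mathematics" above is easy to check. A quick sketch, under the assumption that an "interaction" means an unordered pair of systems, counted as n(n-1)/2 (the quote does not define the term):

public class InteractionCount {

    // Unordered pairs among n mutually interacting systems: n choose 2 = n(n-1)/2.
    static long pairs(long n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        for (long n : new long[] {100, 300, 1000, 5000}) {
            System.out.println(n + " systems -> " + pairs(n) + " pairwise interactions");
        }
        // 100  -> 4950     (about 5,000)
        // 300  -> 44850    (about 45,000)
        // 1000 -> 499500
        // 5000 -> 12497500 (about 12.5 million)
    }
}

On this pairwise reading, the 100- and 300-system figures in the quote line up, while 12.5 million pairs corresponds to roughly 5,000 systems rather than 1,000, so the quote's last figure presumably counts interactions more richly than simple pairs.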
Mid
[ 0.621951219512195, 31.875, 19.375 ]
803 F.2d 1103 Barney Earl CRUTCHFIELD, Petitioner-Appellee, v. Louie L. WAINWRIGHT, Jim Smith, Respondents-Appellants. No. 84-3508. United States Court of Appeals, Eleventh Circuit. Nov. 10, 1986. Thomas H. Bateman, III, Dept. of Legal Affairs, Tallahassee, Fla., for respondents-appellants. Gwendolyn Spivey, Tallahassee, Fla., for petitioner-appellee. Appeal from the United States District Court for the Northern District of Florida. Before RONEY, Chief Judge, and GODBOLD, TJOFLAT, HILL, FAY, VANCE, KRAVITCH, JOHNSON, HATCHETT, ANDERSON, CLARK and EDMONDSON, Circuit Judges. HATCHETT, Circuit Judge: 1 In this case, the en banc court decides the extent to which a trial court may instruct a criminal defendant not to confer with counsel during a recess which occurs during the defendant's testimony. 2 During Barney Earl Crutchfield's trial for armed robbery with a deadly weapon, the Florida trial judge instructed Crutchfield's counsel not to talk with him about his testimony: 3 THE COURT: All right. We're going to take a little break, Ladies and Gentlemen. We've been at it a little bit. And I see there's a sigh of relief on some faces? Over here. Do not discuss this case, please, while you're in the jury room. All right. 4 [WHEREUPON, THE JURY WAS REMOVED FROM THE JURY BOX.] 5 THE COURT: All right. Gentlemen, in view of the fact that this is going to be a very brief break, I direct that the lawyers for Mr. Crutchfield not to discuss his testimony with him during the course of this break. 6 After receiving this instruction, Crutchfield's counsel did not object, move for a mistrial, or ask to discuss with him non-testimonial aspects of the case. Crutchfield, who was on the witness stand at the time of the admonition, contends that this admonition constituted the first violation of his right to the assistance of counsel. 7 The length of the recess, which occurred near the end of Crutchfield's direct examination, is in dispute. The government contends that it was brief and routine. Crutchfield contends that it extended into a two-hour lunch break. Because of the manner in which we resolve the issue, the length of the recess is rendered unimportant. 8 After the recess, direct examination continued for a short period of time. During cross-examination, Crutchfield made statements which indicated that he had no reason to rob or steal because his father supplied his financial needs.1 After soliciting this testimony, the trial court sent the jury out of the courtroom and the prosecutor sought permission to impeach Crutchfield through presentation of evidence that he had been convicted for burglary five years before. Holding that Crutchfield "opened the door" for this impeachment evidence, the trial court granted the prosecutor permission to impeach Crutchfield using the prior conviction evidence. Crutchfield, apparently realizing that the impeaching evidence would be presented to the jury, asked the court to speak with his counsel. 9 THE COURT: All right. Bring the jury in. Son, don't direct any statements to me. If you have anything, you speak to your lawyer. 10 CRUTCHFIELD: Can--can I speak with him? 11 THE COURT: But don't direct statements to me. 12 Later, just before the jury was returned to the jury box, the following colloquy occurred: 13 CRUTCHFIELD: Can I speak with him for a minute? 14 THE COURT: What did I just tell you? 15 CRUTCHFIELD: Yes sir. 16 Immediately following the jury's return to the courtroom, through cross-examination, the prosecution presented the damaging impeachment evidence.
The jury convicted Crutchfield of the charges, and the judge sentenced him to forty-five years in prison, with jurisdiction retained over the first one-third of the term. Crutchfield contends that the court's statements, above quoted, constitute a second violation of his right to the assistance of counsel. 17 In a Per Curiam order, dated June 17, 1982, Florida's First District Court of Appeals affirmed Crutchfield's conviction. The state trial court denied Crutchfield's motion for collateral relief (3.850, Fla.R.Crim.P.), and the Florida appellate court affirmed the denial of rule 3.850 relief. Crutchfield v. State, 431 So.2d 244 (Fla. 1st DCA 1983). Petition for Rehearing was denied on June 3, 1983. In the rule 3.850 motion for collateral relief, Crutchfield raised the denial of assistance of counsel claim.2 18 After exhausting state remedies, Crutchfield filed a Petition for Writ of Habeas Corpus in the United States District Court for the Northern District of Florida. Relying on United States v. Conway, 632 F.2d 641 (5th Cir. Unit B 1980), the district court granted the writ of habeas corpus based on the denial of assistance of counsel claim. 19 On appeal, a panel of this court held that Conway had been implicitly overruled; therefore, it reversed and remanded the case to the district court for a hearing on whether the constitutional violation amounted to harmless error. Crutchfield v. Wainwright, 772 F.2d 839 (11th Cir.1985). 20 We took this case for full court consideration to determine the circumstances, if any, in which a prohibition against a criminal defendant/witness consulting with counsel during a recess constitutes a denial of assistance of counsel to the extent that the defendant is entitled to a new trial. CONTENTIONS 21 The appellant, state of Florida, contends that the district court erred in relying on Conway 's rule of per se reversal, and that the prejudice rules of Strickland v. Washington, 466 U.S. 668, 104 S.Ct. 2052, 80 L.Ed.2d 674 (1984), and United States v. Cronic, 466 U.S. 648, 104 S.Ct. 2039, 80 L.Ed.2d 657 (1984), state the correct standard of review. 22 Crutchfield contends that the district court correctly relied on Conway because Strickland and Cronic are not applicable to this situation, which involves a denial of assistance of counsel claim as opposed to the ineffective assistance of counsel claims presented in Strickland and Cronic. Crutchfield emphasizes that the case law of this circuit, the majority of circuits in the United States, and many of the states, mandates a per se reversal rule when assistance of counsel is denied at a critical stage of criminal proceedings. DISCUSSION 23 In resolving the issues presented in this case, it is helpful to review the law presently binding in the circuit. Our review begins with Geders v. United States, 425 U.S. 80, 96 S.Ct. 1330, 47 L.Ed.2d 592 (1976). In Geders, the Supreme Court held that a trial court's order preventing a defendant from consulting with his counsel during a seventeen hour overnight recess between defendant's direct and cross-examination, based on the trial judge's conclusion that the order was necessary to avoid improper influence on defendant's testimony, deprived the defendant of his right to assistance of counsel guaranteed by the sixth amendment to the Constitution of the United States. 24 The Court was careful, however, to limit its holding: 25 United States v. 
Leighton, 386 F.2d 822 (C.A.2 1967), on which the Court of Appeals relied, involved an embargo order preventing a defendant from consulting his attorney during a brief routine recess during the trial day, a matter we emphasize is not before us in this case. 26 Geders, 425 U.S. at 89 n. 2, 96 S.Ct. at 1336 n. 2, 47 L.Ed.2d at 600 n. 2 (citations omitted). Thus, the Court left undecided whether denial of the right of consultation between a criminal defendant and his counsel during a brief routine recess constitutes a violation of the defendant's sixth amendment rights. 27 Courts were quickly called upon to decide the issue left open in Geders. In Conway, our predecessor circuit, the former Fifth Circuit, held that to the extent that the goal of preventing improper coaching conflicts with a defendant's right to freely consult with counsel, the conflict must be resolved in favor of the right to assistance and guidance of counsel.3 Thus, after Conway, ordering a criminal defendant not to consult with counsel during court recesses, no matter how brief, violated the constitutional right to assistance of counsel guaranteed by the sixth amendment, and required reversal. Conway, 632 F.2d at 645. 28 In 1984, the Conway issue was first presented to the Eleventh Circuit in United States v. Romano, 736 F.2d 1432 (11th Cir.1984). Following Conway, we held that a district court's order that a defendant refrain from consulting with his counsel concerning testimony during an overnight recess, which extended for several days due to the defendant's hospitalization, constituted reversible error. In discussing controlling precedent, we stated: 29 Contrary to the language in some of these cases from other circuits, this court appeared to conclude in United States v. Conway, that the Geders violation was reversible error without any reference to possible prejudice. At least no inquiry along the lines outlined above was made in the opinion in that case. 30 Romano, 736 F.2d at 1438. Thus, the latest case in this circuit followed Conway 's per se reversible error rule. Unfortunately, in dicta, we went on to state: 31 We need not decide whether the government might be able to demonstrate a lack of prejudice to the point of harmlessness in a given case. Our review of the record before us indicates that the error in this case cannot be deemed harmless. [Emphasis supplied.] 32 Romano, 736 F.2d at 1438. The Romano court did not intend to suggest that a harmless error inquiry would always be required after finding a violation of the defendant's right to assistance of counsel. The court simply noted that the defendant in Romano was entitled to a new trial regardless of the standard employed. Consequently, at the time the panel decided this case, this circuit followed Conway 's per se reversal rule. 33 The Crutchfield panel, in holding that Conway had been implicitly overruled, relied on United States v. Cronic and Strickland v. Washington. In Strickland, the Supreme Court identified two components for a successful ineffective assistance of counsel claim: (1) counsel's representation must have fallen below "an objective standard of reasonableness," and (2) the defendant must have demonstrated a "reasonable probability that, but for counsel's unprofessional errors, the result of the proceeding would have been different." Strickland, 466 U.S. at 688, 694, 104 S.Ct. at 2065, 2070, 80 L.Ed.2d at 693, 698. 
34 In Cronic, the Supreme Court held that only when surrounding circumstances justify a presumption of ineffectiveness of counsel can a sixth amendment claim be sufficient without inquiry into counsel's actual performance. 35 The panel elaborated upon the relationship between Strickland, Cronic, and Conway, stating: 36 The denial of access to counsel for seventeen hours in Geders was given as an example in Cronic of a case where prejudice could be presumed. [Citation omitted.] In citing Geders for this proposition, the Court indicated the previously unstated rationale of the Geders rule: that prejudice was so likely to result from the overnight denial of access to counsel as to make a specific inquiry superfluous. However, as this court held in Chadwick v. Green, 740 F.2d 897, 901 (11th Cir.1984), presumed prejudice will be available in only a "very narrow spectrum of cases"; it therefore does not follow from the Supreme Court's citation of Geders that prejudice should be presumed in any instance of denial of access to counsel--regardless of how brief. Indeed, the Supreme Court in Geders viewed denial of access during brief routine recesses as a potentially distinct variety of case. This fact, coupled with the admonition of Cronic and Strickland v. Washington against the creation in this area of broad categories of cases requiring automatic reversal, leads us to inquire as to whether Conway has been implicitly overruled. 37 Crutchfield, 772 F.2d at 842 (footnotes omitted). 38 Although the panel viewed Strickland and Cronic as implicitly overruling Conway, it did not consider the Strickland test for determining prejudice appropriate: 39 Under Strickland v. Washington, the burden placed on a defendant to show prejudice in the typical case of ineffective assistance of counsel is a heavy one: he must show that, but for his counsel's errors, there is a reasonable probability that the proceeding would have had a different outcome. The apparent rationale for saddling the defendant with this burden is a balancing of the defendant's right to counsel against the need for finality of trials. The analogous rule in this situation would be to require a defendant to prove that, but for the denial of access to counsel, there is a reasonable probability that his trial would have had a different outcome. 40 We believe that a completely analogous rule is inappropriate, however.... The more appropriate analogy is ... to cases where a clear constitutional violation exists, as, for example, in the case of improperly admitted evidence that has been obtained in violation of the Fourth Amendment. In those cases, the prosecution is given the burden of showing that an error is harmless beyond a reasonable doubt. 41 Crutchfield, 772 F.2d at 842-43 (citations omitted). 42 Contrary to the panel, we conclude that Strickland and Cronic did not overrule or modify Conway. Although Strickland held as a general rule that an individual could prevail on a claim of ineffective assistance of counsel only on a showing that the insufficient representation affected the reliability of the trial, two factors counsel against our relying on Strickland in this situation. First, Strickland involved a claim of ineffective assistance of counsel; here, we are concerned with a claim of denial of assistance of counsel. 43 Second, as this court stated in Chadwick v. 
Green, 740 F.2d 897, 900 (11th Cir.1984): "In Cronic, the Court carved a narrow exception to [Strickland 's] general rule that a defendant must demonstrate prejudice: a showing of prejudice is not necessary if there are 'circumstances that are so likely to prejudice the accused that the cost of litigating their effect in a particular case is unjustified.' " 44 In Cronic, the Supreme Court cited Geders as a case where constitutional error could be found without any showing of prejudice because the accused was denied assistance of counsel during a critical stage of the proceedings. Cronic, 466 U.S. at 659, n. 25, 104 S.Ct. at 2047, n. 25, 80 L.Ed.2d at 668, n. 25. Thus, the denial of assistance of counsel in Geders was deemed reversible error not due merely to the length of the denial, but also because it occurred at a critical stage of the proceedings. 45 When Geders, Strickland, and Cronic are considered, nothing indicates that the Supreme Court intended the Strickland rule, applicable to situations in which counsel performs below the required standard, to apply to situations where the state, the court, or the criminal justice system denies a defendant assistance of counsel. The language from Chadwick, above quoted, is convincing: denial of assistance of counsel constitutes reversible error. Therefore, we reaffirm the underlying rationale of Conway and Romano that any deprivation of assistance of counsel constitutes reversible error and necessitates a new trial. Our rule does not include a harmless error analysis. Cronic and Strickland make clear that "where actual or constructive denial of assistance of counsel occurs a per se rule of prejudice applies." Chadwick, 740 F.2d at 900 n. 3. The reasons for adopting such a rule are best expressed in Cronic: 46 In our evaluation of that conclusion, we begin by recognizing that the right to the effective assistance of counsel is recognized not for its own sake, but because of the effect it has on the ability of the accused to receive a fair trial. Absent some effect of challenged conduct on the reliability of the trial process, the Sixth Amendment guarantee is generally not implicated.... There are, however, circumstances that are so likely to prejudice the accused that the cost of litigating their effect in a particular case is unjustified. 47 Most obvious, of course, is the complete denial of counsel. The presumption that counsel's assistance is essential requires us to conclude that a trial is unfair if the accused is denied counsel at a critical stage of his trial. Similarly, if counsel entirely fails to subject the prosecution's case to meaningful adversarial testing, then there has been a denial of Sixth Amendment rights that makes the adversary process itself presumptively unreliable. No specific showing of prejudice was required in Davis v. Alaska, 415 U.S. 308, 39 L.Ed.2d 347, 94 S.Ct. 1105 (1974) because the petitioner had been "denied the right of effective cross-examination" which " 'would be constitutional error of the first magnitude and no amount of showing of want of prejudice would cure it.' " 48 466 U.S. 658-59, 104 S.Ct. 2046-47, 80 L.Ed.2d 667-68 (emphasis added). 49 The majority of the circuits that have announced rules for reviewing denial of assistance of counsel claims favor the per se rule. United States v. Allen, 542 F.2d 630 (4th Cir.1976), cert. denied, 430 U.S. 908, 97 S.Ct. 1179, 51 L.Ed.2d 584 (1977) (per se rule applied prospectively); United States v. 
Bryant, 545 F.2d 1035 (6th Cir.1976) (in absence of extraordinary circumstances, it is abuse of discretion and violation of right of defendant to assistance of counsel for a trial court to direct that defendant have no communication with his counsel during criminal trial over a noon recess). Only one circuit applies a harmless error analysis to a prohibition of consultation during trial between a criminal defendant and his counsel. United States v. DiLapi, 651 F.2d 140 (2d Cir.1981), cert. denied, 455 U.S. 938, 102 S.Ct. 1427, 71 L.Ed.2d 648 (1982). The Third and Fourth Circuits hold that no deprivation occurs in the absence of an objection. See Bailey v. Redman, 657 F.2d 21 (3d Cir.1981), and Stubbs v. Bordenkircher, 689 F.2d 1205 (4th Cir.1982), cert. denied, 461 U.S. 907, 103 S.Ct. 1879, 76 L.Ed.2d 810 (1983). 50 As we noted above, Crutchfield's lawyers did not object, move for a mistrial, or ask to discuss testimonial or non-testimonial aspects of the case with him after the trial judge instructed them not to confer with Crutchfield. As to the first alleged violation, we are unable to find any evidence that Crutchfield's lawyers actually wanted to talk with him during the recess, or that Crutchfield desired to consult with his counsel. If the record reflected such a desire by either, we would find that the trial judge's admonition constituted reversible error. Because the trial record does not reflect--by objection, motion, or request--that Crutchfield and his counsel actually desired to confer during the recess, we find that Crutchfield was not deprived of the right to assistance of counsel within the meaning of the sixth amendment. Thus, we overrule Conway and Romano to the extent they hold that a denial of assistance of counsel is presumed whenever a trial judge instructs counsel not to confer with a defendant during a recess. We conclude that a defendant or the defendant's counsel must indicate, on the record, a desire to confer in order to preserve a deprivation of assistance of counsel claim.4 51 We thus announce a rule that satisfies our concerns for the important constitutional right of assistance of counsel, provides for the orderly conduct of trials, and makes sense. The defendant must show that the prohibition actually prevented the opportunity to confer with counsel. See Bailey v. Redman, 657 F.2d 21 (3d Cir.1981); Stubbs v. Bordenkircher, 689 F.2d 1205 (4th Cir.1982). Once the defendant makes the requisite showing, a new trial is warranted. See United States v. Allen, 542 F.2d 630 (4th Cir.1976) and United States v. Bryant, 545 F.2d 1035 (6th Cir.1976). 52 Although this en banc court is charged with the ultimate responsibility of interpreting the federal Constitution, we are confident in the rule we adopt today because the per se rule is already in effect in several states, including two states in the Eleventh Circuit. Our rule, announced by this opinion, is the per se rule with the additional common sense requirement that the record reflect a desire to consult. Alabama applied the per se rule in Payne v. State, 421 So.2d 1303 (Ala.1982) and Ashurst v. State, 424 So.2d 691 (Ala.1982). Georgia law does not appear to involve a harmless error analysis. Cook v. State, 158 Ga.App. 389, 280 S.E.2d 409 (1981) (not definitive). Florida has wrestled with the per se versus harmless error rules and has reluctantly adopted the harmless error analysis. See Bova v. State, 410 So.2d 1343 (Fla.1982), and Recinos v. State, 420 So.2d 95 (Fla. 3d D.C.A. 1982). 
53 Other states following the per se rule are Illinois: People v. Noble, 42 Ill.2d 425, 248 N.E.2d 96 (1969); District of Columbia: Jackson v. United States, 420 A.2d 1202 (D.C.1979); Mississippi: Pendergraft v. State, 191 So.2d 830 (Miss.1966), and Tate v. State, 192 So.2d 923 (Miss.1966); New York: People v. Hagen, 86 A.D.2d 617, 446 N.Y.S.2d 91 (1982, 2d Dept.); Pennsylvania: Commonwealth v. Logan, 456 Pa. 508, 325 A.2d 313 (1974), Commonwealth v. Werner, 214 A.2d 276 (1965), and Commonwealth v. Barber, 250 Pa.Super. 427, 378 A.2d 1011 (1977); and Rhode Island: Mastracchio v. Houle, 416 A.2d 116 (R.I.1980). 54 We have explored the possibility that the instruction in this case, "don't talk about your testimony," is appropriate because it is narrowly tailored to prevent coaching. Coaching has come to mean improperly directing a witness's testimony in such a way as to have it conform with, conflict with, or supplement the testimony of other witnesses. We conclude that the trial court's solution to its concern about coaching could not take the form of an admonition against Crutchfield consulting with his counsel. We reach this conclusion for two reasons. 55 First, the Geders Court suggested a variety of ways to serve the purpose of sequestration "without placing a sustained barrier to communication between a defendant and his lawyer." 425 U.S. at 91, 96 S.Ct. at 1337, 47 L.Ed.2d at 601. See United States v. Romano, 736 F.2d 1432, 1437 (11th Cir.1984) (noting that Geders "did not indicate that a restricted prohibition against talking with a defendant about his testimony was a possibility"). The list of permissible measures cited in Geders excludes by implication a bar on consultation.5 56 Second, Geders explained that traditional concerns about coaching are less applicable to a criminal defendant than to other witnesses, because a defendant is present in the courtroom throughout all testimony. 425 U.S. at 88, 96 S.Ct. at 1335, 47 L.Ed.2d at 599. 57 The trial judge may insure that the trial proceedings are orderly, without unnecessary interruptions and delays. Such rare right of restriction by a trial court when aimed only at insuring orderly procedures in the trial will receive our approval. We caution trial judges, however, that the discretion to limit consultation is very narrow. 58 Since the record in this case does not reflect a desire to consult or an objection to the trial court's admonition, the district court must be reversed. 59 We reverse the district court and vacate the order granting the writ. We remand for consideration the issue left undetermined by the district court: whether the trial court erred in permitting the state to bring out on cross-examination evidence of Crutchfield's prior criminal activity. 60 VACATED and REMANDED for proceedings consistent with this opinion. 61 TJOFLAT, Circuit Judge, specially concurring, in which RONEY, Chief Judge, and HILL, FAY and ANDERSON, Circuit Judges, join: 62 We are called upon in this habeas case to decide whether petitioner's sixth amendment right to the assistance of counsel was denied when the trial court instructed his attorneys "not to discuss his testimony with him" during a brief recess, and neither petitioner nor his counsel raised any objection. Because I find no denial of the assistance of counsel under these circumstances, I agree with the court that the district court's decision granting the petition for a writ of habeas corpus must be reversed.
I write separately, however, because the plurality opinion written by Judge Hatchett (plurality) employs a convoluted and self-contradictory analysis that I cannot endorse. I. 63 Barney Earl Crutchfield was tried for committing armed robbery with a deadly weapon and took the stand to testify on his own behalf. Near the end of his direct examination the court announced that a short break would be taken and instructed Crutchfield's counsel "not to discuss his testimony with him during the course of this break." Neither Crutchfield nor his attorneys1 objected to this admonition, and at no time during the recess did they inform the court that they wished to confer. The length of the break is in dispute, with Crutchfield contending that it ballooned into a two-hour recess2 and the State arguing that it was very brief and routine.3 64 Following the recess, Crutchfield's direct examination concluded and his cross-examination began. During cross-examination, Crutchfield testified that he would never have to commit robbery for money, because his father supplied him with whatever financial assistance he required. At this point, the prosecutor approached the bench and advised the court that he wished to impeach Crutchfield with evidence of a prior burglary conviction. The jury was excused from the courtroom while the court considered this evidentiary question. The court concluded that the impeachment evidence was admissible and the following exchange then took place: THE COURT: 65 .... 66 All right. Bring the jury in. 67 Son, don't direct any statements to me. If you have anything, you speak to your lawyer. 68 [CRUTCHFIELD]: Can--can I speak with him? 69 THE COURT: But don't direct statements to me. 70 [CRUTCHFIELD]: Can I speak with him for a minute?THE COURT: What did I just tell you? 71 [CRUTCHFIELD]: Yes, sir. 72 The jury was brought back into the courtroom and cross-examination was concluded. The jury returned a verdict of guilty, and the court sentenced Crutchfield to a forty-five-year prison term, retaining jurisdiction over the first one-third of that term. The conviction was affirmed on direct appeal.4 Crutchfield moved for collateral relief in state court, pursuant to Fla.R.Crim.P. 3.850, and the state trial court denied relief.5 On appeal, the District Court of Appeal of Florida addressed only one of Crutchfield's claims in its opinion, that he was denied the assistance of counsel when the trial court instructed his attorney not to discuss his testimony with him during a recess, and summarily affirmed the trial court as to all other issues. Crutchfield v. State, 431 So.2d 244 (Fla.Dist.Ct.App.1983). As to the assistance of counsel claim, the court, distinguishing Geders v. United States, 425 U.S. 80, 96 S.Ct. 1330, 47 L.Ed.2d 592 (1976) (order prohibiting any consultation with counsel during an overnight recess violated sixth amendment right of defendant), held that a limited prohibition on consultation during a brief routine recess did not deny Crutchfield his sixth amendment right to the assistance of counsel. 73 Crutchfield then filed the instant petition seeking habeas relief, raising the same grounds asserted in his Rule 3.850 petition. The district court, relying on United States v. 
Conway, 632 F.2d 641 (5th Cir.1980) (Geders rule applicable to recess of any length),6 held that the trial court denied petitioner his sixth amendment right to the assistance of counsel when it instructed his attorneys not to discuss his testimony with him during a recess.7 A panel of this court concluded that a constitutional violation had taken place, but reversed and remanded the case for the district court to determine whether the error was harmless. Crutchfield v. Wainwright, 772 F.2d 839 (11th Cir.1985). We then granted Crutchfield's petition for rehearing en banc. II. 74 Petitioner has complained about two analytically separate alleged instances of deprivation of assistance of counsel. The first occurred during petitioner's direct examination, when the trial court instructed his counsel not to discuss his testimony with him during a break in the trial. The second occurred during cross-examination, at the conclusion of a conference out of the presence of the jury regarding the admissibility of evidence of a prior burglary conviction. These two instances are unrelated and must be assessed separately.A. 75 The second incident is easily disposed of and I will therefore address it first. This incident occurred when, during petitioner's cross-examination, he testified that because his father supplied his financial needs, he would never have to commit robbery for money. The prosecutor sought to impeach petitioner with evidence of a prior burglary conviction, and the court excused the jury so that it could hear argument on the admissibility of the impeachment evidence. The court determined that the impeachment evidence was admissible, at which point petitioner indicated to the court that he wanted to speak with his attorney. Petitioner was told that he should not direct his statements to the court but should instead speak to his lawyer. The court then reiterated its instruction that petitioner not direct statements to it. Petitioner's attorneys witnessed this colloquy and said nothing. Neither petitioner nor his lawyers informed the court that they needed to consult about matters other than the impending resumption of petitioner's cross-examination. 76 It is readily apparent that no constitutional violation occurred on this occasion. As an initial matter, the court did not prohibit petitioner from consulting with his attorney. Petitioner was instructed not to address the court but to speak to his lawyer. He never attempted to do so, and his lawyers made no attempt to speak to him. More importantly, petitioner has no constitutional right to compel a break in the ongoing trial proceedings to speak with his attorney. United States v. Vasquez, 732 F.2d 846, 848 (11th Cir.1984) (per curiam). Geders and its progeny establish that, when a recess in a trial occurs, the Constitution is violated, under certain circumstances, if the defendant is prohibited from consulting with counsel. These cases do not stand for the proposition that a defendant, having taken the stand, may compel a recess during his examination to consult with counsel. See id. 77 In this case, petitioner volunteered a self-serving statement during cross-examination that the prosecutor sought to impeach. The jury was excused while the court heard argument regarding the admissibility of the impeachment evidence. Having made its decision, the court instructed the bailiff to return the jury to the courtroom. No recess in the proceedings occurred. 
Petitioner had no constitutional right to compel the court to stop the proceedings so that he could confer with counsel.8 B. 78 Petitioner's other allegation of error is that the court deprived him of his sixth amendment right to the assistance of counsel by instructing his attorneys not to discuss his testimony with him during a break that occurred toward the end of his direct examination. The beginning point for analyzing this claim is the Supreme Court's decision in Geders v. United States, 425 U.S. 80, 96 S.Ct. 1330, 47 L.Ed.2d 592 (1976). In Geders, the trial court took an overnight recess at the conclusion of the direct examination of the defendant. Over the vigorous objection of counsel, the court ordered the defendant not to talk to his counsel "about anything" during the overnight recess. The court was not persuaded by counsel's protestations that he needed to confer with his client about trial strategy, including what witnesses to call the following day, and that he would not discuss the defendant's testimony or impending cross-examination, or improperly "coach" the defendant. 79 In an opinion explicitly limited to its facts, the Supreme Court held that the total ban on consultation during an overnight recess violated the defendant's sixth amendment right to the assistance of counsel and warranted reversal of the conviction. Id. at 91, 96 S.Ct. at 1337.9 The Court emphasized that it was not deciding the constitutionality of an "embargo order preventing a defendant from consulting with his attorney during a brief routine recess during the trial day." Id. at 89 n. 2, 96 S.Ct. at 1336 n. 2. The Court recognized that the prevention of "coaching" was a valid concern, one in fact proscribed by ethical rules, but stated that such a goal could not be achieved at the expense of depriving a defendant of his sixth amendment right to counsel during a long overnight recess. Id. at 89-91, 96 S.Ct. at 1336-37.10 The Court suggested that, in order to vindicate the legitimate interest of preventing coaching, a trial court could constitutionally continue the trial without recess in order to complete the defendant's testimony. Id. at 90-91, 96 S.Ct. at 1336. 80 The en banc court is now called upon to address an issue purposefully left unresolved in Geders: the degree to which a defendant's sixth amendment right to the assistance of counsel is implicated by a trial court's restriction on the ability to confer with counsel during a brief recess that does not extend overnight. Our resolution of this matter should be guided by certain established principles. The starting point is the sixth amendment's guarantee that a defendant is entitled to the assistance of counsel at all critical stages of the trial proceedings. See United States v. Cronic, 466 U.S. 648, 659 & n. 25, 104 S.Ct. 2039, 2047 & n. 25, 80 L.Ed.2d 657 (1984). This fundamental guarantee does not, however, entitle a defendant, once he takes the witness stand, to consult with counsel during the time that he is on the stand. A defendant certainly has no right to be "coached" in the midst of his testimony. The tension between these principles arises when it is necessary or desirable to interrupt the proceedings and take a recess during the defendant's testimony. 81 The extreme cases in this area should present little difficulty. In the Geders situation, where a complete and overnight ban on communication is imposed over strenuous objection, it is clear that the assistance of counsel has been denied.
It can be presumed that the defendant was prejudiced by his inability to consult with counsel as to any matter for such a prolonged period of time.11 On the other hand, if at a crucial point in the cross-examination of the defendant the trial judge is forced to stop the proceedings and attend to another matter for a minute or two, a restriction on the defendant's ability to obtain the coaching of counsel, to which he does not object, would appear to work no deprivation of the sixth amendment right to the assistance of counsel. Similarly, a narrowly-drawn instruction to counsel that, during a recess, he must scrupulously comply with his ethical obligations and refrain from engaging in coaching would also not seem to deny the assistance of counsel.12 82 There appear to be four relevant factors to consider in assessing cases that fall between these extremes. First, the length of the recess is important. The Geders holding is limited to a recess that extends overnight and recognizes that the overnight break is typically a time of intense work when counsel and client frequently need to consult about trial strategy. Second, the degree of the restriction placed on attorney-client communication is material. An absolute ban on consultation, as in Geders, in effect deprives the defendant of counsel for the duration of the recess. On the other hand, an instruction against coaching or an admonition not to discuss the defendant's testimony may frequently be appropriate and does not prevent discussion of trial strategy and other relevant matters. Third, the point in the proceedings when the recess is taken has some relevance. There is a greater interest in restricting communication to prevent coaching during the defendant's cross-examination than during his direct examination. Finally, whether the defendant or his counsel objected to the instruction and what was said if an objection was raised are highly probative of whether the defendant was deprived of a right to consult with counsel, which he sought to assert. 83 Turning to the facts of this case, petitioner's counsel asked the court, near the end of petitioner's direct examination, if he "could have just a minute," presumably to collect his thoughts and determine what further questioning he wished to pursue. At that point, petitioner's co-counsel asked the court if the attorneys could approach the bench. The court responded affirmatively, and a bench conference followed. At the conclusion of the conference the court announced that a "little break" would be taken and remarked that he noticed "a sigh of relief on some faces" in the jury box. The court directed that in view of the "very brief" break, petitioner's lawyers should not discuss his testimony with him. Petitioner and his attorneys raised no objection to this procedure. During the duration of the recess, the court received no indication that petitioner or his counsel desired to consult. The obvious inference to be drawn is that petitioner and his counsel wished to have the short recess and did not mind the restriction on their ability to discuss his testimony. In any event, they considered it preferable to pressing on with the examination uninterrupted.
If the court's instruction denied petitioner an important right that he wished to assert, it was incumbent upon him or his attorneys to bring the matter to the court's attention.13 Similarly, the failure to raise the matter during the duration of the recess indicates that petitioner and his attorneys felt no need to consult at any point during that time. The complete failure to object is highly probative that no constitutional deprivation of the assistance of counsel occurred. See Stubbs v. Bordenkircher, 689 F.2d 1205, 1207 (4th Cir.1982), cert. denied, 461 U.S. 907, 103 S.Ct. 1879, 76 L.Ed.2d 810 (1983); Bailey v. Redman, 657 F.2d 21, 24 (3d Cir.1981) (per curiam), cert. denied, 454 U.S. 1153, 102 S.Ct. 1024, 71 L.Ed.2d 310 (1982). When viewed in combination with the relatively short recess and the limited admonition, merely directing counsel not to discuss petitioner's testimony with him,14 I conclude that there has been no constitutional violation in this case. 84 In this appeal, petitioner contends that he failed to object because he was misled by the court's statement that the break would be brief. This argument implies, however, that neither petitioner nor his counsel had matters requiring consultation during a short break. Nor did the fact that the break became extended, if we accept petitioner's version of the facts, move petitioner or his counsel to ask the court for permission to consult. 85 Referring to the subsequent incident during petitioner's cross-examination, which is discussed in Part II.A., supra, petitioner alleges that he there made it clear to the court that he wished to talk with his attorney. He also claims that the denial of his right to consult with counsel during the break taken toward the end of his direct examination led to his impeachment on cross-examination, because he had information that he would have conveyed to counsel that would have prevented the admissibility of the impeachment evidence. Petitioner's arguments ignore the facts depicted in the record. It is clear that, until the time it became apparent that petitioner was about to be impeached with evidence of his prior burglary conviction, he had exhibited no inclination to consult with counsel. This episode, which is fully recounted in Part II.A., supra, simply had nothing to do with the prior recess taken during petitioner's direct examination, where the court instructed counsel not to discuss petitioner's testimony with him. III. 86 Although the plurality also concludes that the writ should not issue in this case, I cannot subscribe to the analysis it employs. The plurality analyzes the claim in question and concludes that any restriction on the ability to confer with counsel during any recess results in an unconstitutional deprivation of the assistance of counsel, requiring per se reversal on direct appeal, or per se issuance of the writ on habeas. Thus, at the time the trial judge utters the restrictive admonition, he commits error of constitutional magnitude in all instances. 87 The Supreme Court has instructed that "the right to the effective assistance of counsel is recognized not for its own sake, but because of the effect it has on the ability of the accused to receive a fair trial." United States v. Cronic, 466 U.S. at 667, 104 S.Ct. at 2046. The core concern is for the "reliability of the trial process" and "subject[ing] the prosecution's case to meaningful adversarial testing." Id. at 667-68, 104 S.Ct. at 2046-47. 
Under certain circumstances, constitutional error is patent, and the existence of undue prejudice is inherent and self-evident. In those cases, a per se violation is found without further inquiry. The complete denial of counsel is the most obvious example. See Gideon v. Wainwright, 372 U.S. 335, 344, 83 S.Ct. 792, 796, 9 L.Ed.2d 799 (1963) (right to counsel necessary to insure fundamentally fair trial). In other per se cases, although the defendant has nominally been provided with or obtained counsel, circumstances are such that he has, in effect, been denied representation. See Cuyler v. Sullivan, 446 U.S. 335, 349-50, 100 S.Ct. 1708, 1719, 64 L.Ed.2d 333 (1980) (counsel actively represented conflicting interests); Powell v. Alabama, 287 U.S. 45, 53, 56-58, 53 S.Ct. 55, 58-60, 77 L.Ed. 158 (1932) (indefinite appointment of all members of local bar to help defendant and no designation of specific counsel until just prior to trial, if at all). The plurality is, in effect, saying that the type of error claimed in this case presents a likelihood of prejudice equivalent to a case where the defendant suffers a complete denial of assistance of counsel. I cannot agree with the plurality's conclusion that in every instance where a trial judge restricts communication with counsel during a trial recess a per se constitutional violation results. 88 Despite its elevation of the claimed error in this case to the highest level of severity, the plurality then concludes that if the defendant or his attorney did not indicate a desire to confer, on the record, the error has not been preserved and is not cognizable in any subsequent federal proceeding.15 The defendant, therefore, can forever waive a right akin to the right to be represented by counsel in the first instance and so fundamental as to justify a per se rule, without any indication that he was aware of the right or intended to waive it. This is a far cry from the presumption against waiver of fundamental rights imposed by the Supreme Court in the context of the sixth amendment right to counsel and from the requirement that a waiver be an "intentional relinquishment ... of a known right." See Johnson v. Zerbst, 304 U.S. 458, 464, 58 S.Ct. 1019, 1023, 82 L.Ed. 1461 (1938). I find the logic of the plurality's approach inherently contradictory and unsatisfactory. 89 It would only make sense to adopt a per se rule of reversal in every recess situation involving a restriction on access to counsel if we are confident that all such instances are likely to result in a deprivation of constitutional magnitude. As the Supreme Court has stated, "[t]here are ... circumstances that are so likely to prejudice the accused that the cost of litigating their effect in a particular case is unjustified." United States v. Cronic, 466 U.S. at 667, 104 S.Ct. at 2046-47. Geders presented such a situation. The trial court, over vigorous objection, erected a lengthy, overnight ban on any consultation, at a critical point in the proceedings. Prejudice was obvious. I would suggest that not every restriction on consultation during trial presents an equivalent likelihood of constitutional harm and justifies a per se rule of unconstitutionality. This case represents a prime example. The examination of the defendant had reached a point where all parties apparently concluded that a brief recess was desirable. The court told petitioner's counsel not to talk with his client about his testimony. This was apparently agreeable to all of the parties. 
Counsel's and petitioner's actions surely led the court to believe that they wanted to take a break and did not need to consult during the recess. It would be difficult to sustain an argument that petitioner suffered an obvious deprivation of the right to counsel such as was involved in the Geders case. In my view, a claim of this type should be analyzed under all of the relevant facts and circumstances, including the duration of the recess, when it occurred, the degree of restriction on consultation imposed by the trial judge, and whether the defendant or counsel objected to the restriction and what was said if an objection was raised. 90 One might argue that the plurality opinion at least has the virtue of fashioning an easy-to-apply, bright line rule. I believe that such clarity is illusory. The plurality opinion leaves unanswered several difficult questions and poses several potential problems. 91 First, the plurality's analysis would apply in the context of a direct criminal appeal in a federal case, as well as in a habeas proceeding challenging a state conviction. On direct appeal, the plain error rule applies. Fed.R.Crim.P. 52(b). Thus, the defendant's failure to object, or place on the record an indication that he wished to confer with his attorney, would be of no moment. Plain error is one that is " 'both obvious and substantial,' " and is more freely noticed in the case of constitutional errors. United States v. Smith, 700 F.2d 627, 633 (11th Cir.1983) (citations omitted). Under the plurality's characterization of the claim in this case as a per se constitutional error, I would suggest that, were this a direct criminal appeal, we would be required to notice the error and reverse the conviction, despite the fact that, in my view, no deprivation of the assistance of counsel occurred.16 92 Second, the plurality requires that a defendant make a contemporaneous objection as a prerequisite to claiming the deprivation of assistance of counsel. It is unclear whether this holding is limited to brief trial recesses during the day, or whether it extends to an overnight recess as in Geders, or even to a week-long recess. If it only applies to brief recesses during the day, I fail to glean the principle on which the plurality would rely for this distinction. Thus, if a trial judge enacts a complete ban on consultation with counsel during a week-long recess, and no objection is raised, the claim will not have been preserved. 93 Third, if during a brief recess, the trial court merely warns counsel to refrain from engaging in any unethical or illegal "coaching," and an objection is raised, has the trial judge committed constitutional error warranting habeas relief? The plurality seems to hold that a trial judge cannot vindicate the valid interest in preventing coaching with an admonition that restricts, in any fashion, communication between the defendant and counsel, no matter how innocuous the instruction. 94 Finally, in its closing passages, ante at 1110-11, the plurality hints that trial judges have discretion, albeit very limited, to restrict consultation during recesses, even over an objection. This appears completely contrary to the holding of the opinion. In the same paragraph, the plurality states that trial judges may insure that proceedings are orderly, without unnecessary interruptions and delays. I have studied this paragraph and am at a loss to decipher exactly what guidance the plurality is providing the trial judges of both the state and federal courts. IV. 
95 For the foregoing reasons, I would hold that petitioner was not deprived of his constitutional right to the assistance of counsel. I therefore concur in the court's decision to reverse the district court's grant of the writ of habeas corpus. I cannot agree, however, with the plurality's conclusion that a constitutional error occurred in this case, but is not cognizable, because it was not adequately preserved. 96 EDMONDSON, Circuit Judge, specially concurring: 97 In this case, the trial record does not show that the defendant and defense counsel actually desired to confer during the pertinent recess and would have conferred but for a restriction placed upon them by the trial judge. Consequently, the trial record in this case shows no deprivation of defendant's right to counsel.1 See Bailey v. Redman, 657 F.2d 21 (3d Cir.1981), cert. denied, 454 U.S. 1153, 102 S.Ct. 1024, 71 L.Ed.2d 310 (1982); Stubbs v. Bordenkircher, 689 F.2d 1205 (4th Cir.1982), cert. denied, 461 U.S. 907, 103 S.Ct. 1879, 76 L.Ed.2d 810 (1983). To the extent Judge Hatchett's opinion recognizes this, I agree with his opinion. I also concur in the judgment to reverse and to remand. I share, in part, the views expressed in Judge Tjoflat's concurring opinion, especially Part II.A.; but I speak for myself on two points. 1. 98 THE PROBABILITY OF IMPROPER COUNSELING IS LOW, AND TRIAL 99 COURTS OUGHT NOT INTERFERE WITH ATTORNEY-CLIENT COMMUNICATIONS DURING TRIAL RECESSES 100 If so-called "coaching" of witnesses means improper attempts to influence the testimony of a witness, I agree that "coaching" of defendants is a valid concern of trial judges; but such improper counseling seems rare. Consequently, concerns about "coaching", in general, cannot constitutionally support an order barring communication--even bearing on the defendant's testimony--between a defendant and his lawyer during brief, routine recesses during the trial day in criminal cases. See generally Geders v. United States, 425 U.S. 80, 93, 96 S.Ct. 1330, 1337-38, 47 L.Ed.2d 592, 602 (1976) (Marshall, J., concurring); United States v. Conway, 632 F.2d 641, 644 (5th Cir.1980); United States v. Allen, 542 F.2d 630, 633 (4th Cir.1976), cert. denied, 430 U.S. 908, 97 S.Ct. 1179, 51 L.Ed.2d 584 (1977). 101 At trial, defense counsel is the adversary of the government prosecutor; but absent clear evidence to the contrary, courts ought to assume that lawyers perform their duties ethically. While suggesting fraud or perjury is unethical,2 there is nothing unethical or otherwise wrong with lawyers counseling their clients at every recess concerning the anticipated direction of the prosecutor's questions and the best manner in which the client can present the facts most favorably to the defense. To the contrary, such counseling is entirely proper. 102 In criminal cases, judicial orders barring defense attorney-client communications during brief trial recesses violate the Constitution if that interference has a likely effect on the trial's outcome. Because such interference entails considerable risk of constitutional error, it is ill-advised and unseemly and ought to be avoided unless expressible, extraordinary circumstances justify it in each particular instance. Briefly stated, the cost outweighs the benefit. 2. 
103 PER SE REVERSALS ARE INAPPROPRIATE WHERE TRIAL COURTS INTERFERE WITH ATTORNEY-CLIENT COMMUNICATIONS DURING BRIEF TRIAL RECESSES 104 As indicated above, I am of the opinion that orders barring communications (even dealing with the defendant's testimony) between a criminal defendant and his lawyer during brief, routine trial recesses (including recesses during the cross-examination of the defendant) can violate the sixth amendment if the trial record shows that the lawyer or defendant actually wished to confer during that recess. I believe, however, that such constitutional violations occur only occasionally: not every interference of this sort with counsel's assistance results in a breakdown in the adversary process that renders the trial's outcome unreliable. If there is no such breakdown, there has been no sixth amendment violation.3 Although such a breakdown may be presumed in certain extreme circumstances, the facts and circumstances of most cases do not warrant such a presumption. 105 Accordingly, sixth amendment claims in the context of brief, routine recesses during the trial day ought to be subject to a requirement that the defendant affirmatively assert and demonstrate prejudice as a condition to post-conviction relief.4 See United States v. DiLapi, 651 F.2d 140, 148-49 (2d Cir.1981), cert. denied, 455 U.S. 938, 102 S.Ct. 1427, 71 L.Ed.2d 648 (1982); State v. Perry, 278 S.C. 490, 299 S.E.2d 324, 325-26, cert. denied, 461 U.S. 908, 103 S.Ct. 1881, 76 L.Ed.2d 811 (1983); cf. Geders v. United States, 425 U.S. 80, 91, 96 S.Ct. 1330, 47 L.Ed.2d 592 (1976) (interference with counsel by order not to consult with defendant during overnight recess denied right to the effective assistance of counsel). See generally Strickland v. Washington, 466 U.S. 668, 104 S.Ct. 2052, 80 L.Ed.2d 674 (1984) (usually sixth amendment claims require defendants to show prejudice); United States v. Cronic, 466 U.S. 648, 662 n. 31, 104 S.Ct. 2039, 2049 n. 31, 80 L.Ed.2d 657 (1984) (fact that accused can attribute deficiency in his representation to the court does not justify reversal absent an actual or likely effect on the trial process). 106 I disagree with those who think that this standard places an impossible burden on the defendant. Both the Supreme Court and this Circuit have already placed this burden on defendants alleging a myriad of constitutional violations. See Morris v. Matthews, --- U.S. ----, 106 S.Ct. 1032, 1038, 89 L.Ed.2d 187 (1986) (jeopardy barred conviction reduced to conviction for lesser included offense that is not jeopardy barred); United States v. Bagley, --- U.S. ----, 105 S.Ct. 3375, 3381, 3384, 87 L.Ed.2d 481 (1985) (evidence favorable to defendant withheld by government); Wilson v. Kemp, 777 F.2d 621, 623 (11th Cir.1985) (improper prosecutorial argument), cert. denied, --- U.S. ----, 106 S.Ct. 2258, 90 L.Ed.2d 703 (1986); Stoner v. Graddick, 751 F.2d 1535, 1546-47 (11th Cir.1985) (delay between crime and indictment).5 107 Neither am I convinced that this prejudice standard would infringe too much on attorney-client relations. To prove prejudice, truly privileged communications may be neither necessary nor relevant. Standing alone, the circumstances of the pertinent recess and pertinent order can establish prejudice. If in a particular case the circumstances (length of recess, restrictiveness of the order, the point in the trial at which the recess is taken, etc.) 
are, by themselves, not enough, the defendant seeking review may choose to disclose the intended nature of the barred communication. Even these are communications that never, in fact, occurred; and, thus, the usual attorney-client privilege rules hardly seem to control. Perhaps sometimes, however, privileged communications may be relevant. Although the attorney-client privilege, in particular, and attorney-client confidentiality, in general, are important concerns due genuine deference, courts have never treated them as inviolable. When a defendant has challenged his conviction by asserting an issue that makes privileged communications relevant, he waives the privilege in respect to those communications. See, e.g., Smith v. Estelle, 527 F.2d 430, 434 n. 9 (5th Cir.1976) (whether defendant would have testified but for admission of constitutionally invalid confession).6 108 We must recall that these challenges are presented to us by persons already convicted of a crime. Those convictions are presumptively valid. See Barefoot v. Estelle, 463 U.S. 880, 103 S.Ct. 3383, 3392, 77 L.Ed.2d 1090 (1983); cf. United States v. Bulman, 667 F.2d 1374, 1380 (11th Cir.), cert. denied, 456 U.S. 1010, 102 S.Ct. 2305, 73 L.Ed.2d 1307 (1982). The courts remain open to such persons, but it is right that the challengers bear the burden of establishing that their convictions were inconsistent with the requirements of the Constitution. 109 Many of the brief recesses during a trial day are not critical stages of the criminal proceeding. If the category of sixth amendment cases in which prejudice will be presumed is to be extended to instances of interference with defense counsel during a brief, routine recess, the nation's highest court should take that step first. Per se reversal rules are not favored. See Rose v. Clark, --- U.S. ----, 106 S.Ct. 3101, 92 L.Ed.2d 460 (1986). Nor should they be. To the extent that United States v. Conway, 632 F.2d 641 (5th Cir.1980), and Judge Hatchett's opinion mandate per se reversals in this circuit, they are, in my opinion, mistaken. The social costs of crime are too great to allow the proliferation of per se reversal rules. 1 Q. [Prosecutor] And your father has--I think you testified earlier your father has always given you whatever you need A. [Crutchfield] Whatever I've needed. Yes sir. Q. [Prosecutor] Now, is the suggestion that you're making there is that--if I follow the--the logic to it--is that: I had all the money I needed from Dad; therefore, I wouldn't have to need it to rob the store? Is that what you're saying? A. [Crutchfield] No sir. If--if I've ever--if I've ever needed anything of need, that was of need, and my father saw that it was of need, he would help me with it. But he always made me, you know, work for it and try to strive for it. Q. [Prosecutor] Okay. We've had a lot of testimony about--that you've heard--that your father paid a hundred dollars down on your rent and this--you know, your financial needs. What does that have to do with this case? Can you explain that? A. [Crutchfield] I've--I've never had--I've--I've never had to or would have ever robbed any place--for money. Q. [Prosecutor] That's the point. Right? We went through all these financial needs that you had because your dad supplies you money; therefore, you didn't have to rob. Isn't that what you're saying? A. [Crutchfield] Yes sir, I would never have to. Never would. 2 The Florida appellate court opinion stated: Appellant's motion further asserts that "... 
this turned out to be a rather lengthy recess. The defendant was not permitted to speak with his lawyer for about two hours...." Geders v. United States, 425 U.S. 80, 96 S.Ct. 1330, 47 L.Ed.2d 592 (1976), held that an order preventing a criminal defendant from consulting with his counsel "about anything" during a 17 hour overnight recess, between direct and cross-examination, impinged the defendant's sixth amendment right to assistance of counsel. But Geders carefully noted that the case did not involve a limited prohibition during a brief routine recess during the trial day. Compare McFadden v. State, 424 So.2d 918 (Fla. 4th DCA 1982). We conclude that in the context of the Rule 3.850 motion in the present case, the limited prohibition imposed does not warrant post-conviction relief. Crutchfield v. State, 431 So.2d 244 (Fla. 1st DCA 1983) (footnote omitted). 3 See Bonner v. City of Prichard, Alabama, 661 F.2d 1206 (11th Cir.1981) 4 As to the second alleged denial of assistance of counsel, although the trial judge expressly instructed Crutchfield to address questions to his counsel, the record does not show that Crutchfield did so. Whether Crutchfield and his counsel did confer without the conference being noted by the court reporter, we do not know. In light of this fact, we take the Crutchfield-judge exchange in the light most favorable to Crutchfield--he was uncertain about the judge's reply and did not consult with his counsel. His counsel was present, however, and had the ability to clarify any confusion that might have existed. Counsel, for whatever reason, apparently saw no need to intervene. Consequently, if consultation did not take place, we must assume that counsel's professional judgment was that consultation was unnecessary. We thus conclude that, even read favorably to Crutchfield, this exchange did not actually deprive Crutchfield of assistance of his counsel 5 The Court suggested, as methods of combating coaching, skillful cross-examination and continuation of examination without interruption until the examination is completed 1 Petitioner was apparently represented by two lawyers at trial 2 Although Crutchfield has, at one time, asserted that the brief recess stretched into a two-hour lunch recess, he now concedes that a lunch break took place prior to his taking the stand on the day in question 3 The trial transcript recites only that a brief recess was taken. Because the district court concluded that any restriction on the ability to confer with counsel during any recess required issuance of the writ, it did not resolve the factual dispute over the duration of the recess 4 Crutchfield raised two issues in his direct appeal. He contended that the trial court erred in admitting evidence of his prior burglary conviction and erroneously retained jurisdiction over the first one-third of his sentence 5 Crutchfield apparently raised the following five claims in his Rule 3.850 proceeding: (1) that his sentence violated the equal protection clause of the fourteenth amendment because it was more severe than that received by his codefendant; (2) that the evidence was insufficient to support his conviction; (3) that the trial court erred in denying a severance; (4) that the trial court denied him the assistance of counsel in violation of the sixth and fourteenth amendments by instructing his counsel not to discuss his testimony with him during a recess; and (5) that the trial court erred in admitting evidence of a prior burglary conviction. 
The record does not contain Crutchfield's Rule 3.850 motion, but his federal habeas petition and his submissions to this court state that the five claims noted above were raised in the Rule 3.850 proceeding. The State has not disputed this point, and, in its en banc brief, specifically states that all claims have been exhausted 6 In Bonner v. City of Prichard, 661 F.2d 1206, 1209 (11th Cir.1981) (en banc), this court adopted as binding precedent all decisions of the former Fifth Circuit handed down prior to October 1, 1981 7 The district court held that claims (1), (2), and (3), see supra note 5, were procedurally barred under Wainwright v. Sykes, 433 U.S. 72, 97 S.Ct. 2497, 53 L.Ed.2d 594 (1977), because they could have been, but were not, raised on direct appeal. Petitioner has not appealed from this determination. Because the district court granted relief on claim (4), see supra note 5, it did not address claim (5) 8 The plurality opinion, ante at 1109 n. 4, assumes that petitioner wished to consult with counsel on this occasion but did not do so, because of confusion over the trial judge's remarks. The plurality holds that no deprivation of petitioner's right to the assistance of counsel occurred, however, because his attorneys witnessed the colloquy and said nothing, exercising their professional judgment that consultation was unnecessary. This analysis misses the mark. If petitioner had the right to consult with counsel at that point in the proceedings, while on the stand during cross-examination, his attorneys could not possibly have known about what he wished to consult. Accordingly, they lacked the information necessary to exercise their professional judgment and to "waive" petitioner's right to consult with counsel, which he was attempting to exercise. In my view, no constitutional error occurred during this incident, because (1) the trial judge did nothing to prevent consultation, and (2) petitioner had no constitutional right to compel the trial judge to declare a recess in the proceedings so that he could consult with counsel 9 Geders involved a direct appeal from a conviction in a federal case and did not arise in a habeas context 10 I use the term "coaching" to refer to improper attempts to influence or shape the testimony of the witness. Such a tactic is proscribed by ethical rules and is a perversion of the truth-seeking process 11 Indeed, requiring the defendant to prove prejudice would be unworkable. First, it would be very difficult for the defendant to show that the trial may have been altered had consultation been allowed. Second, such an inquiry would require that destructive inroads be forged into the attorney-client relationship 12 In order to avoid invalidating a conviction in the least offensive of these types of cases, the panel in this case and one other circuit have attempted to accommodate the situation by concluding that a constitutional deprivation took place but that a harmless error analysis should be applied. See United States v. DiLapi, 651 F.2d 140, 147-49 (2d Cir.1981), cert. denied, 455 U.S. 938, 102 S.Ct. 1427, 71 L.Ed.2d 648 (1982). I believe that a harmless error analysis is unworkable in a case such as this for several reasons. First, the prosecution has the burden of establishing that an error is harmless, see Chapman v. California, 386 U.S. 18, 22-26, 87 S.Ct. 
824, 827-29, 17 L.Ed.2d 705 (1967); yet only petitioner and his counsel are in possession of the relevant material facts, i.e., what they would have discussed had they been allowed to consult. Further, as pointed out supra, note 11, such an inquiry is not subject to easy resolution and necessarily penetrates into the heart of the attorney-client relationship 13 Given the interest at stake and the difficulty in probing the merits of an objection to a court-imposed restriction on consultation with counsel during a recess, see supra note 11, I would not impose a very onerous burden on a defendant who objects to such a restriction. In my view, very little showing would be necessary before the trial judge is required to allow the defendant to consult with counsel during a recess. If a defendant is not willing to be restricted in his communication with counsel during a recess and thus raises an objection, the trial court can consider whether to continue the examination without recess or to take a recess and allow unrestricted consultation 14 The trial judge in this case did not enact a complete ban on communication, but merely told counsel to avoid discussing petitioner's testimony with him. Although "not discussing testimony" may be broader than is an instruction against coaching, on the facts of this case I believe the court's instruction amounted to little more than an admonition to counsel to make certain that ethical standards were complied with during the recess The plurality states that it has "explored the possibility" that the instruction not to discuss testimony was appropriate because it was narrowly tailored to prevent coaching. It then recites that "the trial court's solution to its concern about coaching could not take the form of an admonition against Crutchfield consulting with his counsel." This statement is mystifying because the trial court did not admonish Crutchfield not to consult with his attorney; the court instructed Crutchfield's attorney not to discuss his testimony with him. 15 Under this approach, error of constitutional magnitude occurs when the trial judge announces the prohibition, but whether a constitutional claim exists depends upon the presence of some type of indication of a desire to confer. The plurality, in a federal habeas case, thus engrafts its own contemporaneous objection, procedural bar rule to a state or federal trial proceeding 16 Nor are the problems of the plurality's approach limited to direct criminal appeals. The following scenario is possible in a federal proceeding collaterally attacking a state or federal conviction. If, at trial, the judge restricts communication between the defendant and counsel during a recess, and no objection is raised, the defendant may allege, in a collateral attack, that his counsel was ineffective for failing to object. Given that, under the plurality holding, the trial judge committed a per se constitutional violation, it would be difficult to argue that counsel's actions constituted reasonable performance. The question would then arise whether the defendant was prejudiced by counsel's ineffectiveness. Because the plurality holds that any deprivation of assistance of counsel requires a per se prejudice rule, it would appear that a valid claim of ineffectiveness has been established. Thus, although the defendant could not bring a deprivation of assistance of counsel claim, because of his failure to object, he may have a valid ineffective assistance of counsel claim 1 In light of Gideon v. Wainwright, 372 U.S. 
335, 83 S.Ct. 792, 9 L.Ed.2d 799 (1963), the actions of the State of Florida are governed by the sixth amendment right to counsel 2 See ABA Model Rules of Professional Conduct, Rules 1.2(d), 3.3 & comment to Rule 1.2(d); ABA Model Code of Professional Responsibility Canon 7, DR 7-102(A)(4), (6), (7); Alabama Code of Professional Responsibility Canon 7, EC 7-26 & DR 7-102(A)(4), (6), (7); Florida Code of Professional Responsibility Canon 7, EC 7-26 & DR 7-102(A)(4), (6), (7); Georgia Code of Professional Responsibility Canon 7, EC 7-26 & DR 7-102(A)(4), (6), (7) 3 This is different from the concept of harmless federal constitutional error; thus, Chapman v. California, 386 U.S. 18, 87 S.Ct. 824, 17 L.Ed.2d 705 (1967), does not control 4 I specifically note that this case comes to us as an appeal by a state prisoner from the denial of federal habeas corpus relief by the district court. In the words of Justice Harlan, "I therefore put aside all other types of cases; in so doing, however, I wish to make it perfectly clear that I am by no means prepared to say that the constitutional issue should ultimately turn upon the nature of the particular case involved." Estes v. Texas, 381 U.S. 532, 590, 85 S.Ct. 1628, 1663, 14 L.Ed.2d 543 (1965) (Harlan, J., concurring) 5 For other cases applying a prejudice standard, see United States v. Valenzuela-Bernal, 458 U.S. 858, 867-873, 102 S.Ct. 3440, 3446-49, 73 L.Ed.2d 1193 (1982) (government deportation of defendant's witnesses; defendant must show testimony would have been material and favorable); United States v. Morrison, 449 U.S. 361, 365-66, 101 S.Ct. 665, 668-69, 66 L.Ed.2d 564 (1981) (agent meeting with defendant without counsel's consent or presence; defendant must demonstrate at least threat of prejudice); Busby v. Holt, 771 F.2d 1461 (11th Cir.1985), cert. denied, --- U.S. ----, 106 S.Ct. 826, 88 L.Ed.2d 798 (1986), opinion withdrawn in part, 781 F.2d 1475, 1477 (11th Cir.1986) (prosecutor calling coindictee to testify, knowing that he would invoke fifth amendment privilege; defendant must prove prejudice) 6 For other examples of waiver, see United States v. Miller, 600 F.2d 498, 501-02 (5th Cir.) (criminal law securities case, issue of good faith reliance on attorney's advice), cert. denied, 444 U.S. 955, 100 S.Ct. 434, 62 L.Ed.2d 327 (1979); Johnson v. United States, 542 F.2d 941 (5th Cir.1976) (validity of guilty plea based on attorney's advice); Bennett v. Mississippi, 523 F.2d 802, 804 (5th Cir.1975) (waiver of right to appeal); Armstrong v. United States, 440 F.2d 658 (5th Cir.1971) (validity of guilty plea based on counsel's advice); United States v. Woodall, 438 F.2d 1317, 1324-26 (5th Cir.1970) (en banc) (guilty plea based on counsel's advice), cert. denied, 403 U.S. 933, 91 S.Ct. 2262, 29 L.Ed.2d 712 (1971). See also Matter of Continental Illinois Securities Litigation, 732 F.2d 1302, 1315 n. 20 (7th Cir.1984) (securities case); Tasby v. United States, 504 F.2d 332 (8th Cir.1974) (defendant claiming attorney coerced him into testifying), cert. denied, 419 U.S. 1125, 95 S.Ct. 811, 42 L.Ed.2d 826 (1975). See generally Thornburg, Attorney-Client Privilege: Issue-Related Waivers, 50 J. Air L. & Com. 1039 (1985)
Mid
[ 0.598314606741573, 26.625, 17.875 ]
Q: AAD Application Permission issue I have added an Azure AD application and removed all required permissions within the Azure portal: However, the application still has access to the Graph API. If I go to the Enterprise applications tab, select the application and go to permissions, I can see the Read directory data permission: Why is the permission still there - even though I removed it? It's probably not a timing issue, since I removed the permission about an hour ago. I also logged in using a new browser session.... A: It seems that the permission is still on the service principal even though it has been removed from the application. (Enterprise Applications = Service principals, Application registrations = Applications) Remember that the Application is only a template for Service Principals. Service principals get permissions for APIs, the app never does. I would manually update the service principal through Graph API, or delete it and re-create it altogether. Seems like something went wrong syncing them. Normally it should sync the service principal in the same tenant; multi-tenant apps' service principals in other tenants don't sync. EDIT: Since it is an app permission on the Microsoft Graph you have to delete the appRoleAssignment created for the service principal. (If it was Azure AD Graph API, it would be a member of the role Directory Readers) You should be able to see these from: https://graph.windows.net/tenant-id/servicePrincipals/object-id/appRoleAssignments?api-version=1.6 (Azure AD Graph API Explorer is not working for me right now...) After finding it, you can just delete it by running an HTTP DELETE on https://graph.windows.net/tenant-id/servicePrincipals/object-id/appRoleAssignments/assignment-object-id?api-version=1.6 If it were a delegated permission, you would have to remove the oauth2PermissionGrant. You can find it via https://graph.windows.net/tenant-id/servicePrincipals/object-id/oauth2PermissionGrants?api-version=1.6
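To make the list-then-delete flow concrete, here is a minimal Python sketch against the (now-legacy) Azure AD Graph endpoints quoted above. It assumes you already hold a bearer token with directory write permissions; the tenant ID, service principal object ID, and assignment ID are placeholders to substitute from your own tenant.

import requests

# All IDs below are placeholders -- substitute values from your own tenant.
TENANT_ID = "tenant-id"
SP_OBJECT_ID = "object-id"    # the service principal's objectId
TOKEN = "access-token"        # bearer token acquired out of band

BASE = f"https://graph.windows.net/{TENANT_ID}/servicePrincipals/{SP_OBJECT_ID}"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
PARAMS = {"api-version": "1.6"}

# 1. List the app role assignments still attached to the service principal.
resp = requests.get(f"{BASE}/appRoleAssignments", headers=HEADERS, params=PARAMS)
resp.raise_for_status()
for assignment in resp.json().get("value", []):
    print(assignment["objectId"], assignment.get("resourceDisplayName"))

# 2. Delete the stale assignment by its objectId (HTTP 204 indicates success).
ASSIGNMENT_ID = "assignment-object-id"
resp = requests.delete(f"{BASE}/appRoleAssignments/{ASSIGNMENT_ID}",
                       headers=HEADERS, params=PARAMS)
resp.raise_for_status()

For a delegated permission, the same pattern applies against the oauth2PermissionGrants endpoint instead.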
High
[ 0.7302631578947361, 27.75, 10.25 ]
This pillow is filled with 100% soft polyester and it has a non-woven protective cover. Add a little flair and comfort to your rooms with new pillows covered in beautiful fabrics and trimmed elegantly. For use with all types of shams & covers. Will look superb in any decorative pillow cover.
Low
[ 0.30945558739255, 13.5, 30.125 ]
LAUSANNE - Prime Minister Benjamin Netanyahu on Sunday harshly criticized the American government and the other world powers currently negotiating with Iran over its nuclear program, telling ministers at the weekly cabinet meeting that the "Iran-Lausanne-Yemen axis" was a danger to humanity that must be stopped. As the talks entered their fifth day in Lausanne, Switzerland, Netanyahu told his cabinet: "I am deeply troubled by the emerging agreement with Iran in the nuclear talks. The agreement confirms all of our fears and even worse." U.S. Secretary of State John Kerry and his Iranian counterpart Mohammad Javad Zarif opened the day with their eighth meeting in four days. Netanyahu met on Sunday morning at his office in Jerusalem with U.S. Senate majority leader, Republican Mitch McConnell, and spoke over the weekend with Senate minority leader Harry Reid, who announced that he was stepping down from his position. In what sounded like a barb aimed at the White House, Netanyahu stated that he heard from both veteran senators about "strong and uncompromising support for Israel." Later this week, House Speaker John Boehner is expected to arrive in Jerusalem for a meeting with Netanyahu. The prime minister said that the developing nuclear deal also bears the danger of what he called "Iran's conquest of the Middle East." Referring to the Iran-allied Houthi militia's recent takeover of large parts of Yemen, Netanyahu said: "Iran is using proxies to try and take over the strategic straits of Bab el-Mandab, which could alter the balance of world seafaring and oil supply. Iran is enacting a pincer strategy from north and south to take over the entire Middle East. The Iran-Lausanne-Yemen axis is extremely dangerous to humanity and must be stopped." At the same time Netanyahu made his comments, the negotiations between Iran and the six world powers continued at the Beau-Rivage Palace Hotel in Lausanne. After an hour-long meeting on Sunday morning between Kerry and Zarif, the two will meet with Chinese Foreign Minister Wang Yi and British Foreign Secretary Philip Hammond.
Mid
[ 0.623326959847036, 40.75, 24.625 ]
South Korean director Bong Joon Ho speaks to the media upon his arrival at Incheon airport, west of Seoul, on February 16, 2020, after his film "Parasite" won the Oscar for Best Picture. Getty Director Bong Joon Ho smiled and waved at a waiting crowd Sunday as he arrived home in South Korea, his first trip back since he won four Oscars for his movie "Parasite," including the award for Best Picture. The crowd clapped and cheered as Bong walked out of the arrivals gate at Incheon International Airport. "It's been a long journey in the United States and I'm pleased that it got wrapped up nicely," Bong said, speaking in Korean. "Now, I am happy that I can quietly return to creating, which is my main occupation." He also joked that he would wash his hands to join the movement to defeat a new virus that has sickened tens of thousands, mostly in China. "I'll diligently wash my hands from now on and participate in this movement to defeat coronavirus," he said. As of Sunday, South Korea had 29 confirmed cases of the new virus, which the World Health Organization has named COVID-19, referring to its origin late last year and the coronavirus that causes it. South Korean director Bong Joon Ho walks past the media upon his arrival at Incheon airport, west of Seoul, on February 16, 2020, after his film "Parasite" won the Oscar for Best Picture. Getty "Parasite" was the first non-English-language film to win Best Picture in the 92-year history of the Academy Awards, and is the first South Korean movie to ever win an Oscar, stunning moviemakers and fans around the world. Bong plans to hold a news conference with the staff and cast of "Parasite" on Wednesday in Seoul.
High
[ 0.6633663366336631, 33.5, 17 ]
The president’s comments effectively wiped away the more conventional statement he delivered at the White House a day earlier when he branded members of the KKK, neo-Nazis and white supremacists who take part in violence as “criminals and thugs.” President Donald Trump says he may grant a pardon to former Sheriff Joe Arpaio following his recent conviction in federal court, prompting outrage among critics who say the move would amount to an endorsement of racism.
Low
[ 0.506465517241379, 29.375, 28.625 ]
Q: stroke-dasharray can't bind with Angular 6 I used this chart with Angular 6 and ran into a conflict: I tried to bind data to the chart, but the binding doesn't work correctly. I replaced this stroke-dasharray="5, 100" with this one stroke-dasharray="{{this.master.locationVsStatusMap.size}}, 100" stackblitz Does anyone know how to do this correctly? Thanks in advance! .single-chart { width: 33%; justify-content: space-around ; } .circular-chart { display: block; margin: 10px auto; max-width: 70%; max-height: 150px; } .circle-bg { fill: none; stroke: #3c9ee5; stroke-width: 3.8; } .circle { fill: none; stroke-width: 3.8;border-right: 1px solid white; border-left: 1px solid white; stroke-linecap:square; animation: progress 1s ease-out forwards; } @keyframes progress { 0% { stroke-dasharray: 0 100; } } .circular-chart.orange .circle { stroke: #ff9f00;border-right: 1px solid white; border-left: 1px solid white; } .circular-chart.green .circle { stroke: #4CC790; } .circular-chart.blue .circle { stroke: #3c9ee5; } .percentage { fill: #666; font-family: sans-serif; font-size: 0.3em; text-anchor: middle; } .flex-wrapper { display: flex; flex-flow: row nowrap; } <div class="flex-wrapper"> <div class="single-chart"> <svg viewBox="0 0 36 36" class="circular-chart orange"> <path class="circle-bg" d="M18 2.0845 a 15.9155 15.9155 0 0 1 0 31.831 a 15.9155 15.9155 0 0 1 0 -31.831" /> <path class="circle" stroke-dasharray="5, 100" d="M18 2.0845 a 15.9155 15.9155 0 0 1 0 31.831 a 15.9155 15.9155 0 0 1 0 -31.831" /> <text x="18" y="20.35" class="percentage">50%</text> </svg> </div> </div> A: Try attribute binding attr.stroke-dasharray="{{this.master.locationVsStatusMap.size}}, 100" Forked Example: https://stackblitz.com/edit/angular-wqjlc5 Ref this: https://teropa.info/blog/2016/12/12/graphics-in-angular-2.html
Mid
[ 0.6396396396396391, 35.5, 20 ]
Q: Align matplotlib scatter marker left and or right I am using the matplotlib scatterplot function to create the appearance of handles on vertical lines to delineate certain parts of a graph. However, in order to make them look correct, I need to be able to align the scatter plot marker to the left (for the left line / delineator) and / or right (for the right line / delineator). Here's an example:

import numpy
import pandas
import matplotlib.pyplot as plt

#create the figure
fig = plt.figure(facecolor = '#f3f3f3', figsize = (11.5, 6))
ax = plt.subplot2grid((1, 1), (0,0))

#make some random data
index = pandas.DatetimeIndex(start = '01/01/2000', freq = 'b', periods = 100)
rand_levels = pandas.DataFrame( numpy.random.randn(100, 4)/252., index = index, columns = ['a', 'b', 'c', 'd'])
rand_levels = 100*numpy.exp(rand_levels.cumsum(axis = 0))
ax.stackplot(rand_levels.index, rand_levels.transpose())

#create the place holder for the vertical lines
d1, d2 = index[25], index[50]

#draw the lines
ymin, ymax = ax.get_ylim()
ax.vlines([index[25], index[50]], ymin = ymin, ymax = ymax, color = '#353535', lw = 2)

#draw the markers
ax.scatter(d1, ymax, clip_on = False, color = '#353535', marker = '>', s = 200, zorder = 3)
ax.scatter(d2, ymax, clip_on = False, color = '#353535', marker = '<', s = 200, zorder = 3)

#reset the limits
ax.set_ylim(ymin, ymax)
ax.set_xlim(rand_levels.index[0], rand_levels.index[-1])
plt.show()

The code above gives me almost the graph I'm looking for, like this: However, I'd like the leftmost marker (">") to be "aligned left" (i.e. shifted slightly to the right) so that the line is continued to the back of the marker. Likewise, I'd like the rightmost marker ("<") to be "aligned right" (i.e. slightly shifted to the left). Like this: Any guidance or suggestions on how to accomplish this in a flexible manner? NOTE: In practice, my DataFrame index is pandas.Datetime not integers as I've provided for this simple example. A: I liked this question and was not satisfied with my first answer. In particular, it seemed unnecessarily cumbersome to create figure-specific objects (mark_align_*) in order to align markers. What I eventually found was the functionality to specify a marker by verts (a list of 2-element floats, or an Nx2 array, that specifies the marker vertices relative to the target plot-point at (0, 0)). To utilize this functionality for this purpose I wrote this function,

from matplotlib import markers
from matplotlib.path import Path

def align_marker(marker, halign='center', valign='middle'):
    """
    create markers with specified alignment.

    Parameters
    ----------
    marker : a valid marker specification. See mpl.markers
    halign : string, float {'left', 'center', 'right'}
        Specifies the horizontal alignment of the marker. *float* values
        specify the alignment in units of the markersize/2 (0 is 'center',
        -1 is 'right', 1 is 'left').
    valign : string, float {'top', 'middle', 'bottom'}
        Specifies the vertical alignment of the marker. *float* values
        specify the alignment in units of the markersize/2 (0 is 'middle',
        -1 is 'top', 1 is 'bottom').

    Returns
    -------
    marker_array : numpy.ndarray
        A Nx2 array that specifies the marker path relative to the plot
        target point at (0, 0).

    Notes
    -----
    The mark_array can be passed directly to ax.plot and ax.scatter, e.g.::

        ax.plot(1, 1, marker=align_marker('>', 'left'))

    """
    if isinstance(halign, str):
        halign = {'right': -1.,
                  'middle': 0.,
                  'center': 0.,
                  'left': 1.,
                  }[halign]

    if isinstance(valign, str):
        valign = {'top': -1.,
                  'middle': 0.,
                  'center': 0.,
                  'bottom': 1.,
                  }[valign]

    # Define the base marker
    bm = markers.MarkerStyle(marker)

    # Get the marker path and apply the marker transform to get the
    # actual marker vertices (they should all be in a unit-square
    # centered at (0, 0))
    m_arr = bm.get_path().transformed(bm.get_transform()).vertices

    # Shift the marker vertices for the specified alignment.
    m_arr[:, 0] += halign / 2
    m_arr[:, 1] += valign / 2

    return Path(m_arr, bm.get_path().codes)

Using this function, the desired markers can be plotted as,

ax.plot(d1, 1, marker=align_marker('>', halign='left'), ms=20,
        clip_on=False, color='k', transform=ax.get_xaxis_transform())
ax.plot(d2, 1, marker=align_marker('<', halign='right'), ms=20,
        clip_on=False, color='k', transform=ax.get_xaxis_transform())

or using ax.scatter,

ax.scatter(d1, 1, 200, marker=align_marker('>', halign='left'),
           clip_on=False, color='k', transform=ax.get_xaxis_transform())
ax.scatter(d2, 1, 200, marker=align_marker('<', halign='right'),
           clip_on=False, color='k', transform=ax.get_xaxis_transform())

In both of these examples I have specified transform=ax.get_xaxis_transform() so that the vertical position of the markers is in axes coordinates (1 is the top of the axes); this has nothing to do with the marker alignment. The obvious advantage of this solution compared to my previous one is that it does not require knowledge of the markersize, plotting function (ax.plot vs. ax.scatter), or axes (for the transform). Instead, one simply specifies a marker and its alignment! Cheers!
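For completeness, here is a self-contained sketch of my own (assuming the align_marker function above is in scope) that ties the alignment trick back to the original vlines figure; the data are made up.

import matplotlib.pyplot as plt
# assumes align_marker() from the answer above is already defined

fig, ax = plt.subplots()
ax.plot(range(10), range(10))
d1, d2 = 2, 7

# Vertical delineators spanning the full axes height.
ax.vlines([d1, d2], 0, 1, transform=ax.get_xaxis_transform(),
          colors='#353535', lw=2)

# Handles: '>' extends right from the left line, '<' extends left
# from the right line, so each line runs into the back of its marker.
ax.plot(d1, 1, marker=align_marker('>', halign='left'), ms=20,
        clip_on=False, color='#353535', transform=ax.get_xaxis_transform())
ax.plot(d2, 1, marker=align_marker('<', halign='right'), ms=20,
        clip_on=False, color='#353535', transform=ax.get_xaxis_transform())
plt.show()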
High
[ 0.663072776280323, 30.75, 15.625 ]
President Trump all but ruled out a full repeal of the Dodd-Frank Act on Tuesday, as Republicans prepare to make major changes to the sweeping post-recession banking regulation law. “We’re doing a major elimination of the horrendous Dodd-Frank regulations, keeping some obviously but getting rid of many,” Trump told reporters. “The bankers in the room will be very happy, because we are really doing a major streamlining, and perhaps elimination and replacing it with something else,” Trump said. Trump promised to “dismantle” Dodd-Frank on the campaign trail, but a full repeal of the law has become increasingly unlikely. Bank and financial firm executives are seeking significant rollbacks to the law but aren’t asking lawmakers to repeal it altogether. Republican lawmakers have largely abandoned efforts to erase the law, but have pushed for significant changes to how the government monitors major banks for financial risk. They’ve also pushed for measures that subject the Consumer Financial Protection Bureau to greater congressional oversight. Trump and his aides haven’t released a formal Dodd-Frank reform plan, nor have they specified what parts of the law the White House wants to change. Treasury Secretary Steven Mnuchin and National Economic Council Director Gary Cohn have even publicly backed separating consumer and investment banks, a change long opposed by the banking industry. Instead, Trump has signed executive orders directing his administration to target banking regulations that can be cut to help boost the economy. “We’re going to put many millions of people back to work,” Trump said following a meeting with cabinet officials and CEOs. “The banks will be able to lend again.” Trump has made job creation through deregulation a central focus of his presidency and campaign. He said his administration has “created more than 600,000 jobs,” though federal data says the number of U.S. jobs created in the past three months is closer to 500,000. Even so, Trump's few legislative and deregulatory actions have likely made a limited impact on the job market. Stocks have largely rallied since Trump’s election, and he touted several surveys of manufacturers and business owners expressing more optimism about the economy than in almost a decade. But gold prices, which rise as investors worry about the economy, hit a five-month high Tuesday as stocks sank due in part to growing geopolitical tensions.
Low
[ 0.5276595744680851, 31, 27.75 ]
A direct role for Sox10 in specification of neural crest-derived sensory neurons. sox10 is necessary for development of neural and pigment cell derivatives of the neural crest (NC). However, whereas a direct role for Sox10 activity has been established in pigment and glial lineages, this is more controversial in NC-derived sensory neurons of the dorsal root ganglia (DRGs). We proposed that sox10 functioned in specification of sensory neurons, whereas others suggested that sensory neuronal defects were merely secondary to absence of glia. Here we provide evidence that in zebrafish, early DRG sensory neuron survival is independent of differentiated glia. Critically, we demonstrate that Sox10 is expressed transiently in the sensory neuron lineage, and specifies sensory neuron precursors by regulating the proneural gene neurogenin1. Consistent with this, we have isolated a novel sox10 mutant that lacks glia and yet displays a neurogenic DRG phenotype. In conjunction with previous findings, these data establish the generality of our model of Sox10 function in NC fate specification.
High
[ 0.686111111111111, 30.875, 14.125 ]
Late-onset neonatal sepsis due to multiply-resistant coagulase-negative staphylococci. A cluster of septic episodes that were caused by coagulase-negative staphylococci occurred in eight patients, over a six-month period from August 1, 1984 to January 31, 1985, in a Brisbane neonatal intensive-care unit where sepsis which was due to these organisms previously was uncommon. The organisms were universally-resistant to tobramycin (the aminoglycoside agent that was used at that time) and were variably-resistant to gentamicin, flucloxacillin and cephalothin. All organisms were sensitive to netilmicin, vancomycin, fusidic acid and rifampicin. The affected infants were all of 32 weeks' or less gestation and most of them weighed less than 1500 g at birth. All neonates had been ventilated artificially and had had long intravascular lines. Two infants had ventriculoperitoneal shunts that had been infected with coagulase-negative staphylococci--a potentially-important problem that has not been noted in premature infants in previous reports. Our experience demonstrates that it is important to consider the patterns of resistance to aminoglycoside as well as to beta-lactam antibiotic agents for the empirical therapy of septic episodes and for neurosurgical prophylaxis in nurseries where coagulase-negative staphylococci are emerging as common nosocomial pathogens.
Mid
[ 0.6519607843137251, 33.25, 17.75 ]
Episode 51 – Covering Recovery Boy Josh is new to Seattle and has been on the road to recovery from past drug addictions that were a big part of his life and of his early kink experiences. Now in recovery, he was willing to share his story. Drugs are a part of American life in general, and all of us have had to deal with them in our kink lives, whether by trying them ourselves or by knowing others who use or have used. And we all have different levels of what is acceptable and not when it comes to drugs being a part of play. So by having an honest, open conversation about how drugs affected one person, in this case Boy Josh, we get to explore what the impact of drugs can be for us all.
Mid
[ 0.5815324165029471, 37, 26.625 ]
Commissural dehiscence of Carpentier-Edwards mitral bioprostheses. Explant analysis and pathogenesis. Manufacturing factors have seldom been implicated as a direct cause of structural deterioration of valvular bioprostheses; this phenomenon has generally been considered to be of a host-dependent origin. We analyzed the clinical and pathologic data from 12 Carpentier-Edwards mitral bioprostheses removed from 12 patients because of severe dysfunction and showing detachment of the porcine aortic wall from the stent in one commissure or more. These 12 prostheses were part of a group of 92 such valves that were explanted and displayed structural deterioration. They belong to a population of 405 Carpentier-Edwards bioprostheses implanted in the mitral position in our institution between May 1978 and November 1988. The patients included three men and nine women with a mean age of 54 +/- 13 years. One patient had a history of chronic renal failure, and two had systemic hypertension. Prosthesis sizes were 29, 31, and 33 mm (n = 4 for each size). The models of the valves were 6625 (n = 8) and 6650 (n = 4). Mean duration of implantation of the prostheses was 99 +/- 27 months (52 to 136 months) and did not differ depending on the model. There was no significant clustering of commissural detachments depending on valve size, year of implantation, or gender of the patient. No similar phenomenon was observed among 76 explanted aortic Carpentier-Edwards bioprostheses with structural deterioration from a population of 441 valves implanted during the same time frame. Native porcine aortic roots (n = 5) and aortic Carpentier-Edwards bioprostheses explanted because of structural deterioration (n = 4) were used as controls for comparison. Macroscopic examination showed single commissural dehiscence in 10 patients and double in two. Radiology disclosed no or mild mineralization in eight valves and no calcium in the area of aortic wall dehiscence, except for heavily calcified valves. Light microscopy evidenced a significant thinning of the aortic wall at the paracommissural level of mitral bioprostheses (351 +/- 68 microns) compared with either aortic bioprostheses (526 +/- 59 microns; p < 0.01) or control native porcine aortic roots (419 +/- 50 microns; p < 0.01). No difference was found in terms of aortic wall thickness between detached (322 +/- 42 microns) and intact (366 +/- 74 microns) commissures in mitral bioprostheses.(ABSTRACT TRUNCATED AT 400 WORDS)
Mid
[ 0.6365688487584651, 35.25, 20.125 ]
Q: MYSQL - Return Result of one SELECT minus another SELECT Essentially what I'm trying to get is the increase in views of a property on a property website after running an advertising campaign. Every view is equal to a row in the Views Table. Assuming the advertising campaign ran the month of Jan 2014, I can run 2 separate queries: 1) to get the count of January; 2) to get the count of February. Query 1 - Views For January SELECT COUNT(Views.ViewId) AS 'January Munster Views' FROM Views INNER JOIN Property ON Views.PropertyId=Property.PropertyId WHERE Views.ViewsDate LIKE '2015-01-%' AND Property.PropertyPrice BETWEEN "800" AND "1000" AND (Property.PropertyCounty='Co Waterford' OR Property.PropertyCounty='Co Cork' OR Property.PropertyCounty='Co Clare' OR Property.PropertyCounty='Co Kerry' OR Property.PropertyCounty='Co Tipperary' OR Property.PropertyCounty='Co Limerick' ); Result = 103 Query 2 - Views For February SELECT COUNT(Views.ViewId) AS 'February Munster Views' FROM Views INNER JOIN Property ON Views.PropertyId=Property.PropertyId WHERE Views.ViewsDate LIKE '2015-02-%' AND Property.PropertyPrice BETWEEN "800" AND "1000" AND (Property.PropertyCounty='Co Waterford' OR Property.PropertyCounty='Co Cork' OR Property.PropertyCounty='Co Clare' OR Property.PropertyCounty='Co Kerry' OR Property.PropertyCounty='Co Tipperary' OR Property.PropertyCounty='Co Limerick' ); Result = 274 Is there any way to just return 171 as a result with a column title of "Increase"? I could of course just do the work in Java or PHP but I'd like to know if it's possible just using an SQL statement? Thanks A: If I'm understanding the question correctly, just put each query in parentheses and subtract them like so: SELECT (query1) - (query2) AS Increase [see https://stackoverflow.com/questions/1589070/subtraction-between-two-sql-queries]
High
[ 0.669975186104218, 33.75, 16.625 ]
Bloomberg

Bloomberg | Quint is a multiplatform, Indian business and financial news company. We combine Bloomberg’s global leadership in business and financial news and data, with Quintillion Media’s deep expertise in the Indian market and digital news delivery, to provide high quality business news, insights and trends for India’s sophisticated audiences.

The $1.9 billion Invesco Variable Rate Preferred ETF, ticker VRP, which tracks an index of variable- and floating-rate preferred stocks, has had two large block trades since Powell’s seemingly dovish remarks. Nearly 4.3 million shares worth more than $101 million sold at 1:17 p.m. in New York on Wednesday, and an additional 4.6 million shares worth nearly $110 million printed Thursday morning.

Floating-rate securities help mitigate interest rate risk. And by all appearances, at least one investor is ditching the strategy as the possibility of long-term rising rates seems to have diminished.

Two of VRP’s largest holders include Charles Schwab Corp. and Bank of America Corp., according to Bloomberg data based on filings from Sept. 30. As of the end of the third quarter, Charles Schwab owned 12.3 million shares and Bank of America held close to 5.3 million.

In a speech on Wednesday, Powell said that the Fed had brought interest rates to “just below” the range of neutral estimates. That seemed to be a reversal from his comments in October, when he said rates were a “long way” from neutral.

“As we get closer to year-end, most likely an investor is just cutting their losses,” said Mohit Bajaj, director of exchange-traded funds at WallachBeth Capital. “I wouldn’t be surprised if some of those assets just went into shorter duration Treasuries.”

VRP is on track to see its worst month of outflows since December. Investors have pulled close to $126 million from the fund so far in November and assets have declined to their lowest level since November 2017. The fund, which is trading lower for a third straight day, is down 2.8 percent this month.

The Vanguard Short-Term Treasury ETF, known by its ticker VGSH, saw a huge block trade at 10:04 a.m. in New York, less than a minute after the VRP trade. More than 9.6 million shares of the fund worth around $577 million have traded Thursday.

“The notional was much larger on the VGSH, but I wonder if part of the VRP went to that,” said Bajaj. “The prints happened very close together. They could be all from the same person, especially following the Fed’s comments.”
Mid
[ 0.616438356164383, 33.75, 21 ]
Q: Identify a quadric

Could you tell me how to identify a given quadric? Given a conic section, I should find an orthonormal affine frame in $\mathbb{R}^2$ (with standard dot product) in which the equation has a canonical form. Could you solve, for example, $\{(x,y) \in \mathbb{R}^2 : 73x^2 + 72xy + 52y^2 - 220x - 40y - 2300=0\}$? Or $x^2 - 6xy + 9y^2 + 2x - 5y -1=0$? Thank you. I would really appreciate a thorough explanation, because Kostrikin gives only the answers - there are no full solutions there.

For the second example I thought I could begin with the basis $(0,1), (1,0)$:

$(x-3y)^2 + 2x-5y-1=0$

We set $\alpha = x-3y, \ \ \beta=y$, so $x=\alpha + 3\beta$.

$(0,0) + (x(1,0) + y(0,1)) = (0,0) + ((\alpha + 3\beta)(1,0) + \beta(0,1)) = (0,0) + \alpha(1,0) + \beta(3,1)$

So we have

$\alpha^2 + 2 \alpha + \beta -1 = 0$

$(\alpha + 1)^2 + \beta -2 = 0$

Then we set $\alpha_1 = \alpha + 1$, $\beta_1 = \beta - 2$.

The problem is that I don't know how to make the basis stay orthonormal.

A: This solution uses projective geometry, in particular the fact that there is a unique non-degenerate conic up to projective equivalence in the real projective plane, and that when embedding this into $\overline{\mathbb{R}^2}$, the type of conic it corresponds to in the restriction to $\mathbb{R}$ is characterized by how many ideal points it contains. We use the homogeneous coordinates in $\overline{\mathbb{R}^2}$ such that ideal points have $x_3=0$.

Let $q(x)=0$ be the homogeneous equation of a projective conic in $\overline{\mathbb{R}^2}$, where $q$ is a quadratic form with $3\times 3$ symmetric matrix $M=(a_{ij})$. We show that whether the conic is an ellipse, a parabola, or a hyperbola in the restriction to $\mathbb{R}$ is determined by the sign of the upper-left $2\times 2$ minor of $M$.

Consider the ideal points of this conic: such points have $x_3=0$, and without loss of generality we may consider the representative vectors on the hyperplane $x_2=1$. Then $$ q(x^*) = \begin{bmatrix} x_1 & 1 & 0 \end{bmatrix} \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ a_{12}&a_{22}&a_{23} \\ a_{13}&a_{23}&a_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ 1 \\ 0 \end{bmatrix} = a_{11}x_1^2+2a_{12}x_1+a_{22}.$$ The number of solutions to this quadratic is found by considering the sign of the discriminant $(2a_{12})^2-4a_{11}a_{22}$. Notice that this is precisely $-4$ times the upper-left $2\times 2$ minor of $M$. Explicitly: if the discriminant is positive, then there are two solutions to $q(x^*)=0$; if the discriminant is negative, there are none, and if it is zero, there is one. Since these solutions are the ideal points in the conic, this can be rewritten in the desired form: The conic is an ellipse precisely when the minor is positive, a parabola precisely when the minor is zero, and a hyperbola precisely when the minor is negative.
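Applying the criterion to the two examples above (reading off $a_{11}$, $a_{12}$, $a_{22}$ from $a_{11}x^2+2a_{12}xy+a_{22}y^2+\dots=0$): for $73x^2 + 72xy + 52y^2 - 220x - 40y - 2300=0$ we have $$a_{11}a_{22}-a_{12}^2 = 73\cdot 52 - 36^2 = 3796 - 1296 = 2500 > 0 \ ,$$ so the first conic is an ellipse, while for $x^2 - 6xy + 9y^2 + 2x - 5y - 1=0$ we get $$a_{11}a_{22}-a_{12}^2 = 1\cdot 9 - (-3)^2 = 0 \ ,$$ so the second one is a parabola, consistent with the perfect square $(x-3y)^2$ found above.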
Mid
[ 0.5958549222797921, 28.75, 19.5 ]
Title: Far Cry 3 | Publisher: Ubisoft | Platform: PS3 | Genre: Shooter | Release Date: 12/4/2012

Overview: NOTE: Rental of this title does not include the Uplay Passport required by Ubisoft for access to online features and game modes. The single-use code is included with purchase and is also available for purchase separately via PSN. A limited multiplayer trial is available via your PSN ID while renting this title.

You’re trapped on an island where the natives are not only restless, but also crazy in the head and armed to the teeth. The best way to find your way off? Shoot first and ask for directions later. Make your escape by taking the fight to two lawless factions – crazy warlords and ruthless rebels – in environments that range from mountain peaks to swampy island bogs. The wilds of the mysterious island hold secrets that might be worse than the people who call it home. Search for ancient relics, hunt animals, and travel by land, sea, and air in a single-player campaign or a co-op split-screen romp for up to four players online.
Low
[ 0.52801724137931, 30.625, 27.375 ]
Maui Brewing Hires National Sales Manager

Amid construction of a new brewing facility that will double its current capacity, Maui Brewing Co. has added to its executive team. Last week, the company announced the hiring of Peter Scheider as its new National Sales Manager.

Scheider, a former off-premise sales representative and territory manager with Odell Brewing Co., officially joined Maui Brewing in August and has spent the last two months meeting with the company’s wholesale partners. While Maui is headquartered in Hawaii, Garrett Marrero, Maui’s founder, said Scheider will be based out of the continental U.S.; however, a specific location has not yet been determined. Outside of Hawaii, Maui beer can be found in 10 mainland states and internationally.

The company has steadily grown its sales over the past three years, growing production from 17,265 barrels in 2011 to an expected 24,000 barrels this year. Maui Brewing’s new brewery will have an initial capacity of 50,000 barrels and is scheduled to open in June of 2014. The full press release is below.

Lahaina, Hawaii – Maui Brewing Co. (MBC) is elated to announce another addition to the “family”: Peter Scheider. Pete is joining MBC as the National Sales Manager. Peter started as the draft technician for the Odell Brewing Company in 2004 and then quickly moved on to run the off-premise sales route for Fort Collins, CO. After a year of off-premise work, Pete was moved to Western Slope Territory Manager, where he represented Odell Brewing in Boulder as well as several other mountain towns including Breckenridge, Vail, Aspen, Copper Mountain and Winter Park. Peter grew up on Long Island and later moved to Tennessee for high school and college. College was interrupted to follow the band Widespread Panic around the country until he found himself in Colorado for a concert – he never looked back. Connections made in Colorado eventually led to the job with Odell Brewing Co.

Maui Brewing Co. is a craft brewery based in Maui, HI. As the largest authentic Hawaiian brewery, it currently has one brewery in Lahaina and one brewpub in Kahana that creates more than 40 different styles on a rotating basis. In 2005 Maui Brewing Co. produced 400 barrels from the single brewpub, and it expanded into an additional brewery location in 2007, producing nearly 20,000 barrels in 2012. MBC has remained consistent in the vision, “Handcrafted Ales & Lagers Brewed with Aloha”. This means respect for the environment, the community, its people, and company ethics are considered in every high-quality craft beer brewed. The beers have been recognized worldwide for quality and innovation, winning more than 100 medals in a short history.

Maui Brewing Co. is currently constructing a state-of-the-art brewery in Kihei, to which production will move. This will help meet current demand and give the ability to open additional markets. The goal is to be brewing, drinking, and shipping beer from this new brewery in June 2014.

Founded in 2005, Maui Brewing Company is Hawaii's largest craft brewery, operating 100% in Hawaii. MBC is based on Maui, with its production brewery and tasting room in Kihei, as well as pubs in Kahan...
Mid
[ 0.545258620689655, 31.625, 26.375 ]
Primary culture of adult rat liver cells. I. Preparation of isolated cells from trypsin-perfused liver of adult rat. Isolated hepatic cells from adult rats were prepared by perfusing the livers with trypsin. The highest yield of viable cells was obtained by perfusing the liver with 0.1% trypsin, pH 7.0, at 37 degrees C for 30 min. Following this treatment about 70% of cells excluded trypan blue. The isolated cells contained many binucleate cells. Between 60 and 70% of DNA present originally in the liver was recovered from the isolated hepatic cells, which had higher glucose 6-phosphatase activity than the liver. Thus the resulting cell population seems to be rich in hepatocytes. The isolated hepatic cells, however, lost some of their cellular proteins such as alanine and tyrosine amino-transferases. It was suggested that the membranes of isolated hepatic cells might be damaged by both enzymatic digestion and mechanical destruction.
Mid
[ 0.6117647058823531, 32.5, 20.625 ]
Melisandre, otherwise known as "The Red Woman" or "The Red Witch" (and several other names by Shireen Baratheon fans that I won't mention), is one of the most loathed characters on Game of Thrones, but lately fans have had no choice but to put their faith in her, as the fate of everyone's favourite, Jon Snow, lies in her evil little hands. Back to reality though, Carice van Houten, the actress behind Melisandre, couldn't be any more likeable. This sexy Dutch actress who's pushing 40 (unbelievable, right?) is known for a string of roles including Valkyrie, Black Book and Repo Men, and has played alongside big shots like Bill Nighy, Jude Law, Eddie Redmayne and Tom Cruise. Want to know a little more about this little vixen from the Netherlands? Well, you've come to the right place. Some spoilers from the Game of Thrones series will make it on here, so if you're still not up to date with the show, be careful not to read any further. Here are 10 things you probably never knew about Carice van Houten...
Mid
[ 0.585263157894736, 34.75, 24.625 ]
13 Best Hotels in Houston

Many travelers flock to the great city of Houston, so there is no shortage of wonderful lodging options perfect for the businessman or the honeymooning couple. The downtown hotels listed below are all luxury and boutique hotels that offer great amenities in perfect locations. Enjoy all that Houston has to offer by staying in one of these 13 best hotels in Houston.

If you're looking for a secluded oasis in the middle of downtown Houston, The Houstonian Hotel is perfect for you. This hotel, club and spa is located on 18 acres, offering a peaceful setting nestled in nature. You don't even have to leave the property to have a good time. Visit the Trellis Spa for relaxing spa treatments, or dine at one of the four on-site restaurants and eateries.
Low
[ 0.45278969957081505, 26.375, 31.875 ]
Q: Viewport for iPad portrait [only]

I have built a responsive website and it encounters a problem while rendering in portrait orientation on iPad, i.e. it doesn't fit in correctly. I have tried adjusting the viewport meta's parameter values, but that also affects the whole rendering, including on mobile. I used the following viewport meta in my website:

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />

A: I had a similar issue just now, on a site that is 1550px wide on desktop but only 880px on mobile. Things were working great with

<meta name="viewport" content="width=880px, initial-scale=1.0, user-scalable=1;" />

combined with

<link rel="stylesheet" media="all" href="/css/base.css" />
<link rel="stylesheet" media="(max-width:880px)" href="/css/mobile.css" />

(mobile.css readjusts some element widths to fit nicely into the 880px mobile layout.)

Things looked good until I tested it on an iPad in the iOS Simulator. In portrait things looked alright, but in landscape orientation some elements (specifically those with width: 100%) adjusted to the viewport width, while some didn't (that one element with width: 1550px). This meant that when scrolling right (or zooming out) to view the entire 1550px element, the elements with width: 100% were left dangling from the left side, only about half as wide as they should be.

The solution was far from obvious, but here's how I solved it:

base.css

body{
    width: 100%;
    min-width: 1550px;
}

mobile.css

body{
    min-width: 100%;
}

This explicitly sets the minimum width of the body element to 1550px for all devices wider than 880px, including tablets that take the viewport meta tag into account. When viewed on a mobile device with a width less than 880px, the width of the body element is reset to simply 100%, i.e. the viewport width.

Hope this will help someone out there struggling with different layouts for different devices.
Mid
[ 0.614427860696517, 30.875, 19.375 ]
Students at Berkeley High School walked out of class last week to protest campus administrators' response to allegations of sexual assault. It all began after the names of boys accused of assault started appearing on the wall of a girls' restroom. Around the same time, an unnamed student filed a lawsuit against the school, alleging that her sexual assault case was mishandled. Student organizers plan to present a list of demands to the school board on Wednesday. And in the era of both the #MeToo movement and student protests, organizers hope that these policy changes will change the culture of their school.
Mid
[ 0.646511627906976, 34.75, 19 ]
Disability pensions in relation to stroke: a population study. This study aimed to establish prevalence levels of disability pensions among stroke patients within a national population. From a Danish national register of hospitalizations, 72,673 patients were identified who had a discharge diagnosis of stroke between the years 1979-1993 inclusive and were of pensionable age during that period. These patients were then screened in registers for death during the period 1979-1993 and for the award of disability pensions between the years 1979-1995. A total of 19,476 (27%) patients had received a pension at some level. Being in possession of a disability pension prior to stroke (n = 8,565, 12%), rarely at the highest level, was not associated with elevated risk for stroke, or with elevated stroke mortality. It was, however, associated with a greater mortality subsequent to stroke. Disability pensions awarded following stroke (n = 10,564, 15%), often at the highest level, were awarded equally to males and females in all age groups, but most commonly (ca. 50%) at age 50-59. Disability pension awards were also strongly related to duration of hospitalization. Among stroke sufferers hospitalized for over 90 days, the proportion ultimately awarded a disability pension rises to over 80%. The results show high levels of disability pension awards to relatively young stroke patients, probably reflecting pessimism concerning ability to return to employment in such patients. More recent development of stroke units and post-acute rehabilitation programmes may justify greater optimism.
High
[ 0.672941176470588, 35.75, 17.375 ]
Q: STM32F407 timers with hall encoders

I'm a bit unsure what's the best approach to the problem given my knowledge of the STM32. I want to measure the speed and position of a motor with an integrated hall encoder of 6400 rising/falling edges per rotation, separated into two channels (one channel gives 3200 rising/falling edges). What's the best way to do it?

The thing is... I have 4 motors to measure. I considered many options, but I would like one that only generates interrupts when the position data is already known (basically, so I don't increment a position variable myself at each pulse, but instead let a timer do it for me). From what I know, a few timers support a mode called "Encoder mode". I don't know the details about this mode, but I would like (if possible) to be able to calculate my speed at a fixed interval (say around 20ms). Is it possible in encoder mode, with one timer, to know both the rising/falling edge count (which I guess would be in the CNT register) and have it trigger an interrupt every 20 ms, so that I can divide the CNT register by 20ms to get the count/sec speed within the ISR?

The other option I have is to count with Input Capture direct mode with two channels on each timer (one for each motor), and have another timer with a fixed period of 20ms, and calculate all the speeds of the 4 motors there. But it requires 5 timers... Failing that, is there a way DMA could help to keep it to 4 timers? For example, can we count with DMA? Thanks!

A: The encoder interface mode on the STM32F407 is supported on timers 1 & 8 (Advanced Control timers - 16 bit) and timers 2 to 5 (General purpose timers - 16/32 bit). Timers 9 to 14 (also General purpose) do not support quadrature encoder input. It is important to note that in this mode the timer is operating as a counter rather than a timer. The quadrature input allows up/down counting depending on the direction, so that it will provide relative position. Note that if your motor will only ever travel in one direction, you do not need the encoder mode; you can simply clock a timer from a single channel, although that will reduce the resolution significantly, so accuracy at low speeds may suffer.

To determine speed, you need to calculate change in relative position over time. All ARM Cortex-M devices have a SYSTICK timer which will generate a periodic interrupt, and you can use this to count time. You then have two possibilities (sketched in the code after this list):

1. read the encoder counter periodically, whereby the change in count is directly proportional to speed (because the change in time will be a constant),

2. read the encoder aperiodically and calculate change in position over change in time.

The reload value for the encoder interface mode is configurable; for this application (speed rather than position), you should set it to the maximum (0xffff or 0xffffffff) since it makes the arithmetic simpler as you won't have to deal with wrap-around (so long as it does not wrap around twice between reads).
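As a rough illustration of the timer setup (this part is a sketch, not from the reference manual text above; register and bit names follow the CMSIS headers for the STM32F4, and routing the encoder A/B signals to TIM2_CH1/CH2 through the GPIO alternate-function mux is assumed to be done elsewhere):

/* Minimal encoder-mode setup sketch for TIM2 (one of the 32 bit timers) */
RCC->APB1ENR |= RCC_APB1ENR_TIM2EN ;                 // enable the TIM2 peripheral clock
TIM2->CCMR1 = TIM_CCMR1_CC1S_0 | TIM_CCMR1_CC2S_0 ;  // CC1S=01, CC2S=01: TI1/TI2 as inputs
TIM2->SMCR  = TIM_SMCR_SMS_0 | TIM_SMCR_SMS_1 ;      // SMS=011: encoder mode 3, count on both edges
TIM2->ARR   = 0xFFFFFFFF ;                           // maximum reload, as recommended above
TIM2->CNT   = 0 ;
TIM2->CR1  |= TIM_CR1_CEN ;                          // start the counter

With the timer configured this way, the getEncoderCount() helper used in the snippets below reduces to reading TIM2->CNT.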
For the aperiodic method, and assuming you are using timers 2 to 5 in 32 bit mode, the following pseudo-code will generate speed in RPM for example:

int speedRPM_Aperiodic( int timer_id )
{
    int rpm = 0 ;

    // Last count/timestamp seen for each of the four encoder timers
    static struct
    {
        uint32_t count ;
        uint32_t time ;
    } previous[] = {{0,0},{0,0},{0,0},{0,0}} ;

    if( timer_id < sizeof(previous) / sizeof(*previous) )
    {
        // current minus previous: unsigned modular arithmetic makes the
        // delta correct across a single counter wrap-around
        uint32_t current_count = getEncoderCount( timer_id ) ;
        int delta_count = current_count - previous[timer_id].count ;
        previous[timer_id].count = current_count ;

        uint32_t current_time = getTick() ;
        int delta_time = current_time - previous[timer_id].time ;
        previous[timer_id].time = current_time ;

        rpm = (TICKS_PER_MINUTE * delta_count) /
              (delta_time * COUNTS_PER_REVOLUTION) ;
    }

    return rpm ;
}

The function needs to be called often enough that the count does not wrap around more than once, and not so fast that the count is too small for accurate measurement. This can be adapted for a periodic method where delta_time is fixed and very accurate (such as from the timer interrupt or a timer handler):

int speedRPM_Periodic( int timer_id )
{
    int rpm = 0 ;
    static uint32_t previous_count[] = {0,0,0,0} ;  // static, so deltas persist between calls

    if( timer_id < sizeof(previous_count) / sizeof(*previous_count) )
    {
        uint32_t current_count = getEncoderCount( timer_id ) ;
        int delta_count = current_count - previous_count[timer_id] ;
        previous_count[timer_id] = current_count ;

        rpm = (TICKS_PER_MINUTE * delta_count) /
              (SPEED_UPDATE_TICKS * COUNTS_PER_REVOLUTION) ;
    }

    return rpm ;
}

This function must then be called exactly every SPEED_UPDATE_TICKS.

The aperiodic method is perhaps simpler to implement, and is good for applications where you want to know the mean speed over the elapsed period. It is suitable for example for a human readable display that might be updated relatively slowly.

The periodic method is better suited to speed control applications where you are using a feed-back loop to control the speed of the motor. You will get poor control if the feedback timing is not constant.

The aperiodic function could of course be called periodically, but it has unnecessary overhead where delta time is deterministic.
Mid
[ 0.606683804627249, 29.5, 19.125 ]
Cooling System Permits Effective Transcutaneous Ultrasound Clot Lysis In Vivo Without Skin Damage. Previous in vivo studies have shown that transcutaneous ultrasound enhances clot dissolution in the presence of either streptokinase or microbubbles. However, ultrasound-induced skin damage has been a major drawback. The objective was to evaluate the effect of a cooling system to prevent the skin damage that has heretofore been associated with transcutaneous low-frequency, high-intensity ultrasound clot dissolution. After thrombi were induced in both iliofemoral arteries in 15 rabbits, streptokinase (25,000 U/kg) was given intravenously and dodecafluoropentane was injected slowly (2 mL/15 min) through an infusion catheter into the abdominal aorta. One iliofemoral artery was randomized to receive ultrasound treatment, and the contralateral artery was treated as a control (receiving streptokinase and dodecafluoropentane alone). In six rabbits (group 1), the skin below the ultrasound transducer was protected by the use of a balloon cooling system, and in the other nine rabbits (group 2), ultrasound was used without a cooling system. Seven of nine (78%) arteries treated without the cooling system, and six of six (100%) arteries treated with the cooling system were angiographically recanalized after ultrasound + streptokinase + dodecafluoropentane treatment. Thermal damage was present in the skin and soft tissues of all nine rabbits treated without a cooling system. However, the skin and soft tissues were grossly and histologically normal in the six rabbits in which the transcutaneous ultrasound was used with the cooling system. Low-frequency, high-intensity ultrasound energy can be delivered transcutaneously for clot dissolution without concomitant tissue damage when coupled with the use of a cooling system to prevent thermal injury.
High
[ 0.6717948717948711, 32.75, 16 ]
A memorial continues to grow for the two young men who were killed in Saturday’s single-vehicle crash in Kitchener. It happened on Robert Ferrie Drive at Reynolds Court in the Doon South neighbourhood just after 4:00 a.m. RIP and condolence messages about the two men have been posted on Facebook by friends and family. The passenger, 19-year-old Travis Brown, a Wilfrid Laurier student, and the driver, 24-year-old Jamie Martin, a young father, died as a result of the crash. Both men were from Kitchener. Police say one of the occupants died at the scene while the other was airlifted to hospital, where he was pronounced dead. The Kitchener men had another friend in the car with them when they crashed into a light pole. Police say he suffered minor injuries. Police are still investigating what caused the crash but say speed is likely a factor. Robert Ferrie Drive was closed for nine hours on Saturday.
Mid
[ 0.590163934426229, 36, 25 ]
Thursday, July 26, 2012

A traditional leader in Zimbabwe says he has temporarily shelved plans to conduct a witch hunt to solve the mystery of a whole village of women who woke up without their knickers. Chief Njelele of Gokwe abandoned the plan after one of his aides was involved in a bizarre mishap – spreading a fresh wave of terror in the troubled community. Local headman Pauro had gathered villagers to discuss plans to bring a “prophet” to smoke out the suspected wizard who stole the panties when a huge owl swooped just feet away and grabbed a mature male dog with its claws before disappearing in the distance. Owls are associated with witchcraft and evil in superstitious Africa. Chief Njelele said he feared whoever was behind the mysterious disappearance of the 26 undergarments – which were later found in a heap in the woods – was sending a warning. “I had tasked the two village heads, Pauro and Charuseka, to meet their subjects with the aim of inviting a traditional healer to cleanse their areas,” the chief said. “I then saw village head Pauro coming to my homestead with more strange news that a huge owl came flying from nowhere, picking up a male dog at his homestead and flying away while they were at a meeting. It’s mind-boggling what is going on in the area.” Now Chief Njelele says he is gathering a panel of wise men, including other traditional leaders from neighbouring communities, to try and bring peace to his troubled community. Chiefs Nemangwe and Chireya will arrive for consultations this week. According to Chief Njelele, on July 11 “the majority of women” in two local communities under village heads Pauro and Charuseka went to sleep with their panties on – but mysteriously woke up in the nude. Seventeen women later positively identified their underwear in the presence of the police. The chief explained: “Some have burnt the recovered panties while others said they would perform some rituals before disposing of them. I am keeping the remaining nine at my own risk because, as the leader, there is nothing I can do.” Chief Njelele said he had received calls from women’s campaign groups who were planning a visit to the area, with plans to provide counselling and other support to the affected women. Gokwe police said that following discussions with the local leaders, they had decided to leave the community to find a traditional solution to the problem.
Low
[ 0.46724890829694304, 26.75, 30.5 ]
Hi Petr,

Is there a way to distinguish within LuaMacros between the ENTER key and the NUMPAD_ENTER key? My destination software considers the first one to be "RETURN" and the second one "ENTER", which gives me the ability to assign two different behaviors. But when I hit the corresponding NUMPAD_ENTER key, LuaMacros sends what is detected as "RETURN", not "ENTER" as it should. Do you know how to address this? Thanks!

Lulu

Now I can see the picture even in your original post. So we talk about the input side. If you receive the enter key from both keys then no - LuaMacros can't distinguish them now. Someone was asking here in the forum to make macro triggers recognize virtual key codes instead of their representation by the keyboard's regional settings. That would probably solve this issue as well. Feel free to create a feature request issue here: https://github.com/me2d13/luamacros/issues

Callback function enhanced in LuaMacros version 0.1.1.9, released on July 12th 2017. Actually the keycode is a virtual key code, but Windows sends the value 13 for both enters. What is different is the value of the "Flag" attribute, and this value can now be passed to the callback function (as an additional argument). Play with the following code - I was able to recognize both enters now.
High
[ 0.673575129533678, 32.5, 15.75 ]
Elucidation of the small RNA component of the transcriptome. Small RNAs play important regulatory roles in most eukaryotes, but only a small proportion of these molecules have been identified. We sequenced more than two million small RNAs from seedlings and the inflorescence of the model plant Arabidopsis thaliana. Known and new microRNAs (miRNAs) were among the most abundant of the nonredundant set of more than 75,000 sequences, whereas more than half represented lower abundance small interfering RNAs (siRNAs) that match repetitive sequences, intergenic regions, and genes. Individual or clusters of highly regulated small RNAs were readily observed. Targets of antisense RNA or miRNA did not appear to be preferentially associated with siRNAs. Many genomic regions previously considered featureless were found to be sites of numerous small RNAs.
High
[ 0.6756393001345891, 31.375, 15.0625 ]
---
abstract: |
    We investigate the instanton dynamics of asymptotically safe and free quantum field theories featuring respectively controllable ultraviolet and infrared fixed points. We start by briefly reviewing the salient points about the instanton calculus for pure Yang Mills (YM) and QCD. We then move on to determine the role of instantons within the controllable regime of the QCD conformal window. In this region we add a fermion-mass operator and determine the density of instantons per unit volume as a function of the fermion mass. Finally, for the first time, we extend the instanton calculus to asymptotically safe theories.\
    \[.3cm\] [*Preprint: CP$^3$-Origins-2018-08 DNRF90,* ]{}
author:
- Francesco Sannino
- Vedran Skrinjar
bibliography:
- 'safe\_instantons\_biblio.bib'
title: Safe and free instantons
---

Introduction
============

The standard model and its four dimensional extensions are described by gauge-Yukawa theories; it is therefore paramount to understand their dynamics. Of special interest are theories that are fundamental according to Wilson [@Wilson:1971bg; @Wilson:1971dh], meaning that they are well defined at arbitrarily short distances. Asymptotically free  [@Gross:1973ju; @Politzer:1973fx] and safe [@Litim:2014uca] quantum field theories are two classes of fundamental quantum field theories. For the former, at extremely short distances, all interactions vanish, while for the latter the interactions freeze. In theories with multiple couplings some can be free and others can be safe. Although asymptotic freedom has a long and successful history, the discovery of four dimensional controllable asymptotically safe quantum field theories is recent [@Litim:2014uca; @Litim:2015iea]. This result has enabled novel dark and bright extensions of the standard model [@Sannino:2014lxa; @Abel:2017ujy; @Abel:2017rwl; @Pelaggi:2017wzr; @Mann:2017wzh; @Pelaggi:2017abg; @Bond:2017wut].

The infrared dynamics of fundamental field theories is extremely rich and can entail confinement and/or chiral symmetry breaking or large distance conformality. This depends on the field content of the specific quantum field theory as well as on the presence and type of infrared relevant operators such as scalar and fermion masses. In particular, asymptotically free theories can develop an interacting infrared (IR) fixed point that in certain limits is perturbatively controllable, known as the Banks-Zaks (BZ) fixed point [@Banks:1981nn]. The full region in color-flavor space, for gauged fermion theories, where an IR fixed point is present is known as the conformal window; see [@Sannino:2009za] for an introduction and [@Pica:2017gcb] for a summary of recent lattice efforts. Recently, building on the large $N_f$ results of [@PalanquesMestre:1983zy; @Gracey:1996he; @Holdom:2010qs; @Pica:2010xq; @Shrock:2013cca], the concept of the conformal window has been extended to include the asymptotically safe region at large number of flavors for which asymptotic freedom is lost [@Antipin:2017ebo]. The first systematic study of exact constraints that a supersymmetric asymptotically safe quantum field theory must abide by, including a-maximisation [@Intriligator:2003jj] and collider bounds [@Hofman:2008ar], appeared in [@Intriligator:2015xxa], extending the results of [@Martin:2000cr]. Here it was also established that Seiberg’s SQCD conformal window [@Seiberg:1994pq] does not admit an asymptotically safe conformal region. This result is in stark contrast with the nonsupersymmetric case [@Antipin:2017ebo].
Building upon the results of [@Intriligator:2015xxa], in reference [@Bajc:2016efj] the first evidence for supersymmetric safety was uncovered within the important class of grand unified theories. The generalisation to different types of supersymmetric quantum field theories passing all known constraints appeared in [@Bajc:2017xwx].

Here we shall be concerned with generalising and applying the instanton calculus to gauge theories in the perturbative regime of the QCD conformal window as well as of controllable nonsupersymmetric asymptotically safe quantum field theories [@Litim:2014uca; @Litim:2015iea]. To keep the work self-contained we briefly review the instanton calculus for pure Yang Mills (YM) as well as QCD, including its large $N_c$ limit, in Section \[sec:review\]. Instantons for the QCD conformal window are introduced and discussed in \[sec:BZ\]. Here we will consider the two-loop corrected instantons that allow us to follow the perturbative RG flow deep in the infrared where a perturbative interacting IR fixed point occurs. We will then perform our analysis in the fermion-mass deformed theory and derive the main instanton features as a function of the fermion mass. For example, we shall compute the density of instantons per unit volume as a function of the fermion mass measured in units of the RG invariant scale. The latter separates the infrared interacting theory from the UV free fixed point. Finally we generalise the instanton calculus to safe rather than free theories in section \[SafeInstantons\]. Here we will consider again the fermion mass dependence that now, however, affects the infrared trivial fixed point. We will offer our conclusions in section \[conclusions\].

Instanton calculus review {#sec:review}
==========================

In quantum field theory (QFT) one aims at computing the partition function, $$\label{part_func_Z[j]} Z[\mathcal{J}]=\int\mathcal{D}{\phi}\, e^{\mathrm{i}S[{\phi};\lambda]+\mathcal{J}{\phi}} \ ,$$ where $S[{\phi};\lambda]$ is the sum of a classical action, a gauge-fixing action and a ghost action, depending on the fields ${\phi}$ and the couplings $\lambda$, and $\mathcal{J}$ is a source for $\phi$. If the action is non-integrable one usually attempts to solve the problem through perturbation theory, which amounts to expanding the action in powers of small coupling constants $\lambda$. Solutions of the classical theory corresponding to $S[{\phi};\lambda]$ are specific classical field configurations ${\bar{\phi}}$. Since the first variation of the action vanishes on these configurations, they represent stationary points, or extrema, of the action. The integrand on the right hand side (RHS) of (\[part\_func\_Z\[j\]\]) is clearly an oscillating function, and thus one may attempt to evaluate the integral by performing an expansion around the classical solution ${\bar{\phi}}$. Symbolically, we have $$\label{expand_Z} Z[\mathcal{J}]=\int\mathcal{D}{\phi}\ e^ {{\mathrm{i}} \left[ S[{\bar{\phi}}]+\frac{1}{2}{\phi}S^{(2)}[{\bar{\phi}}] {\phi}+ \mathcal{O}({\phi}^3) \right] +\mathcal{J}{\phi}} \ .$$ This is the core of the steepest descent method for addressing the issue of oscillating integrals. One defines the vacuum solution as the classical configuration that minimizes the energy functional (the Hamiltonian). In the case of (comparatively) simple QFTs there is just one vacuum state, and thus there is but a single field configuration ${\bar{\phi}}$ around which one should expand the partition function.
This is precisely the situation described by equation (\[expand\_Z\]). For Yang-Mills (YM) theories, often coupled to scalars or fermions, and occasionally coupled to gravity, the vacuum structure is more involved, and if one were to naively apply the above prescription several important phenomena would be unaccounted for, such as a deeper understanding of chiral symmetry breaking, the generation of the eta prime mass in QCD, etc. Let us therefore re-consider briefly the correct approach applicable to a generic QFT [@Shifman:2012zz; @Shuryak:1988ck; @Coleman1988; @Schafer:1996wv; @Vainshtein:1981wh].

We begin by Euclideanizing the QFT by performing the Wick rotation $t\rightarrow\tau=-\mathrm{i}t$. One should treat gauge fields and fermions with care during this procedure. The Euclidean action $S_E$ is a functional of Euclidean fields ${\phi}_E(x)$ living on a 4D Euclidean space described by coordinates $x=(x_1,x_2,x_3,\tau)$. When solving the equations of motion one has to set up the boundary conditions for $|x|\rightarrow\infty$ such that the action remains finite. Usually our conditions require ${\phi}\rightarrow\text{const}$ for $|x|\rightarrow\infty$. If the potential has only one extremum there is going to be a single vacuum solution (a constant field configuration in all of space) and therefore the naive perturbation theory described by (\[expand\_Z\]) is valid. If, however, the potential has more than one degenerate vacuum, then there exist classical solutions interpolating between these Euclidean vacua. These finite-action topologically-stable solutions to classical Euclidean equations of motion are called instantons or pseudoparticles [@Belavin:1975fg; @tHooft:1976snw]. Instantons are topologically stable in the sense that they cannot decay, as going from one such vacuum to another would require bridging an infinite energy barrier [^1]. It is now clear that the correct application of the steepest descent method to the Euclideanized version of (\[part\_func\_Z\[j\]\]) involves a summation over all the instanton configurations. Even though one does not find instantons as classical solutions to Lorentzian equations of motion, it is clear that the Lorentzian partition function can be obtained by Wick rotating the Euclidean partition function, and thus instantons have to be incorporated in the Lorentzian computation. Being interpreted as fields that interpolate between different vacua, instantons are crucial for understanding the rich vacuum structure of YM theories.

When discussing instantons the $SU(2)$ color group plays a special role, since $SU(N)$ instantons can be determined starting from the $SU(2)$ case [@Bernard:1977nr; @Shifman:2012zz]. Let us therefore assume for the moment that we have a Euclidean YM action, $$S[A]=\frac{1}{4}\int_x G_a^{\mu\nu}(A)G_{a\mu\nu}(A)$$ where $\int_x\equiv\int d^4x\equiv\int d^3xd\tau$, and $A_\mu^a$ is the gauge field. To find instanton solutions we require the action to be bounded, but rather than asking that $A_\mu^a(x)$ decays faster than $\nicefrac{1}{x}$ for $|x|\rightarrow\infty$, we require it to become a pure gauge, $$\label{inst_asympt_behavior} A_\mu\xrightarrow{|x| \rightarrow\infty}\mathrm{i}S\partial_\mu S^\dagger \ ,$$ where $S$ are $SU(2)$ matrices (not to be confused with the action) that depend on angles only. $SU(2)$ instantons can thereby be seen as maps from $SU(2)$ to itself. Such maps are classified by the third homotopy group and they fall into topologically distinct classes.
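For illustration, the simplest nontrivial example of such an angle-dependent matrix, with unit winding, is the standard hedgehog map $$S(x)=\frac{\tau+\mathrm{i}\,x_k\sigma_k}{|x|} \ , \qquad |x|=\sqrt{\tau^2+\vec{x}^{\,2}} \ ,$$ with $\sigma_k$ the Pauli matrices; this is the matrix whose pure-gauge form is approached at infinity by the unit-charge solution quoted below (see e.g. [@tHooft:1976snw; @Shifman:2012zz]).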
In the case of $SU(2)$ these are labelled by integer numbers, and members from different classes cannot be continuously mapped into each other [^2]. Instantons belonging to the same class are related by a gauge transformation. The integers labelling distinct topological classes of instantons can be thought of as topological charges. Furthermore, for a given instanton configuration the topological charge is given by $$\label{eq:topological-charge} n=\frac{g^2}{32\pi^2}\int_x G_a^{\mu\nu}\tilde{G}_{a\mu\nu},\hspace{1cm} n\in\mathbb{Z}$$ where $g$ is the gauge coupling. One can complete the square in the action as follows (suppressing indices), $$S=\frac{1}{4} \int_x GG = \int_x \left[\frac{1}{4} G\tilde{G}+\frac{1}{8}(G-\tilde{G})^2\right]=n\frac{8\pi^2}{g^2}+\frac{1}{8}\int_x(G-\tilde{G})^2 \ .$$ The action minimum for the instanton of topological charge $n$ clearly corresponds to the value[^3] $$\label{eq:finite-action-value} S|_{\text{n-instanton}}=n \frac{8\pi^2}{g^2}.$$ This is achieved when the field satisfies the self-duality condition, $G=\tilde{G}$. (For negative $n$ one instead completes the square with $(G+\tilde{G})^2$, so that in general $S\geq |n|\,8\pi^2/g^2$, with the bound saturated by anti-self-dual fields.) Using the Bianchi identities, one can show that a field satisfying the self-duality condition is on-shell, i.e. it automatically satisfies the equations of motion. Computing the value of the action on an instanton solution constitutes the first important result of the instanton calculus.

Starting from the asymptotics (\[inst\_asympt\_behavior\]), and assuming the same directional dependence of the solution at all spacetime points, one can write an ansatz for the instanton. Requiring absence of singularities at the origin of space and self-duality of the solution suffices to uniquely fix the instanton (up to collective coordinates) [@Shifman:2012zz]. This is the famous BPST instanton (the $SU(2)$ instanton with charge $n=1$) [@Belavin:1975fg]. Explicitly, $$\label{BPST solution} A_\mu^a=\frac{2}{g}\eta_{a\mu\nu}\frac{(x-x_0)^\nu}{(x-x_0)^2+\rho^2} \ .$$ The above expression for the BPST instanton is in the so-called regular gauge. The parameter $\rho$, the instanton size, is the aforementioned integration constant and it is one of the instanton collective coordinates. The remaining collective coordinates are the instanton position in spacetime, $x_0$, and its orientation in color space. Finally, $\eta_{a\mu\nu}$ are known as ’t Hooft symbols [@tHooft:1976snw].

The generalisation to simple Lie algebras is obtained directly from the $SU(2)$ BPST instanton exploiting the fact that any $SU(N)$ group contains $SU(2)$ subgroups. To deduce the $SU(N)$ instantons one simply embeds the BPST solution into $SU(N)$. This choice of embedding is ambiguous, but the most common choice is the so-called minimal embedding. It consists in taking the $SU(N)$ generators in the fundamental representation, and taking the first three generators $T^1,...,T^3$ to be block-diagonal with the $SU(2)$ generators embedded in the upper-left corner. The $SU(N)$ BPST instanton is obtained by contracting the first three generators $T^a$, $a=1,2,3$ with the BPST solution (\[BPST solution\]). One can analogously obtain $SU(N)$ instantons with charge $n\neq 1$ from other $SU(2)$ solutions. This simple prescription works because the third homotopy group of $SU(N)$ is $\mathbb{Z}$ for all $N$, and with the minimal embedding each equivalence class of $SU(N)$ solutions contains a representative $SU(2)$ instanton.

The QCD story
-------------

Here we shall see that an instanton ensemble plays an important role in determining the structure of the QCD vacuum.
We will start by reviewing the construction of a partition function for such an ensemble. We begin with the famous result for the one instanton partition function which was given by ’t Hooft in 1976 [@tHooft:1976snw]. The vacuum-to-vacuum transition amplitude in the presence of a single instanton is given by the following 1-loop instanton calculus result for an $SU(N_c)$ pure Yang-Mills theory [@tHooft:1976snw; @Bernard:1979qt], $$\begin{aligned} \label{eq:one-instanton-density} W^{(1)}&=&\frac{4}{\pi^2} \frac{\exp\left(-\alpha(1)-2(N_c-2)\alpha(\nicefrac{1}{2})\right)}{(N_c-1)!(N_c-2)!} \int d^4x d\rho \rho^{-5} \left(\frac{4\pi^2}{g_0^2}\right)^{2N_c} \exp \left(-\frac{8\pi^2}{g_{1L}^2}\right) \\ \label{eq:one-instanton-density-compact} &\equiv &C_c \int d^4x d\rho \rho^{-5} \left(\frac{8\pi^2}{g_0^2}\right)^{2N_c} \exp \left(-\frac{8\pi^2}{g_{1L}^2}\right)\end{aligned}$$ The integral on the RHS is over the instanton size $\rho$, and its integrand is referred to as the instanton density. Note that the numerical factor $C_c$ depends only on the number of colors and it also contains the factor $2^{-2N_c}$. The above integral is IR divergent ($\rho\rightarrow\infty$, see (\[eq:mastereq-M\])) because of the running coupling in the exponent. Clearly one has to tame this behavior for the result to be meaningful.

If the Yang-Mills theory is coupled to $N_f$ Dirac fermions then, at one loop, they contribute via the fermion determinant to the above result. It is both possible and useful to separate the zero and non-zero fermionic modes. The non-zero modes contribute to the exponential as [@tHooft:1976snw], $$\label{eq:fermion-nonzero-modes} \exp\left[-\frac{2N_f}{3}\log(\rho / \rho_0)+2N_f\alpha(1/2) \right] \ ,$$ where the first term is the fermion contribution to the 1-loop running of the gauge coupling and $\alpha(x)$ is a function defined in [@tHooft:1976snw][^4]. Taking all the fermions to have the same mass $m$, the zero modes contribute a term $$\label{eq:ZeroModes} (m \rho)^{N_f}.$$ We can now generalise the result in (\[eq:one-instanton-density\]) to include fermions using the 1-loop running of the QCD gauge coupling, $$\label{eq:1Lbeta} \frac{8\pi^2}{g_{1L}^2}=\frac{8\pi^2}{g_0^2}-b \, \log(\rho/\rho_0) \ , \quad {\rm with } \quad b=\frac{11}{3}N_c-\frac{2}{3}N_f \ ,$$ and derive $$\begin{aligned} \label{eq:mastereq-M} W^{(1L)}&=& \frac{4}{\pi^2}\frac{\exp(-\alpha(1)+4\alpha(\nicefrac{1}{2}))}{(N_c-1)!(N_c-2)!} \exp(2(N_f-N_c)\alpha(\nicefrac{1}{2}))\times \nonumber \\ &\times & m^{N_f}(\frac{4\pi^2}{g_0^2})^{2N_c}\int d^4x d\rho \rho^{-5+N_f} \exp(-\frac{8\pi^2}{g_0^2}+(\frac{11}{3}N_c-\frac{2}{3}N_f)\log(\rho/\rho_0)) \\ \label{eq:mastereq-M-1} &=&C_{cf}\ m^{N_f}\int d^4x d\rho \rho^{-5+N_f}(\frac{8\pi^2}{g_0^2})^{2N_c} \exp(-\frac{8\pi^2}{g_{1L}^2}) \ .\end{aligned}$$ Note that once (\[eq:1Lbeta\]) is inserted, the integrand scales as $\rho^{\,b+N_f-5}$ at large $\rho$, a positive power for any $N_c\geq 2$, which makes the IR divergence of the $\rho$ integration explicit.

Besides suffering from the divergence of the instanton density for large instantons, the master equation (\[eq:mastereq-M\]) has another important feature. The zero mode contributions of (\[eq:ZeroModes\]) imply the vanishing of the whole amplitude as $m\rightarrow 0$. This was noted and thoroughly discussed in [@tHooft:1976snw], see also [@Shifman:1979uw; @Diakonov:1985eg]. A simple strategy to bypass this problem was initially given by [@Shifman:1979uw]. Their reasoning goes as follows. It is empirically known that the QCD vacuum is a medium in which many condensates form, so instead of studying a single instanton in isolation one should take into account the effect of the condensation phenomenon on the instanton density.
In particular, the authors focussed on the chiral condensate ${\langle\bar{q}q\rangle}$. At the time one could not determine the chiral condensate from first principles, so the authors employed its phenomenological value. Besides its relevance as an order parameter for the spontaneous breakdown of chiral symmetry (SBCS), one considers the chiral condensate as a dynamical fermion mass that should be used in the amplitude instead of the bare mass. Following [@Shifman:1979uw] we compute the effective quark mass in presence of a non-vanishing ${\langle\bar{q}q\rangle}$ condensate in QCD to be given by, $$\begin{aligned} m_{eff} &=&m-\frac{4\pi^2\rho^2}{N_c}\langle\bar{q}^Lq^R\rangle \\ \label{eq:Meffective} &=&m-\frac{2\pi^2\rho^2}{N_c}\langle\bar{q} (1+\gamma_5) q\rangle\end{aligned}$$ Crucially, in the case $m\rightarrow0$ the effective mass doesn’t vanish, meaning that the vacuum-to-vacuum transition amplitude in the presence of a single instanton is non-zero provided that the chiral condensate forms. In this way the authors of [@Shifman:1979uw] successfully pointed towards the physical mechanism responsible for resolving the issue with the zero mass limit. Let us now return to the other issue, the IR divergences of the instanton density. Conceptually, it is reasonable to expect that if QCD forms gluon condensates, then they should be described by a statistical ensemble of the instantons forming them. The early attempts in this direction imagined the QCD vacuum to be described by an instanton gas [@Callan:1977gz; @Vainshtein:1981wh]. This was demonstrated to be a poor description of the physical vacuum, since instantons were much more strongly interacting. The solution came in the form of Shuryak’s instanton liquid model in 1982 [@Shuryak:1982dp]. He had shown that a simple model of the instanton medium as a liquid with only two free parameters can effectively explain a number of nuclear physics observables. His model assumes that all instantons have the same size, $\bar{\rho}$, and he obtained the instanton size and the density of the instanton liquid from the empirical value of the gluon condensate. The approach thus doesn’t explain why the instanton density is a delta-like peak around some $\bar{\rho}$, but such a description has predictive power and seems to explain nuclear physics data well. The above ideas were developed more systematically within the mean field approximation by Diakonov and Petrov in 1983 and 1985, aiming at a description of an ensemble of instantons from first principles. Failure of the instanton gas picture had implied that the instanton interactions should be modelled even if the medium itself will turn out to be rather dilute. This was motivated by the expectation that the instanton interactions would remove the IR divergence. Because of the above, the authors in [@Diakonov:1983hh] introduced a modified variational procedure in an attempt to approximate the exact multi-instanton partition function. They applied their method to pure Yang-Mills theory and besides curing the IR problem they also successfully computed a number of physical observables. In a later work, [@Diakonov:1985eg], the method was extended to include gauged fermions. The central result of this paper is that in an instanton background fermions develop a momentum dependent effective mass which is non-vanishing in the zero momentum limit confirming the expectations of reference [@Shifman:1979uw] as summarised above. 
Before moving on to the large-$N_c$ theory we will comment on one more issue regarding the master equation (\[eq:mastereq-M-1\]). It follows from a 1-loop computation that the coupling in the exponential term is renormalized, but the one in the pre-exponential factor is not. In the literature, this problem is often addressed by recognizing that at two loops the pre-exponential factor gets renormalized [@Diakonov:1983hh], and thus one replaces the bare coupling by the 1-loop running coupling, and the 1-loop coupling by the 2-loop coupling. For completeness we also provide the standard result for the two-loop running coupling [@Caswell:1974cj; @Shifman:1979uw], $$\begin{aligned} \label{eq:naive-2-loop-running} \frac{8\pi^2}{g_{2L}^2}&=&\frac{8\pi^2}{g_0^2}- b \log{\rho/\rho_0}+\frac{b'}{b}\log(1-\frac{g_0^2}{8\pi^2}\log\rho/\rho_0) \ , \\ b'&=&\frac{51}{9}N_c^2-\frac{19}{3}N_f \ . \end{aligned}$$ Note that the behavior of the coupling given in (\[eq:naive-2-loop-running\]) is not the exact 2-loop one. In fact, this is only the leading UV contribution valid in the deep UV regime for the asymptotically free phase of QCD [^5]. We will elaborate more on this point in section \[sec:BZ\].

Large $N_c$ {#sec:pure-YM}
------------

Pure Yang-Mills theory at large-$N_c$ is an important step towards studying instantons in the conformal window as well as asymptotically safe instantons. In fact, many of the formulae derived in this subsection can be adapted to include the effects of fermions in these theories. Herein we briefly outline the variational approach of [@Diakonov:1983hh] and present their main results. We particularly focus on the large-$N_c$ limit following reference [@Schafer:2002af].

Assuming that the pure YM vacuum is given by a background gauge field configuration which consists of a large set of instantons, following [@Diakonov:1983hh], in the absence of exact results in pure YM theory, one approximates such a background by a sum of simple, localized 1-instanton solutions. Starting from such an ansatz the ground state can be derived by introducing a modification of Feynman’s variational principle. This consists in taking an action $S$, modifying it slightly to get an action $S_1$ such that it has a minimum on our ansatz field configuration, and then using the fact that $$\label{eq:variational_principle} Z\geq Z_1 e^{-\langle S-S_1 \rangle} \ ,$$ which follows from Jensen’s inequality, $\langle e^{-(S-S_1)}\rangle\geq e^{-\langle S-S_1\rangle}$. Here $Z$ is the partition function that we want to approximate using the variational principle, given by $$Z=\int \mathrm{D}\phi e^{-S[\phi]}\ ,$$ and $Z_1$ is defined analogously, with the action $S_1$. The expectation values $\langle\ .\ \rangle$ are taken with respect to the measure $\exp(-S_1)$.

Let us take the background field to be given by $\bar{A}=\sum_I A_I + \sum_{\bar{I}} A_{\bar{I}}$ where $I$ runs over the instanton configurations and $\bar{I}$ over anti-instantons. We may rewrite the Lagrangian as follows, $$\begin{aligned} -\frac{1}{4g^2}F^2(\bar{A})&=&-\frac{1}{4g^2}\left(\sum_{i=I,\bar{I}}F^2(A_i)+ F^2(\bar{A})-\sum_{i=I,\bar{I}}F^2(A_i)\right) \\ & \equiv & -\frac{1}{4g^2}\left(\sum_{i=I,\bar{I}}F^2(A_i)+U_{int}\right) \ ,\end{aligned}$$ where the first term is the Lagrangian of a non-interacting instanton gas, and the second term describes the interactions in the medium. From here on we use the notation $\nicefrac{1}{4g^2}F^2=\nicefrac{1}{4}G^2$.
Including the bosonic statistics factors $N_{\pm}$ in front of the partition function, normalizing both sides of (\[eq:variational\_principle\]) to the perturbation theory vacuum, and regularizing the determinants, at one loop order we obtain the following expression, $$\begin{aligned} \label{eq:RILM-partition-func-1} \left. \frac{Z}{Z_{ptb}}\right|_{reg, 1L} &\geq & \frac{1}{N_+!N_-!}\int\prod_i^{N_++N_-}d\gamma_i \ d(\rho_i) e^{-\beta(\bar{\rho})U_{int}(\gamma_i)} \\ \label{eq:RILM-partition-func-2} &\equiv & \frac{1}{N_+!N_-!}\int\prod_i^{N_++N_-}d\gamma_i \ e^{-E(\gamma_i)} \ . \end{aligned}$$ In this expression $\gamma_i$ represents the collective coordinates of the i-th pseudoparticle (see \[sec:review\]). $d(\rho)$ stands for the 1-instanton density (\[eq:one-instanton-density-compact\]), and we use the standard notation, $$\beta(\rho)\equiv8\pi^2/g^2(\rho)\ .$$ In the expression (\[eq:RILM-partition-func-1\]) $\beta(\rho)$ is renormalized by 1-loop determinants at a scale $\bar{\rho}$ corresponding to the average instanton size. In the second line, (\[eq:RILM-partition-func-2\]), we’ve introduced the compact notation, $$\label{RILM-energy} E(\gamma_i)=\beta(\bar{\rho})U_{int}(\gamma_i)-\sum_i \log d(\rho_i) \ .$$ If the medium is sufficiently dilute one can consider only two-particle interactions in the interaction term, all the other ones being subdominant [^6]. This is the key physical ingredient beyond the simple instanton gas model. The interaction potential has been determined in [@Diakonov:1983hh]. Integrating over the relative angle between two instantons in color space, and integrating over the instanton separation one obtains a remarkably simple expression, $$\begin{aligned} U_{int}^{2-body}(\rho_1,\rho_2)=\gamma^2 \rho_1^2 \rho_2^2\ ,\qquad \gamma^2=\frac{27\pi^2}{4}\frac{N_c}{N_c^2-1}\ ,\end{aligned}$$ where $\rho_{1,2}$ are the sizes of the two pseudoparticles, and the coupling $\gamma^2$ has the characteristic $1/N_c$ behavior. We may now use the variational principle. Assuming that the effect of the 2-body interactions can be well captured by a modification of the 1-instanton densities $d(\rho)$, we write, $$E_1(\gamma_i)=-\sum_I^{N_+} \log\mu_+(\rho_I)-\sum_{\bar{I}}^{N_-} \log\mu_-(\rho_{\bar{I}}) \ .$$ Substituting $E_1$ in place of $E$ in (\[eq:RILM-partition-func-2\]) we get, $$\label{eq:RILM-partition-function-3} Z_1=\frac{1}{N_+!N_-!}V^{N_++N_-}(\mu_+^0)^{N_+}(\mu_-^0)^{N_-} \ ,$$ where, $$\mu_\pm^0=\int_0^\infty d\rho \ \mu_\pm(\rho) \ .$$ To apply the variational principle to find the optimal value of $\mu(\rho)$ we start by evaluating $\langle E-E_1\rangle$ which enters in (\[eq:variational\_principle\]). First we express $\langle E-E_1\rangle$ in terms of, $$\label{eq:rho-square-bar} \overline{\rho_{\pm}^2}=\frac{1}{\mu_{\pm}^0}\int d\rho \ \rho^2 \mu_{\pm}(\rho) \ .$$ Next, we minimize (\[eq:RILM-partition-function-3\]) wrt $\mu_\pm$. There’s an arbitrary constant appearing in the optimal $\mu_\pm$, and if these are chosen equal then $\mu_+=\mu_-\equiv\mu$. Writing $N_++N_-=N$, we find the optimal $\mu$ to be, $$\label{eq:naive-optimal-mu} \mu(\rho)=d(\rho)\exp\left(-\frac{\beta \gamma^2 N}{V}\overline{\rho^2}\rho^2\right),$$ where $\beta\equiv\beta(\bar{\rho})=8\pi^2/g^2(\bar{\rho})$. 
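Note that with the 1-loop scaling $d(\rho)\propto\rho^{\,b-5}$ the normalization integral of (\[eq:naive-optimal-mu\]) is of Gaussian type, $$\int_0^\infty d\rho \ \rho^{\,b-5}\, e^{-\kappa \rho^2}=\frac{\Gamma(\nu)}{2\,\kappa^{\nu}} \ , \qquad \kappa\equiv\frac{\beta \gamma^2 N}{V}\overline{\rho^2} \ , \quad \nu=\frac{b-4}{2} \ ,$$ which shows explicitly how the interaction-induced Gaussian suppression renders the otherwise IR-divergent $\rho$ integration finite, and is the origin of the $\Gamma(\nu)$ appearing in (\[eq:avg-instanton-number\]) below.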
This can be reinserted in (\[eq:rho-square-bar\]) to give, $$\label{eq:rho-square-bar-1} (\overline{\rho^2})^2=\frac{\nu}{\beta \gamma^2 N/V}, \ \ \ \ \ \nu=\frac{b-4}{2}.$$ This expression can be further inserted in the optimal $\mu$, and $\mu^0$ can be easily found using the explicit form of the optimal $\mu$ and of the 1-instanton density. Finally we can determine the RHS of (\[eq:variational\_principle\]); see [@Diakonov:1983hh] for more details. Instead of keeping the number of pseudoparticles $N$ fixed we can work in the grand canonical ensemble. This allows us to find the average number of instantons in the medium by maximizing the RHS of (\[eq:RILM-partition-function-3\]) as a function of $N$. For the bosonic factors we set $N_\pm!=(N/2)!$ and use the Stirling approximation. This brings us to the following important expression for the average instanton number, $$\begin{aligned} \label{eq:avg-instanton-number} \langle N \rangle = V \Lambda_{YM}^4 \left(\Gamma(\nu) C_{cf} \tilde{\beta}^{2N_c} (\beta \gamma^2 \nu)^{-\nu/2}\right)^{\frac{2}{\nu+2}}\ ,\end{aligned}$$ where $\tilde{\beta}=8\pi^2/g_0^2$. Note that $\overline{\rho^2}$ enters this equation through $\beta=8\pi^2/g^2_{1L}(\overline{\rho^2})$, so (\[eq:rho-square-bar-1\]) and (\[eq:avg-instanton-number\]) should be solved simultaneously (consistently). The importance of the average number of instantons comes from the fact that it is related to the gluon condensate, the vacuum energy, and the topological susceptibility, and, in a theory with fermions, to the $U(1)$ axial anomaly. In terms of the number of instantons per unit volume the partition function takes the following simple form, $$\label{eq:RILM-partition-function} Z=\exp \left[ \frac{1}{2} (\nu+2) \langle N \rangle \right].$$ We can solve numerically for the expectation values of the instanton size and of the density of instantons in the vacuum. To do that we need to perform the aforementioned RG improvement by promoting $\tilde{\beta}$ to $\beta$ and $\beta$ to $8\pi^2/g^2_{(2L)}(\bar{\rho})$. Note that it is useful to introduce a free parameter $a$, called the fudge factor, in the log term of the 1-loop running coupling (\[eq:1Lbeta\]). The fudge factor essentially parametrizes the uncertainty on the actual confining scale $\Lambda_{YM}$. The numerical results are shown in Figure \[fig:rho-and-density\]. Even for the modest values of $N_c$ shown in the figure one already notices that the density of instantons increases as $\mathcal{O}(N_c)$, whereas the average instanton size is quite independent of $N_c$ and is always of $\mathcal{O}(1)$. ![\[fig:rho-and-density\] Instanton size and density of instantons as functions of $N_c$](rho-and-density.png) We can also study the dependence of the effective instanton density $d(\rho)$ on the number of colors $N_c$. The results are shown in Figure \[fig:instanton-density\]. Already from (\[eq:one-instanton-density\]) we know that the amplitude decreases rapidly with $N_c$, but what we consider here is the shape and the spread of the distribution. (To this end we normalize all the distributions to $\mu^0=1$.) In particular, we notice that the distribution has a prominent peak centered about the average instanton size, and in the large-$N_c$ limit becomes essentially delta-like [@Schafer:2002af]. ![\[fig:instanton-density\] Effective instanton density profile as a function of $N_c$. (Normalized to unity.)](instanton-density.png) Recall the relation between the full Lagrangian and the instanton gas Lagrangian, $F^2=\sum_i F_i^2+32\pi^2 U_{int}$. 
Since we know the value of the action for a BPST instanton (see eq. (\[eq:finite-action-value\])), we have $$\label{eq:RILM-energy} \langle \int \frac{d^4x}{32\pi^2} F^2 \rangle = \langle N \rangle + \langle U_{int} \rangle \ ,$$ and from (\[eq:RILM-partition-func-1\]) it follows that $\langle U_{int} \rangle = - \partial \log Z / \partial \beta$. From (\[eq:RILM-partition-function\]) we thus obtain, $$\langle U_{int} \rangle = \frac{\nu}{2\beta}\langle N \rangle\ .$$ ![\[fig:YM-energy-ratio\] Ratio of interaction energy to free energy as a function of $N_c$. We’ve fixed $a=1/10$.](pureYM-energy-ratio.png) Figure \[fig:YM-energy-ratio\] shows the ratio of the interaction energy to the free energy. Since the free energy is larger than the interaction energy we can trust the simplified 2-body interaction model. Further, because the gluon field VEV is related to the trace of the stress energy tensor (SET) by the scale anomaly relation, and since the trace of the SET is in a direct relation to the vacuum energy density, we obtain the following leading-order expression for the vacuum energy density, $$\label{eq:RILM-vacuum-energy} \mathcal{E}=-\frac{b}{4}\frac{\langle N \rangle}{V} \ .$$ Notice that it grows quadratically with $N_c$, with an additional factor of $N_c$ with respect to the non-interacting instanton gas [@Schafer:2002af]. Let us now compute the topological susceptibility. This is of particular interest because it is an observable. We start by adding the topological theta-term, $\frac{i \theta}{32\pi^2} \int d^4x F \tilde{F}$, to the action. The topological susceptibility is defined by, $$\label{eq:RILM-top-suscept} \chi_{top}=-\frac{\partial^2 \log Z}{\partial \theta ^2}|_{\theta =0} =\langle \left( \int d^4x \frac{F \tilde{F}}{32\pi^2} \right)^2 \rangle \ .$$ In particular, adding the $\theta$-term to the partition function doesn’t modify the computation of $\mu(\rho)$ or $\overline{\rho^2}$, and thus the only modification to (\[eq:RILM-partition-function\]) is an additional term $+i\theta (N_+-N_-)$. Self-consistently, by rewriting this as $$Z=\exp \left[ \frac{\nu+2}{2} \langle N \rangle (1-\frac{\theta^2}{\nu+2}+\mathcal{O}(\theta^4)) \right] \ ,$$ and taking the derivative as in (\[eq:RILM-top-suscept\]) we get [@Diakonov:1983hh], $$\label{eq:RILM-top-suscept-result} \chi_{top}=\langle N \rangle \ .$$ Explicitly, $-\partial_\theta^2 \log Z\,|_{\theta=0}=\frac{\nu+2}{2}\langle N \rangle \cdot \frac{2}{\nu+2}=\langle N \rangle$. We are now ready to investigate and extend the role of instantons within the conformal window of QCD and for asymptotically safe quantum field theories. Conformal Window Instantons {#sec:BZ} =========================== In this section we determine the instanton dynamics in the QCD IR conformal window. We shall be predominantly concerned with the calculable part of the conformal window, the one in which an IR fixed point is reached perturbatively and that is often referred to as à la Banks-Zaks [@Banks:1981nn]. The perturbative IR fixed point occurs for a number of fermions $N_f$ tuned to be slightly below $\nicefrac{11}{2}N_c$ in the large-$N_c$, large-$N_f$ limit. In this limit one introduces an expansion in the physical parameter $\epsilon$, defined in (\[eq:def-epsilon\]), that measures the distance, in flavor space, from the loss of asymptotic freedom. This parameter can be made arbitrarily small. The fixed point value, being an expansion in $\epsilon$, can be made arbitrarily weakly interacting, rendering the expansion controllable. 
In figure \[fig:BZvsQCD-running\] we compare the running coupling in the Banks-Zaks theory for $\epsilon=-1/10$ at one loop (diverging) and at two loops (converging to a fixed point). ![\[fig:BZvsQCD-running\] The Banks-Zaks running, shown as a continuous blue line for $\epsilon=-1/10$, interpolates between an interacting fixed point in the IR and a non-interacting fixed point in the UV. This result is obtained starting at two loops, whereas the analogous one loop running is given as a blue dashed line. The only scale in the BZ theory is the RG-invariant scale $\Lambda_c$, corresponding to the red dashed line. We chose the matching conditions such that the one loop and the two loop couplings match at the scale $\Lambda_c$; this fixes the one loop IR divergence scale shown as a black dashed line. ](BZvsQCD-running.png) It is immediately clear from the running of the coupling that the infrared dynamics, being conformal, is quite distinct from the chiral symmetry breaking QCD scenario. In particular, in the IR, instead of becoming non-perturbative, the coupling remains within the perturbative regime until it finally reaches a conformal theory in the deep infrared. We shall consider the epsilon regime in which the two-loop analysis remains trustworthy. It would be interesting to extend the present work to higher loops [@Pica:2010xq; @Ryttov:2010iz; @Ryttov:2016ner]. Due to the perturbative control, we can fully include the fermion effects at one loop order by including their contribution to the beta function of the gauge coupling. It is particularly interesting to investigate the mass-deformed perturbative conformal field theory, as argued first in [@Sannino:2010ca]. Starting with fermions, all of the same mass $m\ll\Lambda_c$, the running is given in the top panel of figure \[fig:massive-BZ-running\]. In the deep UV, at energies higher than the fermion mass $m$, the running is dominated by the free fixed point. At energies below the fermion mass the fermions can be integrated out. In the perturbative regime of the conformal window we can follow the perturbative flow down to $m$. At energies lower than the common fermion mass one enters the YM regime. In a mass-independent scheme (although our results for physical quantities are scheme independent) one matches the pure YM coupling with the one with massless fermions at the scale $m$, and the YM running takes over as shown in the bottom panel of figure \[fig:massive-BZ-running\]. ![\[fig:massive-BZ-running\] The blue line shows the Banks-Zaks running for $\epsilon=-1/10$, and the green line corresponds to the pure YM running. The purple dot shows the matched couplings at the fermion mass scale, which is given by the purple dashed line. The black dashed line is the scale $\Lambda_{YM}$; the scale $\mu=\Lambda_c=1$ cannot be shown due to the use of a log scale on the horizontal axis.](matchingBZflow.png "fig:")\ In QCD one needs to take particular care of the low-energy fermion modes when the hard common fermion mass is sufficiently small. This is so since these modes are delocalized and feel the presence of the instanton medium. In the perturbative regime of the conformal window one can continue lowering the fermion mass all the way to zero because the coupling is guaranteed to stay perturbative down to the fermion mass scale. Above the common fermion mass energy no condensate can form because the theory can be made arbitrarily weakly interacting [@Diakonov:1985eg]. 
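The mass-deformed flow just described is, in practice, a piecewise running: the massless Banks-Zaks coupling above $m$, matched continuously onto the pure YM coupling below $m$. A minimal sketch of this matching logic (a toy implementation under our own conventions, with the two runnings passed in as callables):

```python
def matched_coupling(mu, m, alpha_BZ, alpha_YM_from_matching):
    """Piecewise running in the mass-deformed conformal window.

    alpha_BZ(mu): the massless Banks-Zaks running, used for mu >= m.
    alpha_YM_from_matching(mu, alpha_at_m): a pure YM running whose
    integration constant is fixed by continuity at mu = m."""
    if mu >= m:
        return alpha_BZ(mu)
    alpha_at_m = alpha_BZ(m)  # matching condition at the mass scale
    return alpha_YM_from_matching(mu, alpha_at_m)
```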
As the fermions become massless we expect the instantons to *melt away* and the vacuum-to-vacuum transition amplitude due to instantons to vanish. To take into account the full perturbative running we consider the RG-improved master equation. The often-used naive 2-loop running approach (\[eq:naive-2-loop-running\]) is valid only in the deep UV, since it does not account for the Banks-Zaks IR fixed point. Let us thus look into the exact 2-loop RG running more closely. We begin by defining the ’t Hooft coupling, $$\label{eq:tHooft-coupling} \alpha=\frac{g^2N_c}{(4\pi)^2}.$$ The two loop beta function of the gauge coupling in the presence of fermions can be written as, $$\label{eq:2-loop-betas} \mu \partial_\mu \alpha\equiv\beta_\alpha=-B \alpha^2 + C \alpha^3.$$ Here $B=-\nicefrac{4}{3}\ \epsilon $ and $C=25+\nicefrac{26}{3}\ \epsilon$, and the physical control parameter is given by $$\label{eq:def-epsilon} \epsilon=\frac{N_f}{N_c}-\frac{11}{2}<0 \ .$$ The exact 2-loop running is given by [@Litim:2015iea] $$\label{eq:2-loop-running} \alpha(\mu)=\frac{\alpha_*}{1+W(z(\mu))}\ ,$$ where $$\alpha_*=B/C$$ is the IR Banks-Zaks fixed point, and $W$ stands for the Lambert (or productlog) function. $z(\mu)$ will be defined shortly. Expansion around $\mu\rightarrow\infty$ yields equation (\[eq:naive-2-loop-running\]). The running stemming from (\[eq:2-loop-running\]) is manifestly bounded and it interpolates between $\alpha=0$ for infinite energies and $\alpha=\alpha_*$ in the IR, as can be seen from figure \[fig:BZvsQCD-running\]. Let us note that $\partial_\alpha\beta_\alpha(\alpha)$ vanishes for $\alpha=\nicefrac{2}{3}\ \alpha_*\equiv\alpha_c$. The scale at which one reaches this value of the coupling is critical in the sense that at this scale the gauge coupling changes scaling from canonical to a non-Gaussian one. This scale, $$\label{eq:2-loop-RG-inv-scale} \mu(\alpha_c)\equiv\Lambda_c=(2e^{-\frac{1}{2}})^{-1/ \theta_*} (1-\frac{\alpha}{\alpha_*})^{-1/\theta_*}\ \mu \ ,$$ is the 2-loop RG-invariant scale in the sense that $\mu\partial_\mu(\Lambda_c)=0$ (to linear order), and $\theta_*$ is the eigenvalue of the RG flow at the FP, $$\theta_*=\frac{\partial \beta_\alpha}{\partial \alpha}|_{\alpha_*}=\alpha_* B \ .$$ Inserting the RG improvements in the 1-loop master equation (\[eq:mastereq-M-1\]) we have, $$\begin{aligned} d_{2L}(\rho)&=&C_{cf} m^{N_f} \rho^{N_f-5}(b \log M\rho)^{2N_c} e^{-\frac{8\pi^2}{g^2_{2L}}} \\ &=& C_{cf} \exp(1/2-\log 2)^{-\frac{8\pi^2}{g^2_*}}m^{N_f}\rho^{N_f-5}(\rho \Lambda_c)^{\frac{1}{2}BN_c}\times \nonumber \\ &\times & (b \log M\rho)^{2N_c} W(z(\rho))^{\frac{8\pi^2}{g_*^2}}. \label{eq:2-loop-inst-density} \end{aligned}$$ In the above expression we used $C_{cf}$ as defined in (\[eq:mastereq-M\]), $\Lambda_c$ defined in (\[eq:2-loop-RG-inv-scale\]), $$\label{eq:arg-of-Lambert} z(\rho)=e^{1/2-\log 2}(\rho\Lambda_c)^{-\alpha_*B},$$ the 1-loop beta coefficient $b$ given in (\[eq:1Lbeta\]), and the 1-loop RG-invariant scale $M$, $$\label{eq:1-loop-RG-inv-scale} M=\frac{1}{\rho_c}\exp{(-\frac{1}{b}\frac{8\pi^2}{g(\rho_c)^2})}=\Lambda_c \exp{(-\frac{3}{2}\frac{C}{B^2})}.$$ Setting $\rho^2\rightarrow\overline{\rho^2}$ in the second line of (\[eq:2-loop-inst-density\]) the expression for the instanton density takes a form which is similar to what we had in the pure YM case. 
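The exact two-loop solution (\[eq:2-loop-running\]) is simple to evaluate with a standard Lambert-function routine. In the sketch below we use, in momentum space, $z(\mu)=e^{1/2-\log 2}(\mu/\Lambda_c)^{\theta_*}$, the analogue of (\[eq:arg-of-Lambert\]); this form is our own reconstruction, and one can check that it gives $\alpha(\Lambda_c)=\nicefrac{2}{3}\ \alpha_*=\alpha_c$, as it should.

```python
import numpy as np
from scipy.special import lambertw

def alpha_2L(mu, Lambda_c, B, C):
    """Exact 2-loop running alpha(mu) = alpha_* / (1 + W(z(mu))).

    At mu = Lambda_c one finds z = exp(1/2)/2, hence W(z) = 1/2 and
    alpha = (2/3) alpha_*, i.e. the critical coupling alpha_c."""
    alpha_star = B / C
    theta_star = alpha_star * B
    z = np.exp(0.5 - np.log(2.0)) * (mu / Lambda_c)**theta_star
    return alpha_star / (1.0 + np.real(lambertw(z)))
```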
To make the analogy with the pure YM case explicit, we define $$\label{eq:f-of-rho-bar} f(\overline{\rho})=C_{cf} \exp(1/2-\log 2)^{-\frac{8\pi^2}{g^2_*}}(\frac{b}{2} \log M^2 \overline{\rho^2})^{2N_c} W(\overline{\rho^2})^{\frac{8\pi^2}{g_*^2}},$$ so that we can write the 2-loop instanton density as, $$\label{eq:2-loop-instanton-density} d_{2L}(\rho)=f(\overline{\rho})m^{N_f}\rho^{N_f-5}(\rho \Lambda_c)^{\frac{1}{2}BN_c}.$$ From here we find $\overline{\rho^2}$ analogously to the derivation of (\[eq:rho-square-bar-1\]). The result can be put in the same form, with $$\label{eq:nu-Banks-Zaks} \nu=\frac{1}{2}\left(\frac{1}{2}BN_c+N_f-4\right)\ .$$ The minimization of the partition function can now be performed in complete analogy to the derivation of the average instanton number in the pure YM theory, and we obtain $$\label{eq:BZ-num-of-instantons} \langle N \rangle =V \Lambda_c^4 \left[\Gamma(\nu) (\frac{m}{\Lambda_c})^{N_f} f(\bar{\rho}) (\beta \gamma^2 \nu)^{-\frac{\nu}{2}}\right]^{\frac{2}{2+\nu}} \ .$$ Comparing to (\[eq:avg-instanton-number\]), the most notable difference is the appearance of the RG-invariant scale $\Lambda_c$ instead of the IR-divergence scale $\Lambda\simeq\Lambda_{YM}$. Another important difference is that $\tilde{\beta}^{2N_c}$ is replaced by $(\frac{m}{\Lambda_c})^{N_f}f(\bar{\rho})$, which renormalizes the 1-loop result (\[eq:avg-instanton-number\]). The partition function still has the same form (\[eq:RILM-partition-function\]) as in the pure YM case, but with the new values for $\nu$ and $\langle N \rangle$. ![\[fig:BZ-inverse-rho\] The figure shows the inverse of $\bar{\rho}$ measured in units of $\Lambda_c$ for various choices of $\epsilon$ and $N_c$ (green, blue and purple). For fixed $\epsilon=-.1$, changing $N_c$ from 100 to 1000 changes $\bar{\rho}$ by less than 2%. For $\epsilon=-.1$ and $N_c=1000$, changing the parameter $a$ from 1 to 0.1 has less than a 0.1% effect. The red dashed line shows the fermion mass $m$ in units of $\Lambda_c$.](BZ-inverse-rho-of-m.png) Solving the equations for $\bar{\rho}$ and $N/V$ the way we did in the pure YM case leads us to the results shown in figure \[fig:BZ-inverse-rho\]. Crucially, the results are inconsistent with the hypotheses, in the sense that the $\bar{\rho}^{-1}$ that we find is always smaller than $m$, i.e. it lies deeper in the IR than the scale $m$ where we decouple the fermions. This leads us to look for the solution below the energy scale $m$, where the running of the couplings is given by the pure YM beta functions [^7]. This leads us to consider the equations (\[eq:rho-square-bar-1\]) and (\[eq:avg-instanton-number\]) again. We know from subsection \[sec:pure-YM\] that the solutions for instantons in the pure YM theory are internally consistent, meaning that $\bar{\rho}^{-1}\gg \Lambda_{YM}$. When solving the equations for the BZ theory, since we didn’t find any solutions for $\bar{\rho}^{-1} > m$, we additionally have to make sure that the consistency condition $\bar{\rho}^{-1} < m$ is met when using the pure YM running coupling. Our results are shown in the top panel of figure \[fig:consistency\], which shows the ratio of $m$ to $\bar{\rho}^{-1}$ as a function of $m$ measured in units of $\Lambda_c$. The results for $\bar{\rho}$ are well within the required consistency range. The bottom panel shows the inverse instanton length as a function of the mass, and we can clearly see the power-law decrease of the instanton scale as $m$ is taken to zero. ![\[fig:consistency\] We take $a=1/10$, $N_c=1000$, and $\epsilon=-1/10$. The top panel shows the instanton scale in the deep IR w.r.t. the fermion mass. 
The bottom panel shows $\bar{\rho}^{-1}$ as a function of $m$. Numerical values are predominantly determined by $\epsilon$, with a very mild dependence on $a$ and $N_c$.](BZ-rho-over-m-of-m.png "fig:") ![\[fig:consistency\] We take $a=1/10$, $N_c=1000$, and $\epsilon=-1/10$. The top panel shows the instanton scale in the deep IR w.r.t. the fermion mass. The bottom panel shows $\bar{\rho}^{-1}$ as a function of $m$. Numerical values are predominantly determined by $\epsilon$, with a very mild dependence on $a$ and $N_c$.](BZ-inverse-rho-of-m-2.png "fig:") As an additional consistency check one may study the behavior of $\Lambda_{YM}$. In fact, here it is not an arbitrary number but is specified by the following one loop matching condition $$\frac{8\pi^2}{g^2_{YM}(m)}\equiv -\frac{11}{3}N_c\ \log\left( a\frac{\Lambda_{YM}}{m} \right) = \frac{8\pi^2}{g^2_{BZ}(m)}=\frac{N_c}{2 \alpha(m)} \ ,$$ which yields, $$\Lambda_{YM}=\frac{m}{a} \exp \left( -\frac{3}{22} \frac{1}{\alpha(m)} \right) \ .$$ For small enough $\epsilon$ the exponential term is flat as a function of $m$, so the dependence on the mass here is essentially linear. Finally, we can measure $\bar{\rho}$ in units of $\Lambda_{YM}$, and what we find is that it is flat as a function of $m$, taking the value $\bar{\rho}=0.390\Lambda_{YM}^{-1}$ for $a=1/10$, $N_c=1000$ and $\epsilon=-1/10$. See figure \[fig:BZ-rho-of-MYM\]. ![\[fig:BZ-rho-of-MYM\] The figure shows $\bar{\rho}$ in units of $\Lambda_{YM}$ as a function of $m$. We’ve fixed $a=1/10,\ N_c=1000$ and $\epsilon=-1/10$. ](BZ-rho-of-MYM.png) Let us now discuss the instanton energy and the topological susceptibility. Since the couplings are renormalized at the energy scale corresponding to the inverse of the average instanton size, and since the instanton size turns out to be such that the instantons sit well within the pure YM regime, the analysis closely follows the pure YM case. In particular, the partition function again takes the simple form (\[eq:RILM-partition-function\]) with $\langle N \rangle$ and $\nu$ given by (\[eq:avg-instanton-number\]) and (\[eq:rho-square-bar-1\]) respectively. The total energy is given by a sum of the free energy term, $\langle N \rangle$, and the interaction term. The interaction term comes from the derivative of the partition function with respect to $\beta=8\pi^2/g^2_{2L}(\bar{\rho})$. This dependence is hidden in $\langle N \rangle$, where it appears in the same form as it did in the pure YM case, which means the interaction energy can again be written as $\langle U_{int} \rangle = \nu \langle N \rangle / (2\beta)$. The ratio of the interaction energy to the free energy thus follows the curve shown in figure \[fig:YM-energy-ratio\]. In fact, the shape of that curve changes significantly if the 2-loop running is used instead of the 1-loop running, and the overall method is more stable when compared to the QCD case. If one fixes $m$, $N_c$ and $a$ (e.g. $m=1/10\ \Lambda_c,\ N_c=1000,\ a=1/10$) one can study $\bar{\rho}$, in units of $\Lambda_{YM}$, as a function of $\epsilon$ and find that it is constant and (in our example) equal to $\bar{\rho}=0.390\Lambda_{YM}^{-1}$. The reason why $\bar{\rho}(\epsilon)$ is constant in units of $\Lambda_{YM}$ is related to the fact that $\Lambda_{YM}$ decreases rapidly with decreasing $|\epsilon|$, thus compensating for the rapidly growing $\bar{\rho}$ in units of $\Lambda_c$. The determination of the topological susceptibility proceeds as described in the previous section, see equation (\[eq:RILM-top-suscept-result\]). 
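As a small aside on the numerics, the matched scale $\Lambda_{YM}$ derived above is immediate to implement; a sketch (our variable names, with $\alpha(m)$ the BZ ’t Hooft coupling at the matching scale):

```python
import numpy as np

def Lambda_YM(m, a, alpha_at_m):
    """One-loop matched YM scale: Lambda_YM = (m/a) * exp(-3/(22*alpha(m)))."""
    return (m / a) * np.exp(-3.0 / (22.0 * alpha_at_m))
```

For small $|\epsilon|$ the coupling $\alpha(m)$ is nearly $m$-independent, so this function is essentially linear in $m$, as stated above.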
As we’ve discussed above, $\Lambda_{YM} \bar{\rho}$ is essentially $m$- and $\epsilon$-independent (see figure \[fig:BZ-rho-of-MYM\]). In this sense $N/V$ depends only on the explicit factor $\bar{\rho}^{-4}$. It is then clear that $N/V$ will rapidly decrease with decreasing $m/\Lambda_c$ when measured in units of $\Lambda_c^4$, but will be constant if measured in units of $\Lambda_{YM}^4$. This is confirmed in figure \[fig:BZ-N-over-V\]. ![\[fig:BZ-N-over-V\] The top panel shows the density of instantons per unit volume measured in units of $\Lambda_c^4$. The bottom panel shows the same quantity measured in units of $\Lambda_{YM}^4$. The exact value of $N/V$ depends on the fudge factor $a$. The bottom panel shows $N/(V\Lambda_{YM}^4)=99.268$ obtained for $a=1/10$. Decreasing $a$ by a factor of 2 increases $N/V$ by 30%, and decreasing $a$ by a further factor of 2 decreases $N/V$ by an additional 23%.](BZ-N-over-V-1.png "fig:") ![\[fig:BZ-N-over-V\] The top panel shows the density of instantons per unit volume measured in units of $\Lambda_c^4$. The bottom panel shows the same quantity measured in units of $\Lambda_{YM}^4$. The exact value of $N/V$ depends on the fudge factor $a$. The bottom panel shows $N/(V\Lambda_{YM}^4)=99.268$ obtained for $a=1/10$. Decreasing $a$ by a factor of 2 increases $N/V$ by 30%, and decreasing $a$ by a further factor of 2 decreases $N/V$ by an additional 23%.](BZ-N-over-V-2.png "fig:") Safe Instantons {#SafeInstantons} =============== Here we extend the instanton calculus to asymptotically safe quantum field theories, starting with the first discovered controllable asymptotically safe four-dimensional gauge theory, here dubbed LISA [@Litim:2014uca]. Controllable Instantons in UV-Safe Gauge-Yukawa Theories -------------------------------------------------------- LISA consists of an $SU(N_c)$ gauge field coupled to $N_f$ vector-like fermions and a scalar field. Besides the gauge coupling there are Yukawa couplings and two scalar self-couplings. At 2-loop order the beta function of the gauge coupling has the LO term exactly as in (\[eq:2-loop-betas\]), but the cubic term becomes, $$\left[(25+\frac{26}{3}\epsilon)\alpha-2(\frac{11}{2}+\epsilon)^2\alpha_y\right]\alpha^2,$$ where $\alpha_y=y^2 N_c/(4\pi)^2$ and $y$ is the Yukawa coupling. In the Veneziano limit the theory admits a perturbative interacting UV fixed point. At the fixed point the values of the gauge coupling, the Yukawa coupling and the scalar self-couplings are all of order $\epsilon$, where $\epsilon$ is again given by (\[eq:def-epsilon\]) but this time is positive because asymptotic freedom is lost. To simplify the discussion, herein we neglect the running of the Yukawa coupling. This slightly changes the numerical behavior of the running gauge coupling, but qualitatively the picture of having a running coupling interpolating between a Gaussian FP in the IR and a perturbative, non-Gaussian FP in the UV persists. In particular, substituting the fixed point value of the Yukawa coupling, $\alpha_y^*$, in the above expression for the cubic term leads exactly to the beta function (\[eq:2-loop-betas\]), with $B=-4\epsilon/3$ and $$C=-\frac{2}{3}\frac{57-46\epsilon-8\epsilon^2}{13+\epsilon}\ .$$ Both $B$ and $C$ being negative, the fixed point appears at the physical value $\alpha_*=B/C>0$. Note that this would not have been possible without the inclusion of scalars in the theory, since what was necessary for flipping the sign of the cubic term was the contribution of the Yukawa coupling. 
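To see concretely that the fixed point is physical for small positive $\epsilon$, the coefficients can be evaluated directly; a short sketch (our names):

```python
def lisa_fixed_point(eps):
    """LO fixed point of the 't Hooft coupling in LISA, with the Yukawa
    coupling set to its own fixed point value (the simplification used
    in the text). Both B and C are negative for small eps > 0."""
    B = -4.0 * eps / 3.0
    C = -(2.0 / 3.0) * (57.0 - 46.0 * eps - 8.0 * eps**2) / (13.0 + eps)
    return B / C

assert lisa_fixed_point(0.1) > 0.0  # alpha_* is positive, hence physical
```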
From here on it is clear that the running of the gauge coupling in LISA (in this slightly simplified form) is analogous to that in the Banks-Zaks theory, with UV and IR reversed, although the dynamics is profoundly different in nature. In particular, the 2-loop running gauge coupling is again given by (\[eq:2-loop-running\]). The difference, of course, is in the fact that the argument of the Lambert function, $z(\rho)$ given in (\[eq:arg-of-Lambert\]), now runs with $\rho$ in the opposite way, because $B=-4\epsilon/3$ changed sign with respect to the IR interacting fixed point. We include the fermion effects following the previous section. The running coupling behaves according to figure \[fig:massive-LISA-running\]. Due to the theory being non-interacting in the IR, there are no condensates forming at the fermion mass scale, and thus the zero modes are taken care of in the same straightforward manner as in the previous section (see [@tHooft:1976snw]). ![\[fig:massive-LISA-running\] The blue line shows the LISA running for $\epsilon=1/10$, and the green line corresponds to the pure YM running. The purple dot shows the matched couplings at the fermion mass scale, which is given by the purple dashed line. The black dashed line is the scale $\Lambda_{YM}$.](matchingLISAflow.png "fig:")\ The computation of the average instanton size and the density of instantons per unit volume proceeds analogously to the perturbative interacting IR fixed point theory presented in the previous section. In particular, the 2-loop instanton density is given by (\[eq:2-loop-inst-density\]). The main difference is that the argument of the Lambert function, given in (\[eq:arg-of-Lambert\]), now grows with $\rho$ due to $B=-\nicefrac{4}{3}\ \epsilon<0$. In the remaining terms of (\[eq:2-loop-inst-density\]) with explicit power-law dependence on $\rho$, $N_f$ dominates over $\nicefrac{1}{2}B N_c$, so the fact that $B$ changes sign here is irrelevant. From this 2-loop density one can obtain the effective 2-loop density $\mu(\rho)$ in a similar way as done in the IR interacting case. It will therefore again lead to the expression (\[eq:naive-optimal-mu\]). Note that $\beta$ is just shorthand for $8\pi^2/g^2$, so it doesn’t change sign with respect to the IR interacting theory; in fact, we still have a Gaussian suppression of the IR instantons. The expectation value of $\rho$ that we get is (\[eq:rho-square-bar-1\]) with $\nu$ given by (\[eq:nu-Banks-Zaks\]). Here $\beta$ is still positive, so $\nu$ has to be positive too if $\bar{\rho}$ is to be positive. In fact, $\nicefrac{1}{2}BN_c=-\nicefrac{2}{3}\ \epsilon N_c$, whereas $N_f=(\nicefrac{11}{2}+\epsilon)N_c\simeq \nicefrac{11}{2} N_c$, and thus $\nu$ is clearly positive both in the BZ and LISA theories. Finally, $\langle N \rangle$ takes the same form (\[eq:BZ-num-of-instantons\]) as in the BZ case, with $\nu$, $\beta$ and $f(\rho)$ appropriately modified. Solving the equations for $\langle N \rangle$ and $\bar{\rho}$ numerically, we find results similar to the BZ instantons. In particular, using the LISA beta functions we do find solutions for $\bar{\rho}$, but with the instanton scale still smaller than the fermion mass, which means that the results are not consistent (see figure \[fig:LISA-inconsistent-solns\]). We then solve the equations using the pure YM beta functions and find solutions with the instanton scale in the window between the IR $\Lambda_{YM}$ scale (which is expected from \[sec:pure-YM\]) and the fermion decoupling scale (which is a nontrivial consistency check). 
For the results in LISA see figure \[fig:LISA-instanton-scale\]. ![\[fig:LISA-inconsistent-solns\] The figure shows solutions for $\bar{\rho}$, obtained using the LISA beta functions, for various choices of $\epsilon$ and $N_c$ (green, blue and purple). The red dashed line shows the fermion mass. ](LISA-inverse-rho-of-m.png) ![\[fig:LISA-instanton-scale\] The top panel shows the ratio $m/\bar{\rho}^{-1}$ and the bottom panel shows $\bar{\rho}^{-1}$ as functions of $m$. In both panels $a=1/10$, $N_c=1000$ and $\epsilon=1/10$.](LISA-rho-over-m-of-m.png "fig:") ![\[fig:LISA-instanton-scale\] The top panel shows the ratio $m/\bar{\rho}^{-1}$ and the bottom panel shows $\bar{\rho}^{-1}$ as functions of $m$. In both panels $a=1/10$, $N_c=1000$ and $\epsilon=1/10$.](LISA-inverse-rho-of-m-2.png "fig:") It is of some interest to compare figures \[fig:LISA-instanton-scale\] and \[fig:consistency\]. One interesting feature is that, for the chosen parameters, the instanton scale is about one order of magnitude smaller than the fermion mass in the case of LISA, but it is more than 16 orders of magnitude smaller than the fermion mass in the BZ theory. Here the difference arises because in the infrared LISA is free rather than interacting. Further, in both cases the instanton scale $\bar{\rho}^{-1}$ always has to lie below the mass scale $m$, which explains the fact that the lines in the bottom panels have the same tendency to grow with $m$. Finally, from the top panels we see that the ratio $m/ \bar{\rho}^{-1}$ grows with $m$ in the BZ theory but decreases in LISA. In fact, the BZ theory is non-interacting in the UV, and the higher the energy at which we decouple the fermions, the higher the IR instanton scale seems to be. This pattern is found in the LISA case as well. This behavior is related to how $\beta=8\pi^2/g^2$ enters the equations (\[eq:avg-instanton-number\]) and (\[eq:rho-square-bar-1\]). We can study $\bar{\rho}$ in units of $\Lambda_{YM}$ as a function of $m$, but it is clear that the results are described by figure \[fig:BZ-rho-of-MYM\], i.e. in units of $\Lambda_{YM}$ the solution reproduces the BZ result. The same holds for $\langle N \rangle /V$ in units of $\Lambda_{YM}^4$, which is shown in the bottom panel of figure \[fig:BZ-N-over-V\]. For the instanton density in units of $\Lambda_c^4$ see figure \[fig:LISA-N-over-V\]. ![\[fig:LISA-N-over-V\] The figure shows the density of instantons per unit volume measured in units of $\Lambda_c^4$. ](LISA-N-over-V-1.png) Because of the perturbative nature of the UV interacting fixed point we have been able to extend the instanton calculus to controllable asymptotically safe quantum field theories. Safe QCD Instantons, the large $N_f$ story ------------------------------------------ In the LISA theory [@Litim:2014uca] elementary scalars and their induced Yukawa interactions crucially help to tame the ultraviolet behaviour of the overall gauge-Yukawa theory. Scalars, however, are not needed at a finite number of colours and a very large number of flavours for non-abelian gauge-fermion theories, as reviewed and further analysed in [@Antipin:2017ebo]. Consider an $SU(N_c)$ gauge theory with $N_f$ fermions transforming according to a given representation of the gauge group. We consider the theory for a number of flavours above the value at which asymptotic freedom is lost, i.e. $N_f > N_f^{AF} = 11C_G/(4T_R)$, where the first coefficient of the beta function changes sign. 
Although we do not need to specify the fermion representation, we will consider here the fundamental representation, for which the relevant group theory coefficients are $C_G=N_c$, $C_R=(N_c^2-1)/(2N_c)$ and $T_R=1/2$. At one loop order the theory is simultaneously free in the infrared (non-abelian QED) and trivial, meaning that the only way to take the continuum limit (i.e. sending the Landau-pole-induced cutoff to infinity) is for the theory to become non-interacting. At two loops Caswell [@Caswell:1974gg] demonstrated that a UV interacting fixed point (asymptotic safety) cannot arise near the loss of asymptotic freedom, implying that safety can only occur above a certain critical number of flavours. This possibility has been (re)investigated in [@Antipin:2017ebo] at large $N_f$ and fixed number of colours, for which the beta function is given by [@PalanquesMestre:1983zy; @Gracey:1996he; @Holdom:2010qs; @Pica:2010xq], $$\beta(A)=\frac{2A}{3}\left( 1+\sum_{i=1}^{\infty} \frac{H_i(A)}{N_f^i} \right)\ ,$$ where we defined the following large-$N_f$ normalized coupling $$A=\frac{N_f}{8\pi^2}g^2 \ .$$ The functions $H_i(A)$ come about by resumming an infinite set of Feynman diagrams at fixed order $i$ in the large-$N_f$ expansion [@PalanquesMestre:1983zy; @Gracey:1996he]. Most importantly, already at first order in the large-$N_f$ expansion there is a fixed point, $$A^* = 3 - e^{\left(-8 \frac{N_f}{N_c} + 18.49 - 5.26 \frac{N_c^2 - 1}{2 N_c^2}\right)} \ .$$ We now attempt to approximate the overall behaviour of the beta function in order to estimate the instanton properties for this theory. Let us therefore write the 1-loop running as, $$\beta_{1L}\equiv\frac{8\pi^2}{g^2}= -b \log(a_{LP} \Lambda_{LP} \rho)\ ,$$ and the 2-loop running as, $$\beta_{2L}= \beta_{1L} + \frac{b'}{b}\log\beta_{1L} \ ,$$ where the 1-loop and 2-loop coefficients are given in (\[eq:1Lbeta\]) and (\[eq:naive-2-loop-running\]). We fix the fudge factor $a_{LP}$ by requiring that the 2-loop running $g^2$ matches the UV FP value of $g^2$ at the 1-loop divergent scale $\Lambda_{LP}$. ![\[fig:large-nf-couplings\] Flow of the coupling in the mass-deformed large-$N_f$ UV conformal window. The pure Yang-Mills 1-loop running is shown in green. Matching to the 2-loop QCD running, shown in blue, is at the fermion mass scale given by the purple dashed line. We match the 2-loop QCD running to the UV FP value, shown in red, at the 1-loop UV divergence scale $\Lambda_{LP}$ shown by the red dashed line. ](matchingLargeNfflow.png) ![\[fig:large-nf-consistency\] The red dashed line shows the fermion mass scale. Solving for $1/\bar{\rho}$ using the 2-loop QCD beta functions we find the solutions shown in blue. This puts the instantons below the fermion decoupling scale, which makes the solutions inconsistent. Next we solve for $1/\bar{\rho}$ using the 2-loop pure YM beta functions and we find the solutions shown in green. We see that the instanton scale is still below the fermion mass scale and is thus consistent. ](largeNf-consistency.png) Let us consider the specific example $N_f=100$, $N_c=3$, for which $a_{LP}=1.189$. The running couplings are given in figure \[fig:large-nf-couplings\]. There are three energy windows: the lowest one is below the fermion mass scale, the intermediate one runs up to the 1-loop divergent scale $\Lambda_{LP}$ (Landau pole), and the highest one lies above the scale $\Lambda_{LP}$. In the highest energy window we don’t consider the RG running, but instead keep the coupling constant, since it reaches the fixed point value at $\Lambda_{LP}$. 
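As a check on the numbers used in this example, the first-order large-$N_f$ fixed point quoted above is immediate to evaluate; a sketch (our names):

```python
import numpy as np

def A_star(Nf, Nc):
    """First-order large-Nf UV fixed point of the normalized coupling A."""
    exponent = -8.0 * Nf / Nc + 18.49 - 5.26 * (Nc**2 - 1.0) / (2.0 * Nc**2)
    return 3.0 - np.exp(exponent)

print(A_star(100, 3))  # essentially 3: the exponential is tiny for Nf >> Nc
```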
We only consider fermion masses $m<\Lambda_{LP}$, and in figure \[fig:large-nf-consistency\] we plot the results for the inverse instanton size (in units of $\Lambda_{LP}$) in the lower two energy windows. We see that using the naive running couplings the instanton scale turns out to be very small, similarly to what we found in BZ and LISA. This is perhaps not surprising given that we adopted a naive setup that makes use of a rough 2-loop approximation. Conclusions =========== We investigated the instanton dynamics for fundamental field theories featuring either asymptotically safe or asymptotically free dynamics. In order to make the work self-contained we provided a brief review of the role of instantons in YM and QCD dynamics, including the limitations of the instanton calculus. Within the asymptotically free scenario we ventured into the perturbative regime of the QCD conformal window. Here we determined, by extending the calculus to two loops, the number of instantons per unit volume as a function of a common fermion mass. We then extended the instanton calculus to the case of controllable asymptotically safe theories. Here the non-trivial UV dynamics demands the immediate use of higher order results. As for the conformal window case, we determined the fermion mass dependence of the instanton density. We further discussed the limit of a finite number of colours and a large number of flavours. In the future one can extend the instanton calculus to cover a wider range of flavor numbers within the calculable regimes of the UV and IR conformal windows. The ambitious goal is to determine to what extent the instanton dynamics is responsible for the loss of conformality once the number of flavors drops below a certain critical value for which either UV or IR conformality is lost in the respective safe or free conformal windows. The work of F.S. is partially supported by the Danish National Research Foundation under grant DNRF:90. [^1]: This is clear because the action is given by the volume integral of the field configuration, so deviations from the instanton configuration automatically pick up infinite contributions. [^2]: One can think of a class label as a winding number saying how many times a map winds around the target sphere. [^3]: Strictly speaking the equation holds for positive $n$. Negative values of $n$ are obtained via a parity transformation, since then $G\tilde{G}\rightarrow-G\tilde{G}$. Following the same argument as above, the action attains its minimum at $|n| \frac{8\pi^2}{g^2}$ for the field configuration which is anti self-dual, $G=-\tilde{G}$. Such a field configuration is called an anti-instanton. [^4]: $\alpha(1/2)=0.145873$ and $\alpha(1)=0.443307$ [^5]: This is clear since the expression (\[eq:naive-2-loop-running\]) is manifestly ignorant of the possible existence of a perturbative IR fixed point. [^6]: In fact, the first corrections to this computation come not from considering higher order interactions but from considering the 2-loop beta functions [@Diakonov:1983hh]. [^7]: Equivalently, we may look for solutions using the running coupling defined as a piecewise function, equal to the BZ running coupling above the energy $m$ and equal to the matched pure YM running coupling below $m$.
Mid
[ 0.6153846153846151, 32, 20 ]
[Interpretation of the updated guidelines for prevention of surgical site infection]. The Guideline for Prevention of Surgical Site Infection was published by the Centers for Disease Control more than 10 years ago. The Updated Recommendations for Control of Surgical Site Infections, published last year, were based on a large body of research results and focused on reduction in contamination, reduction in the consequences of contamination, and improvement of host defense. This article aims to review these updated guidelines so as to improve clinical practice and decrease the complications of surgical site infection.
High
[ 0.690751445086705, 29.875, 13.375 ]
Q: Text underneath text input I have this css http://jsfiddle.net/thiswolf/3GYY4/ and beneath each text input, I want to have some guiding text about that text input, like My current html and css look like this <!DOCTYPE html> <head> <title>Lorem ipsum text below form</title> <meta charset="utf-8" /> <style type="text/css"> .zseform{ width:300px; background-color:#E6E6FA; } label{ width:15%; float:left; } p{ background-color:#B0C4DE; } </style> </head> <body> <p> <label>Logo</label><input class="zseform" type="text" /> </p> <p> <label>City</label><input class="zseform" type="text" /> </p> <p> <label>Address</label><input class="zseform" type="text" /> </p> </body> </html> A: It should be as simple as placing an element after the input, setting it as display:block and setting the padding. Here's an example.
Low
[ 0.5372093023255811, 28.875, 24.875 ]
Determination of the Catterall classification in Legg-Calvé-Perthes disease. Fifty hips in forty-four children with Legg-Calvé-Perthes disease, treated at the Shriners Hospital for Crippled Children, San Francisco, were evaluated with a simplified method of Catterall's classification. Our data indicate that the Catterall rating changed in 40 per cent of the hips when they were classified before they had reached the fragmentation stage of Waldenström compared with only 6 per cent when they were classified after fragmentation had occurred.
High
[ 0.668485675306957, 30.625, 15.1875 ]
Brandon Nichols, deputy director of the Los Angeles County Department of Children and Family Services, revealed in an interview Monday that Anthony "said he liked boys," but Nichols declined to provide more details, including whom the boy told and when, the Los Angeles Times reported. Nichols said the criminal investigation of the deadly abuse is ongoing. Bobby Cagle, director at the Department of Child and Family Services told Eyewitness News that the department is investigating whether homophobia played a role in the boy's death. "One of the things that we have heard is that there may have been a motivation on the part of the man in the household regarding to the sexuality of the child, and so we're looking into that in a very deep way. Of course, that was an alleged factor in the Gabriel Fernandez case, so that concerns us and so we're looking at that angle as well as many others," Cagle said. Gabriel, 8, was murdered in 2013 after he was tortured for a long time and ultimately beaten to death. His mother and her boyfriend were convicted in the boy's killing. They were sentenced earlier this month. In Gabriel's case, there were also previous reports of abuse to DCFS. Department officials said it investigated 13 allegations of child abuse at Anthony's home between February 2013 and April 2016. "We know that the child had severe head injuries, including a brain bleed, contusions and bruises all over the body, really horrific kinds of injuries to the child," Cagle said. As authorities continue to investigate the suspicious death of a Lancaster boy, authorities say his sexuality may have been a motivating factor in his death. Cagle said the allegations included physical and sexual abuse. "The allegations ranged from sexual abuse back in 2013, which we substantiated, to physical abuse, which was unsubstantiated but then we substantiated twice for what's known as general neglect," Cagle said. Numerous calls were reportedly made by a teacher, a counselor and family members accusing Anthony's mother and her boyfriend of abuse. Officials are trying to determine why the fourth-grader wasn't removed from the home. "Was there something that should have been done that was not done? And we will not rest until we understand exactly what that was," Cagle said. That investigation will also examine the actions of case workers. "Are there people who intentionally did not follow the policy, that did not do their job in the way that we know it should be done? So we're looking at that as a possibility," Cagle said. L.A. County Supervisor Kathryn Barger introduced a motion Tuesday directing multiple agencies to take a closer look at contacts with Avalos' family and any systemic issues that may have prevented services for the boy. "You had teachers, you had family members, you had law enforcement come in contact and yet Anthony is now at the morgue," Barger said.
Mid
[ 0.581609195402298, 31.625, 22.75 ]
More thoughts on West, Obama and Malveaux I went back and watched the entire video of their appearance on Tavis's show, and then thought some on it. There is a tendency to react to this sort of thing out of anger, and I kinda was angry when I first started watching. But the more I listened, the more I calmed down, as I saw what really seemed to be at work. I didn't hear a single policy disagreement in the entire interview. Not one. What I did hear was a general complaint that Obama isn't claiming his blackness--historically or politically. That sort of talk makes me cringe, if only because it's so open to interpretation and can easily slide into a sort of lazy equivalence between lefty politics and blackness. Barack Obama could have stood up and quoted Booker T. Washington or George Schuyler and yet, I don't think that's what his critics are talking about. The specific charge seems to be essentially that Obama--for political reasons--neglected to mention Martin Luther King by name ("the preacher from Georgia" being demeaning), that he didn't mention Katrina, that he was--in Malveaux's words--"white-washing" his speech so as not to offend good white folk. Hmmm. I took the "preacher from Georgia" riff as poetic use of understatement. MLK's significance is such these days that, in America, he is the air, the symbol of purity that ideologues of all stripes reach for to launder their cause. But, hey, I love poetry, and I'm an Obama fan, so maybe I see too much. That said, it seems to me that an attempt at white-wash which mentions "the preacher from Georgia" and references the March on Washington, is a sorry effort indeed. Are we to believe that Obama's folks think that white voters--fresh off a week of having history drummed into them--are going to somehow miss these references? If this is the Obama campaign's idea of hoodwinking white folks, they should all be fired. I got clear when I heard Malveaux salute Clinton for referencing Harriet Tubman. The beef isn't about policy. It's not about what an Obama administration would do for black people. It's not even about Obama's blackness, per se. It's about the "black freedom struggle"--the struggle that West and Malveaux see themselves as a part of--taking credit. Look, I say the following as the son of a Black Panther, as a dude who learned critical thinking from the posthumous words of Malcolm X, who idolizes James Baldwin. None of that was as important to me as family. If not for my father, I'd have no idea what "the black freedom struggle" was. If not for my mother, I would have likely dropped out of school in tenth grade. Barack Obama is a black man who received his essential human values from three white people. In that he isn't the first. The great Frederick Douglass learned to read from his white slave-mistress. Booker T. Washington--father of organic black conservatism--was a biracial black man. Malcolm X, in some ways inheritor of that same legacy, was a multiracial black man. But Barack is living in another time, and is the progeny of more courageous people. Bear with me if I'm lapsing into either/or, I don't mean to. But this jingoist idea that the exclusive black tradition deserves primary credit for Obama's place in history feels simplistic. Half of that evening was spent praising King and the Civil Rights movement. King's kids spoke. John Lewis spoke, and Obama spent the last five minutes of his speech connecting his struggle with the Civil Rights struggle of the past. It was cool to see that. 
But Barack Obama was there to do his best to convince people to vote for him in November. I don't know what other criteria there really could be for judging his speech. The last thing I want to catch this dude doing, is the Electric Slide before the inauguration. Something else also--this keeps happening with a specific group of black people. I'd hate to think that they were conflating themselves and their events with black people and "black interests" at large. Did Obama's absence from the State of The Black Union say something about black people? Or did it say something about the event itself?
Mid
[ 0.6123348017621141, 34.75, 22 ]
Q: The argument of a proof based on double induction I am struggling to convince myself of this proof. Let me rewrite it so that the proof's structure and my interpretation of it are more apparent. Let $ S(k, n) $ be true when $ n! \mid P(k, n) $ where $ P(k, n) = (k+1)(k+2)\cdots(k+n) $. We want to show that $ S(k, n) $ holds for all $ k, n \in \mathbb{Z}^{+} $. (I consider only the positive integers to simplify the discussion.) Induction on $ n $: The base case $ S(k, 1) $ holds since $ 1! \mid (k + 1) $ for all $ k $ The inductive step on $ n $ is not demonstrated yet, but the inductive hypothesis $ H_0 $ is introduced: $ (n-1)! \mid P(k, n-1) $ Induction on $ k $: The base case $ S(0, n) $ holds since $ P(0, n) = n! $ The inductive step $ H_1 $ on $ k $ assumes $ n! \mid P(k, n) $ Consider $$ \begin{align} P(k+1, n) & = ((k+1)+1)((k+1)+2)\cdots((k+1)+(n-1))((k+1)+n) \\ & = [(k+2)(k+3)\cdots(k+n)](k+1) + [(k+2)\cdots(k+n)]n \\ & = P(k, n) + nP(k+1, n-1) \end{align} $$ The first term $ P(k, n) $ is divisible by $ n! $ by $ H_1 $ The second term $ nP(k+1, n-1) $ is also divisible by $ n! $: By $ H_0 $ we have $ (n-1)! \mid P(k+1, n-1) $ Then $ n $ times a multiple of $ (n-1)! $ is divisible by $ n! $ Therefore, $ S(k+1, n) $ holds I do not see what makes step 3.5.1 valid. How can one use $ S(k+1, n-1) $ during the induction step when neither $ H_0 \equiv S(k, n-1) $ nor $ H_1 \equiv S(k, n) $ are stated in terms of $ k + 1 $? Please note that I understand the inductive argument on $ [ k + n = z ] \to [ k + n = z + 1 ] $ as presented in the alternative answer. Such argument also holds for the proof in question, but this is not how the author structured it. According to this answer, the proof appears to use simple induction twice. See this proof for an example of such argument. From what I can see, it does not use $ \ell+1 $ during the induction step on the second variable $ \ell $. A: It may help to elaborate on the argument between 2 and 3. The inductive step on n is not demonstrated yet, but the inductive hypothesis $H_0$ is introduced: $(n−1)!∣P(k,n−1)$ for all $k \in \mathbb{N}$ I added the for all $k\in \mathbb{N}$. We are assuming that everything works perfectly for $n-1$ (this is the nature of induction). Perhaps a better way is to just frame this in terms of the proposition we're trying to prove: You've already shown that $S(k,0)$ holds for all $k$, so now we will assume inductively that $S(k,n-1)$ holds for all $k$. For the next step, you say we are going to use Induction on k but specifically what proposition are we proving here? As per the proof by induction format, we're trying to prove $S(k,n)$, where $n$ is some fixed value, and we know $S(j,n-1)$ is true for all $j$ (note the change of variable here, so as not to cause confusion). Now hopefully it's clear why you can use $S(j,n)$, where $j = k+1$: it's simply part of the inductive hypothesis.
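As a quick numerical sanity check (independent of the proof itself), both the key identity $P(k+1,n)=P(k,n)+nP(k+1,n-1)$ and the divisibility claim $n!\mid P(k,n)$ can be verified directly; a small Python sketch:

```python
from math import factorial, prod

def P(k, n):
    """P(k, n) = (k+1)(k+2)...(k+n); the empty product gives P(k, 0) = 1."""
    return prod(range(k + 1, k + n + 1))

for k in range(8):
    for n in range(1, 8):
        assert P(k + 1, n) == P(k, n) + n * P(k + 1, n - 1)  # step-3.5 identity
        assert P(k, n) % factorial(n) == 0                   # S(k, n)
```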
Mid
[ 0.544303797468354, 26.875, 22.5 ]
Strained aromatic oligoamide macrocycles as new molecular clips. Can one join both ends of a helix? A helical aromatic oligoamide was macrocyclized into a saddle-shaped bifunctional clip molecule that self-assembles into discrete circular dodecamers in the solid state and shows great potential for binding aromatic acid guests in solution. The cyclization step requires that the helix is only partly denatured in the reaction medium.
High
[ 0.6795366795366791, 33, 15.5625 ]
# testdb ## Description Sample PostgreSQL database document. ## Tables | Name | Columns | Comment | Type | | ---- | ------- | ------- | ---- | | [public.users](public.users.md) | 6 | Users table | BASE TABLE | | [public.user_options](public.user_options.md) | 4 | User options table | BASE TABLE | | [public.posts](public.posts.md) | 8 | Posts table | BASE TABLE | | [public.comments](public.comments.md) | 6 | Comments<br>Multi-line<br>table<br>comment | BASE TABLE | | [public.comment_stars](public.comment_stars.md) | 6 | | BASE TABLE | | [public.logs](public.logs.md) | 7 | audit log table | BASE TABLE | | [public.post_comments](public.post_comments.md) | 7 | post and comments View table | VIEW | | [public.post_comment_stars](public.post_comment_stars.md) | 5 | | MATERIALIZED VIEW | | [public.CamelizeTable](public.CamelizeTable.md) | 2 | | BASE TABLE | | [public.hyphen-table](public.hyphen-table.md) | 4 | | BASE TABLE | | [administrator.blogs](administrator.blogs.md) | 6 | admin blogs | BASE TABLE | | [backup.blogs](backup.blogs.md) | 5 | | BASE TABLE | | [backup.blog_options](backup.blog_options.md) | 4 | | BASE TABLE | | [time.bar](time.bar.md) | 1 | | BASE TABLE | | [time.hyphenated-table](time.hyphenated-table.md) | 1 | | BASE TABLE | | [time.referencing](time.referencing.md) | 3 | | BASE TABLE | ## Relations ![er](schema.svg) --- > Generated by [tbls](https://github.com/k1LoW/tbls)
Low
[ 0.49202733485193606, 27, 27.875 ]
729 F.2d 1441 In re Martin-Trigona (Anthony) NO. 82-5045 United States Court of Appeals,second Circuit. APR 11, 1983 1 Appeal From: S.D.N.Y. 2 AFFIRMED.
Low
[ 0.48780487804878003, 35, 36.75 ]
/* * Status and system control registers for Xilinx Zynq Platform * * Copyright (c) 2011 Michal Simek <[email protected]> * Copyright (c) 2012 PetaLogix Pty Ltd. * Based on hw/arm_sysctl.c, written by Paul Brook * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version * 2 of the License, or (at your option) any later version. * * You should have received a copy of the GNU General Public License along * with this program; if not, see <http://www.gnu.org/licenses/>. */ #include "qemu/osdep.h" #include "qemu/timer.h" #include "sysemu/runstate.h" #include "hw/sysbus.h" #include "migration/vmstate.h" #include "qemu/log.h" #include "qemu/module.h" #include "hw/registerfields.h" #ifndef ZYNQ_SLCR_ERR_DEBUG #define ZYNQ_SLCR_ERR_DEBUG 0 #endif #define DB_PRINT(...) do { \ if (ZYNQ_SLCR_ERR_DEBUG) { \ fprintf(stderr, ": %s: ", __func__); \ fprintf(stderr, ## __VA_ARGS__); \ } \ } while (0) #define XILINX_LOCK_KEY 0x767b #define XILINX_UNLOCK_KEY 0xdf0d REG32(SCL, 0x000) REG32(LOCK, 0x004) REG32(UNLOCK, 0x008) REG32(LOCKSTA, 0x00c) REG32(ARM_PLL_CTRL, 0x100) REG32(DDR_PLL_CTRL, 0x104) REG32(IO_PLL_CTRL, 0x108) REG32(PLL_STATUS, 0x10c) REG32(ARM_PLL_CFG, 0x110) REG32(DDR_PLL_CFG, 0x114) REG32(IO_PLL_CFG, 0x118) REG32(ARM_CLK_CTRL, 0x120) REG32(DDR_CLK_CTRL, 0x124) REG32(DCI_CLK_CTRL, 0x128) REG32(APER_CLK_CTRL, 0x12c) REG32(USB0_CLK_CTRL, 0x130) REG32(USB1_CLK_CTRL, 0x134) REG32(GEM0_RCLK_CTRL, 0x138) REG32(GEM1_RCLK_CTRL, 0x13c) REG32(GEM0_CLK_CTRL, 0x140) REG32(GEM1_CLK_CTRL, 0x144) REG32(SMC_CLK_CTRL, 0x148) REG32(LQSPI_CLK_CTRL, 0x14c) REG32(SDIO_CLK_CTRL, 0x150) REG32(UART_CLK_CTRL, 0x154) REG32(SPI_CLK_CTRL, 0x158) REG32(CAN_CLK_CTRL, 0x15c) REG32(CAN_MIOCLK_CTRL, 0x160) REG32(DBG_CLK_CTRL, 0x164) REG32(PCAP_CLK_CTRL, 0x168) REG32(TOPSW_CLK_CTRL, 0x16c) #define FPGA_CTRL_REGS(n, start) \ REG32(FPGA ## n ## _CLK_CTRL, (start)) \ REG32(FPGA ## n ## _THR_CTRL, (start) + 0x4)\ REG32(FPGA ## n ## _THR_CNT, (start) + 0x8)\ REG32(FPGA ## n ## _THR_STA, (start) + 0xc) FPGA_CTRL_REGS(0, 0x170) FPGA_CTRL_REGS(1, 0x180) FPGA_CTRL_REGS(2, 0x190) FPGA_CTRL_REGS(3, 0x1a0) REG32(BANDGAP_TRIP, 0x1b8) REG32(PLL_PREDIVISOR, 0x1c0) REG32(CLK_621_TRUE, 0x1c4) REG32(PSS_RST_CTRL, 0x200) FIELD(PSS_RST_CTRL, SOFT_RST, 0, 1) REG32(DDR_RST_CTRL, 0x204) REG32(TOPSW_RESET_CTRL, 0x208) REG32(DMAC_RST_CTRL, 0x20c) REG32(USB_RST_CTRL, 0x210) REG32(GEM_RST_CTRL, 0x214) REG32(SDIO_RST_CTRL, 0x218) REG32(SPI_RST_CTRL, 0x21c) REG32(CAN_RST_CTRL, 0x220) REG32(I2C_RST_CTRL, 0x224) REG32(UART_RST_CTRL, 0x228) REG32(GPIO_RST_CTRL, 0x22c) REG32(LQSPI_RST_CTRL, 0x230) REG32(SMC_RST_CTRL, 0x234) REG32(OCM_RST_CTRL, 0x238) REG32(FPGA_RST_CTRL, 0x240) REG32(A9_CPU_RST_CTRL, 0x244) REG32(RS_AWDT_CTRL, 0x24c) REG32(RST_REASON, 0x250) REG32(REBOOT_STATUS, 0x258) REG32(BOOT_MODE, 0x25c) REG32(APU_CTRL, 0x300) REG32(WDT_CLK_SEL, 0x304) REG32(TZ_DMA_NS, 0x440) REG32(TZ_DMA_IRQ_NS, 0x444) REG32(TZ_DMA_PERIPH_NS, 0x448) REG32(PSS_IDCODE, 0x530) REG32(DDR_URGENT, 0x600) REG32(DDR_CAL_START, 0x60c) REG32(DDR_REF_START, 0x614) REG32(DDR_CMD_STA, 0x618) REG32(DDR_URGENT_SEL, 0x61c) REG32(DDR_DFI_STATUS, 0x620) REG32(MIO, 0x700) #define MIO_LENGTH 54 REG32(MIO_LOOPBACK, 0x804) REG32(MIO_MST_TRI0, 0x808) REG32(MIO_MST_TRI1, 0x80c) REG32(SD0_WP_CD_SEL, 0x830) REG32(SD1_WP_CD_SEL, 0x834) REG32(LVL_SHFTR_EN, 0x900) REG32(OCM_CFG, 0x910) REG32(CPU_RAM, 0xa00) REG32(IOU, 0xa30) REG32(DMAC_RAM, 0xa50) REG32(AFI0, 0xa60) 
REG32(AFI1, 0xa6c)
REG32(AFI2, 0xa78)
REG32(AFI3, 0xa84)
#define AFI_LENGTH 3
REG32(OCM, 0xa90)
REG32(DEVCI_RAM, 0xaa0)
REG32(CSG_RAM, 0xab0)
REG32(GPIOB_CTRL, 0xb00)
REG32(GPIOB_CFG_CMOS18, 0xb04)
REG32(GPIOB_CFG_CMOS25, 0xb08)
REG32(GPIOB_CFG_CMOS33, 0xb0c)
REG32(GPIOB_CFG_HSTL, 0xb14)
REG32(GPIOB_DRVR_BIAS_CTRL, 0xb18)
REG32(DDRIOB, 0xb40)
#define DDRIOB_LENGTH 14

#define ZYNQ_SLCR_MMIO_SIZE 0x1000
#define ZYNQ_SLCR_NUM_REGS (ZYNQ_SLCR_MMIO_SIZE / 4)

#define TYPE_ZYNQ_SLCR "xilinx,zynq_slcr"
#define ZYNQ_SLCR(obj) OBJECT_CHECK(ZynqSLCRState, (obj), TYPE_ZYNQ_SLCR)

typedef struct ZynqSLCRState {
    SysBusDevice parent_obj;

    MemoryRegion iomem;

    uint32_t regs[ZYNQ_SLCR_NUM_REGS];
} ZynqSLCRState;

static void zynq_slcr_reset(DeviceState *d)
{
    ZynqSLCRState *s = ZYNQ_SLCR(d);
    int i;

    DB_PRINT("RESET\n");

    s->regs[R_LOCKSTA] = 1;
    /* 0x100 - 0x11C */
    s->regs[R_ARM_PLL_CTRL] = 0x0001A008;
    s->regs[R_DDR_PLL_CTRL] = 0x0001A008;
    s->regs[R_IO_PLL_CTRL] = 0x0001A008;
    s->regs[R_PLL_STATUS] = 0x0000003F;
    s->regs[R_ARM_PLL_CFG] = 0x00014000;
    s->regs[R_DDR_PLL_CFG] = 0x00014000;
    s->regs[R_IO_PLL_CFG] = 0x00014000;

    /* 0x120 - 0x16C */
    s->regs[R_ARM_CLK_CTRL] = 0x1F000400;
    s->regs[R_DDR_CLK_CTRL] = 0x18400003;
    s->regs[R_DCI_CLK_CTRL] = 0x01E03201;
    s->regs[R_APER_CLK_CTRL] = 0x01FFCCCD;
    s->regs[R_USB0_CLK_CTRL] = s->regs[R_USB1_CLK_CTRL] = 0x00101941;
    s->regs[R_GEM0_RCLK_CTRL] = s->regs[R_GEM1_RCLK_CTRL] = 0x00000001;
    s->regs[R_GEM0_CLK_CTRL] = s->regs[R_GEM1_CLK_CTRL] = 0x00003C01;
    s->regs[R_SMC_CLK_CTRL] = 0x00003C01;
    s->regs[R_LQSPI_CLK_CTRL] = 0x00002821;
    s->regs[R_SDIO_CLK_CTRL] = 0x00001E03;
    s->regs[R_UART_CLK_CTRL] = 0x00003F03;
    s->regs[R_SPI_CLK_CTRL] = 0x00003F03;
    s->regs[R_CAN_CLK_CTRL] = 0x00501903;
    s->regs[R_DBG_CLK_CTRL] = 0x00000F03;
    s->regs[R_PCAP_CLK_CTRL] = 0x00000F01;

    /* 0x170 - 0x1AC */
    s->regs[R_FPGA0_CLK_CTRL] = s->regs[R_FPGA1_CLK_CTRL]
                              = s->regs[R_FPGA2_CLK_CTRL]
                              = s->regs[R_FPGA3_CLK_CTRL] = 0x00101800;
    s->regs[R_FPGA0_THR_STA] = s->regs[R_FPGA1_THR_STA]
                             = s->regs[R_FPGA2_THR_STA]
                             = s->regs[R_FPGA3_THR_STA] = 0x00010000;

    /* 0x1B0 - 0x1D8 */
    s->regs[R_BANDGAP_TRIP] = 0x0000001F;
    s->regs[R_PLL_PREDIVISOR] = 0x00000001;
    s->regs[R_CLK_621_TRUE] = 0x00000001;

    /* 0x200 - 0x25C */
    s->regs[R_FPGA_RST_CTRL] = 0x01F33F0F;
    s->regs[R_RST_REASON] = 0x00000040;

    s->regs[R_BOOT_MODE] = 0x00000001;

    /* 0x700 - 0x7D4 */
    for (i = 0; i < 54; i++) {
        s->regs[R_MIO + i] = 0x00001601;
    }
    for (i = 2; i <= 8; i++) {
        s->regs[R_MIO + i] = 0x00000601;
    }

    s->regs[R_MIO_MST_TRI0] = s->regs[R_MIO_MST_TRI1] = 0xFFFFFFFF;

    s->regs[R_CPU_RAM + 0] = s->regs[R_CPU_RAM + 1] = s->regs[R_CPU_RAM + 3]
                           = s->regs[R_CPU_RAM + 4] = s->regs[R_CPU_RAM + 7]
                           = 0x00010101;
    s->regs[R_CPU_RAM + 2] = s->regs[R_CPU_RAM + 5] = 0x01010101;
    s->regs[R_CPU_RAM + 6] = 0x00000001;

    s->regs[R_IOU + 0] = s->regs[R_IOU + 1] = s->regs[R_IOU + 2]
                       = s->regs[R_IOU + 3] = 0x09090909;
    s->regs[R_IOU + 4] = s->regs[R_IOU + 5] = 0x00090909;
    s->regs[R_IOU + 6] = 0x00000909;

    s->regs[R_DMAC_RAM] = 0x00000009;

    s->regs[R_AFI0 + 0] = s->regs[R_AFI0 + 1] = 0x09090909;
    s->regs[R_AFI1 + 0] = s->regs[R_AFI1 + 1] = 0x09090909;
    s->regs[R_AFI2 + 0] = s->regs[R_AFI2 + 1] = 0x09090909;
    s->regs[R_AFI3 + 0] = s->regs[R_AFI3 + 1] = 0x09090909;
    s->regs[R_AFI0 + 2] = s->regs[R_AFI1 + 2] = s->regs[R_AFI2 + 2]
                        = s->regs[R_AFI3 + 2] = 0x00000909;

    s->regs[R_OCM + 0] = 0x01010101;
    s->regs[R_OCM + 1] = s->regs[R_OCM + 2] = 0x09090909;

    s->regs[R_DEVCI_RAM] = 0x00000909;
    s->regs[R_CSG_RAM] = 0x00000001;

    s->regs[R_DDRIOB + 0] = s->regs[R_DDRIOB + 1] = s->regs[R_DDRIOB + 2]
                          = s->regs[R_DDRIOB + 3] = 0x00000e00;
    s->regs[R_DDRIOB + 4] = s->regs[R_DDRIOB + 5] = s->regs[R_DDRIOB + 6]
                          = 0x00000e00;
    s->regs[R_DDRIOB + 12] = 0x00000021;
}

static bool zynq_slcr_check_offset(hwaddr offset, bool rnw)
{
    switch (offset) {
    case R_LOCK:
    case R_UNLOCK:
    case R_DDR_CAL_START:
    case R_DDR_REF_START:
        return !rnw; /* Write only */
    case R_LOCKSTA:
    case R_FPGA0_THR_STA:
    case R_FPGA1_THR_STA:
    case R_FPGA2_THR_STA:
    case R_FPGA3_THR_STA:
    case R_BOOT_MODE:
    case R_PSS_IDCODE:
    case R_DDR_CMD_STA:
    case R_DDR_DFI_STATUS:
    case R_PLL_STATUS:
        return rnw; /* Read only */
    case R_SCL:
    case R_ARM_PLL_CTRL ... R_IO_PLL_CTRL:
    case R_ARM_PLL_CFG ... R_IO_PLL_CFG:
    case R_ARM_CLK_CTRL ... R_TOPSW_CLK_CTRL:
    case R_FPGA0_CLK_CTRL ... R_FPGA0_THR_CNT:
    case R_FPGA1_CLK_CTRL ... R_FPGA1_THR_CNT:
    case R_FPGA2_CLK_CTRL ... R_FPGA2_THR_CNT:
    case R_FPGA3_CLK_CTRL ... R_FPGA3_THR_CNT:
    case R_BANDGAP_TRIP:
    case R_PLL_PREDIVISOR:
    case R_CLK_621_TRUE:
    case R_PSS_RST_CTRL ... R_A9_CPU_RST_CTRL:
    case R_RS_AWDT_CTRL:
    case R_RST_REASON:
    case R_REBOOT_STATUS:
    case R_APU_CTRL:
    case R_WDT_CLK_SEL:
    case R_TZ_DMA_NS ... R_TZ_DMA_PERIPH_NS:
    case R_DDR_URGENT:
    case R_DDR_URGENT_SEL:
    case R_MIO ... R_MIO + MIO_LENGTH - 1:
    case R_MIO_LOOPBACK ... R_MIO_MST_TRI1:
    case R_SD0_WP_CD_SEL:
    case R_SD1_WP_CD_SEL:
    case R_LVL_SHFTR_EN:
    case R_OCM_CFG:
    case R_CPU_RAM:
    case R_IOU:
    case R_DMAC_RAM:
    case R_AFI0 ... R_AFI3 + AFI_LENGTH - 1:
    case R_OCM:
    case R_DEVCI_RAM:
    case R_CSG_RAM:
    case R_GPIOB_CTRL ... R_GPIOB_CFG_CMOS33:
    case R_GPIOB_CFG_HSTL:
    case R_GPIOB_DRVR_BIAS_CTRL:
    case R_DDRIOB ... R_DDRIOB + DDRIOB_LENGTH - 1:
        return true;
    default:
        return false;
    }
}

static uint64_t zynq_slcr_read(void *opaque, hwaddr offset, unsigned size)
{
    ZynqSLCRState *s = opaque;
    offset /= 4;
    uint32_t ret = s->regs[offset];

    if (!zynq_slcr_check_offset(offset, true)) {
        qemu_log_mask(LOG_GUEST_ERROR, "zynq_slcr: Invalid read access to "
                      "addr %" HWADDR_PRIx "\n", offset * 4);
    }

    DB_PRINT("addr: %08" HWADDR_PRIx " data: %08" PRIx32 "\n", offset * 4, ret);
    return ret;
}

static void zynq_slcr_write(void *opaque, hwaddr offset,
                            uint64_t val, unsigned size)
{
    ZynqSLCRState *s = (ZynqSLCRState *)opaque;
    offset /= 4;

    DB_PRINT("addr: %08" HWADDR_PRIx " data: %08" PRIx64 "\n", offset * 4, val);

    if (!zynq_slcr_check_offset(offset, false)) {
        qemu_log_mask(LOG_GUEST_ERROR, "zynq_slcr: Invalid write access to "
                      "addr %" HWADDR_PRIx "\n", offset * 4);
        return;
    }

    switch (offset) {
    case R_SCL:
        s->regs[R_SCL] = val & 0x1;
        return;
    case R_LOCK:
        if ((val & 0xFFFF) == XILINX_LOCK_KEY) {
            DB_PRINT("XILINX LOCK 0xF8000000 + 0x%x <= 0x%x\n", (int)offset,
                     (unsigned)val & 0xFFFF);
            s->regs[R_LOCKSTA] = 1;
        } else {
            DB_PRINT("WRONG XILINX LOCK KEY 0xF8000000 + 0x%x <= 0x%x\n",
                     (int)offset, (unsigned)val & 0xFFFF);
        }
        return;
    case R_UNLOCK:
        if ((val & 0xFFFF) == XILINX_UNLOCK_KEY) {
            DB_PRINT("XILINX UNLOCK 0xF8000000 + 0x%x <= 0x%x\n", (int)offset,
                     (unsigned)val & 0xFFFF);
            s->regs[R_LOCKSTA] = 0;
        } else {
            DB_PRINT("WRONG XILINX UNLOCK KEY 0xF8000000 + 0x%x <= 0x%x\n",
                     (int)offset, (unsigned)val & 0xFFFF);
        }
        return;
    }

    if (s->regs[R_LOCKSTA]) {
        qemu_log_mask(LOG_GUEST_ERROR,
                      "SCLR registers are locked. Unlock them first\n");
        return;
    }
    s->regs[offset] = val;

    switch (offset) {
    case R_PSS_RST_CTRL:
        if (FIELD_EX32(val, PSS_RST_CTRL, SOFT_RST)) {
            qemu_system_reset_request(SHUTDOWN_CAUSE_GUEST_RESET);
        }
        break;
    }
}

static const MemoryRegionOps slcr_ops = {
    .read = zynq_slcr_read,
    .write = zynq_slcr_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};

static void zynq_slcr_init(Object *obj)
{
    ZynqSLCRState *s = ZYNQ_SLCR(obj);

    memory_region_init_io(&s->iomem, obj, &slcr_ops, s, "slcr",
                          ZYNQ_SLCR_MMIO_SIZE);
    sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);
}

static const VMStateDescription vmstate_zynq_slcr = {
    .name = "zynq_slcr",
    .version_id = 2,
    .minimum_version_id = 2,
    .fields = (VMStateField[]) {
        VMSTATE_UINT32_ARRAY(regs, ZynqSLCRState, ZYNQ_SLCR_NUM_REGS),
        VMSTATE_END_OF_LIST()
    }
};

static void zynq_slcr_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);

    dc->vmsd = &vmstate_zynq_slcr;
    dc->reset = zynq_slcr_reset;
}

static const TypeInfo zynq_slcr_info = {
    .class_init = zynq_slcr_class_init,
    .name = TYPE_ZYNQ_SLCR,
    .parent = TYPE_SYS_BUS_DEVICE,
    .instance_size = sizeof(ZynqSLCRState),
    .instance_init = zynq_slcr_init,
};

static void zynq_slcr_register_types(void)
{
    type_register_static(&zynq_slcr_info);
}

type_init(zynq_slcr_register_types)
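A minimal guest-side sketch of the lock/unlock protocol this model enforces. It is not part of the QEMU file above: the 0xF8000000 base is taken from the DB_PRINT strings, while the lock/unlock offsets and the XILINX_LOCK_KEY/XILINX_UNLOCK_KEY values (0x767B/0xDF0D) are assumptions from the Zynq-7000 TRM, since the REG32 lines defining them fall outside this excerpt:

```c
#include <stdint.h>

/* Assumed values (Zynq-7000 TRM); the matching REG32() definitions
 * sit in the part of the source file above this excerpt. */
#define SLCR_BASE      0xF8000000u
#define SLCR_LOCK      0x004u  /* writing 0x767B here sets LOCKSTA */
#define SLCR_UNLOCK    0x008u  /* writing 0xDF0D here clears LOCKSTA */
#define FPGA_RST_CTRL  0x240u  /* resets to 0x01F33F0F in the model */

static inline void slcr_write(uint32_t off, uint32_t val)
{
    *(volatile uint32_t *)(SLCR_BASE + off) = val;
}

void release_fpga_resets(void)
{
    slcr_write(SLCR_UNLOCK, 0xDF0D);  /* wrong key -> write is ignored */
    slcr_write(FPGA_RST_CTRL, 0x0);   /* accepted only while unlocked */
    slcr_write(SLCR_LOCK, 0x767B);    /* re-lock; LOCKSTA reads back 1 */
}
```

As the write handler shows, any register write other than SCL/LOCK/UNLOCK is silently dropped (with a guest-error log) while LOCKSTA is set, so guests must bracket configuration writes exactly like this.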
Low
[ 0.49433962264150905, 32.75, 33.5 ]
"( theme music playing )" "( phone ringing )" "THE "BOUQUET" RESIDENCE, THE LADY OF THE HOUSE SPEAKING." "DO I SOUND LIKE A CHINESE TAKE-AWAY?" "I AM A HIGHLY DESIRABLE PRIVATE RESIDENCE" "IN AN AREA OF OUTSTANDING NATURAL PROPERTY VALUES." "AND I'M WAITING FOR YOUR APOLOGY," "UNLESS, OF COURSE, THE THOUGHT OF BEAN SHOOTS" "AND CRISPY WON TON HAVE TOTALLY STIR-FRIED YOUR BASIC GOOD MANNERS." "NOW, KINDLY CLEAR THIS LINE." "THERE ARE PEOPLE OF SUBSTANCE IN THIS COMMUNITY" "WHO ARE PROBABLY QUEUING TO RING ME AT THIS VERY MOMENT." "YOUR SUGGESTION IS NOTED," "BUT I SEE LITTLE PRACTICAL MERIT" "IN HAVING THE TELEPHONE UP MY JUMPER." "I'VE JUST BEEN INSULTED ON MY OWN TELEPHONE." "I EXPECT THE OPPORTUNITIES ARE RARE" "OF BEING INSULTED ON OTHER PEOPLE'S." "I THINK IT'S TIME, RICHARD, THAT YOU COMPLAINED." "ABOUT WHAT?" "ABOUT THESE WRONG NUMBERS FOR THE CHINESE TAKE-AWAY." "YOU MUST RING THE AMBASSADOR." "WHAT AMBASSADOR?" "THE CHINESE, OF COURSE." "I DON'T THINK IT'S REALLY HIS PROVINCE." " THEN WHAT'S HE HERE FOR?" " PROBABLY FOR MORE THAN THAT." "THEY'RE A VERY ANCIENT CIVILIZATION." "OH, RICHARD, DON'T JUST SIT THERE," "GET DIRECTORY INQUIRIES." "THAT'S IT!" "FINISHED!" "NO MORE MEN." "ENOUGH'S ENOUGH." "IN THE END, THEY ALL DECEIVE YOU." "AFTER ALL I'VE DONE FOR THAT SWINE," "TO LEAVE ME FOR ANOTHER WOMAN," "I DON'T CARE IF SHE IS HIS WIFE." "WELL, IT'S OVER." "I'VE LEARNED MY LESSON." "I ONLY WANT ONE." "I WISH YOU'D FIND YOURSELF" "A NICE STEADY BLOKE." "GIVE YOUR HORMONES A CHANCE TO SETTLE." "STOP LIVING ON YOUR NERVES..." "AND MY FAGS." "MY HORMONES HAVE SETTLED!" "I'VE FINISHED-- NO MORE MEN!" "THE SWINE!" "HE SWORE HE'D LOVE ME TILL THE END OF TIME!" "I DIDN'T REALIZE HE MEANT CLOSING TIME." "GO ON THEN, READ ME SOME BOOK." ""'I LOVE YOU' HE SAID," "LIFTING HER GENTLY ONTO THE BALCONY." "'OH, JEREMY,' SHE SAID."" "OH, CHUFFIN' HECK." "HELLO!" "GOOD MORNING, MR. PENWORTHY." "GOOD MORNING, MRS. BUCKET." " "BOUQUET."" " WERE YOU GOING TOO FAST?" "IT'S THE MODERN DISEASE." "HAVE YOU HURT YOURSELF?" "I DO HOPE YOU'VE SUSTAINED NO INJURY." "NO, THANK YOU, I'M FINE." "YOUR WAVING DISTRACTED ME." "REALLY?" "I'D ASK YOU IN, ONLY MY HUSBAND'S BUSY" "ON THE TELEPHONE WITH THE CHINESE AMBASSADOR." "I WONDER IF I MIGHT ASK YOU" "NOT TO COUGH OR SNEEZE TOO LOUDLY." "MY HUSBAND'S ON THE TELEPHONE WITH THE CHINESE AMBASSADOR." "THEY'RE A VERY ANCIENT CIVILIZATION." "WELL, WHAT DID HE SAY?" "I HAD TO LEAVE A MESSAGE." "IT WAS AN ANSWERING MACHINE." "OH, WAS IT A CULTURED VOICE?" " YES." " WELL, THAT'S IT, THEN." "YOU'VE BEEN SPEAKING TO THE AMBASSADOR." "I HARDLY THINK SO." "NOW, DON'T CONTRADICT ME, RICHARD," "IT'S VERY TIRING." "YOU OUGHT TO BE VERY PLEASED THAT YOU'RE" "ON CHATTING TERMS WITH THE CHINESE AMBASSADOR." " WHO IS?" " WE ARE." "YOU LEFT A MESSAGE ON HIS MACHINE," "THEREFORE, HE WILL RING US BACK." "I MUST TELL MY SHERIDAN." "NO SIGN OF HER." "LET'S GET IN THE CAR" "AND AWAY BEFORE SHE SPOTS US." "YOU'RE GETTING PARANOID ABOUT HYACINTH." "WHICH SEEMS A VERY SENSIBLE WAY TO BE." " NOT SO LOUD." " EMMET, I'M AT MY OWN HOME" "IN BROAD DAYLIGHT." "I SEE NO REASON TO CREEP ABOUT." "I CAN GIVE YOU" "ONE DAMN GOOD REASON TO CREEP ABOUT." "NO, I'VE GOT A BETTER IDEA." "LET'S FREEWHEEL DOWN THE DRIVE" "AND START THE CAR IN THE STREET." " THERE ISN'T ANY SLOPE." " I'LL GIVE YOU A PUSH." " EMMET!" " IT'S WORTH IT, LIZ!" "BELIEVE ME, IT'S WORTH IT." "IF SHE SPOTS US SHE'LL WANT TO KNOW WHERE WE'RE GOING." "YOU'LL TELL HER BECAUSE YOU CAN'T LIE." "WHY IS IT?" 
"YOU'RE A GROWN WOMAN AND YOU CAN'T LIE?" "HAVING TROUBLE WITH THE CAR?" "I'LL GET RICHARD TO GIVE YOU A HAND." "THAT'S IF HE'S FINISHED HIS DIPLOMATIC CALL." "IT'S ALL RIGHT, HYACINTH, HONESTLY." "NO, I WON'T BE DISSUADED." "THAT'S WHAT FRIENDS ARE FOR." "* PEOPLE, PEOPLE WHO NEED PEOPLE *" "* ARE THE LUCKIEST PEOPLE... *" "QUICK, START THE CAR!" "IT'S TOO LATE NOW." "WE'LL HAVE TO GO THROUGH WITH IT." "I'LL GO WHERE THERE ARE NO MEN." "I'LL EMIGRATE" "SOMEWHERE DESOLATE," "WITH NONE OF THE CREATURE COMFORTS" "SCRUB THAT, TOO MUCH LIKE HOME." "MEN!" "HOW CAN YOU GO ON READING ROMANTIC DRIVEL" "WHEN YOU KNOW WHAT MEN ARE LIKE?" "IN THIS BOOK," "THEY'RE TERRIFIC." "JEREMY HAS HAIR LIKE GOLDEN CORN." "HE'S TALL AND SLENDER," "A MAGNIFICENT FIGURE ON A HORSE." "SO HOW COME I'M STILL IN LOVE WITH ONSLOW?" "YOU MUST BE COMING UNGLUED IN THE HEAD." " ANY MORE CRISPS?" " GET 'EM YOURSELF." "IT'S ALL RIGHT, I'LL GET THEM." "YOU'D THINK HIS HAIR WAS LIKE GOLDEN CORN." "SMOKEY BACON FLAVOR." "THERE'S NO SMOKEY BACON FLAVOR LEFT!" "OH, NICE." "IT COULD BE THE ALTERNATOR." "RIGHT." "TRY NOW, LIZ." "( engine starts )" " WELL DONE, RICHARD." " YES, THANK YOU VERY MUCH." "I DIDN'T DO MUCH, REALLY." "DON'T BE SO MODEST." "IT WAS VERY KIND OF YOU." " I DON'T THINK I DID ANYTHING." " YOU DID ENOUGH, DEAR." "AND NOW ELIZABETH AND EMMET" "CAN BE OFF TO WHERE THEY'RE GOING." "WHEREVER THAT IS." " ARE YOU GOING FAR?" " NO, NOT FAR." "WELL, THAT'S VERY MYSTERIOUS." "IS IT A VERY INTERESTING "NOT FAR"?" "IT'S JUST THE CHURCH HALL, HYACINTH." "OH, THE CHURCH HALL." "I DIDN'T KNOW THERE WAS A FUNCTION." "NO, NO, NO, NOT A FUNCTION, REALLY." "AH" "THE VICAR ASKED EMMET IF HE'D JUST" "J-JUST A BIT MUSIC FOR FULL-GROWN PEOPLE." "OH, EMMET, WHAT A GOOD IDEA." "RICHARD, WE'D BETTER GET ALONG." "MUSTN'T DELAY THESE PEOPLE." "I THOUGHT SHE'D PUSH HER WAY IN." "I COULD HAVE SWORN SHE'D WANT TO SING." "WELL, THERE YOU ARE." "YOU'RE BEING UNFAIR TO HER." "I'VE JUST HAD A WONDERFUL IDEA!" "WHY DON'T I JOIN YOU?" "YOU PLAY AND I'LL SING." "FOR THE OLD PEOPLE, BLESS THEM." "HOW THEY'LL ENJOY IT." "I'LL RING THE VICAR NOW" "AND I'LL SEE YOU AT THE CHURCH HALL." "( humming )" "* LA-DEE-DAH-- *" "OH, MAY I SPEAK TO THE VICAR, PLEASE?" "NOT AVAILABLE, IS HE BUSY?" "OH, RINGING THE BELLS, OH." "RINGS HIS OWN BELLS, HOW DEMOCRATIC." "OH, I CAN JUST PICTURE THE VICAR AT THE END OF HIS ROPE." "NOW, I ASSUME I'M SPEAKING TO THE VICAR'S WIFE." "YES, YOU REMEMBER ME, DEAR?" "HYACINTH "BOUQUET."" "THAT IS A BAD COUGH, DEAR." "HAS SOMETHING GONE DOWN THE WRONG WAY?" "I WANT YOU TO PASS A LITTLE MESSAGE TO THE VICAR," "HE'LL BE THRILLED." "WE HAVE A SURPRISE FOR THE OLD PEOPLE'S ENTERTAINMENT." "TELL THE VICAR THAT IT WON'T JUST BE PIANOFORTE," "I'VE VOLUNTEERED MY SERVICES TO SING." "YOU MUST TAKE SOMETHING FOR THAT COUGH, DEAR." "THE BUCKET WOMAN!" "( gasps ) HE'LL GO MAD" "WHEN I TELL HIM IT'S THE BUCKET WOMAN." "( bells ringing )" "THE BUCKET WOMAN!" "( bells jangle )" "( Hyacinth humming )" "* LA-- * WHICH SHALL I WEAR, DEAR?" "OH, I WISH YOU WOULDN'T GIVE ME THAT KIND OF RESPONSIBILITY." "YOU KNOW I VALUE YOUR OPINION, RICHARD." "SINCE WHEN?" " I'LL WEAR THIS ONE." " GLAD I SORTED THAT OUT FOR YOU." "AH,I'LLANSWERIT." "* LA-LA-DA-DEE *" "* LA-DOE-DA-DEE. *" "THE "BOUQUET" RESIDENCE, THE LADY OF THE HOUSE SPEAKING." "RICHARD?" "OH YES, HE'S HERE." "SHE'LL FIND OUT SOONER OR LATER." " WHY DID YOU TELL HER?" " I HAD TO TELL HER." "I MEAN, YOU WERE THERE." "SHE PRACTICALLY DRAGGED IT OUT OF ME." 
"ONE MORE HESITATION, SHE'D HAVE TORN OUT MY FINGERNAILS." "YOU COULD HAVE LIED." "WELL, YOU KNOW HOW BAD I AM AT LYING." "I'LL TEACH YOU." "OH, EMMET, JUST MAKE THE BEST OF IT." "IT'S FOR THE OLD PEOPLE." "OF WHOM I'M GOING TO BE ONE BEFORE THIS IS OVER." "WELL, BE FAIR TO HER." "SHE'S GOT A RICH STRONG VOICE." "IT'S NOT THE VOICE," "IT'S WHO'S IN CHARGE OF IT." "( humming )" "OH, WHO WAS IT, DEAR?" "IT WAS THE HEAD OF MY DEPARTMENT." "OH, THAT'S NICE, DEAR." "THEY'RE THINKING OF OFFERING" " SOME PEOPLE EARLY RETIREMENT." " HMM?" "HE WANTS TO KNOW HOW I FEEL" "ABOUT EARLY RETIREMENT." "OH, RICHARD!" "THAT MEANS WE'D BE TOGETHER ALL DAY AND EVERY DAY!" "( singing )" "( dramatic music playing on television )" "WELL, HOW DO I LOOK?" "GREAT." " HOW DO I LOOK?" " FINE." "THERE'S ONLY THE DOG PAYING ANY ATTENTION." "DID SOMEBODY TURN THE SET OFF?" "ROSE TURNED THE SET OFF." "THANK GOD FOR THAT, I THOUGHT I'D GONE BLIND." "COULD YOU SPARE ME ONE MOMENT OF YOUR TIME?" "IT'S IMPORTANT TO ME." "HOW DO I LOOK?" "FIGURE IN BLACK." "I'VE SEEN IT SOMEWHERE." "WEREN'T YOU IN FRANKENSTEIN'S HOUSE OF HORRORS?" "( TV playing )" "WHAT ARE YOU DOING ALL DRESSED IN BLACK?" "PRACTICING." "I'M TAKING THE VEIL," "I'M GOING TO BE A NUN." "YOUR SKIRT'S TOO SHORT." "I'LL HAVE IT LENGTHENED." "TALK ABOUT POACHER TURNING GAMEKEEPER," "THEY'LL NEVER TAKE YOU FOR A NUN." "THEY MIGHT IF I CAN GET" "A RECOMMENDATION FROM A CLERGYMAN." "YOU DON'T EVEN KNOW A CLERGYMAN." "OH, I MET THAT DISHY VICAR AT OUR HYACINTH'S." "THEY WAY YOU THREW YOURSELF AT HIM," "HE'LL NEVER RECOMMEND YOU FOR A NUN." "YOU'RE TOO EMOTIONAL, OUR ROSE." "OH, I'VE TAKEN CARE OF THAT." "I'VE TAKEN A TRANQUILIZER." "IT SHOULD SLOW ME DOWN" "TO A MORE RELIGIOUS SPEED." "I HOPE YOUR BRAKES DON'T FAIL IN FRONT OF THAT VICAR." "EARLY RETIREMENT..." "RICHARD, THE DOOR." "NO, DEAR, THIS DOOR." "RICHARD, WHAT ON EARTH'S THE MATTER WITH YOU, DEAR?" "THE MATTER WITH ME?" "NOTHING." "WHY SHOULD ANYTHING BE THE MATTER?" "( horn blares )" "I SHALL START WITH A FEW OF THE OLD FAVORITES." "THEY LOVE THE OLD FAVORITES." "SHERIDAN ALWAYS LOVED THE OLD FAVORITES." "WE HAVE TO HAVE A TALK, HYACINTH," "ABOUT EARLY RETIREMENT." "AND FOR A FINALE I THINK A MEDLEY" "FROM "THE SOUND OF MUSIC."" "TURN LEFT HERE, RICHARD." "FOR THAT I SHALL WEAR MY AUSTRIAN HAT" "THE ONE WITH THE FEATHER." "I WONDER IF EMMET HAS ANY LEATHER SHORTS." "A TALK, I MEAN WITH ME TALKING, TOO." "HE COULD HAVE, YOU KNOW." "ANYONE WHO'S IN AN OPERATIC SOCIETY" "SHOULD HAVE SOME LEATHER SHORTS." "OF COURSE HE'S PROBABLY LEFT THEM BEHIND." "YOU COULD GO AND FETCH THEM FOR HIM, RICHARD." "TURN LEFT, DEAR." "YOU'RE GOING TO PROMISE ME THAT WHEN WE HAVE OUR TALK," "THAT YOU'LL MAKE AN EFFORT AND TRY TO LISTEN" "WHEN YOU'RE NOT ACTUALLY SAYING ANYTHING." "RICHARD, ARE YOU LISTENING?" "ARE YOU PAYING ATTENTION?" " YOU'VE NOT BEEN LISTENING." " TURN RIGHT, DEAR." "WHAT ARE WE DOING HERE?" "YOU SAID TURN RIGHT." "I MEANT RIGHT" "AT THE NEXT JUNCTION." "YOU WERE GOING TO ALLOW ME TO GO ALL THAT WAY" "WITHOUT FURTHER INSTRUCTIONS." "I WANT YOU TO START REVERSING NOW, RICHARD." "THIS IS NOT THE KIND OF AREA" "IN WHICH I WISH TO BE SEEN PARKING." "Vicar:" "RIGHT, I'M OFF." " DON'T I GET A KISS?" " OF COURSE YOU GET A KISS." "KEEP AWAY FROM THE LADIES." "NOW, HOW CAN I KEEP AWAY FROM THE LADIES?" "98.5% OF MY CONGREGATION ARE LADIES." "AND THEY ALL ADORE YOU." "INCLUDING ME." " YOU'RE JUST JEALOUS." " OF COURSE I'M JEALOUS." "KEEP AWAY FROM THE LADIES." "I MUST GO." "AH!" 
"NOW WHERE ARE THOSE DATES FOR THE YOUTH CLUB?" "WHY THE VICARAGE?" "I THOUGHT YOU WERE SINGING AT THE CHURCH HALL." "WELL, I THOUGHT I'D ASK THE VICAR" "IF THERE WERE ANY LITTLE FAVORITES HE WISHES ME TO INCLUDE." "IT'S THE SORT OF COURTESY HE WOULD EXPECT FROM ME." "( groans )" "HAVEN'T YOU FORGOTTEN SOMETHING, DEAR?" " HMM?" " THE DOOR." "EARLY RETIREMENT." "HMM?" "OH, THAT WAS QUICK." "YOU SAID, "KEEP AWAY FROM THE LADIES."" "OH, I SEE." "I MEANT KEEP AWAY FROM ALL OF THEM, NOT JUST THAT ONE." "TELL HER I'VE BEEN CALLED AWAY." " WHERE TO?" " VLADIVOSTOK." "PULL YOURSELF TOGETHER, RICHARD." "WHATEVER'S THE MATTER WITH YOU?" "REALLYWANT TO KNOW?" "OF COURSE I WANT TO KNOW." "WHATEVER CONCERNS YOU CONCERNS ME." "VERY WELL THEN," "I'LL TELL YOU." "THE THING THAT'S BOTHERING ME" "YOU SEE HOW MUCH BETTER IT IS WHEN WE TALK THESE THINGS OUT." "I HAVEN'T STARTED YET." "THAT'S WHAT WIVES ARE FOR" "TO LISTEN." "WELL," "YOU THINK OF ALL THE TERRIBLE THINGS THAT CAN HAPPEN TO YOU" "( bells ringing )" "OH, MY GOODNESS, LOOK AT THE TIME." "WE MUSTN'T KEEP EMMET WAITING, HE'LL WANT TO REHEARSE." "I'LL TALK TO THE VICAR LATER." "WE MUST GO STRAIGHT TO THE CHURCH HALL!" "* DO-DO-DO-DOOT-- *" "YOU CAN'T HIDE THERE ALL DAY." "YOU'RE RIGHT, I'D BE SAFER IN THE KITCHEN." "I'LL HAVE ANOTHER CUP OF COFFEE." "JUDGING BY THE LENGTH OF HER SKIRT," "IT MUST BE ROSE UNDERNEATH THAT VEIL." "WHAT WOULD ROSE BE DOING HERE?" "WOBBLING AS IF SHE'S BEEN DRINKING," "WELL, SHE CAN WOBBLE OFF SOMEWHERE ELSE." "I WILL NOT HAVE A SISTER IN BLACK" "WOBBLING INTO OUR CHURCH." "ROSE!" "INCREDIBLE." "YOU LOOK EXACTLY LIKE MY SISTER HYACINTH." "ROSE, HAVE YOU BEEN DRINKING?" "I HAVEN'T BEEN DRINKING." "I TOOK A PILL" "AND IT SEEMS TO HAVE GONE STRAIGHT TO MY KNEES." "I WISH WE COULD SAY THE SAME ABOUT YOUR SKIRT." " WHY ARE YOU HERE?" "!" " I WANT TO SEE THE VICAR." "YOU'RE IN NO CONDITION TO SEE THE VICAR." "I WANT TO BE A NUN." "PUT ME DOWN!" " I WANT TO BE A NUN!" " RICHARD, GIVE ME A HAND." "FUNNY HOW YOU MISHEAR THINGS." "I COULD HAVE SWORN SHE SAID SHE WANTED TO BE A NUN." " I DO!" "I DO!" " WE'LL HAVE TO TAKE HER HOME." "THERE'S NO TIME." "OH, RICHARD, DO REMEMBER WHERE YOU ARE." "WE HAVE TO HIDE HER IN THE HALL UNTIL SHE PULLS HERSELF TOGETHER." "PULL MYSELF TOGETHER?" "GOOD GRIEF, OUR HYACINTH," "YOU'RE NOT EVEN SATISFIED" "WHEN A PERSON WANTS TO BE A NUN." "HOW TOGETHER DO YOU HAVE TO GET?" "Hyacinth:" "NO, DON'T GET IN THE WAY, RICHARD." "OH, MIND WHAT YOU'RE DOING, DEAR!" "ROSE!" "I KNOW IT'S BEEN A LIFETIME'S DEDICATION," "BUT I WISH YOU'D TRY TO BREAK THIS HABIT" "OF WANTING TO LIE DOWN EVERYWHERE." "DO TRY AND STAY ON YOUR FEET." "KNEES!" "IF I'M GOING TO BE A NUN I SHOULD BE ON MY KNEES." "IT'S ME THAT WILL BE ON MY KNEES." "OH, RICHARD WANTS TO BE A NUN, TOO." "SHE'S SURPRISINGLY HEAVY" "FOR A SHORT-SKIRTED PERSON." "UHH!" "HOW MANY PILLS DID SHE TAKE?" "( giggling ) I'LL TELL YOU SOMETHING," "THIS BEING A NUN MAKES YOU FEEL REALLY GOOD." "I FEEL UPLIFTED!" "( giggling )" "LOOK, POP HER IN THE STORE CUPBOARD," "UNTIL SHE GETS BACK THE USE OF HER LEGS." "I CAN'T HAVE HER SEEN LIKE THIS." "OH, DEAR." "OH, HYACINTH!" "ELIZABETH, DEAR," "ABOUT OUR LITTLE CONCERT," "I THOUGHT I'D BEGIN WITH SOMETHING CLASSICAL." "I THINK EMMET'S ALREADY GOT A PROGRAM IN MIND." "OH GOOD, AND THEN A SELECTION FROM "THE SOUND OF MUSIC"" "AND FINISH WITH "ANNIE, GET YOUR GUN."" "I'M NOT SURE THAT EMMET WAS" "QUITE THINKING ALONG THOSE LINES." "OH, HE'LL LOVE IT." "EVERYBODY LOVES MY ANNIE OAKLEY." 
"I THINK I'D BETTER WARN HIM" "TELL HIM." "RICHARD, HOW DO I LOOK, DEAR?" "I KEEP SEEING YOU AS A KIND OF RECURRING MOTIF" "RUNNING THROUGH MY EARLY RETIREMENT." "THAT'S NICE, DEAR." " AND THE HAT, HOW'S THE HAT?" " YES, THAT, TOO." "SHE'S GONE!" "SHE'S GONE!" "SHE WAS HERE." "SHE WAS HEADING THIS WAY, AND NOW SHE'S GONE." "AND STILL THERE ARE PEOPLE WHO REFUSE" "TO BELIEVE IN THE POWER OF PRAYER." "OFF YOU GO, THEN." "AND KEEP AWAY FROM THE LADIES." "( off-key ) * WITH A GUN *" "* WITH A GUN *" "NO, NO!" "LOOK, SUPPORT ME, LOUDER." "PLUS FORTE!" "* WITH A GUN *" "* WITH A GUN-- * NO, IT'S TOO LOUD, NOW." "PIANISSIMO." "* OH, YOU CAN'T GET A MAN *" "* A MAN-- A MAN-- *" "ARE YOU SURE YOU'RE PLAYING THE RIGHT NOTE?" "I SUPPOSE IRVING KNEW WHAT HE WAS DOING." "* IF I WENT TO BATTLE *" "* WITH SOMEONE'S HERD OF CATTLE *" "* THERE'D BE ST-- * NO!" "HAVE YOU LOST YOUR PLACE AGAIN?" "I THOUGHT YOU HAD." "FROM THE BEGINNING." "* IF I WENT TO BATTLE *" "* WITH SOMEONE'S HERD-- * KEEP IT GOING!" "FASTER!" "* WHEN IT WAS DONE *" "* THAT IF I SHOT THE HERDER I'D HOLLER BLOODY MURDER-- *" "LOOK, YOU TOLD ME HE'D BEEN CERTIFICATED." "DA CAPO, BACK TO THE BEGINNING, PLEASE." "NO, NO!" "ENOUGH!" "THAT'S IT!" "I'M ONLY FLESH AND BLOOD." "I CAN'T TAKE IT ANY LONGER!" "PSST!" "WHAT IS IT HE CAN'T TAKE ANY LONGER?" "THE CHAIR." "HE'S GOT THE WRONG SIZE CHAIR?" "HE'S VERY PARTICULAR ABOUT HIS CHAIR." "IT AFFECTS THE STYLE OF HIS PLAYING." "WHY DON'T YOU GET ANOTHER TYPE OF CHAIR." "( whispering ) TAKE A WALK OUTSIDE." "YES." "YES, I'LL" "I'LL GET ANOTHER CHAIR!" "OH, SHUT UP!" "I'M SO SORRY ABOUT THE INTERRUPTION, HYACINTH." "OH, NO APOLOGY NECESSARY." "YOU DON'T HAVE TO EXPLAIN TO ME" "ABOUT THE LITTLE ERUPTIONS OF THE ARTISTIC TEMPERAMENT." "I'M THE FIRST TO UNDERSTAND ABOUT THE ARTISTIC TEMPERAMENT." "RICHARD WAS SAYING SO, ONLY THIS MORNING" "TO THE CHINESE AMBASSADOR." " ( giggling )" " ROSE!" "SHE WAS IN THE ROOM WITH THE CHAIRS!" "( screams )" "ROSE!" "PUT MY ACCOMPANIST DOWN AT ONCE!" "THIS IS NO WAY TO BEGIN YOUR VOCATION!" "THE BUCKET WOMAN!" "AH, VICAR!" "VICAR, MY SISTER'S HAD A CONVERSION," "PROFOUNDLY." "WHY DON'T YOU PLAY "MISTY" FOR ME?" "LET'S PUT HER BACK IN THE CUPBOARD,AGAIN," "SHE CAN MEDITATE IN THERE FOR A WHILE." "HAS ANYONE SEEN MY HUSBAND?" "HE'S WANTED ON THE PHONE." "( crashing, clanking )" "OH, VICAR!" "EXCUSE ME!" "WELL!" "IT'S ALL RIGHT." "I'M GOING TO BE A NUN." "( theme music playing )"
Mid
[ 0.55578093306288, 34.25, 27.375 ]
Deal includes an option for a French theatrical release via Bac Films, said ICO's Estelle Jaugin. Now in post, and one of Spain's big end-of-year local bows, "Body" is sold by DeAPlaneta, which screens a nine-minute promo at this week's Spanish Film Screenings. "Body" stars Belen Rueda, who toplined "Orphanage" and "Eyes." Jose Coronado ("No Rest For the Wicked") plays a detective searching for the cadaver of a femme fatale (Rueda, seen in flashback), which has gone missing from a morgue. "The package -- the producers, their international films, the cast, the promo" -- clinched the pre-buy, Jaugin said. DeAPlaneta plans to show a final cut of "Body" at the Toronto Film Festival. It will be completed by early October, said DeAPlaneta's Gorka Bilbao. The three-year-old ICO, which is backed by an investment fund, works closely with Bac Films, giving it multiple distribution options. These include renewing French rights on lapsing Bac titles, buying library titles previously owned by other distributors, acquiring all-rights to films on which Bac can handle theatrical, or buying TV, DVD and VOD rights, sometimes, but not necessarily, to Bac movies, Jaugin said.
Mid
[ 0.612980769230769, 31.875, 20.125 ]
mgo.licio.us

"The face of the operation is Briatore (referred to exclusively in the film by his colleagues and angry, chanting detractors as "Flavio"), an anthropomorphic radish who spends most of his time at QPR plotting to fire all of the managers."

At press time, Harbaugh had sent Michigan's athletic department an envelope containing a heavily annotated seating chart, a list of the 63,000 seat views he had found unsatisfactory, and a glowing 70-page report on section 25, row 12, seat 9, which he claimed is "exactly what the great sport of football is all about."

Why Would Notre Dame Drop Michigan?

So an hour before the kickoff of the annual Michigan vs Notre Dame game the other night, Michigan's Athletic Director David Brandon was handed a letter from Notre Dame. When he opened it the next day, he learned that Notre Dame was canceling the annual series between the two schools after their meeting in 2014..... This is sad for me, because I live 20 minutes from Notre Dame, and I'm a HUGE Michigan fan. The result of this is that the chance I get for yearly bragging rights in my community is gone. It's also sad because Notre Dame and Michigan are two of the oldest football traditions in the country, and they've got the oldest rivalry in the nation - which is, like .... awesome! So Notre Dame is pulling the plug on something I think is pretty cool and pretty important to me. Why would they do this? They say that it's to protect their important coastal rivalries. I think they're lying through their teeth when they say this.

---

Well, Notre Dame - a school with a proud history of being independent in football (which makes absolutely no sense to me!) - is looking to protect its brand. Now hopefully you're asking, "What does that mean?!?" Well, it's about visibility and recruiting. College sports are a really weird way that schools build their reputation and influence. Example: Pennsylvania State University. In the 1960s Penn State was a rather insignificant school, but in the next few decades, Joe Paterno, through the power of his football tradition, built up the school's reputation. Today, "State College" is one of the more respected academic traditions in the country, a member of the prestigious AAU (a collection of the top research institutions on the continent), and a national brand. All of this was made possible by the fame and the cash flow brought in by the football team. Notre Dame, in much the same way as Penn State, has been propped up by its football tradition. The exploits of their traveling football team in the early part of the 20th century put this midwest school in front of the nation. They would play anybody, anywhere. As a result, Notre Dame has strong connections with major cities on either coast, and many of the students that attend the school are from states far away from it. And you cannot separate the rise of their academic tradition to a top 20 school from their football tradition.

---

Now in the past few decades Notre Dame's football tradition has become a bit ..... stagnant. They have not finished in the top 25 for the past 6 seasons, they have failed to win a BCS game since the BCS was started in 1998, and they've failed to win a National Championship since 1988. Some have dubbed the phrase, "Notre Dame, returning to glory since 1993." Winning 1 National Championship in the past 34 years is something that is tough for the proud alumni of the school, a fact drilled deep into the awareness of many of the alumni from Notre Dame.
They insist that their school do everything possible to return their alma mater to the level it once was. They have gone through multiple coaches looking for the man who can be their messiah; the chosen one capable of winning it all. There have been several reasons why their brand has suffered; academics, location, the number of schools getting on TV, and the rise of the SEC have all contributed to Notre Dame's dip in prominence. These factors have weakened Notre Dame. There is also the way that the National Championship teams build their schedule. You want to have a strong schedule, yet you need to win the majority of your games. Notre Dame hasn't been able to do these things in recent years. Either they have played teams that were much superior to them in strength, they have lost to their rivals, or they have played teams that were so poor that it did not prepare them/boost their strength of schedule.

---

Notre Dame has always fiercely maintained their independence from a conference. The main reason is that joining one would limit their influence. The problem with the major football conferences is that they end up being tied down to a geographical region. (The SEC mainly recruits students to their schools from the southeast, the Pac(ific) 12 recruits the west coast, the Big Ten (12) recruits the midwest, and the Big 12(10) recruits Texas.) Notre Dame knows this, and doesn't want to become geographically limited; they want to make sure that they maintain their national brand. Notre Dame is located in the midwest part of the country, near Chicago (the region's largest metropolitan area). They are very visible in this city, so much so that they aren't worried about recruiting in their own backyard. They are comfortable with their fan base in the middle area of the country. To be present on the west coast, Notre Dame has played Stanford and Southern Cal. They rotate the years that they play them, so every year Notre Dame makes a trip out to California. They have also been affiliated with the Big East for the past few decades; full members in basketball & the Olympic sports and playing a number of Big East opponents in football. Thus, Notre Dame is visible on both coasts. In the past few years, Big East football has become diluted. Many of their traditional football programs have left the conference, and Notre Dame has been left to schedule a number of weak teams instead. As a result, ND has chosen to change its conference relationship to an East Coast conference with some football muscle: the Atlantic Coast Conference (ACC). Notre Dame will now play 5 or 6 games a year against ACC opponents. These ACC opponents will be quality football teams stretched up and down the East Coast. So their move to the ACC is good for their level of competition AND it gives them a presence on that coast. It's a win-win for Notre Dame. This year, Notre Dame is already playing 4 ACC schools, so to add another game or two against these conference teams means that Notre Dame will need to drop one of its non-coastal, midwest opponents; they chose to drop Michigan.

---

The problem with Notre Dame saying they are stopping the Michigan rivalry because they value their more important coastal rivalries is that Michigan isn't the only non-coastal, midwest school that Notre Dame plays; Michigan State and Purdue are also regular opponents.
So to say that Notre Dame canceled their series with Michigan simply because they're looking to protect their coastal reputation - which was Notre Dame's stated reason for dropping the series - is to miss the point. There are three schools they could have chosen to stop playing. Now if you look at these three schools and their football rivalry with Notre Dame you'll see something else.

Purdue: Notre Dame has dominated its series with Purdue. Since 1970, Purdue has only beaten the Irish 10 times; only 2 of these wins being in South Bend! (This includes an 11-game winning streak by Notre Dame.) This rivalry has been completely one-sided.

Michigan State: Michigan State is viewed as a thorn in the side of the Irish. They're the pesky underdog that usually gives them fits. MSU has always been a 2nd level program in the midwest, surviving on the football players that were rejected by the region's elite programs. The series is a bit more even than the ND/Purdue series (with Notre Dame winning 2/3rds of the games), yet it's still a series where Notre Dame is the favorite.

Michigan: In the non-coastal midwest region, there are three big dogs in the football world: Notre Dame, Ohio State, and Michigan. Not just the biggest in the region, they're three of the biggest football traditions in the country! They are near the top of the totem pole in wins, national championships, budgets, stadiums, Heisman Trophy winners, etc. etc. etc. If there is a stat that can be compared, these three schools are among the leaders in it. While Ohio State and Notre Dame don't have much of a rivalry, Michigan and Notre Dame have a very fierce rivalry that stretches back to the earliest days of organized college football. Some students from Michigan were the first to teach ND students how to play football. Michigan was ND's first opponent in 1887 (an 8-0 UofM win). And Notre Dame was first described as "the Fighting Irish" by a Michigan newspaper. There is also a long period of time between 1909 and 1978 when the two schools refused to play each other. Since resuming playing one another (in 1978), Notre Dame and Michigan have played 29 games, with each team winning 14 and one game ending in a tie in 1992. It's as even as it possibly could be. And while both teams like to think they're a better tradition, they're the same tradition; Notre Dame == Michigan. [Also, we should note that since resuming this rivalry, Notre Dame has only won one national championship. 10 of their 11 championships were won during the years the teams did not play each other (1909-1978).] So what we see is that Notre Dame canceled the rivalry with the midwest rival they're equal with, while keeping the rivalries that they dominate.

---

I think we should see this move by Notre Dame as nothing short of the Irish cutting the strongest of their non-coastal, midwestern rivals as they amp up their strength of schedule by moving into a relationship with the ACC. This has nothing to do with protecting their coastal allegiances. They want to avoid a strength of schedule that will limit their ability to compete consistently for the national championship.

that NBC is cool with this. When I say cool with this, I mean willing to pump millions of dollars into the school. With the Michigan series, NBC was guaranteed either a USC game at home or Michigan - both huge traditional football powerhouses with large fan and alumni bases. Now NBC is guaranteed USC on odd-numbered years and uh Stanford on the evens?
I know ND may demand that FSU be scheduled instead, but I still think thats a step down in potential tv draw from Michigan. Also, I thought the ACC was choosing the scheudule each year. I know I hate NBC because of the announcer and how they manage to shove a commerical into every posession change and heck after every kickoff Saturday. Also ND broke up with Michigan the same way a 5th grade girl does with her boyfriend. Calling LSU freak. I don't think FSU goes for that. They already play Miami, FL and Florida every year in non-conference play. Since ND is a non-conferernce game in terms of the standings, I can't believe FSU would ok an annual game with the Fighting Irish. I sense a certain "in your face" component to dropping Michigan the way they did. From the perspective of viewership, what ND did makes zero sense, as the game gets very significant national coverage. Even with NBC doing the game this year, this was the game that drew the largest viewership by a long shot. The prime-time matchup helped make the NBC telecast the highest rated college football game of the weekend. According to USA Today, the game drew a 4.0 overnight rating and according to sportsmediawatch.com, those numbers were the second lowest in the series since the 2007 game on ABC (2.7). Michigan’s last-second win over Notre Dame last season scored a 4.8 overnight for ESPN. While it may not make immediate sense to NBC, with the Irish not being in the National Championship picture for a long time, maybe the corporates realize that ND can only beat UM 1 out of every 4 years. That early season loss is usually memorable (heartbreaking, ass-kicking, or both). It stays in the minds of recruits and pollsters. Eliminating UM bu t keeping the West coast teams maintains the Irish exposure there. Midwest exposure may not be that important because that is where ND lives. I'm surprised about NBC as well. Last weekends game had the highest ratings ever for an Irish primetime slot. Not the USC game in 2005, or any others they had since the NBC contract, ours. I mean the game had 6.4 million viewers, which crushed the ESPN game by about 1.5 million people Sorry, there are some interesting facts and trivia in there, and I do agree with you. Notre Dame is overhauling their scheduling playbook in an effort to make the Final Four. However their logic is flawed, and the ACC played right into it. What I don't understand is this: at the end of the 2014 season a selection committee is going to choose four teams to play in the semifinal game. By all accounts these four teams will probably be conference champions, with two possible exceptions: A situation like Alabama last year where a conference runner-up simply outshines all other candidates; or a Notre Dame team with an 11-1 or 12-0 record. So every year the "pool" of candidates for final four is going to be one of the five major conference champions, a really, really good runner up that could be considered better than all but one or two conference champs, and ND (or in a flukey year, an unbeated MWC or Big East champ). That's 7-8 teams vying for 4 spots. On the surface it seems like ND might be trying to reduce that "pool" by 1. Are they really thinking that the committee will consider the ACC champ OR Notre Dame? What if Notre Dame and Florida State both go 11-1, and weren't on each other's schedule so there was no head-to-head game. What if the committee takes the SEC, Big 12 and Pac 12 champs and ND and FSU or vying for that fourth spot? How does the ACC feel about this? 
Anyway, it seems like ND's move to the ACC is driven by the glaring proof over the last 16 years of the BCS that they will never sniff the national champtionship game again as a true independent. They think that this half-baked conference affiliation will help their cause with this committee, which now holds all the cards. I just don't understand fully how they think aligning with the ACC helps. Where it WILL help -- and I think the ACC knows this -- is a scenario where they become full members and win the conference at 11-1 or 12-0. As far as scheduling, I don't think you'll see many more Oklahomas or Texases on ND's schedule. It'll be MSU, Navy, ACC teams, USC, Stanford, and a couple "beatable" one-offs like (cough cough) USF or Tulsa. It came down to USC or Michigan and they decided to keep Michigan on. Having them both on the schedule along with at least a Clemson or FSU or Va Tech was going to be just too much to handle. "Is there enough magic out there in the moonlight to make this dream come true?" This is one of the few instances where the CIC would actually be a negative for a prospective new member. Also, the administrations of Big Ten members take a much more involved role in the conference than most other conferences. I can see ND not wanting to deal with Michigan and Wisconsin (who hold a lot of power). Even if you don't accept reasons like these though, there are still a bunch of other (football related) reasons that have nothing to do with protecting brand/ avoiding being tied to the midwest. NBC TV deal, scheduling flexibility, special BCS consideration, that kind of stuff. I know things change, and I know there is a strong desire from a lot of people in power at ND to keep it an undergraduate focused school, but the faculty senate there voted 25-4 in favor of going after CIC membership in 1999. If I had to guess, they're basically just concerned with your second paragraph. They want their tradition and their one/oneths of a vote in their independent world basically no matter the consequences, unless that precludes the possibility of a national championship in football. Thank you for linking the article, I was unaware of this information. It does seem like your interpretation is correct and that I'm wrong, though this quote from the president is interesting: "Notre Dame always will be Catholic and always will be private," Rev. Edward A. Malloy, the university's president, read from a statement. "Even in terms of size, we will not become appreciably larger. Given these realities, we have had to ask ourselves the fundamental question: Does this core identity of Notre Dame as Catholic, private and independent seem a match for an association of universities--even a splendid association of great universities--that are uniformly secular, predominantly state institutions and with a long heritage of conference affiliation? saying the senate wasn't representative enough, which were both interesting. I'd just add that their association with the ACC in every other sport kind of makes me doubt that sentiment. I know they don't see basketball or softball or swimming the way they do football, but the ACC has one Catholic member, and is only 1/3 private. I'm sure in several years when the NCAA gets a finalized playoff system in place and ND doesn't have to worry about finishing 11-1 or 12-0 to make the limited playoff format they will want to resume the series. My guess is by 2020 we will have another decade agreement playing a home and away series with ND in the non conference schedule. 
I don't mind playing them every year but I think I'm ready for a break! I remember when the series resumed after a long hiatus in '78. Everyone was so excited about reviving the ND series and I'm sure the same will occur again in the future. The computer scientist in me gives you +1 for using two "=" signs to denote equality. "Good evening, and welcome to Michigan Stadium for this the one-hundred thirty-second season of Michigan football, and the thirty-ninth meeting between Michigan and Notre Dame." -Carl Grapentine, September 10, 2011 as john kryk talks in his book about the mich-nd rivalry, notre dame learned to play football from michigan, was a tiny regional school that begged to play michigan in the early days to build prestige, and then as they got better at football, translated that to national visibility. now, in today's world, they probably want to continue to play on the coasts for continued exposure both for athletic and academic recruiting purposes. i completely agree that playing michigan state is their chance to keep a regional rivalry going that they dominate rather than playing one that has traditionally spoiled their shot at an undefeated season 50% of the time in september ND couldn't join the Big Ten because the league would never buy the half-assed "partial membership" the pathetic Big East and the almost as pathetic ACC went for. ND wants to hold onto its NBC deal and the money it gets and doesn't share. And ND wants a softer schedule that makes it more likely it can run the table now and then and be in the mix for the BCS bowls. Spurning the Big Ten and dropping Michigan from the schedule (while keeping Purdue and MSU) is wholly consistent with ND's financial self-interest. It's their right to do that. Just don't insult our intelligence by pretending this is something other than a financially-driven decision. Regarding the money that ND gets from NBC, I was of the belief that the B1G schools, with the Big Ten Network and the B1G's revenue sharing arrangement, are way ahead financially than is ND with their NBC deal. It seems a bit complicated. Supposedly ND gets $15M from NBC right now, and is in line for a new contract conservatively estimated at $20M in 2015. Supposedly, the BTN paid each school $7M this year, and each got another $10M from ESPN/ABC/CBS. Not sure when those contracts expire. The Big Ten distributed a total of $24.6 in shared revenue to the 11 old members and a lesser amount to Nebraska. Not sure where the additional money came from--bowl games? Basketball tournament? Still strikes me as ND gambling that it can do better on its own. Add them to the Big Ten, and you slice the pie in one more piece. Plus, they might object to being phased in like Nebraska. Would they bring more value and a higher overall return? (Sorry, tried to embed the links an failed.) Edit: Also, what is their deal with the ACC on bowl revenue? Do they keep it all themselves? didn't know about this beforehand. More importantly, having ND on our schedule was kind of a pain. They were crappy before, so it was fine, but going forward, they will be pretty good to very good. That limits us. We have 4 non-conference games per season (maybe 3 if the B1G goes to 9 games). Ideally, I would like 2 cupcakes, 1 top 5-15 team and 1 team ranked 25-35. That plus OSU, MSU, Wisconsin/Iowa is a pretty solid schedule. Having ND (5-15 team) plus Alabama (1-5 team) in a year where MSU and OSU are better is just too much. 
Add in an annoying surprise good team (maybe Nebraska, Iowa, Wisconsin) and our schedule becomes the toughest in the nation. It's obvious that the Irish can't play five ACC teams per year, maintain all of their current rivalries, and still have seven home games a year (the standard for most FBS teams). There was no way Notre Dame would drop Purdue. It's an in-state rivalry, and it's the Big Ten team they've played the most often. And more than any other school on their schedule, Purdue really needs the game. Purdue would be really screwed if the Irish dropped them. Of course, you're right that it's close to an automatic win on the Irish schedule, so Notre Dame doesn't mind playing it, just as much as Purdue (economically) doesn't mind that they almost always lose. Michigan State is already off the Irish schedule in 2014-2015, so dropping them wouldn't have solved their problem, insofar as clearing away the space to play five ACC teams per year. The Michigan State deal is also more flexible, because going forward it's structured as 4-on, 2-off, as opposed to the Michigan deal, which is every year aside from a 2018-19 hiatus. As you've noted, the pesky Spartans have given the Irish fits. In the last 15 years, the Spartans have actually beaten Notre Dame more often than Michigan has. So it's kind of silly to suggest that Notre Dame is scared of Michigan. Several people have noted that the Michigan-Notre Dame game was the highest-rated game of the weekend. Notre Dame needs games like that to make their NBC TV contract more valuable. For that reason (among others), I suspect that Notre Dame will be back on the Michigan schedule sooner than most people think. Brandon clearly likes the game, and Swarbrick's letter sounded like he is very open to rescheduling it. At best, given its five game per year agreement with the ACC, Notre Dame can best be considered a semi-independent in football. The ACC also tells ND which five teams it will play each season, so ND has to work with the conference to get the type of schedule it wants in the long term. 2. Notre Dame has a four-game agreement to play Texas starting in 2015 and 2016. If they had kept the agreement with UT and UM, ND would have started those seasons with back-to-back games with the Longhorns and the Wolverines. Strategically speaking, that's not a smart way to start any season, especially one with a four-game playoff at the tail end. In essence, ND has replaced Michigan with Texas for at least the 2015/6 seasons. 3. The ACC is a mixed bag of programs right now. Florida State looks like they've gotten back to their old swagger and Clemson plus Virginia Tech are looking good as well. Miami-FL (which plays ND later this year) is due for some major sanctions, so they may go back into a tail spin here shortly. Here are the other 10 teams in the ACC from north to south: Boston College, Syracuse, Pittsburgh, Maryland, Virginia, North Carolina, N. Carolina State, Duke, Wake Forest, Georgia Tech. To be frank, none of them are screaming out "football power" at this time. ND will likely be playing each of these teams at least twice over the next four years. 4. Is NBC happy about this? While the UM-ND game has had big ratings, the other thing that has helped is that the game is played early in the season when the hype and expectations are still in place. What we've seen in the past is that ND's television ratings drop off during the season once they start playing the season and when they play the types of teams listed in the second part of (3) above. 
If ND puts together a consistent double digit winning program, then NBC will probably be okay with this move. If not, then the ratings for some future ND-Duke or ND-Wake Forest game played in South Bend isn't going to look so good. 5. If Notre Dame is going to continue playing seven home games (or six home games and one neutral site game that would count as a home game), it doesn't have much scheduling flexibility. ND will have to have two home-and-home contests with the four games it can schedule (since 5 ACC games plus Navy, USC and Stanford cover the other eight slots) in order to get those seven home games. Right now, those home-and-home slots will be going to Purdue and Michigan State (who has a four-year on, two year off agreement with MSU) plus another team TBD for a two-year home and home series when ND isn't playing MSU. ND will then have two buy-in games each year. So this is how its sets up: Five ACC Games (alternating year of two or three at home) USC, Stanford, Navy Purdue, Michigan State and another home-and-home series when not playing MSU (say a team like Brigham Young or maybe a team from the Big XII or SEC). Two Buy in Games from the Moutain West, etc. What this means for ND is that they'll get perhaps three marquee opponents per year--USC, one of the major ACC teams (ex. FSU) and one other major opponent when they're not playing MSU. They'll also have a number of good teams on the schedule as well in Stanford, MSU and some of the mid-level ACC teams, but the schedule isn't going to be a killer. That said, if a Notre Dame team runs the gamut and goes 12-0 (or even 11-1), they'll certainly be considered for the four-team playoff. Lacking a conference championship game essentially puts them in the same boat as the Big XII Conferece, and no one thinks that if a team from that conference goes 12-0 or 11-1 that it will be exempted from the playoff. They've ceded 5 of 12 games to the ACC commissioner's office; but that's still far fewer than any other team in any other league. What's most important (to them) is that they keep their NBC deal, can continue to play a national schedule, and can make the playoff or a top-tier bowl without having to play a conference championship game. Those are pretty important differences. Many of the ACC teams are regulars on the Irish schedule anyway (BC, Pitt, Miami), or have played them periodically in the past (Syracuse, Georgia Tech, Florida State, Wake Forest, Maryland). It isn't any great leap for them to play five ACC teams a year. Notre Dame also recruits heavily in ACC territory, and they have a lot of fans in the ACC footprint. Also, they typically scheduled 2-3 Big East teams per year, so this isn't such a huge leap from what they did under their old arrangement. #Ifyoucan'tgetintocollegegotostate saying that nd "dropped michigan" is true in the short term, but i doutbt its what nd wants in the long term. given 5 acc games, usc, stanford and navy...that leaves 4 games and its pretty hard to imagine 3 of those being the b1g year in and year out. so, nd most likely wants to cut back to 1-2 b1g games per year. nd has a history with msu, purdue and michigan. the ideal scenario from nd's perspective is to rotate, at least, those three teams (with an osu, psu, northwestern, etc.) mixed in there. but, if nd has a relationship in perpetuity with michigan, that means that they'll have very little room for the rest of the b1g. 
my guess is that nd wants a rotation of 2 b1g teams per year with most of those games going to msu, purdue and michigan and occasionally another b1g thrown in there. nd does not want to play michigan every year, because it limits what they have room to do with the rest of the b1g. nd probably does want to play regularly - maybe 4 or 5 out of every 10 years. michigan is a meaningful rival for nd and an obvious opponent, and i doubt nd wants to write them off forever. this, by the way, seems like a reasonable scenario for um as well — given that the conference schedule has expanded if um plays nd every year (particularly if nd sucks less) — there isn't much room for any meaningful non-conference opponents. including Golic today that Michigan, Purdue, and MSU will all be in the same boat- that ND will play each of them on a rotating basis. I don't know where they get that, but I sure do hear it consistently. (I assume Golic is closer to the AD than the others.) Ours was the first shoe to drop, perhaps because of the contract details and the chance to pull off that extra home game stunt. Full disclosure - I'm an ND alum, and have lived in SE Michigan my entire life. I got a grad degree from MSU and I'm enrolled in another grad program at UM-Dearborn. So, I'm not just some arrogant ND slappy - I know that we've had a not great 15 years out in the wilderness. I'm also not here to flame or troll or engage in internet combat, but I do feel compelled to add my $0.02 from the ND side of things. The OP is pretty accurate in his reasoning. The fact of the matter is that ND needs to maintain its national presence - it's our comparative advantage. There are no other schools that have a national footprint like we do, and we can go play anywhere in the nation and sell out a stadium. Without that footprint, we'd fall to a Northwestern-level football program. This is why joining the BIG was never a legitimate option for us - it would isolate us as a midwestern school. The ACC also makes more sense from a cultural standpoint, as there are numerous other small private schools, and also religiously-affiliated schools. We'd be a fish out of water in the research-institution dominated BIG. We simply don't fit in with the mission or academic strengths of the other BIG schools. So, that's the cliff notes version of why the ACC made more sense for us. The deal to play 5 teams every year isn't so bad or even much of a departure from our past schedules when you consider that we've regularly played 1-2 teams against traditional ACC teams and another 2-3 games against new ACC (and former Big East teams) like Pitt, Syracuse, Boston College, etc. So why are we dropping Michigan, and not (yet) Purdue or Michigan State? Quite frankly, it's rooted in history. Not to be pedantic (and I relize that using the word pedantic automatically makes me pedantic), but our shared history goes back a LONG way - I think I may have heard something about UM teaching ND how to play football, you guys ever hear that story? Anyways, after ND beat UM for the first time Yost dropped us, refused to schedule ND for 30ish years, and blackballed us from the Big 10. After two games in the 40s there was another 35 year break. So even though our history goes back 125 years, we didn't play for 70+ of those. Further, Yost's blackballing was the thing that led to ND having to barnstorm across the nation. 
On the other hand, we've played Purdue consistently for decades, and MSU and ND also have a more institutionally-chummy shared history (not to be confused with our relationship with MSU fans these days). So when it came time to decide which BIG team to drop, I suspect that our closer ties to Purdue and MSU outweighed carried the day against our contentious history with UM. There are undeniable benefits to the ND-UM matchup, and I'll be very sad to see the game go as a regular event, but I think the historical aspect skewed the final calculus. What I DO suspect is that we'll move to more of a rotating list of BIG teams to fill 1 or 2 scheudle slots. We have a 4-on 2-off rotation with MSU starting soon, which probably helped their case for staying on the schedule, but also would certainly allow for future games against UM as part of a BIG rotation. I hope that is the case. At any rate, having 1-2 BIG games a year is definitely to our benefit and, emotional reactions aside, also to the benefit of the BIG teams we play. As a final thought - I don't think this is about watering down our schedule. We have early-season games scheduled against Texas in 2015, 2016, 2019, and 2020 which effectively replace the UM game those years. We also added Oklahoma to this year and next years' schedule as a possible replacement for UM when there was uncertainty about the future of the UM-ND series a few years ago. I'm sure we'll end up with replacements that are lower quality than UM in some seasons, but not as a general rule. This ended up a lot longer than I wanted it to despite the fact that I didn't cover some aspects of things in the interest of "brevity." I'll go put on my asbestos suit now, so flame away if you'd like. Due to the long, checkered history between Notre Dame and Michigan, when the ACC agreement was announced I figured it was essentially the end of the Notre Dame-Michigan series. People on this blog are quick to say the hell with Notre Dame when they turned down membership to the Big Ten, and now with them dropping Michigan off the schedule. But Michigan continually prevented Notre Dame's addition when they wanted to join the Big Ten during Yost's years as AD. Since the series resumed in 1978, it has been contentious as each school has accused the other in trying to get an unfair advantage. Bo was under the impression that it was supposed to be the first game of the season for both teams and was hot when Notre Dame started scheduling a game prior our game, thus giving Notre Dame an advantage. Given all of this, and the longer history ND has with Michigan State and Purdue, it was all but certain the Michigan was going to be the first to drop off of the schedule. 1: ND isn't super duper special. Yes, ND will sell out every away game of theirs....but so would Michigan, Texas, USC, Alabama, Ohio State....etc etc. You're not the only school to have a large "National Footprint" 2: I'm sure both Dave Brandon and Jack Swarbrick can have their petty moments, but Brandon was born in 1952, and Swarbrick in 1954. I seriously doubt either one of them really cares what Yost's feelings toward ND were 40 years before they were born. What it came down to was that MSU has a more managable contract than UM, while ours was an easy out too. ND lost to two MIchigan teams full of underclassmen during the RR era, and lost to a first-year Michigan coach last year. They needed six turnovers to beat Michigan once in four years and only beat them by a touchdown, despite all of the turnovers. 
ND knows, deep in their hearts, that the series is bad for them. They want to schedule easier teams. In other words, they are afraid of Michigan. The series is 14-14-1 since it resumed in 1978. The 3 years prior were bad (and weird) for ND, but this year karma came back around a little bit. You can easily say that we dominated the game last year, gaining more yards than UM, but lost because of bad turnovers. But the fact of the matter is that points are all that matters, and the other stuff evens out over time. And the series isn't bad for us, any more than it's bad for UM. Nor is it fear - we have Texas lined up in 4 of the 8 coming seasons and I guarantee that we'll try to add other top tier opponents. Seriously? The 20th century includes all dates with a 19xx. That includes a whole lot of national championships. Secondly, in the current relevance trend, who has been to 5 BCS bowl games? Surely not those Spartans. Win the conference before you run your mouth. Outright non-shared B10 titles since '88? 6-0 Michigan.
Low
[ 0.5164835164835161, 29.375, 27.5 ]
LMS Sentinel 7192 The London, Midland and Scottish Railway (LMS) Sentinel No. 7192 was a geared steam locomotive. It was built in 1934 by the Sentinel Waggon Works of Shrewsbury, maker's number 8805, on LMS Lot 111. It had an Abner Doble boiler combined with a 4-cylinder compound arrangement, but an order for an additional locomotive and three railcars to a similar design was later cancelled. It was withdrawn in 1943 and scrapped. References 0F Category:0-4-0T locomotives Category:Sentinel locomotives Category:Compound locomotives Category:Railway locomotives introduced in 1934 Category:Standard gauge steam locomotives of Great Britain Category:Scrapped locomotives
Mid
[ 0.633802816901408, 28.125, 16.25 ]
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;

namespace GridLengthConverter_grid
{
    /// <summary>
    /// Interaction logic for Window1.xaml
    /// </summary>
    public partial class Window1 : Window
    {
        // Report which grid row the slider currently points at.
        private void changeRowVal(object sender, RoutedEventArgs e)
        {
            txt2.Text = "Current Grid Row is " + hs2.Value.ToString();
        }

        // <Snippet1>
        // Report which grid column the slider currently points at.
        private void changeColVal(object sender, RoutedEventArgs e)
        {
            txt1.Text = "Current Grid Column is " + hs1.Value.ToString();
        }

        // Convert the string selected in the ListBox (for example "Auto", "*",
        // "2*", or a pixel value) into a GridLength and assign it to the
        // column chosen with the slider.
        private void changeCol(object sender, SelectionChangedEventArgs args)
        {
            ListBoxItem li = ((sender as ListBox).SelectedItem as ListBoxItem);
            GridLengthConverter myGridLengthConverter = new GridLengthConverter();
            if (hs1.Value == 0)
            {
                GridLength gl1 = (GridLength)myGridLengthConverter.ConvertFromString(li.Content.ToString());
                col1.Width = gl1;
            }
            else if (hs1.Value == 1)
            {
                GridLength gl2 = (GridLength)myGridLengthConverter.ConvertFromString(li.Content.ToString());
                col2.Width = gl2;
            }
            else if (hs1.Value == 2)
            {
                GridLength gl3 = (GridLength)myGridLengthConverter.ConvertFromString(li.Content.ToString());
                col3.Width = gl3;
            }
        }
        //</Snippet1>

        // Same as changeCol, but applies the converted GridLength to a row height.
        private void changeRow(object sender, SelectionChangedEventArgs args)
        {
            ListBoxItem li2 = ((sender as ListBox).SelectedItem as ListBoxItem);
            GridLengthConverter myGridLengthConverter2 = new GridLengthConverter();
            if (hs2.Value == 0)
            {
                GridLength gl4 = (GridLength)myGridLengthConverter2.ConvertFromString(li2.Content.ToString());
                row1.Height = gl4;
            }
            else if (hs2.Value == 1)
            {
                GridLength gl5 = (GridLength)myGridLengthConverter2.ConvertFromString(li2.Content.ToString());
                row2.Height = gl5;
            }
            else if (hs2.Value == 2)
            {
                GridLength gl6 = (GridLength)myGridLengthConverter2.ConvertFromString(li2.Content.ToString());
                row3.Height = gl6;
            }
        }

        // Clamp the column widths from below.
        private void setMinWidth(object sender, RoutedEventArgs e)
        {
            col1.MinWidth = 100;
            col2.MinWidth = 100;
            col3.MinWidth = 100;
        }

        // Clamp the column widths from above.
        private void setMaxWidth(object sender, RoutedEventArgs e)
        {
            col1.MaxWidth = 125;
            col2.MaxWidth = 125;
            col3.MaxWidth = 125;
        }

        // Clamp the row heights from below.
        private void setMinHeight(object sender, RoutedEventArgs e)
        {
            row1.MinHeight = 50;
            row2.MinHeight = 50;
            row3.MinHeight = 50;
        }

        // Clamp the row heights from above.
        private void setMaxHeight(object sender, RoutedEventArgs e)
        {
            row1.MaxHeight = 75;
            row2.MaxHeight = 75;
            row3.MaxHeight = 75;
        }
    }
}
Low
[ 0.503703703703703, 34, 33.5 ]
NSSA eyes CSC equity Published: 19 October 2017 THE National Social Security Authority (NSSA) says its $18 million investment deal to revive Cold Storage Company (CSC) was on course, with positive engagements underway that will culminate in the eventual issue of equity in its favour. The State-run pension fund intends to pour in $18m to recapitalise CSC in an equity investment deal. NSSA acting chief executive officer, Emerson Mungwariri, told NewsDay in emailed responses that the CSC transaction was underway. "The Cold Storage Company transaction is underway, with NSSA confirming its interest in participating in the resuscitation of the former beef producer in Zimbabwe," Mungwariri said. "Positive engagements that will culminate in the eventual issue of equity in favour of NSSA are underway between NSSA, government and CSC. We believe that this will provide a good start to the revival journey of CSC," he said. CSC was one of Zimbabwe's most strategic assets, earning the country at least $45m annually before its collapse. It is currently operating at under 10% of its capacity and is reported to be making annual losses in the region of $6m. Currently, it has debts amounting to $25 million, mainly from fixed costs such as wages, rates and taxes on land. Meanwhile, Mungwariri said NSSA was contemplating a number of possible alternative uses for its $30m Beitbridge hotel. "We remain vigilant to the need to achieve a trade-off between occupancy and repair costs, versus possible income that could be generated," he said. NSSA has been leasing the hotel to RTG since its opening in January 2014, but the hotelier pulled out after it accumulated losses of over $2m in the 29 months it operated. It was the second major hotel in the border town to shut down last year, after African Sun also closed its Beitbridge Express Hotel in January of the same year, citing prolonged losses at the 140-roomed hotel. - newsday
Mid
[ 0.595289079229122, 34.75, 23.625 ]
Question No: 1001 – (Topic 5)
When Ann, an employee, returns to work and logs into her workstation, she notices that several desktop configuration settings have changed. Upon a review of the CCTV logs, it is determined that someone logged into Ann's workstation. Which of the following could have prevented this from happening?
A. Password complexity policy
B. User access reviews
C. Shared account prohibition policy
D. User assigned permissions policy
Answer: A
Explanation: The most important countermeasure against password crackers is to use long, complex passwords, which are changed regularly. The fact that changes were made to Ann's desktop configuration settings while she was not at work means that her password was compromised.

Topic 6, Cryptography

Question No: 1002 – (Topic 6)
Symmetric encryption utilizes _____, while asymmetric encryption utilizes _____.
A. Public keys, one time
B. Shared keys, private keys
C. Private keys, session keys
D. Private keys, public keys
Answer: D
Explanation: Symmetrical systems require the key to be private between the two parties. With asymmetric systems, each circuit has one key. In more detail: symmetric algorithms require both ends of an encrypted message to have the same key and processing algorithms. Symmetric algorithms generate a secret key that must be protected. A symmetric key, sometimes referred to as a secret key or private key, is a key that isn't disclosed to people who aren't authorized to use the encryption system. Asymmetric algorithms use two keys to encrypt and decrypt data. These asymmetric keys are referred to as the public key and the private key. The sender uses the public key to encrypt a message, and the receiver uses the private key to decrypt the message; what one key does, the other one undoes.

Question No: 1003 – (Topic 6)
The concept of rendering data passing between two points over an IP-based network impervious to all but the most sophisticated advanced persistent threats is BEST categorized as which of the following?
A. Stream ciphers
B. Transport encryption
C. Key escrow
D. Block ciphers
Answer: B
Explanation: Transport encryption is the process of encrypting data ready to be transmitted over an insecure network. A common example of this would be online banking or online purchases where sensitive information such as account numbers or credit card numbers is transmitted. Transport Layer Security (TLS) is a protocol that ensures privacy between communicating applications and their users on the Internet. When a server and client communicate, TLS ensures that no third party may eavesdrop or tamper with any message. TLS is the successor to the Secure Sockets Layer (SSL).

Question No: 1004 – (Topic 6)
After encrypting all laptop hard drives, an executive officer's laptop has trouble booting to the operating system. Now that it is successfully encrypted, the helpdesk cannot retrieve the data. Which of the following can be used to decrypt the information for retrieval?
A. Recovery agent
B. Private key
C. Trust models
D. Public key
Answer: A
Explanation: To access the data, the hard drive needs to be decrypted. To decrypt the hard drive you would need the proper private key. The key recovery agent can retrieve the required key. A key recovery agent is an entity that has the ability to recover a key, key components, or plaintext messages as needed.

Question No: 1005 – (Topic 6)
Which of the following is used to certify intermediate authorities in a large PKI deployment?
A. Root CA
B. Recovery agent
C. Root user
D. Key escrow
Answer: A
Explanation: The root CA certifies other certification authorities to publish and manage certificates within the organization. In a hierarchical trust model, also known as a tree, a root CA at the top provides all of the information. The intermediate CAs are next in the hierarchy, and they trust only information provided by the root CA. The root CA also trusts intermediate CAs that are in their level in the hierarchy and none that aren't. This arrangement allows a high level of control at all levels of the hierarchical tree.

Question No: 1006 – (Topic 6)
Which of the following is a requirement when implementing PKI if data loss is unacceptable?
A. Web of trust
B. Non-repudiation
C. Key escrow
D. Certificate revocation list
Answer: C
Explanation: Key escrow is a database of stored keys that can later be retrieved. Key escrow addresses the possibility that a third party may need to access keys. Under the conditions of key escrow, the keys needed to encrypt/decrypt data are held in an escrow account (think of the term as it relates to home mortgages) and made available if that third party requests them. The third party in question is generally the government, but it could also be an employer if an employee's private messages have been called into question.

Question No: 1007 – (Topic 6)
The security administrator installed a newly generated SSL certificate onto the company web server. Due to a misconfiguration of the website, a downloadable file containing one of the pieces of the key was available to the public. It was verified that the disclosure did not require a reissue of the certificate. Which of the following was MOST likely compromised?
A. The file containing the recovery agent's keys.
B. The file containing the public key.
C. The file containing the private key.
D. The file containing the server's encrypted passwords.
Answer: B
Explanation: The public key can be made available to everyone. There is no need to reissue the certificate.

Question No: 1008 – (Topic 6)
A bank has a fleet of aging payment terminals used by merchants for transactional processing. The terminals currently support single DES but require an upgrade in order to be compliant with security standards. Which of the following is likely to be the simplest upgrade to the aging terminals which will improve in-transit protection of transactional data?
A. AES
B. 3DES
C. RC4
D. WPA2
Answer: B
Explanation: 3DES (Triple DES) is based on DES. In cryptography, Triple DES (3DES) is the common name for the Triple Data Encryption Algorithm symmetric-key block cipher, which applies the Data Encryption Standard (DES) cipher algorithm three times to each data block. The electronic payment industry uses Triple DES and continues to develop and promulgate standards based upon it (e.g. EMV). Microsoft OneNote, Microsoft Outlook 2007, and Microsoft System Center Configuration Manager 2012 use Triple DES to password-protect user content and system data.

Question No: 1009 – (Topic 6)
Pete, an employee, is terminated from the company and the legal department needs documents from his encrypted hard drive. Which of the following should be used to accomplish this task? (Select TWO).
A. Private hash
B. Recovery agent
C. Public key
D. Key escrow
E. CRL
Answer: B,D
Explanation:
B: If an employee leaves and we need access to data he has encrypted, we can use the key recovery agent to retrieve his decryption key. We can use this recovered key to access the data. A key recovery agent is an entity that has the ability to recover a key, key components, or plaintext messages as needed. As opposed to escrow, recovery agents are typically used to access information that is encrypted with older keys.
D: If a key needs to be recovered for legal purposes, the key escrow can be used. Key escrow addresses the possibility that a third party may need to access keys. Under the conditions of key escrow, the keys needed to encrypt/decrypt data are held in an escrow account (think of the term as it relates to home mortgages) and made available if that third party requests them. The third party in question is generally the government, but it could also be an employer if an employee's private messages have been called into question.

Question No: 1010 – (Topic 6)
Ann wants to send a file to Joe using PKI. Which of the following should Ann use in order to sign the file?
A. Joe's public key
B. Joe's private key
C. Ann's public key
D. Ann's private key
Answer: D
Explanation: The sender uses his private key, in this case Ann's private key, to create a digital signature. The message is, in effect, signed with the private key. The sender then sends the message to the receiver. The receiver uses the public key attached to the message to validate the digital signature. If the values match, the receiver knows the message is authentic. The receiver uses a key provided by the sender (the public key) to decrypt the message. Most digital signature implementations also use a hash to verify that the message has not been altered, intentionally or accidentally, in transit.
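To make the sign-with-the-private-key / verify-with-the-public-key flow of Question 1010 concrete, here is a minimal sketch using the third-party Python cryptography package; the key size, padding scheme, and message below are illustrative choices, not part of the exam material:

# Minimal RSA digital-signature sketch (illustrative parameters).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

message = b"file contents from Ann"

# Ann signs with HER private key...
ann_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = ann_private.sign(message, pss, hashes.SHA256())

# ...and Joe verifies with Ann's PUBLIC key.
# verify() raises InvalidSignature if the message or signature was altered.
ann_public = ann_private.public_key()
ann_public.verify(signature, message, pss, hashes.SHA256())
print("signature verified")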
Mid
[ 0.578406169665809, 28.125, 20.5 ]
STAR GUARDIAN DIGITAL ART CONTEST By UchidaVelCalix 43 Favourites 5 Comments 3K Views (O w O)" it's not very easy when you're using a mouse only in ms paint </3 ( ^ w ^ )" i hope u like my fan art....now time to send this baby to rito-chan IMAGE DETAILS Image size 2612x1624px 431.5 KB Published: Oct 5, 2016
Low
[ 0.503416856492027, 27.625, 27.25 ]
USS Louise No. 2 (SP-1230) USS Louise No. 2 (SP-1230), sometimes written Louise # 2 and also referred to during her naval career as Louise and as Pilot Boat #2, was a United States Navy patrol vessel in commission from 1917 to 1919. Louise No. 2 was built as a civilian schooner-rigged pilot boat of the same name in 1900 by Ambrose Martin at East Boston, Massachusetts. On 10 September 1917 the U.S. Navy acquired her under a free lease from her owner, the Boston Pilots Relief Society, for use as a section patrol boat during World War I. She was enrolled in the Naval Coast Defense Reserve on 15 September 1917, delivered to the Navy on 19 September 1917, and commissioned on 20 September 1917 as USS Louise No. 2 (SP-1230) with Ensign John M. Watson, USNRF, in command. Assigned to the 1st Naval District in northern New England and based at Boston, Massachusetts, Louise No. 2 served for the rest of World War I as a pilot boat in Boston Harbor as she had in civilian use, guiding inbound and outbound ships through the defensive sea area of the Port of Boston. The Navy decommissioned Louise No. 2 on 14 January 1919 and returned her to the Boston Pilots Relief Society the same day. References Department of the Navy Naval History and Heritage Command Online Library of Selected Images: Civilian Ships: Louise # 2 (Pilot Boat Schooner, 1900). Served as USS Louise # 2 (SP-1230) in 1917-1919 NavSource Online: Section Patrol Craft Photo Archive Louise No. 2 (SP 1230) Category:Schooners of the United States Navy Category:Patrol vessels of the United States Navy Category:World War I patrol vessels of the United States Category:Ships built in Boston Category:1900 ships
High
[ 0.6728971962616821, 36, 17.5 ]
Theatre Wizard of Oz (3) Actually, I missed the audition. Didn't think I could even do the show at all. I was taking four grad classes and already up to my neck in homework (figuratively, not literally!). Where would I find time to do my homework? I could only even rehearse two days a week at that for a while. Nope. I figured they wouldn't even consider me with my conflicts. No sir. Never happen. Then Shawnel called and everything changed. Seems they had a good turnout for auditions, but still needed the character parts filled. I was asked to audition for the Scarecrow, probably because I had done the role twice before, so missing rehearsals wouldn't be a huge issue. But my hesitance was that I had really enjoyed the role my previous time and didn't want to repeat it (hate to tread on great memories). So she mentioned auditioning for the Tin Man. Tin Man, you say? Tin Man…huh. Hmm. Well, that might be fun. Hadn't done that role before. Always thought those one-liners were pretty funny in a vaudeville way. Very well. Let's do it! I attended the callbacks and things went rather well. I also read for the Guard. Wanting to add something new into the mix, I read him in a few different ways, including an Austrian Arnold Terminator voice. That was fun. It's nice not to care too much. Casting was done the next day, and I found myself headed back to Oz. It would be my 18th show with Pleasanton Playhouse, but in a brand new theatre back in my old home town.

Rehearsing Luckily, I was able to attend the first rehearsal. It was the read-through or maybe just a sing-through. Several characters were missing. This would become the show of conflicts for a long while. Seems one of us was missing almost every night. During this rehearsal, we got to see just how many people could pack into the rehearsal warehouse. I'm amazed. Not one munchkin got trampled and no one suffocated. Subsequent rehearsals went well and as we got closer, more people were showing up. After classes ended, I was able to finally relax and enjoy myself. Then we parted for the holidays and arrived back in January to really get down to business. At that point, I really needed to start saying my own lines instead of reciting the Scarecrow's.

The Suit of Armor Several weeks before opening, the metal suit finally came. I had been expecting something fancy, but nothing like we got. This thing was real tin. Probably weighed about 25 pounds. Ironically, the axe was the lightest thing about it. After one rehearsal, I stripped down to my t-shirt and tried it on. Ouch. OUCH! OUCH! OW! OW! OWWW!!! It hurt! The armpits and elbow areas pinched something awful. After removing the suit, I found bad bruises all over my arms. This just wouldn't do. I couldn't dance in extreme pain. People would notice the screams of agony. I was sure of it! Next came the legs. The pinching wasn't as bad; I just could hardly move my legs. No big deal…except I had to dance during my song and do other trivial little things like walk around the stage, maybe run on occasion. The show was written with the Tin Man being mobile. Hence, I was dead set against the suit of agony. It just wouldn't work. Other Tin Men got painted leather; one had foam padding. How did I wind up with a medieval instrument of torture? In time, I figured out that they were sticking with this tin suit; thus, I would have to adapt. And adapt I did. The biggest help came when Shawnel's parent friend from the studio made me a padded suit. I had been shoving in shoulder pads to keep the weight off my shoulders. 
Problem was they'd eventually fall out and suddenly I'd find the full weight of the outfit digging into my shoulders. It wasn't pleasant. It was affecting my enjoyment in Oz. Forget needing a heart; I just wanted morphine. Suddenly, the memories of the carefree happy days of being Scarecrow were haunting me dreadfully. But the padded suit saved all. The top became bearable and wearable. Be that as it may, the legs still were awful. I could hardly move in them. I made suggestions, but nothing was ever really done though. I think they knew I'd overcome the issues. And voila, in time, my body just adapted to them and my movement became freer and freer. It took a while, but by the second weekend of the show, I was moving like I was wearing only a 10-pound suit of armor. By closing my Charleston kicks were head-high.

The Theatre After many, many years at the Amador Theatre, the change was nice. Heck, after 17 shows at the old place, you'd think I would really have missed it. No, not really. Of course I have a ton of very fond memories at Amador. Many new friends and relationships began there, but I also enjoyed discovering a brand new venue in my old home town. The new theatre is built very nicely, though there are several pros and cons. The cons involve the very limited stage left space. I tripped over a sound cable one time since the rail area is mixed into the offstage wing space. It's just too tight. Also, the stage left downstage hallway mixes with the audience hallway, so that became off limits, resulting in an unnecessary walk to the dressing rooms. The green room is functional but nothing too special. I think that's all for the cons. Not much, really. The pros are that it's next to a nice and free parking garage (kudos, Livermore!). The dressing rooms have bathrooms and showers (with soap provided!). I made use of the shower several times. The seats seem to all have great views of the stage. The pit is deep enough to be out of the way. The stage access points are nice. The upstairs dressing rooms are huge and I almost wish I was stationed up there, given all the fun they seemed to be having. But becoming the Tin Man took every second. It's just part of the role. Our dressing room was somewhat crowded, especially with the costume pieces, yet we did have fun in there. An odd thing was the amount of obscenities, which was somewhat shocking considering the amount of children running about. Different people have different viewpoints on the matter, but personally, I support the idea of giving kids as clean an environment as possible when they're especially young. They'll get enough of the filth and grime once they hit high school or turn on late night TV. It's just part of the responsibility of being an adult. That aside, we did have some great laughs and camaraderie. Mostly we ran the same routine of commenting on the absurdities of the Munchkins singing happy songs about a person getting killed. It's macabre, I know. The next night, we'd comment on commenting about the songs. This was repeated throughout the run. Each night we could also count on the Lion asking for answers to crossword puzzles. That kept things interesting. Only playing Jeopardy would have topped that. I've discovered that while I don't like doing crossword puzzles, I do enjoy assisting other people in doing crossword puzzles. Plus, I don't have to fill in the squares that way. Yeah, it was a lively dressing room. I'll miss those crazy antics. 
The Makeup I have no known allergies (NKA as we used to say in the Marines) so I wasn't worried about the tin makeup. Still, it probably would have been wise to test it out early, but time passed fast and soon it was tech Monday. I applied the silver and it worked flawlessly. I didn't go into a coma. My skin didn't break out in boils. From then on, I had a silver face for two hours a night. It worked well, but I wasn't fond of it; removing it at the end just took so much natural oil off the skin. It did not seem too healthy. With the physical demands of the suit and the application of the silver paint, I don't think I'd ever do a role like the Tin Man long term. I don't mind several weekends but several months might start to do some irreparable damage to the body and I sort of need mine intact for quite some time. I hear even the Lion King actors don't last too many months due to the strong demands of the costumes (some being much heavier than my Tin Suit).

The Sets Hmmm….how to put this delicately….hmmm…well, they weren't…well, they were…functional. Oz is simply not an easy show to build a set for and these were rented from San Jose Children's Theatre. Now, the paintings were well-done, but so many things are required for a spectacular production (set-wise) of Oz. The flying wasn't possible due to high costs and it's a tough call. I mean it's only needed in a few spots, but those spots really need it (e.g., Glinda descending and the Wizard's balloon ascending), yet the show isn't Peter Pan–it's simply a lot of money to spend just for those few moments. The theatre disallowed flash paper too. Ouch. That really hurt a few moments, but Tom did a really good job of providing Plan B. The spool of orange LEDs was very creative. I suppose not having a good trap-door affected the witch melting, but enough fog also covered that. In the end, some were okay with the sets, others were not. They were what they were, and it is hard to match the quality people see in the movie version (although the DLOC sets were remarkably close, I must say).

Costumes Well, I've said enough about mine, but yeah, I think Vicky did an amazing job. What the sets lacked, the costumes seemed to have made up for. A lot of it (I felt) was Broadway level. Sure, I would have loved thinner tin, but with the Lion punching it and me rolling around on it, that might not have been a good solution. Wouldn't bode well for the theatre to return a crumpled ball of metal to the rental place. (Though it would be rather funny.)

Performances Wow! We had some great crowds. The show was done with virtually no audience members until opening night. It was hard to gauge applause timing and fine-tune jokes, but once the crowds came, it was very rewarding. Most of the shows after opening night sold out and that was great to experience. The theatre just seems off to a really good start. Congrats, Livermore. Each run went well with few errors or glitches. Our final show had to be restarted after a lighting board malfunction. That was interesting. It was even on a night when we weren't joking around about superstitious words. People often asked if it was tough watching someone else play the role I had played twice. No, not really. Certainly, there were occasions where I wanted to shout, "Wait! Stop! Try this; it works really well!" But that wasn't my job. My job was being the Tin Man. Everyone has his or her own interpretations of characters. And if we impede someone else's process of discovery, we've done that person a great disservice. 
Each of us needs to explore and walk our own Hero's Journey. All said, done and performed: I had an excellent time. And yeah, I'd even do it again.

Trivia We joked around a lot before Act II one night. The Lion was pestering the Tin Man, so the Tin Man pushed the Lion, not realizing that Dorothy was right behind the Lion. Well, he bashed right into her. She was fine, but we started joking about how crazy it would have been if Dorothy had fallen back, cracked her head open and bled profusely as the curtain opened. The joke ended up with us saying that when the Guard asked our purpose, we would shout, "We want to see a doctor!" I was a bit surprised to hear one character (won't name names) actually say it when the time came. (Luckily, it never reached the point of someone shouting, "We want to see Macbeth!!!")

For those who looked especially closely, you could actually see a moment of relief when the Witch chose Scarecrow first. Certainly, the Tin Man and Lion hated to see the demise of their friend, but hey, it's good to be alive.

There were many tiny changes here and there, and perhaps maybe a few larger ones as well (e.g., "You want a piece of the Tin Man?!?" and "I've fallen and I can't get up"). I believe the Lion was even chanting Muhammad Ali's boxing mantras. However, the one thing I get asked about by the Producer happened completely by accident. After my song, Dorothy let me know that I sang, "Oh the Lord gave me tin ribs…" instead of "Oh the Smith gave me tin ribs…" I had no idea I had said that, but I think it only happened once. At any rate, I was asked about that after the show. All the other crazy stuff that had happened and I get interrogated about that? That's ironic!

It took heightened concentration NOT to accidentally say the Scarecrow's lines. Almost happened in a song once, but I paused just in time. Opened my mouth to sing a line and quickly realized it wasn't mine. Close call. Too close for comfort.

My Tin Man dance was different every night from tech week through opening weekend. Having metal legs does something to one's concentration. The rest of the run was consistent.

Falling on my back may have been comical, but it came at a price. With the belt and tin snaps on the torso, there was a large lump in the lower back area that drove into my back. Think of it as lying on a rock and then rolling from side to side. Repeat twice per show. Three sets per weekend. Mondays were days of healing, Thursdays of dreadful anticipation.

The axe was two-sided. This was odd since most axes are one-sided, including the movie version. The joke was that the Tin Man also had side gigs in a few Lord of the Rings battles.

Many people helped me get the makeup off and the suit off each night. One held a flashlight. Others helped rub baby wipes on the face. Another helped with the outfit. Another had my boots ready. I am deeply grateful to them. The change could NOT have happened without them.

For the last show, my secret pal got me a piñata filled with candy. We took it upstairs. One held it up and I swung my axe. Voila! Candy everywhere. After the show, that candy stash was about at 25%. I didn't mind–I would have OD'ed otherwise.

Special notice should be given to Shawnel who, among other things, obtained my Tin Man hat, picked up my makeup from Encore, replaced a dancer who wasn't able to dance in the show, took care of getting Toto into place for the last scene, had the Wizard's tokens ready in the last scene, took care of loosening my shoes in Act II, and even assisted in secret pal help. 
Going way beyond the extra mile shouldn't be thankless, so…thanks!!! (Heck, she even convinced me to audition.)

To prepare for the quick change at the end, I had to remember to place my farm clothes offstage right during intermission, and I had to have my shoelaces loosened in Act II. Before the next-to-last scene, I had the tape removed from my costume back. Before the big Oz scene, I had the snaps opened. Thus, I never turned my back in the last scene. All that and remembering songs, lines and dances, too.

On occasion, an apple would fall into the pit during the Tin Man discovery scene. Orchestra members did NOT like that. I'm not to blame as I was frozen during that time.

During the second week, the identity of my secret pal was spoiled by Glinda, who came bounding into our dressing room, exclaiming, "I want some of James' candy that Rach…" Thenceforth, she was affectionately entitled the "Secret Pal Spoiler." I wasn't mad though; it turned out to be rather fun.

"Stand back while I break the door down" was followed by "with my axe." To be honest, I wasn't exactly sure if that is in the script or not, but it soon became part of the show. The extra emphasis at the end was a comment on how odd it was that the Tin Man carries this axe throughout the entire show and doesn't really use it much (until then). Also ironic that he carries an enchanted axe which chopped him to pieces once.

Playing the Tin Man has an extra caveat in that it gets very tiring and one gets very thirsty having to expend all that energy lugging the costume around. Trouble is, one can't consume a lot of liquids since using the restroom is not an option. Thus, one gets incredibly thirsty by the end of the show.

Okay, I confess that yes, it actually was possible to walk semi-quietly when wearing the tin outfit; however, the amount of extra effort it required was simply not worth it. Easier just to be very loud in the hallways and side stage.

Toto (Coco) was an amazingly well-behaved dog. Fact is, the only time she ever howled and barked is when she wasn't around people. The dog simply loves people.

The monkeys became so funny that I went out of my way to start watching their antics. Even their sound effects were humorous.

All things considered, the sweat level in the Tin Man costume was not that bad. Nevertheless, I'm very grateful we did this show in the winter and not during the summer.

Inside the suit, I was completely helpless. I could not get out of it without assistance. Had everyone decided to leave me in it and head home as I joke, I would have had to walk home ensconced in tin.

Wearing the Tin Suit for the first time produced a semi-claustrophobic effect. It was something I did not anticipate. Interestingly, it immediately became a conscious choice to ignore that feeling once inside the outfit. By the end of the first night in it, the feeling was completely gone. The human mind and body is amazingly capable of adapting to its environment when given the chance. I could not even touch my face in that thing. It was spooky at first. Falling was my greatest fear, but even that became fairly easy in time.

During tech week, I realized that I couldn't trust myself to say, "Well, we was just having a little fun, Aunt Em." You see, after so many OKLAHOMA's, once the mid-west voice started, Aunt Em just automatically became Aunt Eller. Even when I concentrated on it, it still happened, so I just dropped her name entirely. I did try it on closing night (and was successful). 
A few people noticed that I had played Scarecrow twice before and now the Tin Man. They would inevitably ask, "So is the Lion next for you?" The answer was and is, no. I don't ever plan on being big enough.
High
[ 0.658354114713216, 33, 17.125 ]
The Yahoo Style Guide - Hagelin http://styleguide.yahoo.com/ ====== dchest Example: Seed copy with keywords for SEO: [http://styleguide.yahoo.com/resources/optimize-search- engine...](http://styleguide.yahoo.com/resources/optimize-search- engines/example-seed-copy-keywords-seo) ------ thechangelog For a company that seems to have so many internal problems, the tools Yahoo gives away are really fantastic. YUI, PageSpeed, Pipes... all great stuff. ~~~ sh1mmer PageSpeed isn't ours but YSlow is ;)
Low
[ 0.504405286343612, 28.625, 28.125 ]
Troy hasn't performed well in SB's... and it wouldn't make sense to tout his play in those SB's if you were comparing him to another safety with 3 rings and 2 SB losses. (if that makes any sense, lol) and I don't think stopplayin believes what he types.. he likes the attention. How did Troy play poorly in SB 40? Do explain. In SB 43 and 45 he was injured and, according to Whisenhunt, made them run the ball more in the 1st half (trying to avoid Troy). Rodgers avoided Troy and targeted Clark. Too bad you all are die-hard fans and don't understand the intricacies of a scheme. YOU DON'T HAVE TO MAKE PLAYS to have an effect on defense. Troy's mere presence makes a difference in where you will attack. Did you see Peyton Manning in week 1? The REST of the league has the same respect for Polamalu - just check every poll. Why doesn't Ben get the same respect from his peers? lol... if someone pointed out the game planning and defensive schemes teams had to run to defend against a second-year QB who is the size of a LB, you wouldn't allow it. We weren't even discussing presence.. we were talking about playing up to one's standards. The fact is that Ben Roethlisberger is the very best QB the Pittsburgh Steelers have EVER had. EVER. Are there things about his game that some might want to change? Sure. That doesn't take away from the fact that for better or worse, he is the best player at his position in the history of this storied franchise. Offensive woes don't come about after ONE game. Offensive woes come about over a period of time. Brady has had Weis, McDaniels and O'Brien as OC and it has never hindered him. Brady is the PRIMARY reason the Pats are elite. The defense is the reason the Steelers are elite. How many team MVPs does Ben have? It seems the Steelers view him as I do. Good but not elite. I didn't rip Ben. You view it as ripping because you HATE what I say about Ben (YOU CAN'T HANDLE THE TRUTH). Brady keeps his team in the game vs the Giants. Ben played those SAME Giants in 08 and had FIVE turnovers and we lost big. Put Ben's stats vs the Giants up and compare them to Brady's. TELL ME WHAT YOU FIND LOL And yet, you answer none of my points... Perhaps you should follow the advice of your own screen name and stop playing... Because in all of the drivel you posted above, you could not deny the hypocrisy you accuse others of having... Really don't know why you guys bother. It should be obvious that a poster like the one above will take whatever facts you give him and ignore them if they don't fit their personal agenda, then try to find another stat to misinterpret. These people are not interested in rational discourse. 
They are interested in their own agenda and ignore all else to that end. For example, even though he himself asked for 4th quarter stats and was given these: Ben has 20 career 4th Q comebacks - tied for 18th all-time. Palmer has 12 and is tied for 73rd. ...now he will find a way to fixate on something else and downplay those statistics. "I was talking about <insert some other stat involving the 4th quarter>". It's called moving the goalposts, and no matter how many times these types of people are proven wrong, they will just ignore the facts and move on to a different imagined stat that is all of a sudden so much more important than the last. This is really just a waste of bandwidth IMO. (And so is my bitching about it, but hey, I feel better)
Low
[ 0.47073170731707303, 24.125, 27.125 ]
--- abstract: 'Numerical simulations offer the unique possibility to forecast the results of surveys and targeted observations that will be performed with next generation instruments like the Square Kilometre Array. In this paper, we investigate for the first time how future radio surveys in polarization will be affected by confusion noise. To do this, we produce 1.4GHz simulated full-Stokes images of the extra-galactic sky by modelling various discrete radio source populations. The results of our modelling are compared to data in the literature to check the reliability of our procedure. We also estimate the number of polarized sources detectable by future surveys. Finally, from the simulated images we evaluate the confusion limits in $I$, $Q$, and $U$ Stokes parameters, giving analytical formulas of their behaviour as a function of the angular resolution.' author: - | F. Loi$^{1,2}$[^1], M. Murgia$^{2}$, F. Govoni$^{2}$, V. Vacca$^{2}$, I. Prandoni$^{3}$, A. Bonafede$^{1,3,4}$, and L. Feretti$^{3}$.\ $^{1}$Dip. di Fisica e Astronomia, Università degli Studi Bologna, Viale Berti Pichat 6/2, I–40127 Bologna, Italy\ $^{2}$INAF - Osservatorio Astronomico di Cagliari, Via della Scienza 5, I-09047 Selargius (CA), Italy\ $^{3}$INAF - Istituto di Radioastronomia, Via Gobetti 101, I–40129 Bologna, Italy\ $^{4}$ Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029, Hamburg, Germany.\ date: 'Accepted XXX. Received YYY; in original form ZZZ' title: Simulations of the polarized radio sky and predictions on the confusion limit in polarization for future radio surveys --- \[firstpage\] polarization $-$ radio continuum: galaxies $-$ methods: numerical Introduction ============ The capabilities of forthcoming radio telescopes, such as the Square Kilometre Array[^2] (SKA) and its precursors, will allow us to study the sky with unprecedented detail and will dramatically improve our knowledge of the radio Universe. One of the main advantages of next generation radio-continuum surveys will be the possibility to study the faint signals coming from the most distant regions of the Universe over large fields of view, both in total intensity and in polarization. This is extremely important for a number of scientific applications, from the study of the physical and evolutionary properties of different classes of radio sources to the investigation of cosmic magnetism.\ Concerning the first topic, important steps forward are expected from the radio continuum surveys that will be carried out with the SKA precursors: the Evolutionary Map of the Universe [EMU, @norris] planned with the Australian Square Kilometre Array Pathfinder (ASKAP), the MeerKAT International GHz Tiered Extragalactic Exploration (MIGHTEE) survey [@jarvis], the Westerbork Synthesis Radio Telescope (WSRT) Apertif [@norris13], and the Very Large Array (VLA) Sky Survey (VLASS) [@lacy]. For a detailed discussion of the scientific expectations of the SKA for continuum science we refer to @prandoni.\ Regarding cosmic magnetism, the origin and the evolution of large scale magnetic fields have not yet been established, despite many observational and numerical simulation-based efforts.
To determine the characteristics of large scale magnetic fields in galaxy clusters, one can analyse the Faraday rotation which affects every linearly polarized signal (the one from a background radio source) passing through a magnetised plasma (the intra-cluster medium) [see the reviews on the determination of cluster magnetic fields of @cartay; @govoni04]. The Faraday rotation of extra-galactic radio sources can also be used to evaluate the Galactic magnetic field. @taylor have used the NRAO VLA Sky Survey [NVSS, @condon98] at 1.4GHz to produce a rotation measure (RM) Grid which has an average of 1 polarized source per square degree. These data have been used by @oppermann to produce a reconstruction of the Galactic foreground Faraday rotation. Since the sensitivity of future radio surveys will significantly improve, it will be possible to realise a denser RM Grid. In this framework an important step forward will be represented by the polarization Sky Survey of the Universe’s Magnetism [POSSUM, @gaensler], that will be carried out with ASKAP. POSSUM will make use of the same full Stokes observations dedicated to EMU, and therefore will share the same observational parameters (rms noise $\sim$10$\muup$Jy beam$^{-1}$, 10$^{\prime\prime}$ of resolution). While EMU will produce total intensity images, POSSUM will use the data to extract polarization and RM information, producing an RM grid of approximately 25 polarized sources per square degree. In its first phase of implementation, the mid frequency element of SKA (SKA1-MID) is expected to reach an average of 230$-$450 RMs per square degree at the sensitivity of 4${\rm \muup Jy\,beam^{-1}}$ with a resolution of 2${\rm^{\prime\prime}}$ [@melanie]. Radio observations performed with next generation radio telescopes would be sensitive enough to be potentially limited by confusion rather than thermal noise. Confusion is an additional noise term due to the presence of background unresolved sources whose signal enters into the synthesised beam of the telescope. It is therefore clear that the larger the beam, the higher the confusion noise term. In total intensity the behaviour of the confusion noise as a function of angular resolution has been extensively studied in the literature [see @condon74; @condon2002; @condon2012]. On the other hand, confusion noise has never been investigated in polarization, as the polarized signal from background radio sources is typically a factor 10-100 lower than the total intensity signal, and it has never been an issue in existing polarization surveys. However, this may not be true for the upcoming generation of extremely deep radio surveys, which may be confusion limited also in polarization. This work aims at estimating the confusion noise in polarization at 1.4GHz. Generally, the existing studies in the literature make use of analytical formulas to estimate confusion at a given angular resolution. Such formulas are based on extrapolations of the observed source counts, assumed to follow a power law with slope and normalisation depending on observing frequency and depth.\ In this work, we use a different approach that relies on end-to-end simulations.
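For reference, one widely quoted example of such an analytical recipe is the approximate scaling given by @condon2012 for the rms confusion of a total intensity image observed with a Gaussian beam of FWHM $\theta$, which we report here purely as an indicative benchmark:

$$\sigma_{\rm c}\simeq 1.2\,{\rm \muup Jy\,beam^{-1}} \left(\frac{\nu}{3.02\,{\rm GHz}}\right)^{-0.7} \left(\frac{\theta}{8^{\prime\prime}}\right)^{10/3}.$$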
We simulate $I$, $Q$, and $U$ Stokes images of a synthetic population of discrete radio sources distributed over cosmological distances and we analyse them to evaluate the confusion limit at different angular resolutions both in total intensity and in polarization.\ The paper is organised as follows: in Section 2, we describe the models and the procedure adopted to produce spectro-polarimetric images of a population of discrete radio sources; in Section 3, we show the comparison with data at 1.4 GHz, giving our expectation on the number of polarized sources that future surveys could detect; in Section 4, we present the confusion limit in $I$, $Q$, and $U$ Stokes parameters and the analytical formulas that describe its behaviour as a function of the angular resolution; in Section 5, we discuss the applicability of the obtained results. Finally, the conclusions are drawn in Section 6. Throughout the paper, we adopt a ${\rm\Lambda}$CDM cosmology with ${\rm H_0=71\,km \, s^{-1} Mpc^{-1}}$, ${\rm\Omega_m=0.27}$, and ${\rm\Omega_{\Lambda}=0.73}$. Modelling the radio sky ======================= For this project, we make use of the FARADAY software package [@murgia04] which has been further developed to reproduce the polarized emission of a population of discrete radio sources.\ As a first step, we produce a simulated catalogue of radio sources, generated by implementing recent determinations of the radio luminosity function (RLF) for the two main classes of objects dominating the faint radio sky: star forming galaxies (SFG) and Active Galactic Nuclei (AGN). The resulting catalogue contains all the discrete radio sources inside the “conical” portion of the Universe whose angular aperture is set by the chosen field-of-view and whose depth extends from redshift $z$=0 up to a given $z$=$z_{\textrm{max}}$.\ It is worth mentioning that simulated radio source catalogues already exist in the literature. An example is the one produced by @wilman08 which, with a semi-empirical approach starting from radio luminosity functions, simulates the radio continuum (total intensity) and HI emission of several radio source populations. Assuming a luminosity dependence for the fractional polarization, @osullivan realised a simulated polarized image based on the radio source catalogue of @wilman08. Very recently a new simulated catalogue was produced [T-RECS; @bonaldi] based on cosmological dark matter simulations to reproduce the clustering of sources; it models the radio sky both in total intensity and polarization with updated information on radio sources. Our simulation, like the above simulations, aims at giving useful information for the advent of the SKA. Similarly, it is based on cosmological radio luminosity functions integrated over cosmological volumes, but the models adopted to reproduce the characteristics of the radio sources and also the procedure are in general different. In addition, differently from the previous works, we use observed high-quality images of extended radio sources to reproduce the morphology and the spectro-polarimetric properties of the simulated radio sources. This is especially important as these simulations will be used to study magnetic fields in galaxy clusters (Loi et al. in prep.).\ For each simulated radio source, our catalogue lists the following parameters: - [*type*]{}, in principle we can classify our sources in several sub-classes, radio-loud or radio-quiet AGN, SFG, quasar etc.
Following @novak and @smolcic we consider two main families depending on the mechanism that triggers the radio emission: SFG and AGN; - *redshift*, z; - *size*, we used the relations adopted by @wilman08 for radio-loud AGN and SFGs. The size models are redshift dependent and, in particular, the SFG size depends also on luminosity; - *luminosity at 1.4GHz*, we extract this information from the RLFs of @novak and @smolcic for the SFGs and AGN respectively, based on the results of the VLA$-$COSMOS 3GHz Large Project [@smolcic], extrapolated to 1.4GHz assuming the spectral index derived in combination with the VLA$-$COSMOS 1.4 GHz Large and Deep Projects [@schi1; @schi2; @schi3]; - *coordinates*, (x,y); - *morphology and spectro-polarimetric properties*, we select a model of radio source from a dictionary depending on its luminosity and type. Each model of this dictionary consists of four 1.4GHz images: - the surface brightness ${ I}_{\nu}$ in total intensity; - the spectral index distribution $\alpha$ determined by assuming that the flux density ${S}_{\nu}$ at a frequency ${\nu}$ is ${S}_{\nu}\propto\nu^{-\alpha}$; - the fractional polarization defined as the ratio between the polarized intensity and the total intensity $ FPOL=P/I$; - the intrinsic polarization angle which is defined with respect to the $Q$ and $U$ Stokes parameters as: $$\Psi_0=0.5 \cdot \arctan{U/Q}. \label{eq:psi}$$ The images of the dictionary are real high-quality high-resolution images performed at high frequency. In particular, we used VLA images at C and X bands at arcsecond resolution so that the polarization properties can be considered very close to the intrinsic values. Some examples of models are shown in Fig. \[fig:model\], where the colour represents the total intensity surface brightness (normalised to one) and the vectors the intrinsic polarization strength and orientation. For the AGN class we consider sources with two different morphologies: Fanaroff-Riley (FR) type I and type II [@fr]. For the SFG class we use images of spiral galaxies. ![Models of radio galaxies where the colour represents the total intensity surface brightness distribution (normalised to one) and the vectors the intrinsic polarization strength and orientation. On the top, from left to right, we can see models of Fanaroff-Riley (FR) type I and type II [@fr] respectively, while in the bottom we show different models of SFGs.[]{data-label="fig:model"}](./dict.pdf){width="47.00000%"} From an operative point of view, the generation of the catalogue is generally based on a Monte Carlo extraction from the corresponding cumulative distribution functions of the models. A flow chart of the adopted procedure is shown in Fig. \[fig:flow\].\ ![A flow chart of the procedure used to obtain simulated radio images.[]{data-label="fig:flow"}](./flow_chart.png){width="49.00000%"} As a first step, we set the maximum redshift up to which we populate the simulated portion of the Universe. We split the slice into sub-volumes of ${\rm\Delta}z$=0.01 in width. We perform the integral of the AGN and SFG RLFs throughout the solid angle of the simulated observation, sub-volume by sub-volume: the result is the total number of “cosmological” AGN and SFGs respectively. As a maximum redshift we set $z_{\rm max}$=6, since the adopted RLFs sample AGN and SFGs up to a redshift of $z$=5.7 and $z$=5.5 respectively.
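To make this first step concrete, the expected number of sources in each redshift shell can be obtained by integrating the RLF over luminosity and over the comoving volume subtended by the simulated solid angle. The following Python sketch illustrates the bookkeeping only: the function `phi` is a toy luminosity function, not the actual RLFs of @novak and @smolcic.

```python
# Sketch: expected source counts per redshift shell from a toy RLF.
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=71, Om0=0.27)             # cosmology adopted in the paper

def phi(logL, z):
    """Toy evolving RLF in Mpc^-3 dex^-1 (placeholder, not Novak/Smolcic)."""
    logL_star = 22.0 + 2.0 * np.log10(1.0 + z)     # illustrative evolution
    x = logL - logL_star
    return 1e-5 * 10.0**(-0.6 * x) * np.exp(-10.0**x)

omega_deg2 = 0.72                                  # simulated field [deg^2]
omega_sr = omega_deg2 * (np.pi / 180.0)**2         # solid angle [sr]
logL_grid = np.linspace(20.0, 28.0, 400)           # log10 L(1.4 GHz) [W/Hz]

z_edges = np.arange(0.0, 6.0 + 1e-9, 0.01)         # shells of dz = 0.01
counts = []
for z1, z2 in zip(z_edges[:-1], z_edges[1:]):
    # comoving volume of the shell subtended by the field of view
    dV = (cosmo.comoving_volume(z2) - cosmo.comoving_volume(z1)).value
    dV *= omega_sr / (4.0 * np.pi)
    n = np.trapz(phi(logL_grid, 0.5 * (z1 + z2)), logL_grid)  # sources per Mpc^3
    counts.append(n * dV)

print("total sources in the cone: %.0f" % np.sum(counts))
```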
The radio source redshift is assigned through a Monte Carlo extraction from the cumulative distribution computed for each specific type from the corresponding RLF evolution. We populate each sub-volume by randomly extracting the coordinates. The luminosity is assigned from the corresponding cumulative distribution function based on the evolved RLF at the redshift of the radio source. We compute the source size taking into account the redshift and also the luminosity in the case of SFGs. A model for the radio source is extracted from the dictionary according to the luminosity and type. The surface brightness distribution at a given frequency is re-scaled such that the luminosity: $$L_{\nu}=\int_{\Sigma} I_{\nu} (x,y)\, dx dy \label{eq:l}$$ matches the one assigned to the source, where the integral is performed over the radio source area ${\rm \Sigma}$. Once we obtain our simulated catalogue of radio sources, we set the frequency bandwidth and channel resolution, and we use FARADAY to generate a spectral-polarimetric cube for each source and for each of the $I$, $Q$, and $U$ Stokes parameters. In this process, the algorithm considers the correct spectral index for each pixel according to the catalogue. Indeed, the observed surface brightness at a given pixel of coordinates ($x,y$) depends on the redshift $z$ and on the spectral index at the corresponding coordinates ${\rm \alpha}$($x,y$): $$I_{\nu} (x,y) =\frac{L_{\nu}}{A} \frac{1}{(1+z)^{3+\alpha(x,y)}}$$ where $A$ is the pixel area.\ By multiplying the surface brightness and the fractional polarization maps, we obtain the intrinsic polarized intensity of the selected radio galaxy. The radio sources which constitute our dictionary are not enough to represent the level of polarization statistically observed and reported in the literature. This is why we decided to re-scale the fractional polarization images in such a way that the AGN and the SFGs can assume values between 0$-$10% and 0$-$5% respectively, as observations of statistical samples suggest [@hales]. The $Q$ and $U$ Stokes parameters are computed by combining Eq. \[eq:psi\] and ${ p_{\nu}= \sqrt{ U_{\nu}^2+Q_{\nu}^2}}$, where $\rm p_{\nu}$ is the polarized intensity at a given frequency $\rm \nu$: $$\begin{aligned} Q_{\nu} & = & \frac{p_{\nu}}{\sqrt{\tan^2{2\Psi_0}+1}} \nonumber \\ U_{\nu} & = & \frac{p_{\nu}\tan{2\Psi_0}}{\sqrt{\tan^2{2\Psi_0}+1}}\end{aligned}$$ We neglect the effect of the Galactic Rotation Measure (RM) and we assume that no other magnetised plasma is present in the simulated portion of the Universe.
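In practice, the Monte Carlo extractions and the polarization assignment described above amount to inverse-transform samplings of tabulated cumulative distributions, and the last equations fix $Q$ and $U$ once $p_{\nu}$ and $\Psi_0$ are known (below we use the equivalent $\cos/\sin$ form). A minimal sketch, with an arbitrary toy distribution standing in for the RLF-based ones, could be:

```python
# Sketch: inverse-CDF sampling and Q/U assignment (toy distributions).
import numpy as np

rng = np.random.default_rng(42)

def sample_from_pdf(x, pdf, size):
    """Inverse-transform sampling from a tabulated pdf."""
    cdf = np.cumsum(pdf)
    cdf = cdf / cdf[-1]                          # normalised cumulative distribution
    return np.interp(rng.uniform(0.0, 1.0, size), cdf, x)

# toy redshift distribution standing in for the RLF-based one
z_grid = np.linspace(0.0, 6.0, 601)
pdf_z = z_grid**2 * np.exp(-z_grid)              # illustrative shape only
z = sample_from_pdf(z_grid, pdf_z, size=10000)

# observed surface brightness: cosmological dimming of the rest-frame value,
# I_obs = I_rf / (1+z)^(3+alpha), as in the equation above
alpha = 0.7
I_obs = 1.0 / (1.0 + z)**(3.0 + alpha)           # arbitrary rest-frame units

# Q and U from the polarized intensity p and the intrinsic angle psi0
p = 0.05 * I_obs                                 # e.g. 5 per cent polarization
psi0 = rng.uniform(-np.pi / 2, np.pi / 2, z.size)
Q = p * np.cos(2.0 * psi0)
U = p * np.sin(2.0 * psi0)
assert np.allclose(np.hypot(Q, U), p)            # consistency: p = sqrt(Q^2 + U^2)
```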
Were such a magnetised plasma present along the line of sight, the observed polarized intensity would not be equal to the intrinsic one and we should compute the U and Q Stokes parameters starting from the polarization angle $\Psi$ defined as: $${ \Psi=\Psi_0+\rm \lambda^2 \cdot \rm\phi(l)}, \label{eq:faraday}$$ where $\phi(l)$ is the Faraday depth, defined as the integral, performed over the length l (in kpc) of the crossed magneto-ionic plasma, of the line-of-sight parallel component of the magnetic field $\rm B_{||}$ (in $\muup$G) times the thermal density $\rm n_e$ (in cm$^{-3}$): $$\phi(l)=812 \int_0^l n_{\rm e} \,{\bf B} \cdot{\bf dl}=812 \int_0^l n_{\rm e} \,{ B}_{||} {dl} \quad {\rm [rad m^{-2}].} \label{eq:rm}$$ Comparison with data: total intensity and polarization source counts {#sect:comp} ==================================================================== To test the reliability of our simulations, we compare our results with total intensity and polarization source counts available from the literature.\ ![Euclidean-normalised source counts at 1.4GHz: the black points represent the data [@white; @atesp; @bondi; @bondi2008; @kellermann; @hales; @prandoni18] while the green ones show the values obtained from the simulation of this work.[]{data-label="fig:cnt_I"}](./cnt_I_cut.pdf){width="47.00000%"} In Fig. \[fig:cnt_I\], we plot the 1.4GHz differential source counts of our simulated radio source population together with those estimated from surveys sampling a wide flux density range, from $\sim$60$\muup$Jy up to 1Jy. The counts are Euclidean normalised[^3] and the data refer to large-scale ($>$ a few square degrees) 1.4GHz surveys [@white; @atesp; @bondi; @bondi2008; @kellermann; @hales; @prandoni18]. The flux density is evaluated taking into account the $k$-correction: $$S_{\nu}=\frac{L_{\nu}}{4 \pi D_{\rm L}^2} \cdot (1+z)^{1-\alpha}$$ where $D_{\rm L}$ is the luminosity distance. As shown in the plot, the simulated differential counts (green points) are in agreement with the data. This simulation can be used to predict the radio sky at sub-$\muup$Jy fluxes, which will be accessible to next generation radio telescopes like the SKA over large fields-of-view. ![Cumulative counts of polarized sources as a function of the polarized flux density. The black points refer to the data of the GOODS-N field [@rudnick] and of the ATLAS data release 2 [@hales] while the red points represent the results obtained from the simulation of this work. The best-fit equation for the simulated counts is reported in the bottom left corner and it is represented as a solid purple line.[]{data-label="fig:cfr_rudnick"}](./cumul_pol_fit.pdf){width="47.00000%"} \ In Fig. \[fig:cfr_rudnick\], we show the 1.4GHz cumulative counts of polarized sources as a function of the polarized source flux density ${\rm p_{\nu}}$ in mJy. The black points are 1.4GHz data [@hales; @rudnick] which cover the range between ${\rm \sim 16\,\muup Jy}$ and ${\rm\sim60\,mJy}$, while the red points are the cumulative counts obtained from our simulation. The error bars of the cumulative source counts $\sigma_N$ are the poissonian uncertainties. Even in this case, the agreement between data and simulation is remarkable.\ We observe that the cumulative source counts as a function of the polarized flux density can be well described by a power-law: $$N(>p)/{\rm deg^2}=N_0 \cdot \left (\frac{p}{\rm mJy}\right )^{\gamma},$$ which turns out to be a linear function in log-log space: $$y=A \cdot x + B,$$ where $y=\log(N(>p)/{\rm deg^2})$, $x=\log(p)$, $B=\log(N_0)$, and $A=\gamma$.
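As an illustration of this linearisation, a weighted least-squares fit of the kind used here can be sketched as follows; the counts below are synthetic placeholders, not our simulated values, and the Poissonian errors are propagated to log space as $\sigma_y=\sigma_N/(N\ln 10)$:

```python
# Sketch: weighted fit of N(>p)/deg^2 = N0 * (p/mJy)^gamma in log-log space.
import numpy as np

p_mjy = np.logspace(-2, 1, 12)            # polarized flux density [mJy]
N = 2.0 * p_mjy**-0.9                     # toy cumulative counts per deg^2
sigma_N = np.sqrt(N)                      # Poissonian uncertainties

x = np.log10(p_mjy)
y = np.log10(N)
sigma_y = sigma_N / (N * np.log(10.0))    # error propagation to log space

A, B = np.polyfit(x, y, deg=1, w=1.0 / sigma_y)   # weighted fit of y = A x + B
N0, gamma = 10.0**B, A
print("N(>p)/deg^2 = %.2f * (p/mJy)^%.2f" % (N0, gamma))
```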
With the least squares method, we fit the cumulative counts, obtaining the following relation: $$N(>p)/{\rm deg^2}=(2.01\pm 0.22) \cdot \left (\frac{p_{\rm 1.4\,GHz}}{\rm mJy}\right )^{(-0.89\pm 0.09)}, \label{eq:p}$$ represented with a purple line in Fig. \[fig:cfr\_rudnick\]. In our fitting, we take into account the uncertainties on the measurements as: $$\sigma_y=\frac{\sigma_N}{N(>p)/{\rm deg^2}\cdot \ln{10}}$$ The errors associated with the parameters $N_0$ and $\gamma$ are then $\sigma_{N_0}=10^B \cdot \ln{10}\cdot \sigma_B$ and $\sigma_{\gamma}=\sigma_A$.\
Also in this case, our simulation can be used to investigate radio source populations with polarized flux densities below the limit of current observations. In particular, Table \[tab:table\] reports our expectations in terms of polarized source numbers and densities for several radio continuum polarization surveys. From left to right, each column shows the survey name, the sensitivity level in polarization $\sigma_p$ in $\muup$Jy at 1.4GHz, the expected number of sources per square degree with polarized intensity higher than 3$\sigma_p$, the field of view of the survey in square degrees, and the number of sources that each survey would detect. The number and the number density per square degree have been computed from Eq. \[eq:p\].

  Survey             $\sigma_p$\[$\muup$Jy\]   N/deg$^2$   FoV\[deg$^2$\]   N\[$\times10^3$\]
  ------------------ ------------------------- ----------- ---------------- -------------------
  VLASS              89                        7           33885            220
  Apertif            10                        46          3500             159
  POSSUM             7                         63          30000            1877
  MIGHTEE            0.7                       486         20               10
  SKA1-MID all-sky   2.8                       141         31000            4385
  wide               0.7                       486         1000             486
  deep               0.14                      2034        30               61
  ultra-deep         0.035                     6987        1                7

  : From left to right, each column shows the survey name, the sensitivity level in polarization $\sigma_p$ in $\muup$Jy at 1.4GHz, the expected number of sources per square degree with polarized intensity higher than 3$\sigma_p$, the field of view of the survey in square degrees, and the number of sources that each survey would detect. The number and the number density per square degree have been computed from Eq. \[eq:p\].[]{data-label="tab:table"}

The confusion limit in total intensity, Q, and U Stokes parameters
==================================================================

The possibility of simulating all the radio sources present in a given field of view lets us explore the effect of the confusion noise, which is due to the faint unresolved radio sources whose signals enter the beam of the telescope. While we can reduce thermal noise by increasing the exposure time, confusion is a physical limit that we cannot overcome for a fixed maximum baseline length, so it is important to have an accurate estimate of its statistical properties. Here, we simulate the full-Stokes parameters at 1.4GHz of a radio source population in a computational grid corresponding to $\sim$0.72deg$^2$ with a resolution of 1${\rm ^{\prime\prime}}$.

![image](./img_conf.pdf)

The resulting images have been convolved with different beam sizes. In particular, we consider beam Full-Width-at-Half-Maximum (FWHM) values of 1${\rm ^{\prime\prime}}$, 2${\rm ^{\prime\prime}}$, 6${\rm ^{\prime\prime}}$, 10${\rm ^{\prime\prime}}$, 20${\rm ^{\prime\prime}}$, 45${\rm ^{\prime\prime}}$, 60${\rm ^{\prime\prime}}$, and 120${\rm ^{\prime\prime}}$. In Fig. \[fig:conf\_img\], we show the resulting images at, from top to bottom, 1${\rm ^{\prime\prime}}$, 10${\rm ^{\prime\prime}}$, 45${\rm ^{\prime\prime}}$, and 120${\rm ^{\prime\prime}}$ beam FWHM.
Columns, from left to right, show the $I$, $Q$, and $U$ Stokes images, respectively.

![A zoom of the histogram of the surface brightness in mJy/beam as measured over the 1$^{\prime\prime}$ total intensity image of Fig. \[fig:conf\_img\]. The y-axis represents the number of the pixels at a given surface brightness normalised to 1. In the top right corner, a zoom out of the same histogram is reported to show the full range of values assumed by the distribution.[]{data-label="fig:histo_conf"}](./histo_ozoom.pdf){width="47.00000%"}
\
Starting from these images, we want to determine the confusion limits at the different beam FWHM. In total intensity, the spatial distribution of confusion sources over a large region of the sky forms a plateau characterised by a mean different from zero. However, this base level cannot be observed in interferometric images due to the missing short baselines in the $u-v$ plane. Thus, what we observe in general is the fluctuating component of the confusion. The distribution of these fluctuations is highly non-Gaussian, presenting a long tail at high flux densities due to bright sources. This is shown in Fig. \[fig:histo\_conf\], where we plot the Stokes I surface brightness distribution obtained from the image of Fig. \[fig:conf\_img\] at 1$^{\prime\prime}$ resolution. The y-axis represents the number of the pixels at a given surface brightness normalised to 1. In the top right corner, a zoom out of the same histogram is reported to show the full range of surface brightness values assumed by the distribution. The long tail towards high surface brightness values is due to the presence of real sources.\
In real images, the confusion is estimated from the probability distribution P(D) measured in a cold part of the sky, which corresponds to the distribution of the surface brightness image. The P(D) distribution is the convolution of the confusion due to the faint sources and the thermal noise, which are independent of each other, so that the total observed variance ${\rm \sigma_o^2}$ is the sum of the variance due to the confusion noise ${\rm \sigma_c^2}$ and to the thermal noise ${\rm \sigma_n^2}$: $$\rm \sigma_o^2=\sigma_c^2+\sigma_n^2.$$ To estimate the confusion, in general it is necessary to start from images where ${\rm \sigma_n^2} \ll {\rm \sigma_c^2}$. The simulated images obtained in this work are not affected by any kind of noise except the confusion. Therefore, to measure the confusion we could simply measure the rms from the simulated images. However, to be sure that we are not taking into account bright sources, which should be distinguishable from the confusion, we measure the average and the rms with an iterative procedure. For each image at a given beam resolution, we follow these steps (a minimal sketch of the procedure is given below):

1.  we cover the image with boxes with sizes 10 times the beam FWHM;

2.  we evaluate the rms in every box by iteratively clipping all the pixels having an intensity larger than 10${\rm \times}$rms, until convergence, i.e. until no further pixels are excluded. In practice, we consider that the confusion noise is related only to the sources fainter than a signal-to-noise ratio S/N < 10, where N is evaluated numerically by clipping the tail of the distribution as described above;

3.  we compute the confusion limit by averaging the rms values of the different boxes, and its uncertainty as the standard deviation of the rms values divided by the square root of the number of boxes.
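A minimal sketch of this box-and-clip estimate, assuming a simple 2D array as the input map, could look as follows. The image, the pixel scale, and the clipping threshold are placeholders for illustration; only the 10$\times$FWHM box size and the 10$\times$rms iterative clipping follow the steps above.

```python
import numpy as np

def clipped_rms(values, kappa=10.0, max_iter=50):
    """Iteratively clip pixels above kappa*rms until no further pixel is excluded."""
    vals = values.ravel().astype(float)
    for _ in range(max_iter):
        rms = vals.std()
        keep = np.abs(vals) <= kappa * rms  # assumes roughly zero-mean fluctuations
        if keep.all():
            break
        vals = vals[keep]
    return vals.std()

def confusion_limit(image, beam_fwhm_pix, kappa=10.0):
    """Average the clipped rms over boxes of side 10*FWHM.

    Returns the mean rms and its uncertainty, taken as the standard deviation
    of the box rms values divided by the square root of the number of boxes.
    """
    box = int(10 * beam_fwhm_pix)
    ny, nx = image.shape
    rms_values = []
    for j in range(0, ny - box + 1, box):
        for i in range(0, nx - box + 1, box):
            rms_values.append(clipped_rms(image[j:j + box, i:i + box], kappa))
    rms_values = np.asarray(rms_values)
    return rms_values.mean(), rms_values.std() / np.sqrt(rms_values.size)

# Toy example on a pure-noise map standing in for a convolved simulated image:
rng = np.random.default_rng(0)
toy_image = rng.normal(0.0, 1.0, size=(1024, 1024))
mean_rms, err = confusion_limit(toy_image, beam_fwhm_pix=6)
print(f"confusion estimate: {mean_rms:.3f} +/- {err:.3f} (arbitrary units)")
```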
![The plot shows the 1.4GHz confusion noise in total intensity, calculated from the convolved images, as a function of the FWHM: the green dots are fitted with the solid green line. The black line represents the formula proposed by @condon2002, which is reported together with the fitted relation in the bottom left corner. We also plot in magenta the expected sensitivity of different surveys: the SKA1-MID all-sky, wide, deep, and ultra-deep surveys [@prandoni], the WSRT Apertif survey [@norris13], the MeerKAT MIGHTEE survey [@jarvis], the ASKAP EMU survey [@norris], and the VLA VLASS [@lacy].[]{data-label="fig:conf_I_plot"}](./i_fit.pdf){width="47.00000%"}

The computed confusion limits in total intensity at the different FWHM are plotted in Fig. \[fig:conf\_I\_plot\]: the measurements performed on the 1.4GHz simulated images are represented with green dots.\
As for the case of the cumulative counts, we assume a power-law behaviour for the confusion noise as a function of the beam resolution, $\sigma=N_0 \cdot (FWHM)^\gamma$, and we fit the results with the least squares method in the log-log space, where $y=\log(\sigma)$, $x=\log(FWHM)$, $B=\log(N_0)$, $A=\gamma$. We find the following relation: $${\rm \sigma_{1.4\,GHz}}^I = \rm {(0.1862\pm0.0009) \cdot \left (\frac{FWHM}{arcmin} \right)^{2.149\pm 0.001 } \,mJy/beam.} \label{eq:I}$$ Assuming an average spectral index for the source population of $\alpha=0.8$, the previous relation can be written as: $$\sigma_{\nu}^I = (0.237\pm0.001) \cdot \left (\frac{\nu}{\rm GHz} \right )^{-\alpha}\cdot \left (\rm \frac{FWHM}{arcmin} \right )^{2.149\pm0.001} \,{\rm mJy/beam},$$ where we consider $\left (\frac{\nu}{\rm GHz} \right )^{-\alpha}$ as a constant and therefore simply divided the fitted parameter $N_0$ and its uncertainty by this constant. Our results can be compared with the confusion noise expected on the basis of the formula provided by @condon2002: $$\sigma_{\nu}^I=0.2 \cdot \left ( \frac{\nu}{\rm GHz} \right )^{-\alpha} \cdot \left ( \rm \frac{FWHM_{min} \cdot FWHM_{max}}{arcmin^2} \right)\, {\rm mJy/beam},$$ where $\rm FWHM_{min}$ and $\rm FWHM_{max}$ are the minimum and the maximum beam FWHM. As a reference, we trace this relation with a black line in Fig. \[fig:conf\_I\_plot\], where we assume $\alpha$=0.8, $\nu$=1.4GHz and $\rm FWHM_{min}=FWHM_{max}$. We note that, as far as total intensity is concerned, there is remarkable agreement between the predictions of our simulations and the formula by @condon2002 widely used in the literature. In the same Figure, we show in magenta the sensitivity levels foreseen for the same future surveys of Table \[tab:table\]. As we can see, all the surveys lie very close to the confusion limit, except at very high angular resolution, for example at the 0.5$^{\prime\prime}$ resolution of the SKA1-MID ultra-deep survey, where the confusion noise is lower and it is possible to explore the radio continuum sky in depth. No information has been reported in the literature so far about the confusion limit in the $Q$ and $U$ Stokes parameters. The values measured in this work at the different FWHM are plotted in Fig. \[fig:conf\_UQ\_plot\]:

![The plot shows the 1.4GHz confusion noise versus the FWHM for the U (blue triangles and line) and Q (red dots and line) Stokes parameters. They have been fitted with the relations reported in the bottom left corner of the plot.
We also plot in magenta the expected sensitivity of different surveys: the all-sky, wide, deep, and ultra-deep SKA1-MID surveys [@prandoni], the WSRT Apertif survey [@norris13], the MeerKAT MIGHTEE survey [@jarvis], and the ASKAP POSSUM survey [@gaensler].[]{data-label="fig:conf_UQ_plot"}](./uq_fit.pdf){width="47.00000%"}

the red and the blue solid lines are, respectively, the fits to the 1.4GHz simulated $Q$ and $U$ confusion noise, whose equations are indicated in the plot. We report in the following the best-fit equations shown in the plot: $$\begin{aligned} \sigma_{\rm 1.4\,GHz}^Q = (0.393\pm0.002) \cdot \left (\rm \frac{FWHM}{arcmin} \right )^{2.018\pm 0.001} \rm\, \muup Jy/beam \nonumber\\ \sigma_{\rm1.4\,GHz}^U =(0.485\pm0.002) \cdot \left ( \rm \frac{FWHM}{arcmin} \right )^{2.093 \pm 0.001}\rm \, \muup Jy/beam \label{eq:U}\end{aligned}$$ By assuming an average spectral index for the source population of $\alpha=0.8$, the previous relations can be written as: $$\begin{aligned} \sigma_{\nu}^Q=(0.501\pm 0.002) \cdot \left (\frac{\nu}{GHz} \right)^{-\alpha} \cdot \rm \left ( \frac{FWHM}{arcmin} \right )^{2.018\pm 0.001}\rm \, \muup Jy/beam\nonumber \\ \sigma_{\nu}^U=(0.618 \pm 0.003) \cdot \left (\frac{\nu}{GHz}\right)^{-\alpha} \cdot \rm \left ( \frac{FWHM}{arcmin} \right )^{2.093\pm 0.001}\rm \, \muup Jy/beam\end{aligned}$$ As expected, the confusion limits in the $U$ and $Q$ Stokes parameters are lower than in total intensity, according to our simulation by a factor of $\sim$400. Concerning future surveys, we observe that in the $Q$ and $U$ Stokes parameters the confusion limit is well below their sensitivity levels, which are reported in the same plot with magenta symbols. This represents an important result since, according to the modelling presented here, it means that with next-generation telescopes we could perform very deep targeted observations in polarization without being limited by confusion noise.

Applicability of the results {#sect:app}
============================

The simulations presented in this work aimed at determining the confusion limit in polarization as a function of angular resolution.\
Our approach consists of modelling the discrete radio sources populating the Universe, starting from their observed properties at 1.4GHz. Our investigation is based on a number of assumptions. We discuss in the following the reasons behind each of them and the possible limitations they introduce.

1.  Frequency. At 1.4GHz the radio sky has been extensively studied down to $\muup$Jy flux levels, both in total intensity and in polarization. This enables us to compare our modelling with existing data in the literature and assess the reliability of our simulations. The results of the simulations at 1.4 GHz can be extrapolated to other frequencies by assuming an average spectral index for the various source populations. This approach has been followed by both @wilman08 and @bonaldi, obtaining good results in reproducing observational trends, like source counts. Nevertheless, directly simulating the extra-galactic radio sky at lower and/or higher frequencies would certainly be the right approach to follow.

2.  Galactic foreground. We neglect the presence of a Galactic foreground. The effect of the Galactic RM is the rotation of the polarization plane of the signal, as shown in Eq. \[eq:faraday\].
    If we do not correct for the right value of the Galactic RM, the signal will be depolarized and measurements of the $Q$ and $U$ confusion limits would give values lower than those reported in this work. By applying techniques like the Rotation Measure Synthesis [@brent; @burn], it is possible to infer the Galactic RM value. Our results will then correspond to the de-rotated $U$ and $Q$ Stokes images (a sketch of this de-rotation is given at the end of this section).

3.  Clustering. The simulated images used to estimate the confusion do not include clustering of sources. In other words, we are simulating a cold region of the sky, without galaxy clusters. The presence of source clustering would have the effect of creating regions with different source densities and, likely, a different distribution of the confusion. To evaluate the effect of clustering on confusion, it is necessary to implement the clustering of sources along the filaments of the cosmic web in our simulation, and this is the goal of future work. However, since our simulations agree with the data (see Section \[sect:comp\]), we are confident in the reliability of our results. If discontinuities in the number of sources can be clearly observed in images, our results would represent an average behaviour of the confusion between the higher- and the lower-density regions. It is worth noting that @wilman08 include a clustering recipe in their simulations, but the results are questioned by radio source clustering analyses reported in the literature [see e.g. @hale18]. @bonaldi also implemented source clustering in T-RECS, using a high-resolution cosmological simulation. An issue that can be introduced by source over-densities is the possible presence of a magneto-ionic plasma in the inter-cluster medium and, more generally, in the filaments of the cosmic web. This would have the effect of depolarising the signal of background sources, resulting in lower $Q$ and $U$ confusion limits. While up to now the presence of magnetic fields in filaments is not firmly confirmed by observations [but see @vacca18], the magneto-hydro-dynamical simulations which explore this possibility suggest very weak magnetic fields in these structures [@vazza15]. Therefore, the depolarization due to filaments should not have a significant impact on our estimates. However, the effects of source clustering mentioned here deserve dedicated studies, and we consider them as a future prospect.

4.  Sidelobe contribution. An additional source of confusion, especially important in total intensity rather than in the U and Q Stokes images, is due to the sidelobes of uncleaned sources lying outside the image. In the work presented here, we did not consider this contribution. This choice was made because we wanted to estimate the confusion noise due to the faint unresolved sources and compare it with the sensitivity foreseen for several surveys performed (or which are going to be performed) with different instruments. These instruments will be characterised by different responses, i.e. by different (and sometimes still unknown) primary beam shapes; therefore, the addition of the sidelobe contribution would make the results valid only for a particular instrument in a particular configuration. With this work, we give a first estimate of the confusion noise in polarization due to the faint unresolved sources. Thanks to this, we can focus on those instruments which seem capable of reaching a thermal noise close to the confusion values reported here, and perform the analysis also considering the sidelobe contribution.
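As an illustration of the de-rotation mentioned in point 2, the following minimal sketch applies Eq. \[eq:faraday\] to remove a known Galactic RM from a pair of observed $Q$ and $U$ maps at a single wavelength. The RM value and the frequency are placeholders, and a proper treatment over a finite bandwidth would use the full Rotation Measure Synthesis of @brent rather than this single-channel correction.

```python
import numpy as np

def derotate_qu(q_obs, u_obs, rm_gal, wavelength_m):
    """De-rotate Q/U maps for a known Galactic RM (rad m^-2) at one wavelength.

    With P = Q + iU = p * exp(2i Psi) and Psi = Psi_0 + lambda^2 * phi,
    multiplying by exp(-2i * lambda^2 * phi) recovers the intrinsic angle.
    """
    p_obs = q_obs + 1j * u_obs
    p_int = p_obs * np.exp(-2j * rm_gal * wavelength_m ** 2)
    return p_int.real, p_int.imag

# Placeholder values: a single 1.4 GHz channel and a Galactic RM of +15 rad m^-2.
lam = 299792458.0 / 1.4e9
q_obs, u_obs = np.array([[1.0]]), np.array([[0.0]])
q_int, u_int = derotate_qu(q_obs, u_obs, rm_gal=15.0, wavelength_m=lam)
print(q_int, u_int)
```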
Conclusions
===========

In this work, we presented an original numerical approach developed to generate full-Stokes images of the radio sky.\
We described the models and the procedure adopted to reproduce the discrete radio sources populating the Universe.\
After successfully comparing the results of our modelling with data from the literature, concerning both the differential source counts in total intensity and the cumulative source counts of polarized sources, we identified a simple functional relation between the number of polarized sources per square degree and the polarized flux density. From this relation, we computed the number of polarized sources that future surveys will detect, information that is especially useful for cosmic magnetism investigations. Finally, we evaluated the confusion limits in $I$, $Q$, and $U$ Stokes parameters at different beam resolutions. In this case too, we found analytical formulas which describe the confusion limits as a function of the angular resolution. These formulas can be used as additional input for setting up observational strategies to maximise the impact of the next-generation radio telescopes.

Acknowledgements {#acknowledgements .unnumbered}
================

We gratefully acknowledge the anonymous referee for the useful comments and suggestions. FL and AB acknowledge financial support from the Italian Minister for Research and Education (MIUR), project FARE SMS, code R16RMPN87T. AB acknowledges financial support from the ERC-Stg DRANOEL, no 714245. IP acknowledges funding from the INAF PRIN-SKA 2017 project 1.05.01.88.04 (FORECaST). The trg computer cluster was funded by the Autonomous Region of Sardinia (RAS) using resources from the Regional Law 7 August 2007 n. 7 (year 2015) “Highly qualified human capital”, in the context of the research project CRP 18 “General relativity tests with the Sardinia Radio Telescope” (P.I. of the project: Dr. Marta Burgay).

[39]{}
Bonaldi, A., Bonato, M., Galluzzi, V., et al. 2019, , 482, 2
Bondi, M., Ciliegi, P., Zamorani, G., et al. 2003, , 403, 857
Bondi, M., Ciliegi, P., Schinnerer, E., et al. 2008, , 681, 1129-1135
Brentjens M. A., de Bruyn A. G., 2005, , 441, 1217
Burn B. J. 1966, , 133, 67
Carilli C. L., & Taylor G. B. 2002, , 40, 319
Condon, J. J. 1974, , 188, 279
Condon J. J., Cotton W. D., Greisen E. W., et al. 1998, , 115, 1693
Condon, J. J. 2002, Single-Dish Radio Astronomy: Techniques and Applications, 278, 155
Condon, J. J., Cotton, W. D., Fomalont, E. B., et al. 2012, , 758, 23
Fanaroff, B. L., & Riley, J. M. 1974, , 167, 31P
Feretti L., Giovannini G., Govoni F., & Murgia M. 2012, , 20, 54
Gaensler, B. M., Landecker, T. L., Taylor, A. R., & POSSUM Collaboration 2010, Bulletin of the American Astronomical Society, 42, 470.13
Govoni, F., & Feretti, L. 2004, International Journal of Modern Physics D, 13, 1549
Hale, C. L., Jarvis, M. J., Delvecchio, I., et al. 2018, , 474, 4133
Hales, C. A., Norris, R. P., Gaensler, B. M., & Middelberg, E. 2014, , 440, 3113
Johnston-Hollitt, M., Govoni, F., Beck, R., et al. 2015, Advancing Astrophysics with the Square Kilometre Array (AASKA14), 92
Kellermann, K. I., Fomalont, E. B., Mainieri, V., et al. 2008, , 179, 71
Jarvis, M., Taylor, R., Agudo, I., et al. 2016, Proceedings of MeerKAT Science: On the Pathway to the SKA. 25-27 May, 2016 Stellenbosch, South Africa (MeerKAT2016). Online at https://pos.sissa.it/277/006/pdf, id.6, 6
Lacy, M., Baum, S. A., Chandler, C. J., et al.
2016, American Astronomical Society Meeting Abstracts \#227, 227, 324.09
Murgia, M., Govoni, F., Feretti, L., et al. 2004, , 424, 429
Norris, R. P., Hopkins, A. M., Afonso, J., et al. 2011, PASA, 28, 215
Norris, R. P., Afonso, J., Bacon, D., et al. 2013, , 30, e020
Novak, M., Smol[č]{}i[ć]{}, V., Delhaize, J., et al. 2017, , 602, A5
O’Sullivan, S., Stil, J., Taylor, A. R., et al. 2008, The role of VLBI in the Golden Age for Radio Astronomy, 107
Oppermann N., Junklewitz H., Greiner M., et al. 2015, , 575, A118
Prandoni, I., Gregorini, L., Parma, P., et al. 2001, , 365, 392
Prandoni, I., & Seymour, N. 2015, Advancing Astrophysics with the Square Kilometre Array (AASKA14), 67
Prandoni, I., Guglielmino, G., Morganti, R., et al. 2018, , 481, 4548
Rudnick, L., & Owen, F. N. 2014, , 785, 45
Schinnerer, E., Sargent, M. T., Bondi, M., et al. 2010, , 188, 384
Schinnerer, E., Smol[č]{}i[ć]{}, V., Carilli, C. L., et al. 2007, , 172, 46
Schinnerer, E., Carilli, C. L., Scoville, N. Z., et al. 2004, , 128, 1974
Smol[č]{}i[ć]{}, V., Novak, M., Delvecchio, I., et al. 2017, , 602, A6
Taylor, A. R., Stil, J. M., & Sunstrum, C., 2009, , 702, 1230
Vacca, V., Murgia, M., Govoni, F., et al. 2018, , 479, 776
Vazza, F., Ferrari, C., Br[ü]{}ggen, M., et al. 2015, , 580, A119
Wilman, R. J., Miller, L., Jarvis, M. J., et al. 2008, , 388, 1335
White, R. L., Becker, R. H., Helfand, D. J., & Gregg, M. D. 1997, , 475, 479
\[lastpage\]
[^1]: E-mail: [email protected]
[^2]: https://www.skatelescope.org/
[^3]: the source counts are normalised with respect to a Euclidean Universe, in which the integral number of sources brighter than a flux density S scales as $N(>S) \propto S^{-3/2}$; the differential counts are accordingly multiplied by $S^{5/2}$
Mid
[ 0.589498806682577, 30.875, 21.5 ]
1. Introduction {#s0005}
===============

Individuals in inpatient treatment for alcohol use disorders (AUD) have a range of treatment needs. In particular, they experience prominent physical, psychological, and social problems ([@b0145]). These factors are important for daily functioning and are profoundly relevant to reintegration into the community ([@b0105]). Quality of life, which generally refers to perceptions of well-being across different domains of functioning ([@b0105]), has received attention within the addiction treatment field during the past decades ([@b0125]). Recent research has also recommended measures of patients' quality of life as outcome indicators of substance use disorder (SUD) treatment ([@b0105], [@b0175]). Measures of generic or overall quality of life (OQOL), as opposed to health-related quality of life, explore patients' perceptions (i.e. within physical, mental health, and social domains) independently of other health conditions ([@b0125]). OQOL may therefore be particularly relevant as a treatment outcome measure among SUD patients ([@b0105], [@b0175]). The treatment outcomes of SUD patients may be influenced by patient-related factors (i.e. clinical and psychological variables) and treatment factors, such as the content and process of treatment ([@b0055], [@b0060], [@b0215]). So far, only a few studies have investigated the factors that may influence trajectories in OQOL among SUD patients. Regarding patient-related factors, one study of patients admitted to detoxification found that baseline mental distress predicted changes in OQOL at six-month follow-up ([@b0200]). Another prospective study of hospitalized SUD patients found no association between patients' baseline psychiatric symptoms and changes in OQOL at follow-up ([@b0165]). These two studies (using the same OQOL instrument) also reported inconclusive results regarding the role of gender. [@b0165] reported that compared with males, females had larger improvement in OQOL scores during SUD treatment, whereas [@b0200] did not find an association between gender and OQOL. The influence of SUD patients' substance use on OQOL is also not well understood. It has been suggested that greater levels of polysubstance use are associated with lower OQOL ([@b0085], [@b0120]). Conversely, reduced alcohol consumption may be associated with significant increases in OQOL ([@b0065]). [@b0200] reported that abstinence was associated with improved OQOL, while [@b0165] did not find such an association. Associations between treatment-related factors and OQOL outcomes have been the subject of only a few previous studies. Patient satisfaction measures are recognized as an important tool for evaluating whether treatment factors contribute to improvements ([@b0045]). Higher patient satisfaction with different aspects of inpatient SUD treatment is suggested to be related to perceived benefit of treatment ([@b0010], [@b0215]) and to predict lower alcohol problem severity one year after treatment initiation ([@b0095]). Although the studies have mainly been confined to patients with mental disorders, there is also evidence that patient satisfaction with treatment ([@b0015]) and perceived quality of services are associated with OQOL ([@b0050], [@b0075]). The few available results on factors associated with changes in OQOL among SUD patients are inconclusive. Moreover, previous studies have generally paid little attention to the influence of treatment-related factors on OQOL trajectories among SUD patients.
To the best of our knowledge, no studies have investigated OQOL among patients with AUD in SUD treatment and the patient- and treatment-related factors that may influence OQOL trajectories in this patient population. Previous work has also been limited by measuring OQOL at two assessment time points and using statistical methods that do not account for the clustered nature of the data (e.g. the same patients nested over time). In contrast, a multiple OQOL follow-up allows a mixed model examination of trajectories during and after inpatient treatment. Therefore, the overall study purpose was to investigate patient- and treatment-related factors associated with OQOL trajectories during and after inpatient AUD treatment. Specifically, based on the literature on factors associated with treatment outcome among patients with substance use and mental health issues, we hypothesized: 1) that higher mental distress would be associated with lower trajectories of OQOL and 2) that higher patient satisfaction with treatment and services received would be associated with higher OQOL trajectories. 2. Materials and methods {#s0010} ======================== 2.1. Design and setting {#s0015} ----------------------- The current study was part of a larger prospective cohort study of patients consecutively admitted for inpatient SUD treatment in Central Norway from September 2014 to December 2016. The study sites were the five largest publicly funded SUD treatment centers in central Norway, providing treatment for different SUD types. Three of these centers offer short-term inpatient treatment (2--4 months) and two provide inpatient treatment \> 6 months. Patients undergo ≤ 14-day detoxification prior to intake, if necessary. All the five centers provide comprehensive treatment and recovery programs, focusing on individually based social, biological, and mental health needs through a combination of group and individual therapies. Research assistants at these units approached patients 1--2 weeks after inpatient admission. In accordance with the Declaration of Helsinki, all patients gave informed consent prior to inclusion. Patients who chose to participate signed a consent form giving explicit permission for researchers to obtain information from their medical records and to reestablish contact for follow-up interviews. The patients filled in questionnaires at treatment entry (T1) and at discharge (T2). Follow-up interviews were conducted by telephone three months after discharge (T3) and one year after discharge (T4). The Regional Committee for Medical Research Ethics in Norway approved the study (application \#2013/1733). 2.2. Participants {#s0020} ----------------- The inclusion criterion was a sole AUD (ICD-10, F10); in cases where a SUD diagnosis was missing (n = 7), the most frequently used drug prior to admission was alcohol. Thus, the exclusion criterion was an illicit drug use disorder (ICD-10, F11-F19). 2.3. Data collection and variables {#s0025} ---------------------------------- Variables were collected using self-report instruments and medical records. Patient-related variables were selected based on previously reported associations with OQOL ([@b0025], [@b0050], [@b0065], [@b0165], [@b0200], [@b0215]). We also included treatment related factors (e.g. satisfaction with treatment, perceived service quality at follow-up), which have been under-investigated as variables associated with OQOL. 2.4. 
OQOL {#s0030}
---------

OQOL was measured at each time point (T1--T4) with the global subscale (QoL-5) ([@b0155]) of the QoL-10 ([@b0110]). This instrument has been extensively validated and correlates with other established generic quality of life measures, such as the WHOQOL-BREF ([@b0155]). The five items in QoL-5 cover a broad spectrum of quality of life dimensions: physical health; psychological health; relation to self; relation to friends; and relation to partner. Responses to each item use a five-point Likert scale from 1 (very good) to 5 (very poor). The raw scores were transformed to a decimal scale, ranging from 0.1 (worst score) to 0.9 (best score) ([@b0200], [@b0205]). The mean Cronbach's alpha (α) was 0.73 (range 0.65--0.78).

2.5. Patient satisfaction and perceived service quality at follow-up {#s0035}
--------------------------------------------------------------------

Patients' satisfaction with treatment was reported at T2. This nine-item instrument was derived from the Patient Experiences Questionnaire for Interdisciplinary Treatment for Substance Dependence (PEQ-ITSD) ([@b0070]). One additional item from the Treatment Perception Questionnaire (TPQ) ([@b0140]) was included to obtain patients' perceptions of time in treatment ("Have you had enough time in treatment to sort out your problems"). A project team of experienced clinicians and researchers selected the items used in the current study based on relevance and utility criteria. Responses to the 10 items were recorded on a five-point Likert scale, ranging from 1 (not at all) to 5 (to a very large degree) (α = 0.86). The average score was used as a patient satisfaction index. Four items were included to measure perceived service quality at T3. These items reflected whether patients perceived that they had easy access to services, whether the services had helped them make recovery progress, and the degree of user involvement and satisfaction with the outlined plans for further follow-up (α = 0.80). The instrument was scored on a four-point scale from 1 (not at all) to 4 (to a large degree). The average score was used as a perceived service quality follow-up index.

2.6. Mental distress and psychiatric disorders {#s0040}
----------------------------------------------

Mental distress was measured at all four time points (T1--T4) using the self-reported Hopkins Symptom Checklist-10 (HSCL-10) ([@b0040]). The Norwegian translation of this 10-item instrument has shown adequate psychometric properties ([@b0185]). Patients reported how frequently they had experienced symptoms related to depression and anxiety during the past seven days on a scale ranging from 1 (not at all) to 4 (extremely) (α = 0.89, range 0.87--0.91); the mean score was used in analyses. Comorbid psychiatric diagnosis (yes/no) was based on a medical record of any ICD-10 diagnosis (F20--F99).

2.7. Substance use and treatment history {#s0045}
----------------------------------------

Medical records were used for substance use and treatment history information. SUD diagnoses (F10--F19) were classified according to the International Classification of Diseases, 10th revision (ICD-10) ([@bib219]). Additional substance use information included most frequently used drug type during the six months preadmission. Treatment history included information about any previous inpatient SUD treatment stay (yes/no), length of current stay, and treatment completion/dropout. The patients' onset age was recorded at T1 with the question: "How old were you the first time you used substances?"
Abstinence (yes/no) at T3 was based on the question "Have you used substances for the last four weeks?"

2.8. Demographics {#s0050}
-----------------

Demographic information (e.g. age at intake, gender) was obtained from medical records.

2.9. Statistical analysis {#s0055}
-------------------------

Descriptive statistics, including Chi-square tests, were used to describe sample characteristics. Cohen's *d* and Cramer's *V* were used to determine group difference effect sizes for the continuous and categorical measures, respectively. SPSS version 25 was used for these analyses. Linear mixed modeling was used to investigate patient- and treatment-related factors associated with OQOL trajectories during and after inpatient treatment, using Stata 14.2. This modelling approach allows use of all available data, including those patients who have missing data on one or multiple assessment time points. Our base model examined both linear and quadratic temporal trends by incorporating Time and Time^2^ as random effects. This decision was based on a visual screening of individual OQOL trajectories, reflecting that respondents differed substantially in both T1 OQOL and trajectories. In the next step, a full model tested the patient- and treatment-related factors as fixed effects. Treatment site was also included as a fixed effect, as too few patients were nested in each site to estimate a random effect. Since mental distress was measured at all four assessment points, this variable was entered as a time-varying covariate accounting for variation in mental distress across the entire study period. Both models were tested with both random intercepts and slopes, an unstructured covariance matrix, and maximum likelihood (ML) estimation. Inclusion of a random intercept accounts for individual baseline differences in OQOL, and random slopes allow for variation in individual OQOL trajectories over time (e.g. improved, declined or unchanged OQOL). A planned post hoc test of marginal effects with Bonferroni correction examined specific differences in OQOL by Time, adjusting for the remaining factors in the mixed model. A variance inflation factor (VIF) \< 4.00 was used as a cutoff for the presence or absence of collinearity ([@b0150]). A sensitivity analysis was conducted excluding patients who did not participate at all assessment time points (n = 114) and those with incomplete OQOL follow-up data (n = 19).

3. Results {#s0060}
==========

3.1. Sample {#s0065}
-----------

T1 assessments were conducted with 611 of 728 eligible patients (84%), of whom 236 satisfied the inclusion criterion of misusing only alcohol. Of the 236 participants at T1, 172 provided data at T2, 177 at T3, and 182 at T4 (see flowchart of study participants in Appendix [Fig. A1](#f0010){ref-type="fig"}). In total, 122 patients participated at all assessment time points. Loss to follow-up at T2 was mainly due to treatment dropout (n = 22) or administrative failure (n = 14); attrition at T3 and T4 was because participants did not reply to research assistants' telephone calls.
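As a concrete illustration of the modelling strategy described in Section 2.9, the sketch below fits a linear mixed model with random intercepts and random linear and quadratic time slopes, using maximum likelihood estimation. It is written in Python with statsmodels on invented long-format data purely for illustration; the study's actual analyses were run in Stata 14.2, and all variable names here are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented long-format data: one row per patient per assessment (T1-T4).
rng = np.random.default_rng(1)
pid = np.repeat(np.arange(200), 4)
time = np.tile(np.arange(4), 200)
distress = rng.normal(1.8, 0.7, 800).clip(1, 4)            # time-varying covariate
satisfaction = np.repeat(rng.normal(4.0, 0.55, 200), 4)    # measured once, at T2

oqol = (0.63 + 0.02 * time
        - 0.15 * (distress - 1.8)
        + 0.03 * (satisfaction - 4.0)
        + rng.normal(0, 0.08, 800)).clip(0.1, 0.9)

df = pd.DataFrame({"patient": pid, "time": time, "time2": time ** 2,
                   "distress": distress, "satisfaction": satisfaction,
                   "oqol": oqol})

# Random intercept plus random linear and quadratic time slopes per patient;
# reml=False requests maximum likelihood, mirroring the ML choice in the text.
model = smf.mixedlm("oqol ~ time + distress + satisfaction",
                    data=df, groups=df["patient"], re_formula="~time + time2")
result = model.fit(reml=False)
print(result.summary())
```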
[Table 1](#t0005){ref-type="table"} presents study variables and sample characteristics at each assessment time point.

Table 1: Sample characteristics[^1] at each assessment time point. Cell entries are n; M (SD) or %.

  Variable                               T1 (n = 236)         T2 (n = 172)         T3 (n = 177)         T4 (n = 182)
  -------------------------------------- -------------------- -------------------- -------------------- --------------------
  Age at intake                          235; 49.12 (11.61)   171; 49.76 (11.38)   176; 49.87 (11.21)   181; 49.58 (11.14)
  Onset age                              229; 15.54 (4.10)    168; 15.57 (4.60)    172; 15.69 (4.52)    177; 15.33 (2.60)
  Gender: female                         73; 31.1%            54; 31.4%            55; 31.3%            59; 32.6%
  Gender: male                           162; 68.9%           118; 68.6%           121; 68.8%           122; 67.4%
  Previous inpatient stay: yes           148; 62.7%           107; 62.2%           113; 63.8%           116; 63.7%
  Previous inpatient stay: no            88; 37.3%            65; 37.8%            64; 36.2%            66; 36.3%
  Psychiatric diagnosis: yes             67; 28.4%            50; 29.1%            54; 30.5%            57; 31.3%
  Psychiatric diagnosis: no              169; 71.6%           122; 70.9%           123; 69.5%           125; 68.7%
  Length of stay                         236; 67.72 (39.95)   172; 71.94 (38.04)   177; 66.92 (33.50)   182; 70.41 (42.36)
  Mental distress                        236; 2.00 (0.73)     171; 1.59 (0.49)     177; 1.83 (0.71)     181; 1.76 (0.72)
  OQOL                                   222; 0.57 (0.16)     165; 0.68 (0.11)     170; 0.63 (0.15)     169; 0.64 (0.16)
  Patient satisfaction index (T2)                             172; 4.03 (0.55)     140; 4.04 (0.57)     138; 4.03 (0.54)
  Abstinent (T3): yes                                                              86; 48.6%            78; 49.7%
  Abstinent (T3): no                                                               91; 51.4%            79; 50.3%
  Perceived service quality index (T3)                                             174; 3.17 (0.81)     154; 3.19 (0.82)

Improved OQOL was reported among 63% of the sample at T4, whereas 31% and 6% reported reduced or unchanged OQOL, respectively.

3.2. Patient satisfaction {#s0070}
-------------------------

Patients were generally satisfied with the inpatient treatment they received. The aspects of treatment receiving the highest ratings were staff perceptions, staff understanding the type of problem, and availability of staff counseling. Activities offered and time in treatment received relatively lower ratings. [Table 2](#t0010){ref-type="table"} presents the mean and standard deviation for each patient satisfaction item.

Table 2: Items measuring patient satisfaction at discharge.

  Item                                              N     Mean   SD
  ------------------------------------------------- ----- ------ ------
  Availability of staff counseling                  172   4.09   0.72
  Have benefited from treatment                     171   4.25   0.75
  Problems understood by staff                      172   4.26   0.72
  Opportunities to affect treatment plan            171   3.78   0.92
  Felt safe at the institution[^2]                  171   4.54
  Satisfactory activities were offered              170   3.86   0.85
  Personnel cooperated with next of kin[^3]         136   3.83   0.83
  Had been prepared for the time after discharge    169   3.89   0.79
  Enough time in treatment to sort out problems     171   3.86   0.95
  Overall treatment was satisfactory                172   4.21   0.68

Patients were also generally satisfied with the follow-up services ([Table 3](#t0015){ref-type="table"}). Specifically, involvement in making plans for follow-up and access to services were ranked highest, whereas perceived benefit of follow-up services was rated lowest.

Table 3: Items measuring perceived service quality at follow-up (T3).[^4]

  Item                                                    N     Mean   SD
  ------------------------------------------------------- ----- ------ ------
  Have had easy access to follow-up services              174   3.24   0.96
  Have benefited from follow-up services                  167   2.92   1.13
  Have been involved in service needs decisions           160   3.16   0.94
  Have had opportunities to affect plans for follow-up    162   3.40   0.86

3.3. Prediction of quality of life trajectories {#s0075}
-----------------------------------------------

To investigate potential heterogeneity in OQOL trajectories, a base model ([Table 4](#t0020){ref-type="table"}: Model 1) was tested including linear and quadratic temporal trends as random effects. The model showed substantial differences both in T1 OQOL status (intercept, σ = 0.07, p \< .000) and slope (σ = 0.001, p \< .046).
Since this variance warranted further exploration, we tested the full model, including patient- and treatment-related factors as fixed effects.

Table 4: Linear mixed model predicting OQOL.

Model 1:

  Parameter             Estimate   z-value   *p*-value   95% CI
  --------------------- ---------- --------- ----------- --------------
  Intercept             0.630      84.15     0.000       0.615; 0.644
  Variance: Intercept   0.074      3.46      0.000
  Variance: Time        0.033      2.06      0.019
  Variance: Time^2^     0.001      1.68      0.046

Model 2:

  Parameter                        Estimate   z-value   *p*-value   95% CI
  -------------------------------- ---------- --------- ----------- ------------------
  Intercept                        0.680      10.34     0.000       0.551; 0.809
  Time: T1 (ref)
  Time: T2                         0.056      4.85      0.000       0.033; 0.079
  Time: T3                         0.040      3.54      0.000       0.018; 0.062
  Time: T4                         0.034      2.61      0.009       0.008; 0.059
  Psychiatric diagnosis (yes)      0.006      0.42      0.676       --0.023; 0.035
  Gender (female)                  0.020      1.44      0.149       --0.007; 0.047
  Age                              0.000      0.30      0.762       --0.001; 0.001
  Previous inpatient stay (yes)    --0.018    --1.38    0.167       --0.044; 0.008
  Abstinent T3 (yes)               0.010      0.80      0.426       --0.015; 0.035
  Mental distress                  --0.147    --17.99   0.000       --0.163; --0.131
  Onset age                        0.002      1.47      0.141       --0.001; 0.004
  Length of stay                   0.000      0.74      0.458       --0.000; 0.001
  Patient satisfaction (T2)        0.032      2.60      0.009       0.008; 0.056
  Perceived service quality (T3)   0.002      0.21      0.837       --0.015; 0.019
  Treatment site                   --0.002    --0.41    0.683       --0.012; 0.008
  Variance: Intercept              0.017      1.45      0.073
  Variance: Time                   0.012      1.25      0.105
  Variance: Time^2^                0.001      1.53      0.063

As shown in [Table 4](#t0020){ref-type="table"} (Model 2), high mental distress was strongly associated with reduced OQOL at all four time points. Higher patient satisfaction at T2 predicted higher OQOL growth trajectories. Growth in T2--T4 OQOL trajectories was also substantial compared with T1. VIF varied from 1.084 to 2.855, indicating that multicollinearity was absent from the mixed model. [Fig. 1](#f0005){ref-type="fig"} shows that the most substantial growth increase was between T1 and T2 (p \< 0.000). Growth was weaker at T3 (p = 0.004) and T4 (p = 0.041). Estimated marginal means showed that the growth differences between T2, T3, and T4 did not reach significance.

Fig. 1: Estimated marginal effects of OQOL by time points (T1--T4).

Fig. A1: Flowchart of study sample.

3.4. Sensitivity analysis {#s0080}
-------------------------

Sensitivity analysis across the four assessment time points, excluding those lost to follow-up, essentially reflected similar results as in [Table 4](#t0020){ref-type="table"} (Model 2). For instance, higher mental distress was strongly associated (z = --18.29, 95% CI = --0.171; --0.138, p \< .000) with lower OQOL throughout the study period. Higher patient satisfaction at T2 was also positively associated with OQOL (z = 2.59, 95% CI = 0.008; 0.059, p = .015). Similar time trends in OQOL as those reported in [Table 4](#t0020){ref-type="table"} (Model 2) and [Fig. 1](#f0005){ref-type="fig"} were detected in the sensitivity analysis, with slightly weaker z-values. Furthermore, female gender (z = 1.97, 95% CI = 0.001; 0.055, p \< .049) and older age of onset (z = 2.43, 95% CI = 0.001; p = .010) were weakly associated with higher OQOL in the sensitivity analysis.

4. Discussion {#s0085}
=============

The current study investigated patient- and treatment-related factors associated with OQOL trajectories during and after inpatient AUD treatment. As hypothesized, and in line with previous research among SUD inpatients ([@b0200]), the current study showed that higher mental distress was associated with lower OQOL trajectories. The association between mental distress and OQOL trajectories among patients in SUD treatment has been sparsely investigated, and the current study is the first to address this issue among inpatients with AUD. Mental health and general quality of life may be interrelated dimensions.
As such, SUD treatment providers may consider incorporating routine OQOL and mental distress screenings at treatment entry, to target patient groups among whom these dimensions should be a focus. Such initiatives, both during and after inpatient treatment, may contribute to more successful treatment outcomes among many patients. Also as hypothesized, increased patient satisfaction with inpatient treatment was associated with higher OQOL trajectories. This is the first prospective study showing an association between patient satisfaction and OQOL among patients in SUD treatment. The current finding is in line with studies that have reported associations between patient satisfaction with SUD treatment and treatment outcomes, such as perceived benefit of treatment ([@b0010]) and drug use improvements ([@b0215]). The result is also congruent with research on patients with mental health problems ([@b0015]). Patient satisfaction within substance use treatment may be strongly associated with client engagement indicators and involvement in therapy ([@b0030]) and may even be a proxy for therapeutic alliance ([@bib217]). Patient perception of follow-up service quality was not significantly associated with OQOL. The importance of consistency and continued care following inpatient treatment is widely acknowledged ([@b0080], [@b0130]). Although previous research in this area is scarce, findings among service users with mental disorders have suggested that quality of life is associated with greater service continuity and satisfaction with the help received ([@b0050], [@b0075]). The current finding may be related to the low symptom severity of the current sample (as reflected by their relatively high mean QoL-5 follow-up scores), and consequently reduced needs for ancillary support services following inpatient treatment compared with those with more severe illicit drug use and severe mental health problems. Abstinence three months after discharge from inpatient treatment was not associated with OQOL. This finding contradicts studies emphasizing the importance of abstinence for improving quality of life after SUD treatment ([@b0100], [@b0200]). Diverging results may relate to differences in symptom severity between samples. Inconsistent results may also be due to assessment timing and number, type of statistical analyses, and adjustment for other variables. Nevertheless, the current findings are consistent with those of [@b0165] and with research suggesting limited congruity between abstinence and subjective well-being ([@b0210]). For many who seek treatment for alcohol problems, the treatment goal may be reduced intake rather than abstinence ([@b0035]). It should also be noted that abstinence from substances might not have an immediate, positive impact on OQOL. Patients may experience abstinence symptoms in the presence of specific situations and triggers ([@b0135]), which could negatively influence OQOL. Longitudinal studies with longer follow-up measurements should elucidate the role of post-treatment abstinence on OQOL. Most patients in this study (63%) reported improved quality of life at follow-up. These results are consistent with previous research suggesting improved quality of life during the course of SUD treatment ([@b0160], [@b0165], [@b0195]). The current findings showed a growth in OQOL from treatment entry to discharge. Thereafter, OQOL stabilized at a higher level than initially (i.e. at treatment entry). 
One possible explanation for the current findings is that inpatient substance use treatment takes a psychosocial approach, focusing on key areas for social reintegration, in addition to providing treatment for other substance abuse problems. The mean one-year follow-up OQOL score among our sample was somewhat higher than the scores recently reported in two six-month follow-up studies with more heterogeneous SUD samples ([@b0165], [@b0200]). This may be due to the longer follow-up interval of the current study. The difference may also indicate a relatively lower symptom severity of the current sample, consistent with research suggesting an association between substance use severity and OQOL ([@b0085], [@b0120]). However, patients in the current sample had a mean QoL-5 score at 12 month follow-up significantly below the mean QoL-5 score of 0.71 reported in non-patient samples ([@b0020]). This may reflect either that the effects of treatment on secondary, nondrinking outcomes may require more than a year ([@b0115]), or that there are long-term negative effects of AUD ([@b0090], [@b0180]). 5. Limitations {#s0090} ============== The study was conducted among patients with AUD, so these findings might not generalize beyond this clinical population. Although the one-year follow-up response rate was comparable with other studies ([@b0005]), the number of patients who responded at all four time points was modest. Nonrandomness of those with incomplete follow-up data might be a concern. However, additional analyses showed that those who did not respond at follow-up were similar to the analytic sample on all baseline variables. Nonetheless, some differences were found at the three-month follow-up; those with incomplete follow-up data were less likely to be abstinent and less satisfied with follow-up services received. As such, the associations found between OQOL and these two factors may have been attenuated. Moreover, if a larger patient sample had participated at all time points, we may have had greater statistical power to detect factors significantly associated with OQOL. For example, variables that trended to be associated with OQOL, such as previous inpatient stay and onset age (both reflecting dependence severity) and gender, may have reached statistical significance in a larger sample. However, a major strength of the mixed model approach is that it allows use of all available data, including from participants with incomplete data ([@b0170], [@b0190]). A sensitivity analysis excluding those lost to follow-up showed results that were similar to the model which also incorporated patients with missing data on one or more assessment time points. 6. Conclusions {#s0095} ============== This study assessing OQOL in a sample of patients with AUD, who were followed for one year after inpatient treatment, extends our knowledge about factors associated with OQOL. Based on these findings we propose that clinicians routinely screen for OQOL at AUD treatment entry, to identify patients for whom this dimension should be a treatment focus. Targeting mental distress both during and after treatment may also be associated with improved OQOL for persons with AUD. The current study also shows that patient satisfaction with different aspects of SUD inpatient treatment is associated with subsequent OQOL improvements. Future research should more closely investigate which aspects of inpatient treatment contribute to improved quality of life among service users, and other factors that may moderate this relationship. 
Longer-term posttreatment studies of OQOL development trajectories are also needed to determine whether OQOL eventually stabilizes at a higher level compared with pretreatment, or whether it declines to a similar level over time. 7. Role of funding sources {#s0100} ========================== This work was supported by the Norwegian University of Science and Technology (NTNU), Trondheim, Norway, St. Olav's University Hospital, Trondheim, Norway, and Møre and Romsdal Hospital Trust, Ålesund, Norway. The funding sources did not have any significant influences on data collection, analyses, writing, or the decision to submit the manuscript for publication. 8. Contributors {#s0105} =============== H.W.A. designed the study, wrote the protocol, and undertook the initial analyses. T.N. undertook the final statistical analyses. Both authors wrote the manuscript and have approved the final version. Declaration of Competing Interest ================================= The authors declared that there is no conflict of interest. We want to thank the research assistants of the participating clinics for their contribution to the implementation of the study: Marit Magnussen, Kristin Øyen Kvam, Snorre Rønning, Merethe Wenaas, Kristian Bachmann, and Helene Tjelde. We also want to thank the patients for their contribution to this research. [^1]: Comparison of those with incomplete follow-up data and those who participated at all time points showed that they were similar on all T1characteristics, including OQOL and mental distress. Patients with incomplete follow-up data were somewhat less satisfied with services received at 3-month follow-up (*p* = 0.036) and less likely to report being abstinent at 3-month follow (*p* = 0.002). [^2]: Item excluded from further analyses due to high proportion of respondents (60%) answering in the most positive response category. [^3]: Item excluded from further analyses due to high proportion (21%) of missing responses. [^4]: Note. Items measured on a four-point scale (1 = not at all, 4 = to a large degree).
High
[ 0.683544303797468, 33.75, 15.625 ]
On This Day In History | 1972 On this day 46 years ago, the boys’ basketball team defeated St. Paul’s, 65-59. Here is the recap from Newsday: Stony Brook Is Victorious Garden City–In a different kind of see-saw battle–one in which the winners’ lead fluctuated from 20 to two points–the Stony Brook School defeated St. Paul’s, 65-59, in basketball yesterday. The teams played evenly for the first quarter. Then Stony Brook took off in a 26-11 second period, when Drake Womack, Larry Jackson and Kelvin Spooner got hot. St. Paul’s came back slowly but surely in the second half, until Vince Mancusi’s 25-foot jump shot cut the lead to a mere two points, 59-57. A couple of ballhandling mistakes, however, gave Stony Brook breathing room in the final two minutes. Womack finished with a game-high 29 points while Spooner and Jackson chipped in 14 and 12, respectively. Womack would close his senior year with 1,025 career points to become the first Brooker to eclipse the 1,000 point plateau.
Mid
[ 0.6379746835443031, 31.5, 17.875 ]
Fugu chiri Fugu chiri is a pufferfish soup. It is also known as tetchiri. See also Fugu List of Japanese soups and stews References Category:Japanese soups and stews Category:Fish dishes
Low
[ 0.49882352941176406, 26.5, 26.625 ]
Thursday, July 22, 2010 Mama Monkey Adopts Baby of Another Species A childless female monkey has found a way to satiate her maternal drive — adopt a baby from another species, zookeepers report today. The mother, a golden-headed lion tamarin named Maternal Juanita, lives at the ZSL London Zoo. She took a liking to her neighbor's baby — an emperor tamarin — just weeks after it was born. Now the surrogate mum can be seen jumping around zoo exhibits with the 2-month-old baby on her back. The emperor tamarin's grey body and white moustache stand out against its "mother's" fiery orange mane. The baby tamarin is already showing signs of an adult's signature white moustache. In fact, the animals are thought to have been named after the Emperor of Germany, Emperor Wilhelm II, due to their long, white moustaches. "Juanita has never had a baby before so it seems like her mothering instinct has just kicked in this time around," said Lucy Hawley, a senior zookeeper at the zoo. "Who knows what animal she'll be carrying around next?"
Low
[ 0.529166666666666, 31.75, 28.25 ]
//
//  MarvelAPIManager.swift
//  Marvel
//
//  Created by Thiago Lioy on 14/11/16.
//  Copyright © 2016 Thiago Lioy. All rights reserved.
//

import Foundation
import Moya
import RxSwift
import ObjectMapper
import Moya_ObjectMapper

extension Response {
    /// Unwraps Marvel's response envelope ({"data": {"results": [...]}}) so that
    /// downstream mapping sees only the `results` array. Falls back to the
    /// original response if the payload cannot be parsed.
    func removeAPIWrappers() -> Response {
        guard let json = try? self.mapJSON() as? Dictionary<String, AnyObject>,
            let results = json?["data"]?["results"] ?? [],
            let newData = try? JSONSerialization.data(withJSONObject: results,
                                                      options: .prettyPrinted) else {
                return self
        }

        let newResponse = Response(statusCode: self.statusCode,
                                   data: newData,
                                   response: self.response)
        return newResponse
    }
}

/// Thin Rx wrapper around the Moya provider for the Marvel API.
struct MarvelAPIManager {
    let provider: RxMoyaProvider<MarvelAPI>
    let disposeBag = DisposeBag()

    init() {
        provider = RxMoyaProvider<MarvelAPI>()
    }
}

extension MarvelAPIManager {
    typealias AdditionalStepsAction = (() -> ())

    /// Requests a single Mappable object; calls completion with nil on error.
    fileprivate func requestObject<T: Mappable>(_ token: MarvelAPI,
                                                type: T.Type,
                                                completion: @escaping (T?) -> Void,
                                                additionalSteps: AdditionalStepsAction? = nil) {
        provider.request(token)
            .debug()
            .mapObject(T.self)
            .subscribe { event -> Void in
                switch event {
                case .next(let parsedObject):
                    completion(parsedObject)
                    additionalSteps?()
                case .error(let error):
                    print(error)
                    completion(nil)
                default:
                    break
                }
            }.addDisposableTo(disposeBag)
    }

    /// Requests an array of Mappable objects, stripping the API envelope first.
    fileprivate func requestArray<T: Mappable>(_ token: MarvelAPI,
                                               type: T.Type,
                                               completion: @escaping ([T]?) -> Void,
                                               additionalSteps: AdditionalStepsAction? = nil) {
        provider.request(token)
            .debug()
            .map { response -> Response in
                return response.removeAPIWrappers()
            }
            .mapArray(T.self)
            .subscribe { event -> Void in
                switch event {
                case .next(let parsedArray):
                    completion(parsedArray)
                    additionalSteps?()
                case .error(let error):
                    print(error)
                    completion(nil)
                default:
                    break
                }
            }.addDisposableTo(disposeBag)
    }
}

/// Public API surface of the manager.
protocol MarvelAPICalls {
    func characters(query: String?, completion: @escaping ([Character]?) -> Void)
}

extension MarvelAPIManager: MarvelAPICalls {
    func characters(query: String? = nil,
                    completion: @escaping ([Character]?) -> Void) {
        requestArray(.characters(query),
                     type: Character.self,
                     completion: completion)
    }
}

// Example usage (illustrative; assumes a `Character` model defined elsewhere):
// MarvelAPIManager().characters(query: "Spider") { characters in
//     print(characters?.count ?? 0)
// }
Low
[ 0.497029702970297, 31.375, 31.75 ]
A Zero-Math Introduction to Markov Chain Monte Carlo Methods - tosh
https://towardsdatascience.com/a-zero-math-introduction-to-markov-chain-monte-carlo-methods-dcba889e0c50
======
kgwgk
Previous discussion: [https://news.ycombinator.com/item?id=15986687](https://news.ycombinator.com/item?id=15986687)
------
donmatito
Random-play Monte-Carlo was the first algorithm that led to good computer Go software, before neural networks. It was around 2008, I think. Before that, pattern-based algos were really, really bad (like, barely above human beginner level).

I'm not a mathematician, but the paper itself was a real beauty. I remember vividly the parameter that balanced "exploitation" of apparently-good paths, and "exploration" of unknown/apparently-bad paths. I used it in many analogies discussing innovation programs within large companies.

~~~
hackandtrip
Is this [1] the paper you are referring to? Thanks for the heads up, this work looks interesting.

[1]: [https://www.aaai.org/Papers/AIIDE/2008/AIIDE08-036.pdf](https://www.aaai.org/Papers/AIIDE/2008/AIIDE08-036.pdf)

------
hackandtrip
Are there any Lot-Of-Math Introductions to Monte Carlo Methods?

~~~
theoh
I'm not sure, since it doesn't introduce the notion of detailed balance, whether this article really deals meaningfully with the use of Markov chains at all. It doesn't bring out the fact that the Markov chain transition probabilities have to be tuned to explore the parameter space.

The relative efficiency of MCMC versus a naive random sampling approach depends on this leveraging of detailed balance so that the correlations of the Markov chain work in favour of the experiment.

So given that the article introduces this notion of a random walk, and it seems like it's going to discuss the Metropolis algorithm, it's not great that it ducks the main issue, which is why a correlated Markov chain random walk is a useful approach. The key is that it's a "conditioned" random walk, and the method by which it is conditioned is the real trick to MCMC (at least to Metropolis, which is the cool kind.)
Low
[ 0.5269320843091331, 28.125, 25.25 ]
News Blue Jackets News Nash and the Jackets Look to Extinguish Iginla and the Flames by Staff Writer / Columbus Blue Jackets

In 2003-04, Jarome Iginla and Rick Nash tied for the NHL lead with 41 goals to win the Maurice Richard Trophy. That was the same year that Iginla led the Flames to the seventh game of the Stanley Cup Finals. But a lot has changed since then, for both players. First, the lockout washed out the entire 2004-05 season. Then Nash was injured in training camp the following season, missing 28 games and scoring 31 goals as the Jackets missed the playoffs. Meanwhile, Iginla and the Flames struggled offensively with the new rules and the Calgary captain finished with just 35 goals, getting bounced in the first round of the playoffs by a young and hungry Ducks team.

Fast forward to this season. Both teams have struggled to find the net but have played stout defensively, resulting in a combined six wins in 21 games between the two teams. Individually, Nash scored three goals in the first four games but has been held without a goal for a career-high six games. Iginla, conversely, scored just two goals in the first seven games of the season but has lit the lamp four times in the last four contests. Both players will be on the ice tonight, both looking to jump-start their games and their teams as the Jackets host the Calgary Flames at Nationwide Arena at 7 p.m. tonight. Coverage begins on FSN Ohio at 6:30 p.m. with Ice Breaker.

Columbus (3-6-1) has won just once in their last seven games after starting out 2-0-1 on the year. Pascal Leclaire has been solid in the net, posting a 3-5-1 mark with a 2.95 GAA, keeping the club in games. The main culprit has been the club’s offense, which was shut out three times in the first 10 games of the year. Those offensive struggles may have ended with the month of October, as the Jackets created numerous scoring opportunities en route to out-shooting the Colorado Avalanche on the first of November, 44-22. The Jackets converted three of those 44 shots for goals but ended up on the wrong end of a 5-3 score. But a step is a step, nonetheless. Fredrik Modin, the Jackets’ prized off-season acquisition, scored for the third time this season and added an assist in the game. David Vyborny also had a goal and an assist as he remains at slightly better than a point-per-game pace, with 2-9-11 in his first 10 games. Nikolai Zherdev leads the club with four goals.

Calgary has been struggling just as mightily, dropping three straight games, which means a lot considering the club has not lost four consecutive games since Jan. 9-14, 2003, the last season the Flames failed to qualify for the playoffs. Offense has been the problem here as well, with Calgary ranking 24th in scoring at 2.46 goals per game. Iginla leads the team in goals and points with six and 11, but off-season acquisitions Jeff Friesen (0-1-1) and Alex Tanguay (2-4-6) have not provided the offense that general manager Darryl Sutter had in mind. Miikka Kiprusoff has posted a 3-7-1 record with a 2.84 GAA and a .908 SV% this season.

The Jackets and Flames met four times a year ago, splitting the season series. Each team won once on its home ice and once on the road, the highlight being the Jackets’ 2-1 shoot-out win at Calgary on Feb. 1, which was the fourth win of a five-game winning streak from Jan. 24 - Feb. 2. Overall, Columbus has posted an 11-8-1 mark against Calgary, finishing .500 or better in the series in four of five years. The club has won seven of 10 games at Nationwide Arena.
As always, Huntington Green Seats will be available for tonight's game and every game during the 2006-07 season. The Huntington Green Seats are 250 seats that are priced at $10 for every home game this season and are available only at the Nationwide Arena Ticket Office, two hours prior to game time. For tonight's game each person may purchase a maximum of four seats.
Mid
[ 0.637279596977329, 31.625, 18 ]
The goal of this study is to determine the natural history of HIV in women, including gynecologic manifestations, nongynecologic manifestations, and behavioral factors related to HIV disease progression. The study will attempt to define the relative frequencies of specific AIDS-defining illnesses, the relationship of surrogate markers to disease progression, the causes of morbidity and mortality, and correlates of long-term survival.
High
[ 0.677707006369426, 33.25, 15.8125 ]
Here’s a piece of surprising news from Vietnam. The South-East Asian nation is coming up with its own car, and plans for the “national car” are already underway. According to Reuters, Vingroup, the country’s leading property developer, has kickstarted the construction of the car factory in a project worth US$1-1.5 billion in the first phase. Vingroup’s construction brand Vinfast has signed an MoU with Credit Suisse for the bank to extend $800 million in financing. The 335-hectare factory is located in Haiphong, a northern city. The car project is part of the group’s expansion plan into the heavy industry of Vietnam, its vice chairman said in a statement. Of late, Vingroup has ventured into the retail and health care sectors.

Vingroup said it hopes to be a top car manufacturer in ASEAN, making 500,000 cars per annum by the year 2025. It expects to produce 100,000 to 200,000 vehicles per year in the first phase, with a range that will include a sedan, a seven-seater SUV and electric-powered motorcycles. A company spokeswoman told Reuters that the factory would roll out the first electric motorcycle in 12 months and the first car in 24 months. Vinfast plans to purchase blueprints of car engines and main mechanical systems from top European and American designers, it is reported.

The “national car” dream is nothing new to Malaysians. A brainchild of former prime minister Tun Dr Mahathir Mohamad, Proton was founded in the 1980s and enjoyed a lion’s share of the Malaysian car market before market liberalisation levelled the main advantage the local brand had against foreign makes: price. In June, DRB-Hicom sold a 49.9% stake in the loss-making company to China’s Geely. Mahathir, now the opposition leader, has stated his desire to start another automotive company if his Pakatan Harapan coalition wins in Malaysia’s upcoming general elections.

Danny Tan loves driving as much as he loves a certain herbal meat soup, and sweet engine music as much as drum beats. He has been in the auto industry since 2006, previously filling the pages of two motoring magazines before joining this website. Enjoys detailing the experience more than the technical details.

Volvo, a sound company with a good track record of safety and engineering. The only thing that did badly was their design language at that time and consumer acceptance of their vehicles. They needed money to progress and they did. Proton is way, way behind. They needed money to feed the directors. Very weird design language and a tidak apa (couldn’t-care-less) attitude. Fail tak apa (failure doesn’t matter). Ask 10 Malaysians and 9 will tell you they don’t care anymore. Simple question: if they lower the price of the accordana to $100k, will there be takers? Proton will always be a budget vehicle with dreams of being a European competitor. I’ve owned 4 Protons before and I’ve seen enough to decide so.

Malaysia used to be an agricultural country, according to a former Proton executive who was interviewed by Richard Hammond in 2001… so I figure this is the best way Vietnam is going to transform itself from a very agricultural country to an industrial nation like Malaysia.

Proton could have approached Vingroup and offered to use the many platforms it has under licence. They could create an entirely new body on top of it. This way, Proton can have the economies of scale while Vingroup can enjoy a huge reduction in cost and time. Vietnam should not make the same mistake as Malaysia. To reach economies of scale it has to become a global brand.
As a latecomer it is very unlikely to be able to compete with the established makes. What will happen is predictable: 1. Protectionism will jack up the price of imported makes. 2. Vietnamese will be forced to buy low-tech locally made cars. 3. The Vietnamese auto manufacturer will never be able to compete globally, so it has to depend on the captive domestic market. 4. Eventually the market will liberalize, but there will be a lost generation of Vietnamese who will be unable to enjoy foreign cars.

Matimatisyen Universitas on Sep 05, 2017 at 11:23 am
It is not relevant to compare with Malaysia. Vietnam does not even have discrimination policies; since unification it has emphasized Vietnamization as a nation: every citizen is Vietnamese no matter which ethnic group they belong to. Furthermore, Vietnam has a large enough population to achieve economies of scale…

The problem with automotive companies here in Asia is that they only want to sell premium high-class cars. What Vietnam, Cambodia and Thailand need is a great QUALITY car for the masses. If Vietnam can make a quality car that most people can afford, it will succeed where so many have failed. Volkswagen from Germany was just that kind of company in the beginning. Today, they are the 2nd largest automotive company in the world…

Rebadging is the way to go, like what Perodua and Naza are doing right now. Even Japanese and European carmakers have done it for decades; look at the collaborations between Ford, Mazda, Suzuki, Nissan, Renault, Citroën and Peugeot.
Low
[ 0.517391304347826, 29.75, 27.75 ]
All relevant data are within the paper and its Supporting Information files. Introduction {#sec005} ============ Active transport (e.g. walking and cycling) has great potential to increase physical activity in adolescents and young adults since it can be easily integrated into the daily routine \[[@pone.0168594.ref001]--[@pone.0168594.ref003]\]. It offers health benefits such as the prevention of overweight or obesity \[[@pone.0168594.ref004], [@pone.0168594.ref005]\], higher levels of cardiovascular fitness \[[@pone.0168594.ref006], [@pone.0168594.ref007]\] and a better cognitive performance \[[@pone.0168594.ref008]\]. Increasing active transport may also be beneficial to the environment and public health, as an increase in active transport may reduce traffic congestion and CO~2~ emissions \[[@pone.0168594.ref009]\]. Despite numerous advantages of active transport, a steep decline occurs during adolescence and continues when entering adulthood \[[@pone.0168594.ref001]\]. A study in 10 European cities showed that cycling for transport decreased from 30 to 25 minutes per day between 12.5--13.9 and 14--14.9 years, and from 25 to 20 minutes per day between 14--14.9 and 15--17.4 years \[[@pone.0168594.ref001]\]. Furthermore, a study among Danish adolescents showed that active transport accounted for around 20% of daily minutes of moderate-to-vigorous physical activity (MVPA) \[[@pone.0168594.ref010]\], whereas a study in Belgium found that adults spent 57% of their daily minutes of MVPA in active transport \[[@pone.0168594.ref011]\] because of lower total physical activity levels. Once adolescents reach driving age, their behaviour changes dramatically \[[@pone.0168594.ref012]\]. Acquiring a driving licence has a substantial negative effect on adolescents' (16--18 years) active transport with a 40% decline in the average number of walk trips \[[@pone.0168594.ref012]\]. A qualitative study among adolescents in New Zealand also indicated that driving became the preferred transport mode to school once they obtained a car driving licence \[[@pone.0168594.ref013]\]. In most European countries, adolescents have the possibility to obtain a regular car driving licence from the age of 18. In 2013, 49.3% of 18--24 year olds in Flanders (northern part of Belgium) possessed a driving licence \[[@pone.0168594.ref014]\]. Once habits toward a particular behaviour are formed, they are difficult to change \[[@pone.0168594.ref015]\]. When travel behaviour has become habitual, a particular travel goal automatically activates a travel mode in memory since people fail to suppress the habitual travel mode option in favour of alternative travel modes \[[@pone.0168594.ref015]\]. Therefore, it might be important to promote active transport at the age of 17--18 years (older adolescence), which may represent a crucial period for intervening before habitual car driving patterns get established. A variety of interventions to promote walking and cycling as a mode of transport have been introduced using a range of methods in multiple settings (such as schools, workplaces, communities and households) among various age groups \[[@pone.0168594.ref016]--[@pone.0168594.ref022]\]. Mixed results were found across these studies. Several intervention studies targeted (working) adults \[[@pone.0168594.ref016], [@pone.0168594.ref018], [@pone.0168594.ref020], [@pone.0168594.ref021]\] although their transport behaviour had often already evolved into a habitual behaviour which is difficult to change. 
Few studies have targeted youth and those who did mainly targeted primary school children and younger adolescents (12--16 years old) \[[@pone.0168594.ref017], [@pone.0168594.ref019], [@pone.0168594.ref022]\]. To the best of our knowledge, no intervention studies focused on older adolescents although this is an important age group just before a critical transition regarding transport behaviour. In order to achieve changes in behaviour, it is an important strategy for intervention studies to target the correlates of that behaviour. According to Epton et al. \[[@pone.0168594.ref023]\], the Theory of Planned Behaviour provides a strong theoretical framework for developing interventions to change behavioural intentions and health behaviour. In the Theory of Planned Behaviour it is suggested that intention of an individual to perform a given behaviour is the most proximal determinant of behaviour \[[@pone.0168594.ref024]\]. Intention, in turn, is predicted by the individual's attitude toward the behaviour, subjective norm and the degree of perceived behavioural control (or self-efficacy) \[[@pone.0168594.ref024]\]. Nevertheless, prior to act upon these determinants, people must be made aware of the positive and negative consequences of a certain behaviour \[[@pone.0168594.ref025]\]. A theory- and evidence-based intervention was developed aiming to promote active transport for short distance travel (\< eight kilometres; \[[@pone.0168594.ref026]\]) to various destinations among older adolescents (17--18 years). The intervention was implemented in the existing course 'Driving Licence at School', a project of the Flemish Foundation for Traffic Knowledge in secondary schools in Flanders (Belgium) \[[@pone.0168594.ref027]\]. Each year, over 40,000 secondary school students aged 17 and older have the opportunity to participate in this project \[[@pone.0168594.ref027]\]. Within this project, older adolescents receive free car driving theory training at school (eight hours in general and technical secondary education; ten hours in vocational secondary education) from qualified driving instructors. Thus, this existing project provided a good opportunity to reach a large group of young people at a critical stage of life regarding transport behaviour. The present study aimed to examine the effect of the intervention on psychosocial factors including intention to use active transport after obtaining a driving licence, attitude (perceived benefits and perceived barriers), subjective norm, self-efficacy, habit and awareness towards active transport. Participants were also asked to complete process evaluation measures. Methods {#sec006} ======= Study design and protocol {#sec007} ------------------------- A matched control three-arm study was conducted and consisted of a pre-test post-test design with intervention and control schools in Flanders. A supplementary two-hour lesson promoting active transport was implemented as the last lesson in the course 'Driving Licence at School' in intervention schools (intervention group 1). Individuals in intervention group 2 received this active transport lesson and, in addition, they were asked to become a member of a Facebook group on active transport. Participants in the control group only attended the regular course 'Driving Licence at School' without the active transport lesson or the Facebook group. Qualified driving instructors gave both the regular course 'Driving Licence at School' and the supplementary active transport lesson as one package. 
Participation in the active transport lesson was obligatory for all adolescents participating in the course 'Driving Licence at School' in the intervention schools. As a first step in the recruitment process, qualified driving instructors participating in the 'Driving Licence at School' project were recruited to teach the active transport lesson (see [Fig 1](#pone.0168594.g001){ref-type="fig"}). Driving instructors were recruited at annually organised information sessions of the Flemish Foundation for Traffic Knowledge on the 'Driving Licence at School' project. Of the ninety attending driving instructors, 31 (34%) indicated they were interested in the research project and were invited to attend a specific training session organised by the research team. This training session consisted of (a) a short introduction on the aim of the research project, (b) practicalities including instructions on recruitment of schools, planning of the active transport lesson, questionnaires and informed consents, and teaching materials and (c) a demonstration of the active transport lesson. Eventually, fourteen out of 31 invited driving instructors (45%) participated in this training session. ![Flow chart of participant enrolment and progression through the study.](pone.0168594.g001){#pone.0168594.g001} Consecutively, all schools (n = 48) in which these 14 instructors planned to teach the course 'Driving Licence at School' during the school year 2014--2015 were contacted to participate in the research project as intervention schools (see [Fig 1](#pone.0168594.g001){ref-type="fig"}). These schools were asked if they were willing to let their pupils attend the active transport lesson (at school; during or after school hours) and to motivate them to complete all measurements. This resulted in 10 schools (21.3%) with a total of 410 pupils attending the course 'Driving Licence at School' agreeing to participate in the study. Participating schools were of different educational types (general, technical and vocational secondary education) and located in both (semi-)urban and rural areas. Participating schools were connected to eight different driving instructors, the schools connected to the other six driving instructors were not willing to participate. Intervention schools, stratified by educational type, were randomly assigned to intervention group 1 (5 schools) or 2 (5 schools). After recruitment of the intervention schools, another convenience set of schools which were matched with intervention schools based on education type, population density of the town \[[@pone.0168594.ref028]\] and socio-economic status of the school \[[@pone.0168594.ref029]\] were recruited to participate as control schools. In total, 45 schools were contacted to participate in the research project as control schools. Ten schools (22.2%) with a total of 450 pupils attending the course 'Driving Licence at School' agreed to participate. Adolescents in the control group attended the regular course 'Driving Licence at School' and were asked to complete all measurements. Adolescents were eligible for participation in this study if they attended general, technical or vocational secondary education and participated in the course 'Driving Licence at School'. A flow chart of participant enrolment and progression through the study is provided in [Fig 1](#pone.0168594.g001){ref-type="fig"}. 
Before the start of the course 'Driving Licence at School', participants in the intervention and control groups completed a paper-and-pencil or an online questionnaire either at school or at home (baseline). One week after the active transport lesson, participants in both intervention groups completed the same questionnaire (post-test). Participants in the control group completed this questionnaire one week after the last lesson of 'Driving Licence at School'. The follow-up measurement was performed two months after either the active transport lesson (intervention groups) or the last lesson of 'Driving Licence at School' (control group).

At the start of each questionnaire, all older adolescents were informed in writing that data would be processed anonymously and that consent was automatically obtained when they voluntarily completed the questionnaire. Since most pupils were underage, written passive informed consent was obtained from all parents. If parents did not agree to let their child complete one or more questionnaires, a signed informed consent form had to be returned to the researchers. The study protocol was approved by the medical ethical committee of the Vrije Universiteit Brussel (January 12, 2012; B.U.N. 143201112745). The authors confirm that all ongoing and related trials for this intervention are registered. The trial was registered (NCT02823197) after the start of participant recruitment since, at the start of the intervention, the authors were not aware of the necessity of trial registration. The complete date range for participant recruitment and follow-up was June 1, 2014 to September 30, 2015. The protocol for this trial and supporting TREND checklist are available as supplementary material ([S1 Protocol](#pone.0168594.s003){ref-type="supplementary-material"} and [S1 TREND Checklist](#pone.0168594.s005){ref-type="supplementary-material"}).

Intervention {#sec008}
------------

A stepwise approach was used to develop the intervention, for which the Theory of Planned Behaviour was used as a theoretical backbone \[[@pone.0168594.ref024]\]. Designing a theory- and evidence-based intervention to promote active transport requires a comprehensive understanding of the correlates of active transport \[[@pone.0168594.ref030]\]. Therefore, firstly, a qualitative and a quantitative study were conducted prior to the development of the intervention \[[@pone.0168594.ref031], [@pone.0168594.ref032]\]. Within these studies, factors related to active, public and passive transport among older adolescents were investigated. Based on these results, existing evidence \[[@pone.0168594.ref026], [@pone.0168594.ref033]\] and the Theory of Planned Behaviour, it was decided to focus on the following psychosocial determinants in the intervention: attitude (perceived benefits and perceived barriers), subjective norm, self-efficacy, awareness and habit towards active transport, and intention to use active transport after obtaining a driving licence. Secondly, based upon the list of theoretical methods for behaviour change published by Bartholomew et al. (2013) \[[@pone.0168594.ref025]\], theory-based methods were selected to influence the targeted determinants. For example, the selected theoretical method 'consciousness raising' targeted changes in the determinants 'awareness' and 'attitude'. An overview of the methods used per determinant can be found in [Table 1](#pone.0168594.t001){ref-type="table"}.
10.1371/journal.pone.0168594.t001

###### Overview of the elements included in the active transport lesson and their corresponding determinants and theory-based methods used.

| Element of the active transport lesson | Description | Determinant(s) | Method(s) |
|---|---|---|---|
| 1) Brief introduction | The purpose of the lesson was explained and the importance of always choosing consciously between transport modes, even after obtaining a driving licence, was stressed. | awareness; habit | persuasive communication |
| 2) Quiz | An introductory quiz was held to emphasize the importance and advantages of physical activity and active transport. | awareness; attitude | persuasive communication; belief selection; consciousness raising |
| 3) Enumeration of destinations | Participants were asked to sum up destinations they go to on foot, by bicycle and by car. They were also asked to indicate which walk and cycle trips they would replace by car trips after obtaining a driving licence. | awareness | discussion |
| 4) Enumeration of and PowerPoint presentation on benefits of active transport | Participants were asked to sum up benefits of active transport, after which a PowerPoint presentation on benefits of active transport was provided. | awareness; attitude; habit | persuasive communication; active learning; belief selection |
| 5) Enumeration of barriers of active transport and PowerPoint presentation on overcoming barriers of active transport | Participants were asked to sum up barriers of active transport, after which a PowerPoint presentation was given with tips and ideas on how to overcome barriers of active transport. | awareness; attitude; habit; self-efficacy | persuasive communication; active learning; belief selection |
| 6) PowerPoint presentation on travelling longer distances | Alternatives to private car use were offered to travel longer distances in a sustainable way. | awareness; attitude; habit; self-efficacy | persuasive communication; belief selection |
| 7) Movie on benefits of active transport | A short and amusing movie was shown in which a race through London between public transport, a car, a boat and a bicyclist is won by the bicyclist. | awareness; attitude | belief selection |
| 8) Cases | Cases describing the transport behaviour of a fictitious person were given to small groups of participants, which were asked to discuss how to motivate the fictitious person to choose active transport in certain circumstances by helping him/her to overcome barriers. | attitude; subjective norm | discussion |
| 9) Statements | Statements on motivation to comply with the norm of significant others were given to small groups of participants, which were asked to discuss these statements. | attitude; subjective norm | discussion |
| 10) Concluding message | A concluding message was given in which the importance of always choosing consciously between transport modes, even after obtaining a driving licence, was stressed. | awareness | persuasive communication |

Thirdly, the active transport lesson was developed by two researchers (HV and DS). The active transport lesson consisted of 10 elements, of which an overview is provided in [Table 1](#pone.0168594.t001){ref-type="table"}. Once a first draft of the lesson was completed, people from different professional domains such as researchers, policy co-operators from the Flemish Foundation for Traffic Knowledge and two qualified driving instructors were asked to provide open written (researchers) or open verbal (policy co-operators and driving instructors) feedback on the active transport lesson. Afterwards, the lesson was adapted according to their comments and remarks. In general, it was suggested to formulate the content as short and as clear as possible. Furthermore, it was decided to add some extra information after each question in the quiz (which was one part of the lesson) to provide participants with sufficient background information. Although the focus of the lesson was on short distance travel, some slides promoting public transport were added to illustrate that for longer distances public transport is a suitable transport mode with several advantages (e.g. no need to search for a parking lot).

In the next step, the second draft of the active transport lesson was used for pretesting in the target group as well as in an expert group. Two pre-tests were conducted in the target group; one among general secondary school students (n = 20; 17.6±0.6 years; 65.0% female) and one among vocational secondary school students (n = 10; 17.3±0.5 years; 66.7% female). Two additional pre-tests were conducted among expert groups, one among Master students of Physical Education and Movement Sciences (n = 5) who followed a course on health promotion and one among Public Health researchers (n = 8). These pre-test lessons were delivered by two researchers and were followed by a semi-structured group interview (see [S1 Table](#pone.0168594.s004){ref-type="supplementary-material"}) in which the audience was asked to provide feedback on all aspects of the lesson. The results of these semi-structured group interviews were used to adapt the lesson into a final version. The main results of the semi-structured group interviews and the corresponding adaptations made to the intervention are presented in [Table 2](#pone.0168594.t002){ref-type="table"}. The final version of the active transport lesson was used for implementation. The active transport lesson consisted of 10 elements and lasted for approximately 90 minutes.
An overview of these elements, and their corresponding determinants and methods used, is provided in [Table 1](#pone.0168594.t001){ref-type="table"}.

10.1371/journal.pone.0168594.t002

###### Main results of the semi-structured group interviews and corresponding adaptations.

| Feedback semi-structured group interviews | Adaptations intervention |
|---|---|
| Stronger emphasis should be put on the focus of the lesson (i.e. short distance travel). *"Maybe I would emphasize more and right from the beginning (of the lesson) that it is on cycling to go somewhere for short distance travel."* | In the introduction section of the intervention a few sentences were added to emphasize that the lesson was on the promotion of active transport for short distance travel. It was also explained what was meant with 'short distances'. |
| Some items need more explanation or need to be rephrased in order that all participants would clearly understand everything. *"Yes, for some questions you have to mention 'per year' or...because...It was not clear, is the question per year or per day..."* | The wording of some parts of the lesson was slightly changed and made more specific. |
| The section on bicycle and car sharing systems needs more detail because most adolescents have no experience with it at all. *"Concerning Cambio (car sharing system), euhm... I do not know what that is. So maybe just explain where those Cambio-places are located. I have never seen it before, so I do not know anything about it."* | Some extra information on bicycle and car sharing systems was added (e.g. extra information on the location of the systems and how the systems work). |
| A small group task in which adolescents have to motivate a fictitious person to walk or cycle for transport is preferred over a task in which they have to motivate a person in their class who is not motivated to walk or cycle. *"Each group receives one fictitious situation and searches for an answer...a solution."* | Cases were developed describing the transport behaviour of a fictitious person whom they had to motivate to walk/cycle for short distances. Afterwards a few groups had to present in front of the class how they would motivate their fictitious person. |
| It should be more clear whether public transport use is something that is encouraged or discouraged. *"In the beginning (of the lesson) it seemed that public transport use was discouraged. And at the end (of the lesson) it was promoted. That was a bit strange to me. Maybe you could solve that."* | Although the lesson was on the promotion of active transport for short distance travel, it was more strongly emphasized in several parts of the lesson that public transport is a suitable transport mode which is preferred over car use when longer distances need to be travelled. |

The active transport lesson was interactive and all elements of the lesson were made as visually appealing as possible for the older adolescents (see [Fig 2](#pone.0168594.g002){ref-type="fig"}) \[[@pone.0168594.ref034]\].

![Examples of slides developed for the active transport lesson.](pone.0168594.g002){#pone.0168594.g002}

Apart from participating in the active transport lesson, individuals in intervention group 2 were asked to become a member of a Facebook group on the promotion of active transport. This Facebook group was developed in order to be able to reach participants over a longer period of time after the active transport lesson. Posts for the Facebook group were composed by a researcher (HV) and consisted of cartoons, pictures, newspaper articles, fun facts or videos, sometimes accompanied by a message. Each post focussed on at least one of the targeted determinants. At the end of the development process, an expert group consisting of 12 Public Health researchers was asked to provide feedback on the Facebook posts. Each post was revised by three university researchers. Additionally, a member of the target group was also asked to provide feedback on the Facebook posts. The posts for the Facebook group were adapted according to the feedback. Some posts were deleted and replaced by other posts. For some posts, which initially only consisted of a cartoon or picture, an extra message was added in order to comply sufficiently with the targeted determinant. For other posts, the message was adapted. Examples of the posts are provided in [Fig 3](#pone.0168594.g003){ref-type="fig"}.

![Examples of Facebook posts targeting (a) awareness; (b) habit and (c) subjective norm.](pone.0168594.g003){#pone.0168594.g003}

After the active transport lesson, a researcher (HV) posted three posts per week to the Facebook group during an eight-week period. So, participants received a total of 24 Facebook posts. During this eight-week period, four posts per determinant were launched. The Facebook group was "closed", which means it had the following restrictions: (1) anyone could find the group and see who is in it, but only members could see posts; (2) any member could add members, but an administrator needed to approve them; (3) only administrators could post to the group; (4) all group posts needed to be approved by an administrator.
Measurements {#sec009}
------------

### Effect evaluation {#sec010}

Socio-demographic information (i.e. gender, age, school, educational type, father's education, mother's education, height, weight and home address) and participants' transport behaviour were collected at baseline using a self-reported questionnaire. Height and weight were used to calculate Body Mass Index (BMI). To assess transport behaviour, questions derived from the validated International Physical Activity Questionnaire (IPAQ) \[[@pone.0168594.ref035], [@pone.0168594.ref036]\] were used. Participants were asked to report frequency (days/week) and average daily duration of active transport (walking and cycling), public transport (train, tram, bus, metro) and passive transport (car, moped, motorcycle) within the last seven days, both to school and to other destinations. Weekly minutes per transport mode were calculated by multiplying frequency and duration of trips.

Psychosocial factors such as intention to use active transport after obtaining a driving licence, attitude (perceived benefits and perceived barriers), subjective norm, self-efficacy, habit and awareness towards active transport were collected at the three time points. Questions on these psychosocial factors adhered to the guidelines described in a manual about constructing questionnaires based on the Theory of Planned Behaviour \[[@pone.0168594.ref037]\]. Furthermore, the questions were based on an existing questionnaire \[[@pone.0168594.ref038]\], and were adjusted to the specific target group according to the results of a prior explorative qualitative study \[[@pone.0168594.ref031]\]. A summary of these psychosocial measures is shown in [Table 3](#pone.0168594.t003){ref-type="table"}.

10.1371/journal.pone.0168594.t003

###### Summary of psychosocial measures and internal consistency (Cronbach α; at baseline).

| Factor | Number of items | Response category | Cronbach α |
|---|---|---|---|
| Intention | 3 items (e.g. how much do you want to keep using active transport for short distances after obtaining a driving licence) | five-point scale^a^ | 0.955 |
| Perceived benefits | 17 items (e.g. health, cost, parking lot, independence, ...) | five-point scale^a^ | 0.911 |
| Perceived barriers | 21 items (e.g. time, accidents, weather, sweating, ...) | five-point scale^b^ | 0.919 |
| Subjective norm | 3 items (family, friends, partner) | five-point scale^a^ | 0.866 |
| Self-efficacy | 11 items (e.g. bad weather, darkness, when tired, ...) | five-point scale^c^ | 0.872 |
| Habit | 4 items (e.g. walking or cycling for transport is something I automatically do) | five-point scale^a^ | 0.917 |
| Awareness | 8 items (e.g. ecology, health benefits, private car ownership, ...) | five-point scale^d^ | 0.419 |

^a^ five-point scale from 1 (strongly disagree) to 5 (strongly agree); ^b^ five-point scale from 1 (never) to 5 (always); ^c^ five-point scale from 1 (know I cannot do it) to 5 (know I can do it); ^d^ five-point scale from 1 (I know this is not correct) to 5 (I know this is correct).

### Process evaluation {#sec011}

Participants in both intervention groups were asked to evaluate the active transport lesson at the post measurement by means of a process evaluation questionnaire. Participants who were allocated to the Facebook condition (intervention group 2) were also asked to evaluate the Facebook group at the follow-up measurement. A summary of the measures on the evaluation of the active transport lesson and the Facebook group is shown in [Table 4](#pone.0168594.t004){ref-type="table"}.

10.1371/journal.pone.0168594.t004

###### Summary of the process evaluation measures and descriptives.

| Measure | Response category | Response alternatives | Mean (SD), % |
|---|---|---|---|
| **Active transport lesson** | | | |
| If you could choose, would you have followed the lesson voluntarily? | | Yes, I am interested in the topic | 37.5 |
| | | No, but eventually it was interesting | 31.3 |
| | | No, I am not interested in the topic | 31.3 |
| Attractiveness content | 4 items; five-point scale^a^ | e.g.: How much do you agree that the lesson was useful? | 3.5 (1.1) |
| Adapted to target group | 1 item; five-point scale^a^ | How much do you agree that the lesson was adapted to your age group? | 3.7 (1.0) |
| Difficulty content | 1 item; five-point scale^a^ | How much do you agree that the lesson was difficult? | 2.1 (1.1) |
| Was the lesson able to motivate you to use active transport? | | I was already motivated before | 71.5 |
| | | Yes, I was less motivated before | 10.9 |
| | | No, I am still not motivated | 17.6 |
| *Which are the reasons you were less motivated before?* | | I am aware now that it is better for my health, the environment, ... | 39.1 |
| | | I am more aware of the benefits | 43.5 |
| | | I know better how to cope with the disadvantages now | 13.0 |
| | | I learned new things which can help me to choose the right travel mode | 4.3 |
| *Which are the reasons you are still not motivated?* | | The lesson was no encouragement for me | 27.8 |
| | | I was not interested | 13.9 |
| | | I think it is not necessary to walk or cycle more | 33.3 |
| | | I find it difficult to really do this | 16.7 |
| **Facebook group** | | | |
| Did you join the Facebook group? | | Yes, I joined the Facebook group | 32.8 |
| | | No, I did not join the Facebook group | 67.2 |
| *Why did you not join the Facebook group?* | | I do not have a Facebook account | 27.3 |
| | | I forgot to join the Facebook group | 30.3 |
| | | I did not want to receive messages regarding the topic | 27.3 |
| | | I did not want the researchers to see my Facebook profile | 15.2 |
| Attractiveness Facebook posts | 5 items; five-point scale^a^ | e.g. How much do you agree that the Facebook posts were useful? | 3.0 (1.0) |
| Adapted to target group | 1 item; five-point scale^a^ | How much do you agree that the Facebook group was adapted to your age group? | 3.3 (1.1) |
| Was the Facebook group able to motivate you to use active transport? | | I was already motivated before | 60.0 |
| | | Yes, I was less motivated before | 30.0 |
| | | No, I am still not motivated | 10.0 |

^a^ five-point scale from 1 (strongly disagree) to 5 (strongly agree)

Data analyses {#sec012}
-------------

Data were analysed using IBM SPSS Statistics version 22 (see [S1 Dataset Effect Evaluation](#pone.0168594.s001){ref-type="supplementary-material"} and [S1 Dataset Process Evaluation](#pone.0168594.s002){ref-type="supplementary-material"}). To check for differences between the control group, intervention group 1 and intervention group 2 at baseline, one-way ANOVA, Kruskal-Wallis and Chi-Square tests were conducted. Linear mixed models analyses were performed to assess the effectiveness of the intervention on psychosocial determinants (dependent variables). The model included three hierarchically ordered levels: school, participant and time. Intercepts were allowed to vary randomly at the school and participant level; all slopes were assumed to be fixed. Linear mixed models analyses allowed us to include all available measurements, even if participants completed only one or two measurements. Mixed models have advantages over fixed effects models in the treatment of missing values of the dependent variable. Mixed models are capable of handling the imbalance caused by missing observations and yield valid inferences if the missing observations are missing at random \[[@pone.0168594.ref039]\]. In addition, linear mixed models can handle correlated data such as responses of students from the same school. The eight items representing awareness were included separately due to low internal consistency (Cronbach's alpha\<0.6). Since only a small sample (n = 20) of older adolescents in intervention group 2 actually became a member of the Facebook group, an extra set of linear mixed models analyses was performed to identify differences in intervention effects between those who became a member of the Facebook group and those who did not (from baseline to follow-up). P-values \< 0.05 were considered statistically significant and p-values between 0.05 and 0.10 were considered borderline significant. Descriptive statistics were calculated to analyse the process evaluation measures.
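As a sketch of the linear mixed model just described (the notation below is ours, an assumption, and not taken from the paper), the score $Y$ of participant $i$ in school $j$ at time $t$ can be written as

$$Y_{tij} = \beta_0 + \beta_1\,\mathrm{time}_t + \beta_2\,\mathrm{group}_j + \beta_3\,(\mathrm{time}_t \times \mathrm{group}_j) + u_j + v_{ij} + \varepsilon_{tij},$$

with random intercepts $u_j \sim N(0, \sigma^2_{\mathrm{school}})$ and $v_{ij} \sim N(0, \sigma^2_{\mathrm{participant}})$, a residual term $\varepsilon_{tij}$, and all slopes fixed; covariates (such as gender, BMI, SES, educational type and season) would enter as additional fixed effects. The time\*group coefficient $\beta_3$ is what the interaction p-values reported below test.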
Results {#sec013}
=======

Effect evaluation {#sec014}
-----------------

In total, 441 older adolescents with at least a baseline measurement (51.3% of a total of 860 adolescents participating in 'Driving Licence at School' in both intervention and control schools) were included in the study. Of these individuals, 124 participants completed baseline, post and follow-up measurements, 107 participants completed baseline and post measurements, 46 participants completed baseline and follow-up measurements, and 164 participants only completed the baseline measurement. General characteristics of the study population at baseline are shown in [Table 5](#pone.0168594.t005){ref-type="table"}. Of the total sample, 56.8% was female and mean age was 17.4 (0.7) years. Significantly more girls were represented in intervention group 1 compared to the control group, and intervention group 1 had a significantly lower BMI compared to intervention group 2 and the control group. Furthermore, significantly more participants with a lower socio-economic status (SES) were represented in intervention group 2 compared to intervention group 1 and the control group. In the control group, significantly more participants from general and technical secondary education were represented compared to intervention group 2. Therefore, all analyses were adjusted for gender, BMI, SES and educational type. Furthermore, the analyses were also adjusted for season, as not all measurements were taken at the same time for the different schools.

10.1371/journal.pone.0168594.t005

###### Characteristics of both the intervention groups and the control group at baseline (%, Mean (SD)).

| | Intervention group 1^a^ | Intervention group 2^b^ | Control group^c^ | F-value or Chi²-value |
|---|---|---|---|---|
| Gender (% female) | 65.2 ° | 55.6 | 50.7 £ | 6.080\*\* |
| Age (yrs) | 17.4 (0.7) | 17.5 (0.7) | 17.3 (0.6) | 4.132 |
| BMI (kg/m²) | 21.1 (2.7) §,° | 22.2 (3.6) £ | 22.5 (3.8) £ | 7.568\*\* |
| Socio-economic status (% low SES)^1^ | 40.9 § | 60.2 £,° | 43.0 § | 9.022\*\* |
| General/technical studies (%) | 58.0 | 50.3 ° | 66.4 § | 8.225\*\* |
| Distance home-school (km) | 5.7 (4.9) | 6.2 (6.1) | 6.4 (7.8) | 0.354 |
| Active transport (minutes/week) | 221.5 (205.7) | 183.4 (279.0) | 207.9 (356.2) | 0.658 |
| Public transport (minutes/week) | 193.2 (309.3) | 239.4 (272.7) | 236.5 (421.3) | 0.795 |
| Passive transport (minutes/week) | 117.6 (167.5) | 130.6 (170.3) | 128.2 (278.7) | 0.146 |

\*p\<0.10, \*\*p\<0.05, \*\*\*p\<0.001. £ significant difference with intervention group 1; § significant difference with intervention group 2; ° significant difference with the control group. ^a^ Only active transport lesson; ^b^ Active transport lesson and Facebook group; ^c^ Neither active transport lesson nor Facebook group. ^1^ Low SES: no parent has a Bachelor's degree or higher; high SES: at least one parent has a Bachelor's degree or higher.

Average item scores for the psychosocial variables according to group and time, and results obtained from the linear mixed models analyses, are shown in [Table 6](#pone.0168594.t006){ref-type="table"}. Significant intervention effects were found only for awareness regarding the existence of car sharing schemes and for intention to use active transport after obtaining a driving licence (p = 0.009 and p = 0.014, respectively). For awareness regarding the existence of car sharing schemes, significant interaction effects were found with an increase in awareness from baseline to post measurement within intervention group 1 (p = 0.001) and intervention group 2 (p = 0.030) compared to the control group, in which no change was found (see [Fig 4(A)](#pone.0168594.g004){ref-type="fig"}). Finally, a significant interaction effect (p = 0.043) was found with an increase in awareness from baseline to follow-up measurement within intervention group 1 and a decrease in awareness from baseline to follow-up measurement within the control group (see [Fig 4(A)](#pone.0168594.g004){ref-type="fig"}). Regarding intention to use active transport after obtaining a driving licence, a significant interaction effect (p = 0.031) was found with an increase in intention from post to follow-up measurement within intervention group 2 compared to intervention group 1, in which a slight decrease was found (see [Fig 4(B)](#pone.0168594.g004){ref-type="fig"}). However, from baseline to post measurement, a significant decrease in intention was found within intervention group 2 compared to the control group, in which a slight increase was found (see [Fig 4(B)](#pone.0168594.g004){ref-type="fig"}).

10.1371/journal.pone.0168594.t006

###### Average item scores and time and interaction effects for psychosocial variables in the total sample.

| Variable | Group | Pre | Post | Follow-up | Time | Time\*group |
|---|---|---|---|---|---|---|
| Intention^a^ | IG 1 | 3.9 (3.2; 4.6) | 3.8 (3.1; 4.5) | 3.6 (3.1; 4.2) | 0.321 | **0.014** |
| | IG 2 | 3.7 (3.1; 4.3) | 3.3 (2.7; 3.8) | 3.7 (3.0; 4.3) | | |
| | CG | 3.7 (3.2; 4.2) | 3.8 (3.3; 4.4) | 3.9 (3.4; 4.5) | | |
| Perceived benefits^a^ | IG 1 | 3.8 (3.4; 4.3) | 3.9 (3.5; 4.3) | 3.7 (3.4; 4.0) | 0.435 | 0.301 |
| | IG 2 | 3.7 (3.3; 4.0) | 3.6 (3.3; 4.0) | 3.6 (3.2; 4.0) | | |
| | CG | 3.7 (3.4; 4.1) | 3.9 (3.6; 4.3) | 3.9 (3.5; 4.2) | | |
| Perceived barriers^b^ | IG 1 | 2.1 (1.6; 2.5) | 2.0 (1.5; 2.4) | 2.2 (1.8; 2.6) | 0.328 | 0.229 |
| | IG 2 | 2.3 (1.9; 2.7) | 2.5 (2.1; 2.9) | 2.5 (2.1; 3.0) | | |
| | CG | 2.1 (1.8; 2.5) | 2.1 (1.8; 2.5) | 2.4 (2.0; 2.8) | | |
| Subjective norm^a^ | IG 1 | 2.3 (1.8; 2.9) | 2.1 (1.5; 2.7) | 2.2 (1.7; 2.7) | 0.389 | 0.372 |
| | IG 2 | 2.6 (2.0; 3.0) | 2.7 (2.2; 3.2) | 2.5 (1.9; 3.0) | | |
| | CG | 2.6 (2.1; 3.1) | 2.4 (1.8; 2.9) | 2.4 (1.8; 3.0) | | |
| Self-efficacy^c^ | IG 1 | 3.0 (2.6; 3.5) | 3.2 (2.7; 3.7) | 3.0 (2.6; 3.4) | 0.205 | 0.566 |
| | IG 2 | 2.9 (2.5; 3.3) | 2.9 (2.5; 3.3) | 2.9 (2.5; 3.4) | | |
| | CG | 3.1 (2.7; 3.4) | 3.3 (2.9; 3.6) | 3.2 (2.8; 3.6) | | |
| Habit^a^ | IG 1 | 3.7 (2.9; 4.4) | 3.8 (3.0; 4.6) | 3.8 (3.2; 4.5) | 0.114 | 0.551 |
| | IG 2 | 3.3 (2.6; 4.0) | 3.3 (2.6; 4.0) | 3.7 (3.0; 4.5) | | |
| | CG | 3.5 (2.9; 4.1) | 3.8 (3.1; 4.4) | 4.1 (3.5; 4.8) | | |
| Awareness on ecology^d^ | IG 1 | 4.0 (3.5; 4.5) | 3.9 (3.4; 4.5) | 3.7 (3.3; 4.2) | 0.108 | 0.252 |
| | IG 2 | 4.0 (3.5; 4.4) | 3.6 (3.1; 4.0) | 3.6 (3.1; 4.1) | | |
| | CG | 4.1 (3.7; 4.5) | 4.0 (3.6; 4.5) | 4.1 (3.6; 4.6) | | |
| Awareness on travel speed^e^ | IG 1 | 2.8 (1.8; 3.7) | 3.4 (2.4; 4.4) | 3.4 (2.6; 4.2) | **\<0.001** | 0.180 |
| | IG 2 | 2.4 (1.5; 3.3) | 3.2 (2.3; 4.1) | 3.3 (2.3; 4.3) | | |
| | CG | 3.0 (2.2; 3.7) | 3.1 (2.3; 3.8) | 3.5 (2.7; 4.3) | | |
| Awareness on physical activity^f^ | IG 1 | 4.2 (3.6; 4.7) | 4.3 (3.7; 4.9) | 3.9 (3.4; 4.5) | 0.331 | 0.309 |
| | IG 2 | 4.0 (3.5; 4.5) | 3.6 (3.1; 4.1) | 3.7 (3.1; 4.3) | | |
| | CG | 4.0 (3.5; 4.4) | 4.0 (3.4; 4.5) | 3.6 (3.1; 4.2) | | |
| Awareness on health benefits^g^ | IG 1 | 3.4 (2.6; 4.2) | 3.6 (2.8; 4.4) | 3.3 (2.6; 4.0) | 0.932 | 0.267 |
| | IG 2 | 3.2 (2.6; 3.9) | 2.9 (2.2; 3.5) | 3.2 (2.5; 4.0) | | |
| | CG | 3.4 (2.8; 4.0) | 3.5 (2.8; 4.1) | 3.3 (2.6; 4.1) | | |
| Awareness on private car ownership^h^ | IG 1 | 3.3 (2.5; 4.1) | 3.3 (2.6; 4.1) | 3.4 (2.8; 4.1) | 0.660 | 0.415 |
| | IG 2 | 3.1 (2.4; 3.7) | 3.0 (2.4; 3.6) | 2.9 (2.2; 3.6) | | |
| | CG | 3.5 (3.0; 4.1) | 3.4 (2.7; 4.0) | 2.9 (2.3; 3.6) | | |
| Awareness on public transport use^i^ | IG 1 | 3.9 (3.4; 4.4) | 3.9 (3.4; 4.5) | 3.8 (3.3; 4.3) | 0.485 | 0.659 |
| | IG 2 | 3.7 (3.2; 4.2) | 3.5 (3.0; 3.9) | 3.4 (2.8; 3.9) | | |
| | CG | 3.8 (3.3; 4.2) | 3.9 (3.4; 4.4) | 3.5 (3.0; 4.1) | | |
| Awareness on bicycle sharing schemes^j^ | IG 1 | 4.2 (3.6; 4.7) | 4.5 (3.9; 5.1) | 4.0 (3.5; 4.5) | **0.050** | 0.185 |
| | IG 2 | 4.0 (3.5; 4.5) | 3.8 (3.3; 4.3) | 3.2 (2.7; 3.8) | | |
| | CG | 3.9 (3.4; 4.3) | 3.9 (3.4; 4.4) | 3.5 (3.0; 4.0) | | |
| Awareness on car sharing schemes^k^ | IG 1 | 3.6 (3.0; 4.2) | 4.4 (3.9; 5.0) | 4.0 (3.5; 4.5) | **\<0.001** | **0.009** |
| | IG 2 | 3.4 (2.9; 3.9) | 3.9 (3.5; 4.4) | 3.3 (2.7; 3.9) | | |
| | CG | 3.5 (3.0; 3.9) | 3.5 (2.9; 4.0) | 3.1 (2.5; 3.6) | | |

IG 1 = intervention group 1 (only active transport lesson); IG 2 = intervention group 2 (active transport lesson and Facebook group); CG = control group (neither active transport lesson nor Facebook group). Values are mean (95% confidence interval). ^a^ five-point scale from 1 (strongly disagree) to 5 (strongly agree); ^b^ five-point scale from 1 (never) to 5 (always); ^c^ five-point scale from 1 (know I cannot do it) to 5 (know I can do it); ^d^ 'using active transport is beneficial to the environment'; ^e^ 'using active transport is not always slower compared to using a car'; ^f^ 'using active transport contributes to sufficient physical activity'; ^g^ 'using active transport regularly has a positive influence on my health'; ^h^ 'owning a private car is necessary'; ^i^ 'for longer distances public transport combined with active transport is an acceptable alternative'; ^j^ 'there are systems which make it possible to rent a bicycle when needed'; ^k^ 'there are systems which make it possible to rent a car when needed'; questions on awareness: five-point scale from 1 (I know this is not correct) to 5 (I know this is correct).

![Evolution of psychosocial variables 'awareness on car sharing schemes' and 'intention to use active transport after obtaining a driving licence' according to group and time.](pone.0168594.g004){#pone.0168594.g004}

An extra set of mixed models analyses was not able to detect differences in intervention effects from baseline to follow-up measurement between participants in intervention group 2 who joined the Facebook group and participants who did not join the Facebook group.

Process evaluation {#sec015}
------------------

Results of the process evaluations are shown in [Table 4](#pone.0168594.t004){ref-type="table"}. In total, 170 out of 295 older adolescents allocated to one of both intervention groups (57.6%) completed the process evaluation measures on the active transport lesson. About one-third (37.5%) of participants indicated that they would have participated voluntarily in the active transport lesson if they were not obliged to. Furthermore, 31.3% of participants indicated that they would not have participated voluntarily but, eventually, they thought that the active transport lesson was interesting. The same percentage of older adolescents indicated that they were not interested in the topic. The older adolescents reported that the content of the active transport lesson was fairly attractive (3.5 (1.1); five-point scale) and adapted to the target group (3.7 (1.0); five-point scale). The content of the lesson was rated as not difficult (2.1 (1.1); five-point scale).
Furthermore, the manner of teaching by the driving instructor was rated as good (4.0 (0.8); five-point scale). A large part of the sample (71.5%) indicated that they were already motivated to walk or cycle for short distance travel before the active transport lesson. Furthermore, 10.9% indicated that they were less motivated before the active transport lesson than after it. Of this subgroup, 39.1% indicated that they were now aware that walking and cycling are better for their health, the environment, and so on; nearly half (43.5%) indicated that they were more aware of the benefits of walking and cycling; 13.0% indicated that they now knew better how to cope with the disadvantages of walking and cycling; and, finally, 4.3% indicated that they had learned new things which can help them choose the right travel mode. Nevertheless, 17.6% of participants reported that they were still not motivated to walk or cycle for short distance travel after the active transport lesson. Of these participants, 27.8% indicated that the active transport lesson did not encourage them and 13.9% said they were not interested in the lesson. Approximately one third of this group (33.3%) indicated that they think it is not necessary to walk or cycle more, and 16.7% indicated that it is difficult to really do so. A total of 61 of the 163 older adolescents allocated to intervention group 2 (37.4%) completed the process evaluation on the Facebook group. Approximately one third of participants in intervention group 2 (32.8%) indicated that they had joined the Facebook group. Of those who did not join, 27.3% reported that they do not have a Facebook account, and 30.3% reported that they forgot to join once they arrived at home (these participants did not own a smartphone and so could not join the Facebook group during the active transport lesson). Furthermore, 27.3% did not want to receive messages regarding the topic and 15.2% did not want the researchers to see their Facebook profile. Those who joined the Facebook group thought that the posts were sometimes interesting and sometimes not (3.0 (1.0); five-point scale). Furthermore, they indicated that the posts were adapted to the target group (3.3 (1.1); five-point scale). In total, 60.0% of older adolescents indicated that they were already motivated to walk or cycle for short distance travel before they joined the Facebook group, 30.0% indicated that they were less motivated before they joined, and 10.0% indicated that they were still not motivated to walk or cycle for short distance travel.

Discussion {#sec016}
==========

Although the developed intervention was theory- and evidence-based, the main finding of this study was that implementing an extra two-hour lesson on the promotion of active transport within the eight-hour course 'Driving Licence at School' was, in general, not effective in changing psychosocial factors related to active transport. The addition of a Facebook group on active transport was also not sufficient to change psychosocial factors. Although the process evaluation revealed that the intervention was rated as fairly attractive and adapted to the target group, it did not induce change. However, the presence of ceiling effects has to be taken into account, since 71.5% of participants indicated that they were already motivated to use active transport for short distance travel before the active transport lesson.
Several previous interventions promoting active transport targeted those who were already motivated to change their behaviour in favour of active transport modes [16, 20]. However, it is important to reach those who are less motivated to walk or cycle for transport too. By integrating the intervention into the project 'Driving Licence at School', adolescents from participating schools were obliged to follow the active transport lesson. Since these older adolescents were taking part in a project in which they received car driving theory training, they were probably motivated to learn to drive a car. Thus, both motivated and non-motivated adolescents were included in the intervention. Yet, it should be noted that it is possible that mainly adolescents who were already motivated to use active transport completed the questionnaires. A more intensive approach could possibly have resulted in the desired intervention effects. However, implementing more lessons promoting active transport in 'Driving Licence at School' was not possible due to time constraints at secondary schools. Furthermore, some driving instructors mentioned that it was difficult to motivate the older adolescents to participate actively during the active transport lesson. Participating schools indicated that it would be more manageable for them to integrate (parts of) the active transport lesson into a project day. This could be a good opportunity to extend the active transport lesson with other components, such as more practice-based ones. By adding one or more practice-based components, older adolescents may perceive an intervention promoting active transport as less intrusive and more attractive. In a school-based intervention targeting sleep problems, adolescents indicated that they preferred interactive learning opportunities, such as hands-on class activities, to transfer knowledge into practice [40]. Previous intervention studies targeting active travel, albeit among other age groups, showed that multifaceted interventions were able to increase active transport levels [21, 41]. These interventions consisted, for example, of information provision, cycling training, cycle repair and personalised travel planning. Although it is essential to intervene in this age group, as they are at a critical stage of life regarding transport behaviour [15], older adolescents may not be the most receptive age group for an intervention promoting active transport. Older adolescents finally get the chance to drive a car and to increase their level of independent travel. Therefore, it was expected that participants' intention to use active transport after obtaining a driving licence would decrease at the post and follow-up measurements; the intervention intended to minimize this decrease as much as possible. Although integrating the active transport lesson into the course 'Driving Licence at School' seemed a great opportunity, it may not be the most effective approach. In addition, a study among 10-17 year olds showed that adolescents have a clear preference for non-intrusive intervention strategies over more intrusive ones [42]. Since adolescents in intervention schools were obliged to follow the active transport lesson, it is likely that they perceived the intervention as too intrusive.
The use of social media tools (such as Facebook) for health promotion programs has been perceived as a promising strategy, since it is a cost-efficient method to reach large audiences and adolescents in particular [43]. A recent study among Flemish (Belgian) 12-18 year olds indicated that 89.9% of these adolescents have an active Facebook account and 86.2% log in to their account daily [44]. However, the Facebook group only had a positive effect on intention to use active transport after obtaining a driving licence. It should be noted that the low participation rate in the present study makes it difficult to draw conclusions regarding the effect of adding a Facebook group. The present study showed that older adolescents were suspicious of joining a Facebook group developed by researchers and linked to their school. This phenomenon was also described by Cobb et al. [45]. Therefore, a crucial step for future intervention studies using social media to target (older) adolescents will be to find strategies that can convince this target group to join, for example, a Facebook group developed by researchers. Another strategy could be to involve the target group in the development of the Facebook group and to let them compose potential Facebook posts. This would probably also result in more interaction between the members of the Facebook group. Wójcicki et al. [46] likewise suggested that a more participatory approach might benefit active engagement in a social media intervention among adolescents. In accordance with other intervention studies among adolescents reporting high drop-out rates [47, 48], the present intervention study showed that older adolescents are a difficult age group to target. Although the active transport lesson was organised at school and the older adolescents had the opportunity to complete the measurements at school, there was a certain level of resistance and apathy towards the study. It was very difficult to convince the older adolescents to complete all measurements, which is reflected in the large drop-out in this study. Future intervention studies among adolescents might benefit from eliminating redundant questions and keeping questionnaires/measurements as short as possible without missing necessary information [49].

Limitations and strengths {#sec017}
-------------------------

A first limitation of this study is that, for those psychosocial factors for which a significant intervention effect was found, the effect sizes were very small, especially for intention to use active transport after obtaining a driving licence. These results therefore need to be interpreted with caution. Second, for the few variables for which a significant intervention effect was found, a type I error may have occurred due to multiple testing. Third, there was no long-term follow-up measurement. Although an extra follow-up measurement would have made the design stronger, it is unlikely that the adolescents would have been prepared to complete more questionnaires, given the large drop-out in the short term. Fourth, self-reported questionnaires were used, which may lead to social desirability bias and errors in self-observation. Fifth, only 20 adolescents joined the Facebook group, which makes it difficult to draw conclusions regarding the effectiveness of the Facebook posts.
Sixth, 71.5% of participants indicated that they were already motivated to use active transport for short distances before the lesson, which could have led to ceiling effects. In addition, the most motivated adolescents were probably also the ones who completed the questionnaires, which may have biased the results. Seventh, participants in vocational education in the last two years of secondary school were over-represented in the present sample compared to the total population of adolescents in Flanders during the school year 2014-2015 (42.0% versus 29.5%) [50]. Eighth, older adolescents are a specific age group, which makes it difficult to generalize the results to other age groups. Finally, another limitation is the large drop-out; mixed models analyses were used to mitigate this limitation. A first strength is that the developed intervention was theory- and evidence-based, and that it was developed in collaboration with policy co-operators from the Flemish Foundation for Traffic Knowledge and people in the field (e.g. driving instructors). A second strength is that the intervention was integrated into an existing course supported by the Flemish Foundation for Traffic Knowledge, which annually reaches a large group of young people at a critical stage of life regarding transport behaviour. Had the intervention been effective, this would have been a great opportunity for long-term implementation.

Conclusions {#sec018}
===========

Overall, the present intervention study was not effective in changing psychosocial correlates of active transport. A lot of effort was put into motivating the older adolescents to participate actively in the intervention and to complete all measurements, yet many obstacles were encountered. Future intervention studies should search for alternative strategies to motivate and involve this hard-to-reach target group.

Supporting Information {#sec019}
======================

- Raw data obtained from the questionnaires. (XLSX)
- Raw data obtained from the questionnaires. (XLS)
- Trial protocol. (PDF)
- Semi-structured interview used for pre-testing. (DOCX)
- TREND checklist. (PDF)

We would like to thank the schools and the adolescents who participated in the study. Furthermore, we would like to thank M. Staats and J. Van Parijs, master's thesis students, for assisting with the data collection. We would also like to thank R. Colman for her help with the data analyses.

[^1]: **Competing Interests:** The authors have declared that no competing interests exist.

[^2]: **Conceptualization:** HV DS JVC DVD CV BDG IDB PC BD. **Data curation:** HV. **Formal analysis:** HV JVC. **Funding acquisition:** BD. **Investigation:** HV. **Methodology:** HV DS JVC DVD CV BDG IDB PC BD. **Project administration:** HV. **Resources:** HV. **Supervision:** HV. **Validation:** HV DS JVC DVD CV BDG IDB PC BD. **Visualization:** HV. **Writing -- original draft:** HV DS. **Writing -- review & editing:** HV DS JVC DVD CV BDG IDB PC BD.
Mid
[ 0.637681159420289, 33, 18.75 ]
Q: NullPointerException when trying to use image resource

I made a game in Java. It works completely fine in Eclipse. I exported it as a Runnable JAR. When double-clicking on its icon, it doesn't open, so I tried running the JAR from the command line. I get a NullPointerException in a line of code that's trying to retrieve an image resource (as I said, it works fine inside Eclipse). This is the line of code where the error happens:

ball = new ImageIcon(this.getClass().getResource("sprites/ball.PNG"));

I have no idea what's wrong. Here is the structure of my project: [screenshot not shown] Any ideas? I'm starting to get desperate. Thanks a lot for your help.

EDIT: I tried adding a / to the beginning of "sprites/ball.PNG". Didn't help. Also tried changing PNG to png. Didn't work either. Checked inside the JAR; the image is inside. I'm on Windows. Here is the stacktrace:

Exception in thread "main" java.lang.NullPointerException
    at javax.swing.ImageIcon.<init>(Unknown Source)
    at instPanel.<init>(instPanel.java:17)
    at Main.<init>(Main.java:23)
    at Main.main(Main.java:38)

EDIT: Could the fact that I'm using (default package) be a problem?

A: If you look inside the jar with an archive tool, the layout should be like this:

JARROOT/*.class
JARROOT/sprites/ball.png

You can try the following. In Eclipse, right-click the sprites folder, click Build Path -> Use as Source Folder, and package the project again. Your call to the resource should then be

this.getClass().getResource("/sprites/ball.png");

or

this.getClass().getResource("/ball.png");

depending on how Eclipse packaged the jar.

EDIT: Please read the documentation. If the string you pass to getResource("") begins with '/', it will be treated as an absolute path within the jar. If it does not start with '/', it will be treated as a path relative to the class. Here is an image of a simple project with comments [image not shown]. The folder "icons" is a resource folder; its content will be packaged inside the jar, in this case as JARROOT/other/zoom-out-icon.JPG and JARROOT/zoom-in-icon.jpg.
Mid
[ 0.635696821515892, 32.5, 18.625 ]
Trailhead's Tortilla "Souped Up" Ramen

Description: Add a southwest flair to your next pack of ramen with this easy recipe.

Author: Trailhead
Recipe type: Soups
Cuisine: Soups
Serves: 1

Ingredients
- 1 pkg (3 ounces) ramen (discard flavor packet)
- 1 t low sodium chicken bouillon
- 1 t Mexican or fajita seasoning blend
- 1⁄4 t true lime powder (1 packet)
- 1 t diced dried carrots
- 1 t diced dried onions
- 1 t diced dried bell peppers
- 1 t diced sun-dried tomatoes
- 1⁄4 c corn chips
- 2 c water

Instructions
At home, pack the dry seasoning ingredients in a small bag and seal tightly. Pack the ramen and the corn chips separately.

Freezer bag method: Pack the seasoning and ramen in a quart freezer bag at home. Add 2 cups near-boiling water to the bag. Seal tightly and put in a cozy for 10 minutes. Open up the bag and garnish with the chips.

Insulated mug method: Add the seasoning blend to your mug, crush the ramen a bit and add it on top. Cover with 2 cups boiling water, cover tightly and let sit for 10 minutes. Garnish with the corn chips.

One pot method: Bring 2 cups water and the seasoning ingredients to a boil in your pot. After the water has come to a boil, turn off the heat and add the ramen noodles. Cover and let sit 5 minutes. Garnish with Fritos corn chips.

Notes
Trailhead used Fritos brand corn chips; they hold up well to the hot water and give plenty of flavor. Use whatever brand you prefer! If you're not watching sodium, by all means feel free to use the "flavor" packet that comes with the ramen. Spicy chicken flavor would work well.
Mid
[ 0.5889570552147241, 36, 25.125 ]
# ops-misc This folder is used to share files that are not part of the Serf binary, but are useful for operational purposes. For example, upstart scripts. ## Debian/Ubuntu package metadata Move the ```debian``` directory to the root of the repo, and run ```dpkg-buildpackage```.
Low
[ 0.504504504504504, 28, 27.5 ]
Baseball Therapy

On the Evolution of the Patient Hitter

Last week, in an article in Sports Illustrated, Tom Verducci put forth an argument that the modern game of baseball has a problem. Hitters, he claimed, have become too passive in their approach at the plate as they attempt to drive up the pitch counts of the opposing pitcher. He mixes together a couple of case examples (Joey Votto, Jayson Werth) with some data that appear to show that hitters have become more passive in their approach over time, and are paying for it in declining run production. Maybe Joey and Jayson, and by proxy the rest of the baseball players out there, should swing the bat a little more.

Mr. Verducci's argument was in part aesthetic, and it's wise to remember de gustibus non est disputandum (matters of taste need not be argued). According to Verducci, "What we are left with is a sport in which games keep getting longer but with less and less action... The knottiest issue for baseball is not the stadium issues of Oakland and Tampa Bay or the Biogenesis scandal; it's the increased lack of action in your average baseball game." Of course, one man's snooze-fest is another man's thrilling chess match, so your mileage may vary. But Mr. Verducci also made several claims about what he saw as the consequences of this development of a "passive" approach, and to his credit, presented data to back them up. His conclusions about the state of the game are interesting. Let's take a closer look at them, shall we?

Are hitters really getting more passive? Mr. Verducci is correct that the average plate appearance has gotten longer over the years. Retrosheet (put them in the Hall of Fame) has data back to 1993 on pitches and outcomes, and as the graph below shows, there has been a general upward (and significant) trend in pitches per plate appearance. (Note: for all analyses, I'm excluding intentional walks and pitchers batting, as well as plate appearances for which pitch sequence data are not available.) In 1993, the average plate appearance lasted 3.64 pitches. In 2012, it was 3.79. The difference may not seem like much, but over three trips through the lineup for a starter (27 hitters), it's a difference of four pitches (27 x 0.15 = 4.05), which represents four percent of the usual 100-pitch limit. If teams really are trying to drive up pitch counts, it's working.

As might be expected, there has been a small decline in the number of outs recorded during the average start. In 1993, starters recorded an average of 18.33 outs. In 2012, the figure was down to 17.67, although the decline was hardly a straight line. If teams now have a goal of getting into the other team's bullpen early in the game, then they have been somewhat successful. By about 2/3 of an out over the past 20 years.

Where are those extra pitches coming from? Mr. Verducci seems to believe that they are coming from players being less willing to swing. That's easy enough to measure. Again, we have publicly available data from 1993-2012, and the league-wide swing rate is presented below.
Over time, that number has bounced up and down a bit, but stayed almost exclusively between 45 and 46 percent. A one percent change in a rate in baseball is not trivial, but it's best not to get carried away about it. This is not a fundamental shift in the way the game is played. But where Mr. Verducci does get it right is that swinging has become much less common in hitters' counts like 3-0. Mr. Verducci, using the case example of Jayson Werth hitting into a double play on 3-0, wonders whether the greater sin was that Werth hit into a double play (in a two-run game in the top of the eighth) or that he swung at a 3-0 pitch. The graph shows 3-0 swing rates over the last 20 years. Swing rate peaked at a little over 13 percent in 1996 and had been nearly halved, at 7.1 percent, by 2009. While there has been a bit of a rebound since, there has been movement in the rates at which hitters have offered at 3-0 over the entire two-decade period (although it should be noted that it was never a popular pitch to swing at). The graph for 2-0 counts (not shown) looks much the same. Hitters really are swinging less in traditional hitters' counts.

So we have a mystery. If overall swing rates haven't changed much, what is going on? Let's first take a look at contact rate. I found that over the last 20 years, there has been some variation, but of late, contact rates (per swing) have fallen. Hitters are actually missing more when they swing. Take a look:

On top of that, there's been another shift. Even when the batter makes contact, there's been an uptick in the number of foul balls. But not just any foul balls. I've shown in the past that not all foul balls are created equal, and that you need to look separately at foul balls that happen with zero or one strike in the count (where the foul ball counts as a strike) and two-strike foul balls. Below are the trends over the past 20 years on one graph (early count fouls are per PA; two-strike fouls are among only plate appearances where the count reached two strikes).

Over time, hitters have slowly increased their early count fouls, while two-strike fouls have boomeranged. Because an early count foul is basically the same (although not exactly) as a swing-and-miss for purposes of the count, it means that the effective contact rate has been going down even more sharply over time. I don't think that the issue is hitters being overly patient. At least, that's not the whole story. They are swinging at roughly the same rate that they always have. They're just missing (or fouling it off) more, particularly early in the count. In my previous research, I've found that early count foul balls are a tell-tale sign of a hitter who is taking an approach that emphasizes power (more fly balls, more home runs), whereas a hitter who fouls off a lot of two-strike pitches prefers a more contact-based (more grounders, more singles) approach. The evidence points toward a larger-magnitude shift to taking bigger (but not more) swings with more misses. You get one, two, three strikes before you're out at the old ballgame, and batters now seem more comfortable using them all. Strikeout rates have gone up quite a bit over the last 20 years, by a margin that's obvious to the naked eye. In 2012, the share of plate appearances that involved a second strike topped 50 percent for the first time in the 20 years of data that I had available. Let's return to the issue of the 3-0 count, though.
Because the batter has the advantage, he can afford to sit on one pitch in one location and, if he gets it, take a big swing. If the pitcher misses the zone, the batter gets a walk, and even if it's a strike or he swings and misses, it's 3-1 and the pitcher is still in a hole. And yes, the pitcher had to use another of his 100 pitches for the day. In a 3-0 count, the cost-benefit analysis fairly obviously favors adopting a "swing real hard in case you hit it" mentality more often. It's one thing when Casey says "That ain't my style" on 0-1, but you could forgive him for it on 3-0. Maybe what we're seeing is that the average MLB hitter is more often looking for "his" pitch, and on more counts than 3-0, at the cost of perhaps letting a hittable pitch go by because he was looking elsewhere. It's a high-risk, high-reward strategy from an individual plate appearance point of view that has a nice side effect of driving pitch counts up a little bit.

The problem is that if hitters are trying to be more selective in order to get higher rewards, they've not been doing a good job. Slugging percentage on balls in play that were hit on 3-0 counts has basically gone up and down over the years, and is probably just vibrating around the mean. Even looking at slugging percentage on balls in play during plate appearances that at some point passed through 3-0 (not shown), we can see the same trend.

Is forcing extra pitches helpful in winning games? If hitters aren't getting any extra bases for their patience, you could make the argument that teams are at least breaking even. And perhaps they derive some benefit from making the other guy throw a lot of pitches. However, Mr. Verducci suggests that there is no correlation between extra pitches per plate appearance and winning games, although his evidence on this matter is a little wobbly from a research methodology standpoint:

Last year there were 13 teams that ranked above average in most pitches per plate appearance. Nine of those 13 teams did not make the postseason. The two pennant winners, San Francisco and Detroit, ranked 25th and 27th in pitches per plate appearance.

Let's clean that up. In 2012, the correlation between pitches per plate appearance, at the team level, and team winning percentage was .14. In 2011, it was .07. In 2010, it was .02. For all team-seasons from 1993-2012, it was .14 (a sketch of how to run this sort of check appears below). Mr. Verducci is correct. High pitch counts are neither a harbinger of success nor of failure at the macro level. It's not the length of the at-bat that matters. It's what you do with it.

This actually cuts against sabermetric orthodoxy at the micro level. One common-sense reason to try to drive up a starter's pitch count is that teams generally pull their starters after 100 pitches but are loath to use their good relievers for more than an inning at a time, so evicting the starter after five innings means that the other team will have to put in a few of their less-than-stellar relievers to cover the extra innings. In fact, a study done by BP's Colin Wyers shows that the earlier a starter exits, the higher the bullpen ERA is for the rest of that game. Teams can try to get at the soft underbelly of the other team's bullpen if they force the starter out early. Assuming that the underbelly remains soft. Mr. Verducci essentially makes the argument that teams have responded in recent years by fortifying their bullpens. It's hard to tell whether he's right on this one or not, but as he points out, offense is down over the last few years.
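(As promised, for the curious: a minimal Python/pandas sketch of how such a team-level correlation can be computed. The file and column names here are made up for illustration; the actual figures above came from Retrosheet data.)

import pandas as pd

# Hypothetical team-season file with columns: season, team, pitches_seen,
# plate_appearances, wins, losses (names are assumptions for this sketch).
teams = pd.read_csv("team_seasons.csv")

teams["pitches_per_pa"] = teams["pitches_seen"] / teams["plate_appearances"]
teams["win_pct"] = teams["wins"] / (teams["wins"] + teams["losses"])

# Pearson correlation within a single season...
in_2012 = teams[teams["season"] == 2012]
print(in_2012["pitches_per_pa"].corr(in_2012["win_pct"]))

# ...and pooled across all team-seasons in the file.
print(teams["pitches_per_pa"].corr(teams["win_pct"]))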
Maybe this is a case where teams have seen a long-term trend and made a sensible (and successful?) counter-move. The game of baseball is subject to evolution, just like everything else. Mr. Verducci's main argument seems to be that the "grind out an at-bat" strategy is not silly, just outdated. It would need more investigation (in a different article), because it's hard to tell whether, for example, strikeout rates are going up because pitchers are better at striking hitters out or because hitters are worse at avoiding strikeouts. But he might be right.

Are we evolving too? I've had my disagreements with Mr. Verducci before, but in this case, he indirectly brings up an important point. Any strategy that teams employ, whether intentional or not, can have a counter-move that renders it useless. And so, when the sabermetric movement observes some inefficiency or strategic edge and it becomes widely known, there will be a race to develop some response to it. Ten years ago, it was new-ish information that OBP was undervalued relative to batting average and that teams should look for high-OBP, low-average guys. Today, you don't see people making that argument. It's common knowledge, and the ecosystem has adjusted.

Perhaps sabermetricians (and teams?) are guilty of thinking that the strategic edges they have found are grand truths. Far from it. Baseball is a game of move and counter-move, both within the game and at the broader level. Yes, I could nitpick Mr. Verducci's methodology to death, and that would be wonderfully entertaining, I'm sure. But to do just that would miss the chance at a bigger learning opportunity. It's not that our theories were wrong when we came up with them; it's that the game may have changed under our feet, and we might still be peddling strategy fit for a different set of assumptions. Mr. Verducci's piece has a lot of value in that regard. It's a reminder that the game can evolve, maybe faster than we would like it to, and we have to evolve with it.

Regarding bullpens, do contact rate and swing percentage by count vary with the inning of the game? Perhaps, as a corollary, do hitters see more pitches per plate appearance against starters or against relievers?

How about we ask the umpires to call the actual strike zone? That one little change would speed the game up dramatically. I would love to see a study on the percentage of strikes called by umpires over the years. The STRIKE ZONE is that area over home plate the upper limit of which is a horizontal line at the midpoint between the top of the shoulders and the top of the uniform pants, and the lower level is a line at the hollow beneath the kneecap. The Strike Zone shall be determined from the batter's stance as the batter is prepared to swing at a pitched ball. Check out the top of the zone: midpoint between the top of the shoulders and the top of the uniform pants. When was the last time you saw a strike called above the belt? The top third of the strike zone has been lopped off in practice (although not in the rule book, clearly). That's why there are more pitches being thrown and that's why games last so long.
I don't mean to sound like a curmudgeon, but either call the regulation strike zone or change the rule. If you really want to speed up the game, do the former.

Bill James had the results for long, medium and short at bats for every player a few years ago. I thought it was fascinating. As it turned out, many, and as I recall most, hitters did worse during their long at bats -- I think "long" was defined as seven or more pitches. I'd love for this to be something we could find again. It's common today for announcers and fans to call all long at bats "good" or even "great" at bats. But I think a guy smoking a first or second pitch into a gap or over a wall is a great at bat. A huge part of hitting is getting "your" pitch, not having a long at bat. I think we sometimes forget that.

From a process- (as opposed to outcome-) based standpoint, it seems like at the plate-appearance level, a long at-bat could be good or bad. I'm thinking of an at-bat where the pitcher is really on, and the hitter sees real "pitcher's pitches", borderline strikes that would be almost impossible to make good contact on, and the hitter keeps fouling off pitches trying to make the pitcher make a mistake. That seems good to me. If a hitter is missing some hittable pitches, and doesn't seem to have any idea what he's doing up there but just barely manages to stay alive, maybe that's not so good. As for the first- or second-pitch frozen rope, I agree they are great. But man do I hate the first-pitch weak grounder back to the pitcher, or the second-pitch double-play ball to second.

Anyone who watches a Nationals game knows the "great at bat" chatter smitty is referring to. Bryce Harper will spend three minutes burning up a pitcher by blasting foul balls at half-asleep fans at some point in most games. To his credit, though, at least from memory, it seems Harper uses this type of at bat as a tune-up, and he finishes a lot of these foul ball binges with a well-placed single or double.

Great article. Any misinformation disseminated by IL Duche is more than offset by his glowing tan.

I was working as a therapist at the time, and therapists have more than their fair share of stalkers. I didn't want my patients googling me, finding my baseball work, and wanting to talk about that in session. My real name has always been something of an open secret in the baseball world.

My first thought when I heard about this article was "of course this is driving offense down. It's forcing teams to carry more and more pitchers, thus shortening hitting benches, and compromising teams' ability to put up the hitter they want to."

I had a similar reaction. Starting pitchers are going 2/3 of an out shorter on average. Rosters are carrying an extra reliever. Coincidence? Can starter innings be correlated with bullpen depth? Are managers more willing to pull the starter for a reliever if they have a deeper bench? Is there a difference between the AL and NL, where the starting pitcher might be getting pulled for a pinch hitter in the fifth or sixth?

I suspect that the reason for the decrease in swings at 2-0 and 3-0 pitches is the decline in pitchers throwing fastballs on those counts. Hitters looking for a 3-0 pitch to hammer are rarely looking for a breaking ball on the outside half.
Low
[ 0.528907922912205, 30.875, 27.5 ]
Q: How to calculate diffraction pattern from a model of unit cell?

I remember that 20+ years ago we used a program called Powder Cell to calculate diffraction patterns from models of materials (for example, to compare them with experimental data from powder diffraction). I just fired this program up under Wine and it still works: What are modern alternatives? Note: this program takes a description of a unit cell (atoms and unit cell parameters) and produces an indexed pattern. This is different from using the Debye scattering formula to calculate the diffraction pattern of an arbitrary set of atoms, which comes without Miller indices.

A: Perhaps the easiest solution is to use VESTA, which can read in a CIF (and many other crystal-structure formats) and produce a powder diffraction pattern ("Utilities" > "Powder Diffraction Pattern"). Behind the scenes, VESTA uses RIETAN-FP to do the calculation; RIETAN-FP also has a standalone version you can download if you want. Another way you could do this, especially if you have to do it for many structures and don't mind using Python, is with the xrd module in Pymatgen, which provides a bit more flexibility. This could be done as shown below:

import pymatgen as pm
from pymatgen.analysis.diffraction.xrd import XRDCalculator

p = '/path/to/my.cif'                   # path to CIF
structure = pm.Structure.from_file(p)   # read in structure
xrd = XRDCalculator()                   # initiate XRD calculator (can specify various options here)
pattern = xrd.get_pattern(structure)
print(pattern)

A: You are looking for the calculation of the structure factor. Basically, the X-ray pattern can be calculated as the Fourier transform of your crystal lattice, and the intensity $I(\mathbf{q})$ can be estimated as

$$I(\mathbf{q}) = f^{2} \left| \sum_{j=1}^{N} \exp{(-i \mathbf{q} \cdot \mathbf{R}_{j})} \right|^{2}$$

(the modulus squared is needed because the sum of phase factors is complex). Here $\mathbf{q}$ is the scattering vector, and the X-ray pattern would be a 3D field in Fourier space; but because you have other knowledge about your crystal structure, you can just plot $I$ versus the scattering angle $2\theta$ and you get your X-ray spectrum.
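To make the second answer concrete, here is a minimal NumPy sketch of that direct sum for identical atoms. The lattice, spacing, and form factor below are made up for illustration; the sketch ignores the q-dependence of the atomic form factor and does no Miller indexing:

import numpy as np

def intensity(q, positions, f=1.0):
    # I(q) = f^2 * |sum_j exp(-i q . R_j)|^2 for identical point atoms
    phases = np.exp(-1j * (positions @ q))  # one complex phase per atom
    return f**2 * abs(phases.sum())**2

# Toy example: atoms on a 4 x 4 x 4 cubic lattice with spacing a = 3 angstrom.
a = 3.0
grid = np.arange(4) * a
positions = np.array([[x, y, z] for x in grid for y in grid for z in grid])

q_bragg = np.array([2 * np.pi / a, 0.0, 0.0])  # a reciprocal-lattice vector
q_off = np.array([1.0, 0.0, 0.0])              # a generic off-peak q

print(intensity(q_bragg, positions))  # ~ N^2 = 64^2 (fully constructive)
print(intensity(q_off, positions))    # far smaller (mostly destructive)

Sweeping |q| (or, equivalently, $2\theta$ via $q = 4\pi \sin\theta / \lambda$) and binning the intensity gives a crude powder-style trace; a tool like the Pymatgen calculator above handles the indexing, peak multiplicities and proper form factors for you.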
Mid
[ 0.65, 32.5, 17.5 ]
Q: Alamofire https request only works if NSExceptionAllowsInsecureHTTPLoads is set to true

I have developed an app in Xcode 10 with Swift (app name: "TerminalsPOC"). I am making an https request to my organization's internal web api (let's call the url "https://example.com:50001/RESTAdapter/toolbox/getMyData") using Alamofire. I have a class with a class-level variable to reference a session manager:

// Swift code
let serverTrustPolicies: [String: ServerTrustPolicy] = [
    "example.com": .pinCertificates(
        certificates: ServerTrustPolicy.certificates(in: Bundle(for: type(of: self))),
        validateCertificateChain: false,
        validateHost: true
    )
]
sessionManager = SessionManager(
    serverTrustPolicyManager: ServerTrustPolicyManager(policies: serverTrustPolicies)
)
sessionManager.request(url, method: .get)
...

I have imported the necessary .cer certificate into the app's bundle. I have left the default ATS settings, but have added an NSExceptionDomain. The relevant info.plist section looks like

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoadsInWebContent</key>
    <false/>
    <key>NSAllowsArbitraryLoads</key>
    <false/>
    <key>NSExceptionDomains</key>
    <dict>
        <key>example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
            <key>NSIncludesSubdomains</key>
            <true/>
        </dict>
    </dict>
</dict>

This works so long as the NSExceptionAllowsInsecureHTTPLoads setting is set to true. If I set it to false, the request fails with the message:

An SSL error has occurred and a secure connection to the server cannot be made. [-1200]
2018-12-07 11:55:42.122423-0700 TerminalsPOC[27191:371810] ATS failed system trust
2018-12-07 11:55:42.122530-0700 TerminalsPOC[27191:371810] System Trust failed for [2:0x600001fad740]
2018-12-07 11:55:42.122637-0700 TerminalsPOC[27191:371810] TIC SSL Trust Error [2:0x600001fad740]: 3:0
2018-12-07 11:55:42.125928-0700 TerminalsPOC[27191:371810] NSURLSession/NSURLConnection HTTP load failed (kCFStreamErrorDomainSSL, -9802)
2018-12-07 11:55:42.126109-0700 TerminalsPOC[27191:371810] Task <54567E3C-2BBC-4227-9C0A-FC60370A10AA>.<1> HTTP load failed (error code: -1200 [3:-9802])
2018-12-07 11:55:42.126872-0700 TerminalsPOC[27191:371812] Task <54567E3C-2BBC-4227-9C0A-FC60370A10AA>.<1> finished with error - code: -1200
2018-12-07 11:55:42.140600-0700 TerminalsPOC[27191:371810] Task <54567E3C-2BBC-4227-9C0A-FC60370A10AA>.<1> load failed with error Error Domain=NSURLErrorDomain Code=-1200 "An SSL error has occurred and a secure connection to the server cannot be made."
UserInfo={NSLocalizedRecoverySuggestion=Would you like to connect to the server anyway?, _kCFStreamErrorDomainKey=3, NSErrorPeerCertificateChainKey=( "", "" ), NSErrorClientCertificateStateKey=0, NSErrorFailingURLKey=https://example.com:50001/RESTAdapter/toolbox/getMyData, NSErrorFailingURLStringKey=https://example.com:50001/RESTAdapter/toolbox/getMyData, NSUnderlyingError=0x6000024e89f0 {Error Domain=kCFErrorDomainCFNetwork Code=-1200 "(null)" UserInfo={_kCFStreamPropertySSLClientCertificateState=0, kCFStreamPropertySSLPeerTrust=, _kCFNetworkCFStreamSSLErrorOriginalValue=-9802, _kCFStreamErrorDomainKey=3, _kCFStreamErrorCodeKey=-9802, kCFStreamPropertySSLPeerCertificates=( "", "" )}}, _NSURLErrorRelatedURLSessionTaskErrorKey=( "LocalDataTask <54567E3C-2BBC-4227-9C0A-FC60370A10AA>.<1>" ), _kCFStreamErrorCodeKey=-9802, _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <54567E3C-2BBC-4227-9C0A-FC60370A10AA>.<1>, NSURLErrorFailingURLPeerTrustErrorKey=, NSLocalizedDescription=An SSL error has occurred and a secure connection to the server cannot be made.} [-1200]

I tried running "nscurl --ats-diagnostics https://example.com:50001/RESTAdapter/toolbox/getMyData", and the response included the following:

Default ATS Secure Connection
--- ATS Default Connection
Result : PASS

Allowing Arbitrary Loads
--- Allow All Loads
Result : PASS

Configuring TLS exceptions for example.com
--- TLSv1.3
2018-12-07 10:59:17.492 nscurl[24303:331847] NSURLSession/NSURLConnection HTTP load failed (kCFStreamErrorDomainSSL, -9800)
Result : FAIL
--- TLSv1.2
Result : PASS
--- TLSv1.1
Result : PASS
--- TLSv1.0
Result : PASS

Configuring PFS exceptions for example.com
--- Disabling Perfect Forward Secrecy
Result : PASS

Configuring PFS exceptions and allowing insecure HTTP for example.com
--- Disabling Perfect Forward Secrecy and Allowing Insecure HTTP
Result : PASS

This all looks OK to me. I must be missing something. So my questions are:

1. Why does setting NSExceptionAllowsInsecureHTTPLoads to true cause the call to work, given that it is an https request (with no redirect)? I thought this setting only affects http calls and should not affect https calls.

2. How can I get this web request to work without setting NSExceptionAllowsInsecureHTTPLoads (which seems to be a hack/work-around, doesn't it)?

A: The problem in this case was that the app was running on a simulator on which the required certificate had not been installed. Once the correct (root) certificate had been installed and trusted, the pinned certificate check passed, and it was then possible to set the NSExceptionAllowsInsecureHTTPLoads info.plist setting back to "NO". I wish the error message had been more explicit. :-/
Low
[ 0.49781659388646204, 28.5, 28.75 ]
Leading Off

Playing Footsie
COLOR PHOTO: PHOTOGRAPH BY WINSLOW TOWNSON
Los Angeles Galaxy midfielder Cobi Jones (13) goes toe-to-toe with New England Revolution defender Daouda Kante during MLS Cup 2002, which the Galaxy won 1-0 in overtime (page 30).

Launch Sequence
THREE COLOR PHOTOS: PHOTOGRAPHS BY JOHN BIEVER
Angels starter Jarrod Washburn helped Giants outfielder Barry Bonds make his first World Series at bat a memorable one, as Bonds went very, very deep in the second inning of Game 1 (page 44).

Finger Food
COLOR PHOTO: PHOTOGRAPH BY KELLY GLASSCOCK
Those hungry hands behind Chiefs quarterback Trent Green belong to Broncos defensive end Trevor Pryce, who clamped down on Green for a sack in Denver's 37-34 victory.
Mid
[ 0.578158458244111, 33.75, 24.625 ]
The Art Of Antiquating Furniture

When handling furniture that is old, antique, or unique in some way, it is important to recognize the effect that certain cleaners can have on each piece. Home cleaning can go terribly awry when important personal pieces are mishandled. Many efficient and professional cleaning companies are known to take great care in preserving and even revitalizing elegant pieces of furniture that might otherwise be mishandled. It takes very little to restore an old antique or to revitalize an old piece that may or may not have been particularly valuable. One only needs the right tools to create a timeless piece of furniture that can liven up or add class to any room in the house.

You need to be careful not to use overly acidic cleaners or abrasive substances, which can damage the surface of the piece. There are all too many ways to ruin a cleaning like that, so be aware. One way to make sure you are doing the job right is to ask the seller you bought the piece from how he or she has been caring for it. Follow the same routine and the piece will stay in good condition. Or at least find out whether the furniture tolerates all kinds of polish, and choose a good polish to provide a protective layer.

Over the years, furniture, along with many other parts of a home, can experience a great amount of wear and tear through regular or high-traffic use. As people use their homes continually, the damage their furniture incurs happens gradually and without notice, as oils left behind by casual or deliberate touches begin to wear down the varnish and create a layer of dirt and grime that can damage the furniture quite easily. In such cases it is usually wise to apply a soft cloth and some warm, soapy water. With some pieces, however, it is wise to consult the experts on how to restore their former luster. Domestic cleaning companies are typically well schooled in the art of keeping fine and antique furniture in pristine condition, and can often be trusted implicitly with the preservation of important pieces. The art of antiquing, after all, is one that many people have come to find quite enjoyable and, in some instances, surprisingly simple. They will also provide good, organic upholstery cleaning for your more delicate furniture.

Use any of the above to make sure you are handling your antique furniture properly. It's not hard at all; it simply takes being informed and a bit of effort in actually doing it.
Mid
[ 0.6438356164383561, 35.25, 19.5 ]
Latches for windows, doors and the like are well known and generally comprise a catch fixed to the door or window and movably engageable with the panel or frame of the portal in question. The catch is engageable with a keeper that is attached to the other panel/frame of the portal, depending on the arrangement, and will so engage when the portal is in the closed position. Metal latches on window frames are perhaps the most familiar latches, whereby the catch pivots or swings about a post in a base secured to one of the window frames. The catch slides under and engages the keeper, which is generally a metal flange secured to the other frame, when the catch and keeper are in juxtaposition to one another. As such, the window is closed and locked. Turning the catch in the opposite direction unlocks the window and allows its opening. Latches may also comprise a catch that is biased by a spring or other means that actuates the catch in a generally lock-wise direction with respect to the keeper. This allows for the automatic engagement of catch and keeper when the window or door is forcefully closed; there is no need for manual manipulation of the catch into the flange of the keeper. The present invention is a novel latch whose catch is biased in this manner so that, when applied to sliding doors or windows, the catch automatically engages the keeper when the door/window is slideably closed. The present invention also comprises a latch that is easily opened through the application of manual pressure at a point on the catch that pivotally forces it in a direction opposite to that of this bias, thereby disengaging it from its locked position with the keeper so as to allow the door/window to be slideably opened.

U.S. Pat. No. 3,918,754 to Isbister shows a plastics fastener for use in an automobile glove box, whereby the latch unit is formed as a single piece of resiliently flexible plastics material comprising two body portions that are hinged to one another and which are further hinged to a latch and button respectively. Manual actuation of the button moves the catch from an operative, keeper-engaging position to an inoperative, keeper-disengaged position. This enables the glove box to open accordingly.

U.S. Pat. No. 3,841,674 to Bisbing discloses a sliding-action slam latch for securing a door panel in a closed position. The slam latch is of one-piece construction, is installed in a single opening in the door panel, and is self-retained therein. The latch operates by a spring-biased sliding action to engage the door frame or striker plate. In one embodiment of the invention, the spring bias is provided by the resilience inherent in the plastic material from which the latch is made.

Finally, U.S. Pat. No. 5,158,329 to Schlack also discloses a slam latch for a sliding or hinged cabinet door that is comprised of side and rear walls from which extends a flexible lower plate having a catch. The latch mounts in an aperture positioned so that the flexible lower plate extends beyond the edge of the door and over the edge of an adjacent panel to secure the two together. The slam-action principle disclosed in the above references is well known in the art and is embodied in a number of designs, which usually incorporate a housing that encloses several components, one of which is a sliding bolt or pivoting spring-biased catch.
The general characteristic of these slam latches is the actuation of the latch to secure the door or window, by cooperation with a door-frame-mounted striker plate, when the door or window is pushed or slammed shut. In order to open a door/window secured with such a latch, a finger or pawl is provided for the manual exertion of force against the spring bias, which disengages the catch from the striker plate. The present invention is a novel slam latch for use in sliding glass doors that is of simple design and manufacture. The novel slam latch of the present invention is easy to operate and, in one embodiment, eliminates the need for a pinned, biased connection between the catch and housing. Simple exertion of lateral pressure against the resilient portion of the latch is translated into outward movement of the catch element itself, thereby disengaging it from the keeper unit. The slam latches of the present invention are particularly useful in sliding windows of automobiles and vans.
Mid
[ 0.636971046770601, 35.75, 20.375 ]